I recently upgraded my 8-year-old Sandy Bridge system to a modern Ryzen 3000 platform and also swapped my GeForce GTX 1070 for the new Navi-based AMD Radeon RX 5700 XT video card. I specifically upgraded for better VR performance, as the Core i5-2500K I was running was on its last legs in current VR games. I watched tons of YouTube videos and read reviews/benchmarks before taking the plunge, and I was a bit concerned about AMD's performance in VR, since benchmarks and even first-hand user reports are hard to come by, especially for the brand-new Navi cards. I thought I might give some experience of my own back to the VR community, so VR-heads looking at these new cards and CPUs can get a better idea of their performance.
My "benchmarking" was not scientific at all. Apart from running 3DMark and VRMark, I used OculusTrayTool with the Oculus performance HUD to look at frame-rates and headroom to judge performance in a handful of games. I also looked at Intel XTU, Windows Task Manager, AMD Ryzen Master and WattMan to figure out CPU and GPU loads.
The first question my case study can answer is, in my opinion, a pretty controversial one: does a new CPU platform make that big of a difference in VR? Are RAM speed, core/thread count and IPC (instructions per cycle) actually important for VR performance, or is it all down to the GPU?
The answer, after my upgrade, is a clear YES: it does make a difference, and quite a big one! If you are on an older system, say Ivy Bridge or even Haswell, and you have been wondering whether upgrading gets you more performance, I'll say go for it.
I first built the new platform and tested with my old GPU (GTX 1070) to see the CPU/RAM difference, then installed the new GPU (RX 5700 XT) and repeated the tests.
Oculus Home SS 1.0, high quality
Old system: headroom 20-50% (around 45%), straight line with minor spikes. Drops frames every second or so, even without visible spikes. Big spikes when moving controllers around.
New system (GTX): headroom 40-50%, straight line, no frame drops. Previously visible spikes when moving controllers around are gone, but CPU load does not seem to increase when doing so. This means tracking is way more efficient on the new platform.
New system (Radeon): headroom 60-65%
Robo Recall: HQ, opening stance (elevator)
SS 1.6, high quality, MSAA 2x, Planar Reflections ON, Indirect Shadows ON
Old system: headroom 0-10%, CPU 50-60%, GPU 100%
New system (GTX): headroom 0%, CPU 20-30%, GPU 100%
New system (Radeon): headroom 0-5%, CPU <20%, GPU 70% (SS 1.8 brought GPU to 100%)
On the old system, headroom was very spikey, with big spikes even on low graphics settings at ~50% headroom, which led to dropped frames. There were also single dropped frames without visible spikes. Headroom did not change with SS, which made it seem that the engine is CPU-bound.
On the new system, headroom is bigger with the same GPU. The game runs at half the CPU load while apparently using more threads (four cores loaded 30-60%, four more less loaded, SMT threads almost idle). Headroom has no spikes at all and dropped frames are gone. Running at SS 1.6 does not drop frames, even with headroom at 0%.
Headroom now changes with SS --> the old CPU was bottlenecking the engine (as headroom was always close to 0%). We can assume that headroom approaches 0% whenever a bottleneck appears, even if the GPU or CPU still has reserves. Project CARS 2 will prove this further down. Interestingly, CPU load went down a bit after installing the Radeon GPU. This did not happen with other games, but maybe Unreal likes the AMD CPU/GPU combo.
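To make the "headroom hits 0% at a bottleneck, even with reserves elsewhere" idea concrete, here's a minimal sketch. This is my own approximation, not the actual Oculus formula: it assumes headroom is roughly (frame budget - frame time) / frame budget, taken from whichever of the CPU or GPU stage is slower.

```python
# Rough model of the Oculus "performance headroom" metric (my assumption,
# not the runtime's real formula): the slower of CPU/GPU time sets the pace.

BUDGET_MS = 1000 / 90  # ~11.1 ms per frame at 90 Hz

def headroom(cpu_ms: float, gpu_ms: float) -> float:
    """Estimated headroom as a fraction; negative means over budget."""
    worst = max(cpu_ms, gpu_ms)  # the slower stage limits the frame
    return (BUDGET_MS - worst) / BUDGET_MS

# CPU-bound case: GPU only half busy, yet headroom sits at ~0% and
# changing SS (which only moves gpu_ms) won't change the number.
print(round(headroom(cpu_ms=11.1, gpu_ms=5.5), 2))

# GPU-bound case after raising SS: now headroom goes negative.
print(round(headroom(cpu_ms=5.0, gpu_ms=13.0), 2))
```

This matches what I saw in Robo Recall: on the old CPU the headroom was pinned near 0% regardless of SS, and only the new CPU let SS changes show up in the number again.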
The Climb: Zen Bay #1, opening stance
Shadows low, SSDO OFF, LOD ON, AA OFF
Old system, SS 1.0: headroom 20-30%, CPU 75-85%, GPU 40-50%
Old system, SS 1.25: headroom (-15) to (+15%), CPU 75-85%, GPU 50-70%
New system (GTX), SS 1.25: headroom 10-30%, CPU 20-30%, GPU 70%
New system (Radeon), SS 1.25: headroom 35-50%, CPU 20-30%, GPU 45-55%
On the old system, this game was CPU-bottlenecked (CPU usage close to the limit). I'm showing two SS settings here to demonstrate that headroom increases with lower SS, which it shouldn't, since the CPU is the limiting factor; somehow CryEngine behaves differently from other engines here. Performance was spikey, with occasional dropped frames depending on where I looked.
On the new system, headroom still increases with lower SS while CPU load stays the same. CPU load is more than halved compared to the old system! The game runs with minor negative headroom spikes on the Nvidia GPU, but no dropped frames. Headroom is more uniform and CPU load is much lower while using more threads; thread usage looks similar to Robo Recall. The Radeon GPU upped the headroom considerably thanks to lower GPU usage.
Apex Construct: safehouse, opening stance
Old system, SS 1.0: headroom 0-20%, CPU 55-65%, GPU 55%
Old system, SS 1.4: headroom (-10) to (+10%), CPU 60-70%, GPU 90%
New system (GTX), SS 1.0: headroom 20-30%, CPU 20%, GPU 55%
New system (GTX), SS 1.4: headroom 0%, CPU 20%, GPU 90%
New system (Radeon), SS 1.4: headroom >20%, CPU 20-25%, GPU 70%
On the old system, headroom was spikey with smaller and some larger spikes, resulting in some dropped frames, though less severe than in Robo Recall. CPU load actually changed with different SS settings, which is why I showed both above. This behaviour might be unique to Unity and did not repeat on the new system.
On the new system, GPU load was exactly the same until I swapped in the Radeon card. The game loads the Ryzen's eight cores in descending fashion from first to last, but also slightly loads the SMT threads. This engine seems to benefit more from SMT than Unreal and CryEngine do. Headroom is uniform with no spikes or dropped frames.
Project CARS 2: Long Beach, lone practice, in pits & test lap
Old system, SS 1.0: headroom (-25) to (+5%), CPU 80-90%, GPU 80%
Old system, SS 1.2: headroom (-25) to (+5%), CPU 80-90%, GPU 95%
New system (GTX), SS 1.0: headroom 0-30%, CPU 25-30%, GPU 80%
New system (GTX), SS 1.2: headroom 0-10%, CPU 25-30%, GPU 95%
New system (Radeon), SS 1.2: headroom 0-10%, CPU 25-30%, GPU 70%
New system (Radeon), SS 1.6: headroom <0%, CPU 25-30%, GPU 100%
This is an especially interesting case:
On the old system, the frame-rate hovered between 70 and 90 all the time, with dips into the 50s on the back and front straights. Adding 19 AI cars to the session resulted in even more dips into the 50s. The game was heavily CPU-bound, but used most of my GPU as well. Headroom did not change with SS because the CPU was at the limit.
On the new system, frame-rate was 90 all the time, even with the old GTX 1070. Practice with 19 AI cars showed no visible difference in load or performance. Headroom still does not change with lower SS settings where it was 0% before, but does increase where it was above 0% (this didn't happen on the old system). With SS at 1.2 on my old GPU, the game dropped to ASW (45 FPS) as soon as headroom hit 0%, turning the headroom negative (-25%). With ASW disabled, the game stayed at 90 FPS with headroom consistently hovering around 0%, so in this case forcing ASW off via OculusTrayTool might be a good thing! Adding the Radeon GPU to the new system did not change the headroom, because CPU load was the same and still the limiting factor. However, the new GPU has reserves left that can be used for higher SS; it gave out only at SS 1.6.
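The ASW behaviour above is easier to see as a tiny model. This is a simplified sketch of the rule as I understand it, not Oculus's actual code: when the app can't hold 90 FPS, ASW locks it to half-rate (45 FPS) and synthesizes every other frame, while with ASW off the app just drops whatever frames it misses.

```python
# Simplified model (my assumption) of how ASW affects delivered frame-rate.

def delivered_fps(app_frame_ms: float, asw_enabled: bool, hz: int = 90) -> int:
    budget = 1000 / hz
    if app_frame_ms <= budget:
        return hz                          # full rate, all real frames
    if asw_enabled:
        return hz // 2                     # locked to half-rate, rest synthesized
    # ASW off: the app simply misses frames instead of halving.
    return min(hz, int(1000 / app_frame_ms))

print(delivered_fps(11.0, asw_enabled=True))   # holds 90
print(delivered_fps(12.5, asw_enabled=True))   # snaps down to 45
print(delivered_fps(12.5, asw_enabled=False))  # just loses the missed frames
```

This is why Project CARS 2 hovering right at 0% headroom felt better with ASW off: a marginal miss cost a few frames instead of an immediate drop to 45.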
With this game, cores and threads are loaded the same way as with Unreal and CryEngine (four cores highly loaded, four more less loaded, SMT threads almost idle). Unfortunately, this game only addresses four cores properly AFAIK, and Windows' task scheduler is left to spread the threads across eight cores. Even with the new CPU this limited thread usage can become problematic, as two cores were almost 100% loaded at times, effectively still bottlenecking the game. A faster CPU might help here, but it might have to be found over in Intel land. That said, I never experienced dropped frames even with these high core loads.
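This is also why the overall CPU percentage in Task Manager can be misleading. A quick sketch with hypothetical per-core numbers, shaped like what I saw in Project CARS 2 (two cores pinned, the rest lightly loaded), shows how a comfortable-looking aggregate can hide a real bottleneck:

```python
# Hypothetical per-core loads: 8 physical cores followed by 8 SMT threads.
# Values are illustrative, not measured.
per_core = [100, 95, 60, 55, 30, 25, 10, 10] + [5] * 8

aggregate = sum(per_core) / len(per_core)
print(f"aggregate load: {aggregate:.0f}%")  # looks harmless
print(f"busiest core:   {max(per_core)}%")  # but this core sets the pace
```

The aggregate comes out well under 30%, yet the frame-rate is dictated by the one core sitting at 100%, which is exactly the "CPU not fully loaded but still bottlenecked" situation I describe below.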
VRMark and 3DMark results
I'm going from memory here, but since these benchmarks are aware of CPUs with high core and thread counts, the CPU upgrade considerably increased the scores. The Time Spy CPU test went from 9 FPS (old system) to 25 FPS (new system), while the Fire Strike CPU test tripled its frame-rate from 25 to 75. GPU-bound tests did not change much on the Ryzen platform until I also swapped the old GPU for the new Radeon card, which gave ~30% more performance in these tests, mirroring the increase in the games I tested.
VRMark was a slightly different story: the Cyan Room benchmark, which sat right at the "almost 90 FPS" mark on my old system, gained a full 80 FPS with the new Radeon GPU (the platform change made it run smoother and reduced dropped frames, but did not increase frame-rate with the old GPU). The Blue Room 4K benchmark, however, only saw an increase of perhaps 20%. So VRMark is not very representative of real-world VR performance.
Here's the TL;DR: upgrading from the old Sandy Bridge platform to Ryzen 3000 massively increased reliability and smoothness in all games tested, even when the old GPU was kept in the new system and running at its limits. The new CPU and RAM make it far easier for the VR system to manage tracking and compositing, which results in smoother frame-times and fewer frame drops.
The TL;DR on the AMD Radeon RX 5700 XT (Navi) GPU: I had zero problems running all the games I had previously played on an Nvidia GPU, and the card proved very capable, with performance increases of about 30% over my old GTX 1070 in all games tested. This means I can increase super-sampling and graphical detail for better clarity. Games did look good on the GTX 1070, but SS was always tricky to set, with 1.2 usually being the limit. Now I can comfortably go to at least 1.4, with some games even allowing 1.6 and running well. I'd say the RX 5700 XT is on par with a GTX 1080 Ti here, which is still the go-to card for VR.
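It's worth noting why a ~30% faster GPU translates into roughly one SS step: render cost scales roughly with pixel count, and the SS factor multiplies both axes of the render target, so pixel cost grows with the square of SS. A quick check of the steps I mention:

```python
# Pixel cost scales with the square of the supersampling factor,
# since SS multiplies both the width and height of the render target.

def relative_cost(ss_new: float, ss_old: float) -> float:
    """Relative pixel cost of ss_new versus ss_old."""
    return (ss_new / ss_old) ** 2

print(f"{relative_cost(1.4, 1.2):.2f}x")  # ~36% more pixels
print(f"{relative_cost(1.6, 1.2):.2f}x")  # ~78% more pixels
```

Going from 1.2 to 1.4 costs about 36% more pixels, which lines up neatly with the ~30% extra GPU performance; 1.6 needs nearly 80% more, which is why only some games manage it.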
A few more words on CPU performance:
The biggest problem by far that I faced in two years of VR on a Sandy Bridge system was micro-stutters, dropped frames and generally unpredictable performance. People often say "you need a better GPU for demanding VR games" but forget that the CPU platform is just as important for VR. I did not upgrade my i5-2500K for eight years because it can still handle modern games on flat screens fine, with only occasional bottlenecks that are easy to ignore at 60 FPS on a monitor. In VR, these bottlenecks show up much sooner, even with less complex graphics, and even when you can't see them in your CPU load or the performance HUD.
I was practically convinced that it "must be the Oculus software or the games not being optimized properly", since the headroom did not spike when a game dropped frames, and my CPU was often not 100% loaded either. It took me a while to realize that CPU and RAM capability do make a difference in VR even when you think the old platform can handle it, because there is more to a CPU than clock speed. A low CPU load also doesn't mean the CPU can finish its calculations fast enough to deliver 90 FPS without dropped frames. Modern instruction sets, higher IPC and general efficiency make all the difference here, so much so that my Ryzen 3700X, running at a lower clock speed than my old i5, performs better with headroom to spare.
That being said, I was hoping for a bit more thread usage on the Ryzen CPU. The OS always puts a load on all eight cores, but mostly ignores the SMT threads, and the load varies from core to core, which in the case of Project CARS 2 can still mean a potential bottleneck when only one or two cores hit 100%. We need better multithreading awareness in VR engines, and the Oculus runtime could perhaps run on its own thread (not sure if it already does). VRMark loads all threads equally and performs beautifully, and I'm hopeful that new VR games will slowly adapt to today's more capable CPUs.
I hope this report helps some VR fans who want to upgrade for a better experience, and I also hope you enjoyed reading (even if it was a bit messy up there). Enjoy your Rift, VR-heads!