Can we establish the real facts about PC hardware for VR?

LogicalIncrements
Honored Guest
Hi, VR devs!

As a PC hardware enthusiast, I'm trying to ensure that my PC is optimally powered for VR, and I'm in a position where I give build advice to large numbers of other PC gamers, many of whom want their PCs to be ready for Oculus CV1. Because I'm in this position, I've been getting into lots of debates, mainly related to the Oculus minimum hardware specs and the likelihood of multi-GPU support in the near future. I've spent a lot of time talking to fellow gamers, when really I should be talking to actual developers. In that spirit, I hope that my questions are welcome on this forum. Please let me know if I should post this elsewhere. Trust me when I say that if you can help me settle this debate, you will be helping many other PC gamers as a result.

I'd like to focus this thread on the minimum specs quoted by Oculus: a GTX 970 (or R9 290) and an i5-4590.

Just looking at the numbers, the raw graphical power you need to display game images on a CV1 screen is enormous. Taking into account the 1.4x 'eye buffer' on the 2160x1200 resolution, you get a true rendering resolution of 3024x1680 for a VR headset. (Correct me if I'm wrong about the eye buffer -- I assume Oculus and Vive are the same, and I know Vive has a 1.4x render resolution.)

Targeting 90 FPS at 3024x1680 requires roughly 90% as much graphical power as rendering 4K (3840x2160) at 60 FPS. As Oculus have said themselves, it's also more than 3 times the graphical power required to display a game at 1080p at 60 FPS.
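To put rough numbers on this, here's a quick back-of-the-envelope script. (The 1.4x eye-buffer multiplier is my assumption carried over from the Vive figure, as noted above.)

```python
# Back-of-the-envelope pixel-throughput comparison for the targets in this thread.

def pixels_per_second(width, height, hz):
    """Raw pixels the GPU has to shade per second for a given target."""
    return width * height * hz

vr_cv1 = pixels_per_second(3024, 1680, 90)   # 2160x1200 panel * 1.4 eye buffer, 90 Hz
uhd_60 = pixels_per_second(3840, 2160, 60)   # 4K at 60 Hz
fhd_60 = pixels_per_second(1920, 1080, 60)   # 1080p at 60 Hz

print(f"CV1 target:   {vr_cv1 / 1e6:.1f} Mpx/s")   # ~457.2
print(f"4K @ 60:      {uhd_60 / 1e6:.1f} Mpx/s")   # ~497.7
print(f"CV1 vs 4K:    {vr_cv1 / uhd_60:.0%}")      # ~92%
print(f"CV1 vs 1080p: {vr_cv1 / fhd_60:.1f}x")     # ~3.7x
```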

As everyone probably knows, rendering modern games at 60 FPS at 4K resolution requires a huge amount of graphical power. Looking at a modern PC game like Fallout 4, even a GTX 980 Ti averages only 46 FPS at 4K on Ultra settings. Scaling by pixel count, that would translate into about 75 FPS at the Oculus render resolution. A GTX 970 averages 33 FPS in Fallout 4 at 4K, which works out to about 54 FPS on an Oculus, assuming performance is directly translatable.
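For the curious, the naive scaling I'm doing is just a pixel-count ratio (it assumes the game is purely fill-rate-bound, which is a big simplification):

```python
# Naively scale a measured 4K framerate to the CV1 render resolution by pixel
# count. Assumes rendering cost scales purely with resolution -- a simplification.

UHD_PIXELS = 3840 * 2160   # 8,294,400
CV1_PIXELS = 3024 * 1680   # 5,080,320

def scaled_fps(fps_at_4k):
    """Estimated FPS at the CV1 eye-buffer resolution from a 4K benchmark."""
    return fps_at_4k * UHD_PIXELS / CV1_PIXELS

print(f"GTX 980 Ti: {scaled_fps(46):.0f} FPS")  # ~75
print(f"GTX 970:    {scaled_fps(33):.0f} FPS")  # ~54
```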

If Fallout 4 had launched with VR support, you would need a much more powerful GPU than a GTX 970 to play it at 90 FPS on the Oculus. In fact, you'd probably need a multi-GPU system, but from my understanding, multi-GPU support for VR is still under development. (It seems to me that multi-GPU support will absolutely need to work for AAA gaming at 90 FPS to happen.)

My concern is that many gamers are going to be building PCs with GTX 970s on the promise that the card will run games at 90 FPS. But based on the math, many new AAA games will need much more power than a GTX 970 can provide to run at high framerates.

When debating this, some people have told me that games designed specifically for Oculus will indeed run at 90 FPS on a GTX 970 because that's the hardware that the developers are targeting. I hope they are right, but I'm still skeptical. You cannot escape the fact that you're trying to pump out 457 million pixels per second (3024x1680x90). Based on my knowledge, a GTX 970 simply cannot perform at that level unless the game is only moderately intensive on graphics. You sacrifice looks, or you sacrifice framerates.

A flagship VR game like EVE Valkyrie looks just as visually impressive as Fallout 4, if not more so. I'll be extremely impressed if gamers are able to achieve 90 FPS with a GTX 970 without significantly dialing down the visual quality of the game. I really, really hope they're able to. But I'm going to remain skeptical until I see it happen.

I imagine two broad classes of games that will be released with VR support:

1. AAA games with really high-end graphics. Their developers will have the resources to optimize for a variety of PC configurations and to support SLI/CrossFire, because that will be the only way to experience these games at their full potential. These games will require much more graphical power than a GTX 970 can provide.

2. Less expensive games where the focus is on gameplay or story rather than graphics. These games will be less graphically demanding, so it won't matter if they support SLI at all, because even a GTX 970 will be more than enough.

My concern, once again, is that many first-generation adopters are building PCs with a GTX 970 (as suggested) and anticipating that they'll be running the flashiest games at 90 FPS when their Oculus comes out. That's certainly the perception that I see in discussions among the general gaming population.

Can someone shed light on this concern? Am I completely off-base, or will the first generation of games designed specifically for Oculus just have to look very simple graphically compared to today's modern PC games if we want to run them on 970s?

owenwp
Expert Protege
Pretty much. The increase in cost is more than what you stated, because everything needs to be rendered twice every frame. Just looking at the pixel cost is only half the story. The number of API draw calls per second and vertices per second goes up by 200% compared to normal rendering at 60fps, before you consider the increase in FOV putting more objects in view at one time.

So, depending on where the game is bottlenecked, you are looking at double to triple the cost of 4K at 60fps, and that assumes a perfect world where reducing latency has no effect on framerate. And you have to hit that 90fps target every single time. You can't get away with some parts of the game going a little over and slowing down. That means realistically you need to build in a bit of a safety buffer so you don't get sick every time someone sends you an IM.
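To put toy numbers on it (the draw-call count is made up purely for illustration, not a measurement):

```python
# Stereo 90 Hz vs. mono 60 Hz: CPU-side submission cost and the hard frame budget.
# DRAWS_PER_VIEW is a hypothetical figure, purely for illustration.

MONO_HZ, VR_HZ, EYES = 60, 90, 2
DRAWS_PER_VIEW = 2000                       # draw calls to render the scene once

mono_calls = DRAWS_PER_VIEW * MONO_HZ       # 120,000 calls/s at 60 Hz mono
vr_calls = DRAWS_PER_VIEW * EYES * VR_HZ    # 360,000 calls/s at 90 Hz stereo
print(f"Draw-call throughput: {vr_calls / mono_calls:.0f}x")   # 3x, i.e. +200%

# And the budget is hard: at 90 Hz, every single frame must finish in ~11.1 ms.
budget_ms = 1000 / VR_HZ
print(f"Per-frame budget: {budget_ms:.1f} ms")
```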

So no, current AAA games just won't work in VR with existing hardware without cutting out a lot of detail. And there will never be a time when this is not true, no matter what. When a developer makes a game, they design it to look as good as possible and run on current hardware. If the game could run in a third of the time, they would have put in three times as much detail.

When you design a game for VR specifically, you do whatever you need to hit those specs. It is not hard, even on older hardware. You just need to remove things until it works, same as any platform.

cybereality
Grand Champion
As far as I know, all the made-for-VR games coming to the consumer Oculus Rift will run well on the recommended spec machine (meaning GTX 970-class hardware). Of course, running in above-HD resolution, at 90Hz, and in stereo 3D, is technically demanding. Developers will need to work to optimize their games in order to meet these requirements.

BalorNG
Honored Guest
Er, those are, like, 'minimum system specs'. You know how this goes: they would allow you to run games at playable framerates at MINIMUM settings. Of course, 'minimum' settings might be too distracting for VR, so, say, medium will be 'good enough' and still allow you to play at the recommended FPS. Expecting to be able to run Ultra settings on the minimum suggested specs is a bit optimistic. For some games you'll certainly need two 980s, or preferably some future dual-GPU '1090' Nvidia card.

Anyway, let me hijack the thread a little.

I seem to be getting a good deal on two lightly used GTX 780s for 410 total; rendering-power-wise, a 780 is almost identical to a 970. But this is Kepler, not Maxwell, so I'd be missing out on Multi-Res Shading.

So, is it worth it? Can somebody point me at some REAL tests of Multi-Res Shading?
Otherwise, I think I can expect pretty much 2x scaling with two GPUs due to VR SLI, right?

kojack
MVP
"BalorNG" wrote:
Otherwise, I think I can expect pretty much 2x scaling with two GPUs due to VR SLI, right?

VR SLI requires programs to be specifically written for it. No existing rift software would support it. Future stuff might, depending on the engine they use.

LogicalIncrements
Honored Guest
"kojack" wrote:
"BalorNG" wrote:
Otherwise, I think I can expect pretty much 2x scaling with two GPUs due to VR SLI, right?

VR SLI requires programs to be specifically written for it. No existing rift software would support it. Future stuff might, depending on the engine they use.


Why isn't VR SLI support more common yet? To a non-developer like myself, SLI support seems like a no-brainer for VR. I'd love to better understand the complexities involved, if you can point me to anything.

kojack
MVP
"LogicalIncrements" wrote:
Why isn't VR SLI support more common yet? To a non-developer like myself, SLI support seems like a no-brainer for VR. I'd love to better understand the complexities involved, if you can point me to anything.

1 - It only just came out (Nvidia have been advertising it for a while, but only released it recently).
2 - Oculus has recommended avoiding SLI for a couple of years now (traditional AFR SLI adds latency, which is bad for VR), and it will take time for people to switch their mindset.
3 - Not every developer has an SLI rig to test with (it's dangerous to add support for something you can't directly test).
4 - Roughly 95% of VR developers are using Unity or Unreal. They can't just add VR SLI support themselves; they need to wait for new releases of Unity/Unreal that support it.

MichaelNikelsky
Honored Guest
VR SLI is quite different from normal AFR SLI, and AFR SLI does not work with VR or with any application that requires a sync point.
AFR SLI works by emitting all the commands to render a frame to one GPU; as soon as all the commands are emitted, it switches to the next GPU for the next frame. If you introduce a sync point, as VR does to control latency, you basically wait for the first GPU to finish rendering its frame before continuing with the next one. That makes AFR SLI pretty useless: after the sync, the first GPU is idle anyway, so you don't need the second GPU at all. Even if you managed to render the first eye on one GPU and the second eye on the other, you would still be limited by the sync, since you need to wait for the second GPU to finish. Not syncing at all is even worse, since you might end up with different frames displayed to the left and right eyes.

VR SLI (presumably) works completely differently: it sends the rendering commands to both GPUs in parallel and just gives each a different viewport/camera/whatever you want to call it. This works even with sync points (although I would argue they are no longer necessary). But unlike AFR SLI, which pretty much works without the application doing anything (unless the application does something stupid, of course), the application needs to be aware of VR SLI so that it does only one rendering pass for both eyes and feeds the GPUs all the necessary buffers.
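A toy timeline model of the difference (the 5 ms per-eye cost is purely illustrative):

```python
# Why a sync point kills AFR SLI but not VR SLI. Assume each eye costs 5 ms of
# GPU time and VR syncs (waits for the finished frame) before the next frame.

EYE_COST_MS = 5.0

def afr_frame_time_ms():
    # AFR + sync: GPU 0 renders both eyes of frame N, and the sync forces us to
    # wait for it before frame N+1 -- GPU 1 never overlaps, so it adds nothing.
    return 2 * EYE_COST_MS

def vr_sli_frame_time_ms():
    # VR SLI: the same command stream is broadcast to both GPUs with different
    # view parameters, so the eyes render in parallel and a frame costs
    # max(left eye, right eye).
    return max(EYE_COST_MS, EYE_COST_MS)

print(f"AFR SLI + sync: {afr_frame_time_ms():.1f} ms/frame")    # 10.0 -- no gain
print(f"VR SLI:         {vr_sli_frame_time_ms():.1f} ms/frame")  # 5.0 -- ~2x
```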

Hope this makes it a bit clearer.

gunair
Explorer
Hello from Germany,

a few days ago, Nvidia released the final 1.0 versions of their GameWorks VR and DesignWorks VR SDKs!

And Epic Games has announced support for GameWorks VR features (including VR SLI!) in an upcoming version of Unreal Engine 4.3.

I hope Unity3D, which I'm working with, does the same very soon!

🙂

LogicalIncrements
Honored Guest
"MichaelNikelsky" wrote:
VR SLI is quite different from normal AFR SLI, and AFR SLI does not work with VR or with any application that requires a sync point.
AFR SLI works by emitting all the commands to render a frame to one GPU; as soon as all the commands are emitted, it switches to the next GPU for the next frame. If you introduce a sync point, as VR does to control latency, you basically wait for the first GPU to finish rendering its frame before continuing with the next one. That makes AFR SLI pretty useless: after the sync, the first GPU is idle anyway, so you don't need the second GPU at all. Even if you managed to render the first eye on one GPU and the second eye on the other, you would still be limited by the sync, since you need to wait for the second GPU to finish. Not syncing at all is even worse, since you might end up with different frames displayed to the left and right eyes.

VR SLI (presumably) works completely differently: it sends the rendering commands to both GPUs in parallel and just gives each a different viewport/camera/whatever you want to call it. This works even with sync points (although I would argue they are no longer necessary). But unlike AFR SLI, which pretty much works without the application doing anything (unless the application does something stupid, of course), the application needs to be aware of VR SLI so that it does only one rendering pass for both eyes and feeds the GPUs all the necessary buffers.

Hope this makes it a bit clearer.


Thank you for this awesome explanation, Michael! It makes much more sense to me now.