Can we establish the real facts about PC hardware for VR?

LogicalIncrements
Honored Guest
Hi, VR devs!

As a PC hardware enthusiast, I'm trying to ensure that my PC is optimally powered for VR, and I'm in a position where I give build advice to large numbers of other PC gamers, many of whom want their PCs to be ready for Oculus CV1. Because I'm in this position, I've been getting into lots of debates, mainly related to the Oculus minimum hardware specs and the likelihood of multi-GPU support in the near future. I've spent a lot of time talking to fellow gamers, when really I should be talking to actual developers. In that spirit, I hope that my questions are welcome on this forum. Please let me know if I should post this elsewhere. Trust me when I say that if you can help me settle this debate, you will be helping many other PC gamers as a result.

I'd like to focus this thread on the minimum specs quoted by Oculus: a GTX 970 (or R9 290) and an i5-4590.

Just looking at the numbers, the raw graphical power you need to display game images on a CV1 screen is enormous. Taking into account the 1.4x 'eye buffer' on the 2160x1200 resolution, you get a true rendering resolution of 3024x1680 for a VR headset. (Correct me if I'm wrong about the eye buffer -- I assume Oculus and Vive are the same, and I know Vive has a 1.4x render resolution.)

Trying to target 90 FPS/Hz at 3024x1680 requires roughly 90% as much graphical power as rendering 4K (3840x2160) at 60 FPS/Hz. As Oculus have said themselves, it's also more than 3 times the graphical power required to display a game at 1080p at 60 FPS/Hz.

As everyone probably knows, rendering modern games at 60 FPS with 4K resolution requires a huge amount of graphical power. Looking at a modern PC game like Fallout 4, even a GTX 980 Ti averages only 46 FPS at 4K on Ultra settings. Scaling by pixel count, that would translate into about 75 FPS on an Oculus. A GTX 970 averages 33 FPS in Fallout 4 at 4K. That's about 54 FPS on an Oculus, assuming performance translates directly with resolution.
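The throughput comparison above can be sanity-checked with a few lines of arithmetic. This is a rough linear pixel-scaling model; it ignores fixed per-frame CPU and driver costs, so real benchmark translations will differ:

```python
# Rough pixel-throughput comparison for VR vs. 4K vs. 1080p.
# Linear pixel scaling is an approximation: it ignores fixed per-frame
# CPU/driver costs, so real results will differ somewhat.

vr_w, vr_h = round(2160 * 1.4), round(1200 * 1.4)  # 1.4x eye buffer -> 3024x1680
vr_rate  = vr_w * vr_h * 90        # pixels/second at 90 Hz
uhd_rate = 3840 * 2160 * 60        # 4K at 60 Hz
fhd_rate = 1920 * 1080 * 60        # 1080p at 60 Hz

print(f"VR:          {vr_rate / 1e6:.0f} Mpx/s")      # ~457 Mpx/s
print(f"vs 4K@60:    {vr_rate / uhd_rate:.2f}x")      # ~0.92 (roughly 90%)
print(f"vs 1080p@60: {vr_rate / fhd_rate:.2f}x")      # ~3.67 (more than 3x)

def vr_equiv_fps(fps_4k):
    """Translate a 4K benchmark FPS into an equivalent VR FPS by pixel count."""
    return fps_4k * (3840 * 2160) / (vr_w * vr_h)

print(f"980 Ti, 46 FPS at 4K -> {vr_equiv_fps(46):.0f} FPS at VR resolution")  # ~75
print(f"970,    33 FPS at 4K -> {vr_equiv_fps(33):.0f} FPS at VR resolution")  # ~54
```

Note the use of `round()` rather than `int()` for the eye-buffer dimensions: floating-point multiplication by 1.4 can land a hair below the exact value, and truncation would give 3023x1679.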

If Fallout 4 were to have launched with VR support, you would need a much more powerful GPU than a GTX 970 in order to play it at 90 FPS on the Oculus. In fact, you'd probably need a multi-GPU system, but from my understanding, multi-GPU support for VR is still under development. (It seems to me that multi-GPU support will absolutely need to work for AAA gaming at 90 FPS to happen.)

My concern is that many gamers are going to be building PCs with GTX 970s on the promise that they will run games at 90 FPS. But based on the math, many new AAA games will need much more power than a GTX 970 provides to run at those framerates.

When debating this, some people have told me that games designed specifically for Oculus will indeed run at 90 FPS on a GTX 970 because that's the hardware that the developers are targeting. I hope they are right, but I'm still skeptical. You cannot escape the fact that you're trying to pump out 457 million pixels per second (3024x1680x90). Based on my knowledge, a GTX 970 simply cannot perform at that level unless the game is only moderately intensive on graphics. You sacrifice looks, or you sacrifice framerates.

A flagship VR game like EVE Valkyrie looks just as visually impressive as Fallout 4, if not more so. I'll be extremely impressed if gamers are able to achieve 90 FPS with a GTX 970 without significantly dialing down the visual quality of the game. I really, really hope they're able to. But I'm going to remain skeptical until I see it happen.

I imagine two broad classes of games that will be released with VR support:

1. AAA games with really high-end graphics. These will have the resources to optimize for a variety of PC configurations, and will support SLI/CrossFire, because that will be the only way to experience the games at their full potential. These games will require much more graphical power than a GTX 970 can provide.

2. Less expensive games where the focus is on gameplay or story rather than graphics. These games will be less graphically demanding, so it won't matter if they support SLI at all, because even a GTX 970 will be more than enough.

My concern, once again, is that many first-generation adopters are building PCs with a GTX 970 (as suggested) and anticipating that they'll be running the flashiest games at 90 FPS when their Oculus comes out. That's certainly the perception that I see in discussions among the general gaming population.

Can someone shed light on this concern? Am I completely off-base, or will the first generation of games designed specifically for Oculus just have to look very simple graphically compared to today's modern PC games if we want to run them on 970s?

LogicalIncrements
Honored Guest
"gunair" wrote:
Hello from Germany,

A few days ago, NVIDIA released the final 1.0 versions of their GameWorks VR and DesignWorks VR SDKs!

And Epic Games has announced support for GameWorks VR features (including VR SLI!) in an upcoming version of Unreal Engine 4.3.

I hope Unity3D (which I'm working with) does the same very soon!

🙂


All very positive news! I've been reading up on the NVIDIA releases and it sounds like they're making good progress. 😄

galopin
Heroic Explorer
I do not understand NVIDIA on this one! AFR SLI has been around for years, with a very limited set of extension APIs to drive it in a DX11 application. DX12 arrives and brings explicit multi-GPU, so no more hacks; we finally have everything we need to do SFR properly. And now NVIDIA releases a totally different SLI extension for DX11 that more or less emulates DX12, but it's still awkward, and of course not compatible with AMD!

If you want performance, DX12 is the way to go now. VR applications are the perfect candidates: their minimum-spec target machines are powerful enough and likely to be on Windows 10 anyway (DX12 does not mean you need a DX12 GPU).

The multi-viewport mask is interesting, but it could also have been exposed as a DX12 extension, which would even be cleaner (the render-target/viewport ID output semantic is already available outside the GS)!

MichaelNikelsky
Honored Guest
Well, I guess the reasoning from NVIDIA is quite simple: it is much easier to use an additional extension on a DirectX 11 based engine than to rewrite the engine for DirectX 12. Considering that developing a game title usually takes a few years, it will be quite some time before we see DirectX 12 in every newly released game.
Also, Windows 10 currently has about 8% market share; most users are still on Windows 7. So at the moment it doesn't make any sense for a company to release a DirectX 12-only title, since that would massively hurt sales.

galopin
Heroic Explorer
Rift/Morpheus/Vive will arrive in 2016; 2017 is the earliest reasonable window for a game that starts development now. We are talking about AAA games here.

DX12 will also get a boost from Xbox One development, and maintaining a DX11 build will soon become more of a burden. And I do not know a single graphics programmer who wouldn't sell their own mother to be rid of DX11 for good.

As for hurting sales: we are talking about VR, where the market is still unproven, and PC versions of AAA games are more for glory than anything else. Fallout 4 is an outlier that hit 500k concurrent players on PC in the first two weeks (and has already dropped to 200k). The large majority of AAA games sell dozens of times more on console.

An anecdote from two weeks ago: an NVIDIA driver crash in D3D11CreateDevice, specific to Windows 7 without SP1, marked "will not fix"...

shadowfrogger
Heroic Explorer
Optimizing a first-gen VR game will be totally different from optimizing a traditional 4K game. The level-of-detail (LOD) budget will be completely different: the effective per-eye resolution sits between 720p and 1080p, and every developer knows the exact resolution and minimum hardware of each target PC. You can save a lot on LOD and textures, and there are probably many other areas where devs can extract performance when building a VR game. There will be constraints with first-gen VR: devs are going to choose the type of game carefully and won't have as much freedom as in traditional games, though they will have VR-specific freedoms, which are powerful. Even movement speed in a VR FPS will be slower; I don't know what latency savings you can get from that, but it could be possible.

Every made-for-VR game will probably aim for the Oculus store, but there is nothing stopping devs from targeting a more powerful system with the CV1 in mind in the coming years.

It would be better if they knew everyone had DX12, since you'd be able to save a lot of latency there.

MichaelNikelsky
Honored Guest
"galopin" wrote:

As for hurting sales: we are talking about VR, where the market is still unproven


Exactly! That's why we won't see too many VR-only titles in the near future, but rather quite a few "also supports VR" features in games, which will most likely be using DirectX 11. That's why a DirectX 12-only VR SLI solution doesn't make any sense for NVIDIA/AMD.

CogSimGuy
Protege
"LogicalIncrements" wrote:
"MichaelNikelsky" wrote:
VR SLI is quite different from normal AFR-SLI. AFR-SLI does not work with VR or with any application that requires a sync point.
The way AFR-SLI works is by emitting all the commands to render a frame to one GPU, and as soon as all the commands are emitted, switching to the next GPU for the next frame. If you introduce a sync point, as is done in VR to control latency, you basically wait for the first GPU to finish rendering the frame before continuing with the next frame. This makes AFR-SLI pretty useless, since the first GPU is idle after that anyway, so you don't need the second GPU at all. Even if you managed to render the first eye on one GPU and the second eye on the other, you would still be limited by the syncing, since you need to wait for the second GPU to finish rendering. Not syncing at all is even worse, since you might end up with different frames displayed on the left and right eye.

VR SLI (presumably) works completely differently: it basically sends the rendering commands to both GPUs in parallel and just gives them different viewports/cameras/whatever you want to call it. This works even when using sync points (although I would argue they are no longer necessary). But unlike AFR-SLI, which pretty much works without the application doing anything (unless the application does something stupid, of course...), the application needs to be aware of the VR SLI feature so that it does only one rendering pass for both eyes and feeds both GPUs with all the necessary buffers.

Hope this makes it a bit clearer.


Thank you for this awesome explanation, Michael! It makes much more sense to me now.


I'm not sure this explanation is true... if you read the NVIDIA white paper on VR SLI, they talk of broadcasting the draw calls to both GPUs simultaneously, but they also mention the requirement to blit one "eye" frame back over to the other card before pushing to the VR device, which imposes a slowdown.

I have to hedge this post by saying it's a while since I read the white paper but that's what is in my memory...

MichaelNikelsky
Honored Guest
"CogSimGuy" wrote:

I'm not sure this explanation is true... if you read the NVIDIA white paper on VR SLI, they talk of broadcasting the draw calls to both GPUs simultaneously, but they also mention the requirement to blit one "eye" frame back over to the other card before pushing to the VR device, which imposes a slowdown.


I am not sure which part of my explanation is supposed to be wrong!? Yes, you need to blit the result from one card to the other for display. So OK, a sync is also necessary, but it is not a full sync in the sense that the CPU has to wait until the GPU has finished its job. The syncs can be done on the GPUs alone, so they are much more lightweight. And between those sync points you keep both GPUs busy at 100%, something that is not possible with AFR-SLI.

About the slowdown... well, for normal stereo rendering we are seeing a near-perfect factor-of-2 performance increase with no additional perceived latency, so the required blit is very cheap.
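Michael's distinction between AFR-SLI with a sync point and VR SLI broadcast rendering can be sketched as a toy timing model. The per-eye render time and blit cost below are hypothetical numbers chosen purely to illustrate the argument; this is not NVIDIA's actual scheduling:

```python
# Toy timing model contrasting AFR-SLI with a VR-style sync point versus
# VR SLI broadcast rendering. All numbers are illustrative, not measurements.

EYE_MS  = 8.0   # hypothetical time to render one eye's view on one GPU
BLIT_MS = 0.5   # hypothetical cost to blit the second eye across GPUs

def single_gpu_frame_ms():
    # One GPU renders both eyes back to back.
    return 2 * EYE_MS

def afr_with_sync_frame_ms():
    # AFR alternates whole frames between GPUs, but a VR sync point forces
    # frame N to finish before frame N+1 starts, so the GPUs serialize and
    # the second GPU contributes nothing.
    return 2 * EYE_MS

def vr_sli_frame_ms():
    # The command stream is broadcast to both GPUs, each rendering one eye
    # in parallel; one eye is then blitted over to the display GPU.
    return EYE_MS + BLIT_MS

print(f"single GPU: {single_gpu_frame_ms():.1f} ms/frame")  # 16.0
print(f"AFR + sync: {afr_with_sync_frame_ms():.1f} ms/frame")  # 16.0
print(f"VR SLI:     {vr_sli_frame_ms():.1f} ms/frame")  # 8.5
```

Under this model, AFR gains nothing once a sync point serializes the GPUs, while broadcast rendering approaches the factor-of-2 speedup Michael reports, minus the small blit cost.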

galopin
Heroic Explorer
"MichaelNikelsky" wrote:

About the slowdown.....well, for normal stereo rendering we are seeing a near perfect factor of 2 performance increase with no additional perceived latency, so the required blit is very cheap.


This is not totally true. In an SFR context, there are likely things you duplicate so as not to depend on the slow PCIe bus for transfers (this is where the backbuffer blit goes; in 4K AFR it adds 2 ms of extra latency every other frame, so it is not free). If we call S the shadow work, then on a single GPU you do S+L+R, while on an SLI configuration you do S+L and S+R. And S is not the only thing you may have to duplicate to avoid sync points and workload-balancing issues; of course, it will depend on the game's graphics features.

The final reprojection is also likely to always run on the display adapter, to keep the orientation reading identical for the two views.
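The S+L+R accounting above can be expressed as a quick cost model. The shadow and per-eye costs below are hypothetical millisecond figures, used only to show why duplicated work drags the two-GPU speedup below 2x:

```python
# Illustrative cost model for the S+L+R argument: if shadow-map work S must
# be duplicated on both GPUs, the two-GPU speedup falls short of 2x.
# shadow_ms and eye_ms are hypothetical costs, not measurements.

def speedup(shadow_ms, eye_ms):
    single = shadow_ms + 2 * eye_ms   # one GPU does S + L + R in sequence
    dual   = shadow_ms + eye_ms       # each GPU does S + one eye, in parallel
    return single / dual

print(f"{speedup(0.0, 8.0):.2f}x")   # no shared work: a perfect 2.00x
print(f"{speedup(4.0, 8.0):.2f}x")   # duplicated shadows: only ~1.67x
```

Whether this matters in practice depends on how much per-frame work is view-independent, which is exactly the point of contention in this exchange.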

MichaelNikelsky
Honored Guest
"galopin" wrote:

This is not totally true,


Are you really trying to tell me that what I can see and measure in our application is not true? I would rather say it all depends a lot on the application, doesn't it? For us, it is not always necessary to rebuild shadow maps each frame (which, by the way, can also be rendered in SLI if you have 2, 4, 6, ... 2N shadow maps).

Still, I don't understand what you are trying to say. The fact is: AFR-SLI does not work with VR due to the very nature of how AFR-SLI operates; for whatever reason, it is even slower than a single GPU when you try it. That's why they introduced VR SLI, which does work great for stereo rendering.