Can we establish the real facts about PC hardware for VR?

edited December 2015 in PC Development
Hi, VR devs!

As a PC hardware enthusiast, I'm trying to ensure that my PC is optimally powered for VR, and I'm in a position where I give build advice to large numbers of other PC gamers, many of whom want their PCs to be ready for Oculus CV1. Because I'm in this position, I've been getting into lots of debates, mainly related to the Oculus minimum hardware specs and the likelihood of multi-GPU support in the near future. I've spent a lot of time talking to fellow gamers, when really I should be talking to actual developers. In that spirit, I hope that my questions are welcome on this forum. Please let me know if I should post this elsewhere. Trust me when I say that if you can help me settle this debate, you will be helping many other PC gamers as a result.

I'd like to focus this thread on the minimum specs quoted by Oculus: a GTX 970 (or R9 290) and an i5-4590.

Just looking at the numbers, the raw graphical power you need to display game images on a CV1 screen is enormous. Taking into account the 1.4x 'eye buffer' on the 2160x1200 resolution, you get a true rendering resolution of 3024x1680 for a VR headset. (Correct me if I'm wrong about the eye buffer -- I assume Oculus and Vive are the same, and I know Vive has a 1.4x render resolution.)

Trying to target 90 FPS/Hz at 3024x1680 requires roughly 90% as much graphical power as rendering 4K (3840x2160) at 60 FPS/Hz. As Oculus have said themselves, it's also more than 3 times the graphical power required to display a game at 1080p at 60 FPS/Hz.
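
For anyone who wants to sanity-check that arithmetic, here is the comparison as a tiny program (pixel throughput only; it ignores draw-call and vertex costs entirely):

    #include <cstdio>

    int main() {
        const double rift  = 3024.0 * 1680 * 90;  // CV1 eye-buffer resolution at 90 Hz
        const double uhd60 = 3840.0 * 2160 * 60;  // 4K at 60 Hz
        const double fhd60 = 1920.0 * 1080 * 60;  // 1080p at 60 Hz

        std::printf("Rift : %.0f Mpixels/s\n", rift / 1e6);     // ~457
        std::printf("4K60 : %.0f Mpixels/s\n", uhd60 / 1e6);    // ~498
        std::printf("Rift vs 4K60   : %.2fx\n", rift / uhd60);  // ~0.92
        std::printf("Rift vs 1080p60: %.1fx\n", rift / fhd60);  // ~3.7
        return 0;
    }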

As everyone probably knows, rendering modern games at 60 FPS with 4K resolution requires a huge amount of graphical power. Looking at a modern PC game like Fallout 4, even a GTX 980 Ti averages only 46 FPS at 4K on Ultra settings. That would translate into about 77 FPS on an Oculus. A GTX 970 averages 33 FPS in Fallout 4 at 4K. That's about 55 FPS on an Oculus, assuming it's directly translatable.

If Fallout 4 were to have launched with VR support, you would need a much more powerful GPU than a GTX 970 in order to play it at 90 FPS on the Oculus. In fact, you'd probably need a multi-GPU system, but from my understanding, multi-GPU support for VR is still under development. (It seems to me that multi-GPU support will absolutely need to work for AAA gaming at 90 FPS to happen.)

My concern is that many gamers are going to be building PCs with GTX 970s on the promise that it will run games at 90 FPS. But based on the math, many new AAA games will need much more power than a GTX 970 to run at high framerates.

When debating this, some people have told me that games designed specifically for Oculus will indeed run at 90 FPS on a GTX 970 because that's the hardware that the developers are targeting. I hope they are right, but I'm still skeptical. You cannot escape the fact that you're trying to pump out 457 million pixels per second (3024x1680x90). Based on my knowledge, a GTX 970 simply cannot perform at that level unless the game is only moderately intensive on graphics. You sacrifice looks, or you sacrifice framerates.

A flagship VR game like EVE Valkyrie looks just as visually impressive as Fallout 4, if not more so. I'll be extremely impressed if gamers are able to achieve 90 FPS with a GTX 970 without significantly dialing down the visual quality of the game. I really, really hope they're able to. But I'm going to remain skeptical until I see it happen.

I imagine two broad classes of games that will be released with VR support:

1. AAA games with really high-end graphics. These will have the resources to optimize for a variety of PC configurations, and will support SLI/CrossFire, because that will be the only way to experience the games at their full potential. These games will require much more graphical power than a GTX 970 can provide.

2. Less expensive games where the focus is on gameplay or story rather than graphics. These games will be less graphically demanding, so it won't matter if they support SLI at all, because even a GTX 970 will be more than enough.

My concern, once again, is that many first-generation adopters are building PCs with a GTX 970 (as suggested) and anticipating that they'll be running the flashiest games at 90 FPS when their Oculus comes out. That's certainly the perception that I see in discussions among the general gaming population.

Can someone shed light on this concern? Am I completely off-base, or will the first generation of games designed specifically for Oculus just have to look very simple graphically compared to today's modern PC games if we want to run them on 970s?
Content Manager for logicalincrements.com

Comments

  • owenwp Posts: 668 Oculus Start Member
    Pretty much. The increase in cost is more than what you stated, because everything needs to be rendered twice every frame. Just looking at the pixel cost is only half the story. The number of API draw calls per second and vertices per second goes up by 200% compared to normal rendering at 60fps, before you consider the increase in FOV putting more objects in view at one time.

    So, depending on how the game is bottlenecked, you are looking at double to triple the cost of 4K 60fps, and that is assuming a perfect world where reducing latency has no effect on framerate. And you have to hit that 90fps target every single time; you can't get away with some parts of the game going a little over and slowing down. That means realistically you need to put in a bit of a safety buffer so you don't get sick every time someone sends you an IM.

    So no, current AAA games just won't work in VR with existing hardware without cutting out a lot of detail. And there will never be a time when this is not true, no matter what. When a developer makes a game, they design it to look as good as possible and run on current hardware. If the game could run in a third of the time, they would have put in three times as much detail.

    When you design a game for VR specifically, you do whatever you need to hit those specs. It is not hard, even on older hardware. You just need to remove things until it works, same as any platform.
    Sanzaru - Programmer
  • cybereality Posts: 26,156 Oculus Staff
    As far as I know, all the made-for-VR games coming to the consumer Oculus Rift will run well on the recommended spec machine (meaning GTX 970-class hardware). Of course, running in above-HD resolution, at 90Hz, and in stereo 3D, is technically demanding. Developers will need to work to optimize their games in order to meet these requirements.
    AMD Ryzen 7 1800X | MSI X370 Titanium | G.Skill 16GB DDR4 3200 | EVGA SuperNOVA 1000 | Corsair Hydro H110i
    Gigabyte RX Vega 64 x2 | Samsung 960 Evo M.2 500GB | Seagate FireCuda SSHD 2TB | Phanteks ENTHOO EVOLV
  • Er, those are, like, 'minimum system specs'. You know how this goes - they would allow you to run the games at playable framerates at MINIMUM settings. Of course, 'minimum' settings might be too distracting for VR, so, say, medium will be 'good enough' and still allow you to play at the recommended FPS. Expecting to be able to run Ultra settings on the minimum suggested specs is a bit optimistic. For some games you'll certainly need two 980s, or preferably some future dual-GPU '1090' Nvidia card.

    Anyway, let me hijack the thread a little.

    I seem to be getting a good deal on two lightly used GTX 780s for a price of 410 total; rendering-power-wise, the 780 is almost identical to the 970. But this is Kepler, not Maxwell, so I am missing out on Multi-Res Shading.

    So, is it worth it? Can somebody point me at some REAL tests of Multi-Res Shading?
    Otherwise, I think I can expect pretty much 2x scaling with two GPUs due to VR SLI, right?
  • kojack Posts: 4,648 Volunteer Moderator
    BalorNG wrote:
    Otherwise, I think I can expect pretty much 2x scaling with two GPUs due to VR SLI, right?
    VR SLI requires programs to be specifically written for it. No existing rift software would support it. Future stuff might, depending on the engine they use.
  • kojack wrote:
    BalorNG wrote:
    Otherwise, I think I can expect pretty much 2x scaling with two GPUs due to VR SLI, right?
    VR SLI requires programs to be specifically written for it. No existing rift software would support it. Future stuff might, depending on the engine they use.

    Why is VR SLI support not more common yet? To a general audience non-developer like myself, SLI support seems like a no-brainer for VR. I'd love to better understand the complexities behind accomplishing that, if you can point me to anything.
    Content Manager for logicalincrements.com
  • kojack Posts: 4,648 Volunteer Moderator
    Why is VR SLI support not more common yet? To a general audience non-developer like myself, SLI support seems like a no-brainer for VR. I'd love to better understand the complexities behind accomplishing that, if you can point me to anything.
    1 - it only just came out (nvidia have been advertising it for a while, but have only just released it recently)
    2 - sli has been recommended by oculus as something to avoid for a couple of years now (it's bad for vr), it will take time for people to switch their mindset.
    3 - Not every developer has sli to test with (dangerous to add support for something you can't directly test).
    4 - 95% of vr developers are using Unity or Unreal. They can't just add vr sli support, they need to wait for new releases of Unity/Unreal that support it.
  • MichaelNikelsky Posts: 62
    Brain Burst
    VR SLI is quite different from normal AFR-SLI. AFR-SLI does not work with VR or any application that requires a sync point.
    The way AFR-SLI works is by emitting all the commands to render a frame to one GPU and, as soon as all the commands are emitted, switching to the next GPU for the next frame. If you introduce a sync point, as is done for VR to control latency, you basically wait for the first GPU to finish rendering the frame before continuing with the next frame. This makes AFR-SLI pretty useless, since the first GPU is idle after that anyway, so you don't need the second GPU at all. Even if you manage to render the first eye on one GPU and the second eye on the other GPU, you are still limited by the syncing, since you need to wait for the second GPU to finish rendering. However, not syncing is even worse, since you might end up with different frames displayed on the left and right eyes.

    VR SLI (presumably) works completely differently: it basically sends the rendering commands to both GPUs in parallel and just gives them different viewports/cameras/whatever you want to call it. This works even when using sync points (although I would argue they are no longer necessary). But unlike AFR-SLI, which pretty much works without doing anything in the application (unless the application does something stupid, of course), the application needs to be aware of the VR SLI feature so that it only does one rendering pass for both eyes and feeds both GPUs with all the necessary buffers.

    Hope this makes it a bit clearer.
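
    If it helps, here is a toy timing model of that argument (the millisecond values are invented purely for illustration; only the structure matters):

    #include <cstdio>

    int main() {
        const double eye_ms  = 4.0;  // hypothetical cost of rendering one eye
        const double blit_ms = 0.7;  // hypothetical cross-GPU copy of the second eye

        const double single_gpu = 2 * eye_ms;        // one GPU renders both eyes back to back
        const double afr_synced = 2 * eye_ms;        // AFR plus a VR sync point: the second GPU sits idle
        const double vr_sli     = eye_ms + blit_ms;  // eyes render in parallel, then one eye is blitted over

        std::printf("single GPU : %.1f ms\n", single_gpu);  // 8.0
        std::printf("AFR + sync : %.1f ms\n", afr_synced);  // 8.0
        std::printf("VR SLI     : %.1f ms\n", vr_sli);      // 4.7
        return 0;
    }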
  • gunair Posts: 72
    Hiro Protagonist
    Hello from Germany,

    A few days ago, NVIDIA released their GameWorks VR and DesignWorks VR SDKs - final version 1.0!

    And Epic Games has announced support for GameWorks VR features (including VR SLI!) in an upcoming version of Unreal Engine 4.3.

    I hope Unity3D (which I'm working with) does the same very soon!

    :)
    LocomotionVR.de - Virtual reality in eMotion

    http://www.locomotionVr.de

    Organizer of HANNOVER VIRTUAL REALITY MEETUP & Member of COLOGNE VIRTUAL REALITY MEETUP

    DK2 Status: OWNER SINCE 04.AUG.14 Ordered: Mar 19, 2014 09:52 AM PDT
  • MichaelNikelsky wrote:
    VR SLI is quite different from normal AFR-SLI. AFR-SLI does not work with VR or any application that requires a sync point. [...]

    Thank you for this awesome explanation, Michael! It makes much more sense to me now.
    Content Manager for logicalincrements.com
  • gunair wrote:
    Hello from Germany,

    A few days ago, NVIDIA released their GameWorks VR and DesignWorks VR SDKs - final version 1.0! And Epic Games has announced support for GameWorks VR features (including VR SLI!) in an upcoming version of Unreal Engine 4.3. [...]

    All very positive news! I've been reading up on the NVIDIA releases and it sounds like they're making good progress. :D
    Content Manager for logicalincrements.com
  • galopin Posts: 355
    Nexus 6
    I do not understand NVIDIA on this one! AFR SLI has been around for years, with a very limited extension API to drive it in a DX11 application. DX12 arrives and brings explicit multi-GPU, so no more cruft; we have everything we need to do SFR properly. And now NVIDIA releases a totally different SLI extension for DX11 that more or less emulates DX12 on this point, but it is still awkward and, of course, not compatible with AMD!

    If you want performance, DX12 is the way to go now. VR applications are the perfect candidates, as their target minimum-spec computers are powerful enough and likely to be on Windows 10 anyway (DX12 does not mean you need a DX12 GPU).

    The multi-viewport mask is interesting, but it could also have been exposed with a DX12 extension, which would even be cleaner (the target/viewport ID output semantic is already available outside of the GS)!
  • MichaelNikelsky Posts: 62
    Brain Burst
    Well, I guess the reasoning from NVIDIA is quite simple: it is much easier to use an additional extension on a DirectX 11 based engine than to rewrite the engine for DirectX 12. Considering that developing a game title usually takes a few years, it will take quite some time before we see DirectX 12 in every newly released game.
    Also, Windows 10 currently has about 8% market share; most users are still on Windows 7. So at the moment it doesn't make any sense for a company to release a DirectX 12 only title, since it would massively hurt sales.
  • galopin Posts: 355
    Nexus 6
    Rift/Morpheus/Vive will arrive in 2016. 2017 is the earliest reasonable target for games that start their development now. We are talking about AAA games here.

    DX12 will also get a boost from Xbox One development, and it will soon be more of a burden to maintain a DX11 build. And I do not know a single graphics programmer who would not sell their own mother to be done with DX11 and say farewell.

    And about hurting sales: we are talking about VR, the market is still to be proven, and PC versions of AAA games are more for glory than anything else. Fallout 4 is an outlier that did 500k online players on PC in the first two weeks (and has already dropped to 200k). The large majority of AAA games sell dozens of times more on console.

    An anecdote from two weeks ago: an NVIDIA driver crash in D3D11CreateDevice -> specific to Windows 7 without SP1 -> will not fix...
  • shadowfrogger Posts: 502
    Trinity
    Optimizing first-gen VR games will be totally different than optimizing a traditional 4K game. The level of detail (LOD) will be completely different, since the effective per-eye resolution sits between 720p and 1080p, and every developer knows the exact resolution and minimum power each PC has. You can save a lot on LOD and textures, and there are probably a lot of other areas where devs can extract performance when building a VR game. There will be constraints with first-gen VR: devs are going to choose the type of game carefully and won't have as much freedom as with traditional games. They will, however, have VR freedom, which is powerful. Even movement speed in a VR FPS will be slower; I don't know what latency savings you can get from that, but it could be possible.

    Every made-for-VR game will probably aim for the Oculus store, but there is nothing stopping devs from targeting a more powerful system, with the CV1 in mind, in upcoming years.

    It would be better if they knew everyone had DX12, since you'd be able to save a lot of latency there.
    Visit my amateur homegrown indie game company website!
    http://www.gaming-disorder.com/
  • MichaelNikelsky Posts: 62
    Brain Burst
    galopin wrote:
    And about hurting sales: we are talking about VR, the market is still to be proven

    Exactly! That's why we won't see too many VR-only titles in the near future, but quite a few "also supports VR" features in games, which will most likely be using DirectX 11. That's why a DirectX 12 only VR SLI solution doesn't make any sense for NVIDIA/AMD.
  • CogSimGuy Posts: 19
    NerveGear
    MichaelNikelsky wrote:
    VR SLI is quite different from normal AFR-SLI. AFR-SLI does not work with VR or any application that requires a sync point. [...]

    I'm not sure this explanation is true... if you read the NVIDIA white paper on VR SLI, they talk about broadcasting the draw calls to both GPUs simultaneously; however, they also talk about the requirement to blit one "eye" frame back over to the other card before pushing it to the VR device, which imposes a slowdown.

    I have to hedge this post by saying it's been a while since I read the white paper, but that's what is in my memory...
  • MichaelNikelsky Posts: 62
    Brain Burst
    CogSimGuy wrote:
    I'm not sure this explanation is true...if you read the nVidia white paper on VR SLI they talk of broadcasting the draw calls to both GPUs simultaneously however they also talk of the requirement to blit one "eye" frame back over to the other card before pushing to the VR device which imposes a slowdown.

    I am not sure what part of my explanation is supposed to be wrong!? Yes, you need to blit the result from one card to the other for display. So OK, there is also a sync necessary, but it is not a full sync in the sense that the CPU has to wait until the GPU has finished its job. The syncs can be made on the GPUs only, so they are much more lightweight. And between those sync points you keep both GPUs busy at 100%, something that is not possible with AFR-SLI.

    About the slowdown... well, for normal stereo rendering we are seeing a near-perfect factor-of-2 performance increase with no additional perceived latency, so the required blit is very cheap.
  • galopin Posts: 355
    Nexus 6
    MichaelNikelsky wrote:
    About the slowdown... well, for normal stereo rendering we are seeing a near-perfect factor-of-2 performance increase with no additional perceived latency, so the required blit is very cheap.

    This is not totally true. In an SFR context, there are likely things you duplicate so you do not depend on the slow PCIe bus for transfer (this is where the back buffer goes; in 4K AFR it is 2ms of extra latency every other frame, not free). If we call S the shadows, then on a single GPU you do S+L+R, while on an SLI configuration you will do S+L and S+R. And S is not the only thing you may have to duplicate to avoid sync points and workload-balancing issues; of course, it will depend on the game's graphics features.

    The final reprojection is also likely to always run on the display adapter, to keep the orientation reading identical for the two views.
  • MichaelNikelsky Posts: 62
    Brain Burst
    galopin wrote:
    This is not totally true,

    Are you really trying to tell me that what I can see and measure in our application is not true? I would rather say it all depends a lot on the application, doesn't it? For us, it is not always necessary to rebuild shadow maps each frame (which, by the way, can also be rendered in SLI in case you have 2, 4, 6, ... 2N shadow maps).

    Still, I don't understand what you are trying to say. Fact is: AFR-SLI does not work with VR due to the very nature of how AFR-SLI operates; it is even slower than a single GPU, for whatever reason, when you try it. That's why they introduced VR SLI, which does work great for stereo rendering.
  • galopin Posts: 355
    Nexus 6
    I am not arguing about AFR; it is a way to parallelize that is bad for latency. In a classical context, if there are no dependencies between frames and latency matters very little, it will be the way to achieve the best frame rate, sacrificing latency. For VR, rendering one eye on each GPU is definitely the better solution.

    What I am saying is that the nearly perfect x2 is probably an illusion. In a split-frame context, even if you have zero dependencies between the frames, you have one eye to transfer; this is 1080x1200, or, if we follow the 1.5x recommendation, 1620x1800 at 32 bits. A buffer that size, on a PCIe 3.0 x16 link running at 16GB/s, needs 0.7ms. A VR application running at 90fps has a budget of 11.11ms, so that means a hole of at least 6%. And you are of course not alone on the bus: dynamic buffers, texture streaming, ...

    In practice I can imagine 10-15% of waste, so 1-1.5ms in an SLI context, which is enough to say the nearly perfect x2 does not exist. And what about a 4-way SLI setup that would split an eye between two GPUs...
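
    For reference, here is the transfer-time arithmetic above as a tiny program (my own rough assumptions: a 16GB/s PCIe 3.0 x16 link and a 32-bit 1620x1800 eye buffer):

    #include <cstdio>

    int main() {
        const double eye_bytes   = 1620.0 * 1800 * 4;  // one eye at 1620x1800, 32 bits per pixel
        const double bus_bytes_s = 16e9;               // assumed ~16 GB/s PCIe 3.0 x16 link
        const double frame_ms    = 1000.0 / 90.0;      // 11.11 ms budget at 90 Hz

        const double copy_ms = eye_bytes / bus_bytes_s * 1000.0;  // ~0.73 ms
        std::printf("copy: %.2f ms (%.1f%% of the %.2f ms frame)\n",
                    copy_ms, 100.0 * copy_ms / frame_ms, frame_ms);
        return 0;
    }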

    If you want to see what happens under the hood, you can use GPUView, part of the Windows Performance Toolkit.

    My favorite way to trigger it is by code:
    Start the capture:
    // Launch GPUView's log.cmd on a background thread to start the ETW trace.
    std::thread([] { ShellExecute(nullptr, "open", "C:\\Program Files (x86)\\Windows Kits\\10\\Windows Performance Toolkit\\gpuview\\log.cmd", NULL, NULL, SW_HIDE); }).detach();

    Stop the capture (log2.cmd is a copy of log.cmd with "SET TLOG=NORMAL" in it):
    // Running the modified copy stops the logging session and merges the trace.
    std::thread([] { ShellExecute(nullptr, "open", "C:\\Program Files (x86)\\Windows Kits\\10\\Windows Performance Toolkit\\gpuview\\log2.cmd", NULL, NULL, SW_HIDE); }).detach();

    This will generate a merged.etl file in the app folder, to open with gpuview.exe. You will see the command queues, hardware queues and everything (sadly more than everything; that tool is harsh). This is the only tool where you can see the effect of SetMaxFrameLatency, for example.
  • galopin Posts: 355
    Nexus 6
    You can see GPUView output for the VR SLI sample below; I ran it on a Titan X SLI setup. The DK2 refreshes at 75Hz; the blue lines are the desktop 60Hz refresh, and you can see the associated flip queue.

    An eye takes 1.2ms to render, and the two run in parallel. Then we have an almost 3ms copy; that is definitely higher than expected, but it matches the sample's on-screen timing report. In the past I have seen a 10ms copy in 4K, but that was in AFR; I assume the driver decides to lower the bandwidth priority in favor of other work, since the buffer is only needed 16ms later anyway.

    We can also see a small footer on the frame from the Oculus driver, which has to run on the same adapter; it is a 0.6ms overhead for my GPU.

    So out of the 11.11ms budget to run at 90fps, NVIDIA + Oculus take 3.3ms; that's quite awful. Note that the next NVIDIA GPU generation is supposed to have more bandwidth for cross-adapter communication, something like 60GB/s, because yes, this is the bottleneck right now.

    At the bottom of the screen, you can see the queuing from the app; this is where you will see the SetMaxFrameLatency API of the DXGI device in action. Lowering the latency means less queuing, at the risk of the GPU starving for data at some point; by default, DXGI lets the CPU run ahead by up to 3 frames.


    [Image: GPUView capture of the VR SLI sample]
  • MichaelNikelsky Posts: 62
    Brain Burst
    I think you are missing one point: in contrast to AFR-SLI, not only is the GPU load reduced, but the CPU load is also pretty much cut in half. Especially in scenes with lots of draw calls/bindings, this can make a huge difference. That is probably why we see the near-perfect scaling with VR SLI (our scenes are usually pretty huge, something around 25 million visible triangles per frame in several thousand geometries and hundreds of materials).

    Also: I talked about normal stereo rendering for good reason: the sync you do in VR hurts performance a lot. In that case the cost of the copy is relevant, which is probably the reason why we only see about a factor-of-1.8 performance increase when using the Oculus.
    However, if you don't need to sync (as in normal stereo rendering), the cost is irrelevant. Yes, the latency before you see the final image does increase, but since you can do other work in the meantime, the perceived performance will not be impacted.
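
    As a rough illustration of where a factor of about 1.8 (instead of the ideal 2) could come from, here is a small sketch with purely made-up numbers:

    #include <cstdio>

    int main() {
        const double eye_ms = 4.0;  // hypothetical cost of rendering one eye
        const double overheads_ms[] = { 0.0, 0.25, 0.5, 1.0 };  // sync/copy cost on the critical path

        // Two-GPU speedup if one eye costs T and the sync/blit adds O on the critical path: 2T / (T + O).
        for (double o : overheads_ms)
            std::printf("overhead %.2f ms -> speedup %.2fx\n", o, 2 * eye_ms / (eye_ms + o));
        return 0;
    }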

    4-way SLI is a completely different story, however (I won't even start on 3-way SLI). I guess you would need to come up with something quite different for such a configuration to get a benefit (geometry splitting? depth compositing?). But since you can probably count the number of people with a 4-way SLI system on one hand, I would not worry too much about that at the moment.
  • galopin Posts: 355
    Nexus 6
    Oh, I missed the non-VR stereo part and the 1.8x VR-only scaling, oops. It is possible that the driver does something smarter; you know the tool to check for yourself now anyway :)

    The CPU cost is irrelevant, since DX12 cuts it by crazy proportions and also allows graphics pipelines that run far more logic on the GPU side. Yes, DX12 does not have the broadcasting, but I do not think that is a problem.

    And 3/4-way SLI is a bonus. I work on games that do not target 3D in the first place; the next one will do SLI SFR to lower the latency for sure, with each GPU rendering a sub-rectangle of the view, probably with some overlapping margin in order to do downsampling/blurring without the need for a copy. If you can do 2, you can do 3 or 4 :)

    Using this extension will also cut you off from tools like RenderDoc or VSGD (maybe Nsight will support it one day); on complex games, losing graphics debugging tools is enough of a no-go.
  • AntDX3162 Posts: 839
    Trinity
    kojack wrote:
    BalorNG wrote:
    Otherwise, I think I can expect pretty much 2x scaling with two GPUs due to VR SLI, right?
    VR SLI requires programs to be specifically written for it. No existing rift software would support it. Future stuff might, depending on the engine they use.


    Probably requires software built against Oculus SDK 1.0.0.
    facebook.com/AntDX316
  • owenwp Posts: 668 Oculus Start Member
    VR SLI can be implemented right now by anyone; NVIDIA has released the SDK for it. It has nothing to do with the Oculus SDK.
    Sanzaru - Programmer