Tricks to achieve high framerate

raidho36
Explorer
So it's a known problem already that 60 fps is way too low; this alone causes serious perceptual problems such as smearing and strobing (and judder in between them), and reducing one of them only ramps up the severity of the other, so the real solution here is a physically higher update rate, preferably above 300 fps.

But there are more problems here:
First off, there's no display on the market with such a high update rate yet. But it's achievable, given enough development money.
The second problem is GPU power: if we want to render a Rift stereo scene at 300 fps we need to cut the graphics down severely, to something like year-2003 level. That's acceptable, of course, but it's 2013 and people really do expect modern graphics; we can't just expect everyone to stock up on 4x Titans.
And the biggest problem is bandwidth - FullHD at 120 fps already takes about 8 Gbps of throughput (correct me if I'm wrong; a rough calculation is sketched below). Getting much more bandwidth than that means developing whole new hardware standards. But getting GPU manufacturers to develop a VR-enabled GPU standard is the way to go anyway, so that we'd have VR GPUs on the market that use special ultra-high-bandwidth cables and allow for extreme framerates, preferably with the option to use tricks (such as the one explained below) to get high FPS with a reasonably good picture at the expense of a small amount of artifacts, and with the option to choose between low FPS with rich graphics for, say, slow-paced adventure games, and extreme FPS with poorer graphics where FPS is the #1 priority, such as racing simulators. An option for SLI-stacking them is pretty much mandatory, too. But obviously none of this happens until the Rift gets massive adoption; I expect GPU manufacturers to get moving on it once the Rift sells several tens of millions of units worldwide.
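As a back-of-the-envelope check of that bandwidth figure (my own arithmetic, assuming 24-bit color and roughly 25% blanking/encoding overhead, neither of which comes from any spec):

# Rough link bandwidth estimate; bits-per-pixel and overhead are assumptions.
def link_bandwidth_gbps(width, height, fps, bits_per_pixel=24, overhead=1.25):
    return width * height * fps * bits_per_pixel * overhead / 1e9

print(link_bandwidth_gbps(1920, 1080, 120))  # ~7.5 Gbps, close to the 8 Gbps quoted above
print(link_bandwidth_gbps(1920, 1080, 300))  # ~18.7 Gbps if we wanted 300 fps at FullHD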

But having a high framerate really is that crucial, though. What I'm thinking of is using tricks. In particular, using current GPUs and cables, we can render the scene at 60 fps, generate a 4-bit per-pixel displacement map at the same time, and send them to the Rift one after another at 120 fps. Internally, the Rift would interpolate the image using the displacement map 4 times between frames (to actually gain something from the displacement map as opposed to plain 120 fps rendering), so every 4 physical frames the graphics re-syncs to a freshly rendered frame, and therefore it shouldn't yield many artifacts; at least that's what John Carmack implies in his article about achieving low latency. I see the time plot as follows:
| graphics scanout    | displacement scanout|
| old+int2 | old+int3 | new      | new+int1 |
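Here's a tiny toy model of that schedule (all names are mine and purely illustrative, nothing here is a real Rift API; it also simplifies away the scanout offset shown in the plot, where interpolated old frames are still being displayed while the new frame scans out): the GPU delivers a fresh frame every 4th display slot, and the headset fills the remaining slots from the last frame plus its displacement map.

# Toy model: GPU renders at 60 fps, the link carries frame + displacement map
# at 120 fps, and the display refreshes at 240 Hz (4 slots per rendered frame).
def display_schedule(display_slots=12):
    timeline = []
    for slot in range(display_slots):
        gpu_frame = slot // 4          # a new rendered frame arrives every 4 slots
        step = slot % 4
        if step == 0:
            timeline.append(f"frame {gpu_frame} (fresh)")
        else:
            # The headset extrapolates from the last fresh frame using its
            # displacement map; this is where artifacts could creep in.
            timeline.append(f"frame {gpu_frame} + interpolation step {step}")
    return timeline

for slot, shown in enumerate(display_schedule()):
    print(f"240 Hz slot {slot:2d}: {shown}")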

So this way we're getting 240 physical fps at only 60 GPU fps. Of course that adds about 7 ms of lag, but I think applying Carmack's techniques can solve that as well. If we only use a two-byte displacement map, we could send two displacement maps between frames, generated in the meantime on the GPU, which yields better precision or could make up for an even higher physical update rate.

What do you guys think of it?
2 REPLIES

Vin
Explorer
HDMI 2.0 is rated for what, 18 Gb/s? I think that'll take away some of the bandwidth issues. I think post-Titan generation GPUs will bring some of that power down to a more attainable cost, too.

Do you have a prototype of an implementation that shows what the GPU timings might be?

raidho36
Explorer
18 Gb/s comes out to about 20 megabytes per frame at 120 fps; that's barely enough for a 2.5K display and not enough for a 4K display (a quick calculation is sketched below), so improvements have to be made, especially if we're talking about >300 real fps at that kind of resolution.
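A quick sanity check of those numbers (my own arithmetic, assuming 24-bit color and ignoring blanking/encoding overhead, which would only make the budget tighter):

# Bytes available per frame on an 18 Gb/s link, versus raw 24-bit frame sizes.
LINK_GBPS = 18.0

def budget_mb_per_frame(fps, link_gbps=LINK_GBPS):
    return link_gbps * 1e9 / 8 / fps / 1e6

def frame_mb(width, height, bytes_per_pixel=3):
    return width * height * bytes_per_pixel / 1e6

print(budget_mb_per_frame(120))   # ~18.75 MB of link budget per 120 fps frame
print(frame_mb(2560, 1440))       # ~11.1 MB raw for a 2.5K frame
print(frame_mb(3840, 2160))       # ~24.9 MB raw for a 4K frame -> doesn't fit
print(budget_mb_per_frame(300))   # ~7.5 MB per frame if the display ran at 300 fps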

I haven't programmed anything for this yet, but I think the first step is using something like a triple buffer. You constantly render into buffer 1, and every frame buffers 2 and 3 swap around, where buffer 2 is the previous frame and buffer 3 is its displacement map. When you swap render buffers, you swap buffers 1 and 2, then you build the displacement map and swap buffer 1 with buffer 3. This way you get your new frame into buffer 2 and the displacement map into buffer 3, while keeping buffer 1 free for rendering (a rough sketch of the rotation is below). But I don't know whether it's even possible to make the GPU work that way, constantly swapping buffers 2 and 3 even while rendering is in progress. I'm positive the hardware is capable of it; there's no technical difficulty in XOR-ing 2 values out of a set of 3 when invoked by an interrupt, but it would need a special driver if there's no such support.
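Roughly the bookkeeping I have in mind, as a toy sketch (the class, the method names, and the idea of scanout alternating between the frame and its displacement map are mine, just for illustration; a real version would be juggling GPU surface handles in a driver or compositor):

# Toy model of the proposed triple-buffer rotation; "buffers" are just labels here.
class BufferSet:
    def __init__(self):
        self.render = "buffer 1"    # where the GPU is currently drawing
        self.frame = "buffer 2"     # last completed frame (being scanned out)
        self.dispmap = "buffer 3"   # displacement map for that frame

    def finish_frame(self):
        # Swap 1 and 2: the freshly rendered image becomes the scanout frame.
        self.render, self.frame = self.frame, self.render
        # The displacement map is then built into the now-free render buffer,
        # and swapping 1 and 3 makes it the current displacement map.
        self.render, self.dispmap = self.dispmap, self.render

    def scanout(self, slot):
        # The display alternates between the frame and its displacement map;
        # this is the part that would need driver/interrupt support.
        return self.frame if slot % 2 == 0 else self.dispmap

bufs = BufferSet()
for gpu_frame in range(3):
    bufs.finish_frame()
    print(gpu_frame, "render:", bufs.render, "frame:", bufs.frame, "map:", bufs.dispmap)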

I can come up with something like this in emulation mode, I guess, but not before my working shift is over; it should be a week, but might be three. For one thing there needs to be this kind of triple-buffering emulation, which is the easy part, and for another there needs to be a displacement shader, which is more complicated. I don't think I'm qualified enough to write a shader that wouldn't produce insane amounts of artifacts due to the lack of initial data. But we've all seen such techniques work - the anti-shake camera filter is the easiest example.
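For what it's worth, the core of such a displacement pass could look roughly like this on the CPU side (a NumPy stand-in for what would really be a fragment shader; the function and the integer displacement maps are illustrative assumptions, and the clipping at the edges is exactly where the "lack of initial data" artifacts show up):

# Warp the previous frame by a per-pixel displacement map (backward sampling).
import numpy as np

def warp(frame, dx, dy):
    h, w = frame.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w]
    src_y = np.clip(ys - dy, 0, h - 1)   # where each output pixel comes from
    src_x = np.clip(xs - dx, 0, w - 1)   # clipping papers over missing data
    return frame[src_y, src_x]

prev = np.random.rand(8, 8)              # stand-in for the last rendered frame
dx = np.ones((8, 8), dtype=int)          # everything moved one pixel right
dy = np.zeros((8, 8), dtype=int)
interpolated = warp(prev, dx, dy)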