When I first played with the Oculus Dev Kit two years ago, I came to the conclusion that one of the best VR experiences would be watching a video in VR with the full ability to move and rotate your head, giving you the exact sense of depth required for complete immersion. So I spent the last two years (as a side project) developing the tools to capture, encode, and play back volumetric video.

The first version of the Dragonfly player is available on the Oculus Store. It showcases a small number of volumetric videos captured from Unreal Engine 4. You can contact me to obtain an access key.
The capture is currently done in UE4 using a special plug-in, but this technology could be integrated into any game or rendering engine as long as the color and depth buffers are available. It also requires some small changes related to the camera position in the shaders.
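The key ingredient is that a depth buffer plus the camera parameters is enough to recover a 3D position for every pixel. Here is a minimal Python sketch of that idea, not the actual plug-in code: the function names, the OpenGL-style non-linear depth convention, and all parameters are my own assumptions for illustration.

```python
import math

def linearize_depth(d, near, far):
    # Convert a non-linear depth-buffer value d in [0, 1]
    # (OpenGL-style perspective depth, assumed here) back to
    # a positive view-space distance.
    ndc_z = 2.0 * d - 1.0
    return (2.0 * near * far) / (far + near - ndc_z * (far - near))

def unproject(px, py, d, width, height, fov_y, near, far):
    # Reconstruct a view-space point from a pixel coordinate (px, py)
    # and its depth-buffer sample d. The camera looks down -Z.
    z = linearize_depth(d, near, far)
    aspect = width / height
    half_h = math.tan(fov_y / 2.0)          # half-height of the view plane at z = 1
    ndc_x = (px + 0.5) / width * 2.0 - 1.0  # pixel center to NDC [-1, 1]
    ndc_y = 1.0 - (py + 0.5) / height * 2.0
    x = ndc_x * half_h * aspect * z
    y = ndc_y * half_h * z
    return (x, y, -z)
```

Transforming the resulting view-space point by the inverse view matrix would give the world-space position, which is what a playback shader needs to re-render the scene from a new head pose.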
The size of the media depends on the capture. It increases with the number of occlusions, the number of dynamic objects, and the frame rate of the capture. For an image, the file size is usually between 4 MB and 10 MB. For a video, the bitrate varies from 20 MB to 40 MB per second. All video samples presented with the Dragonfly player are captured at 45 FPS, except Fight Scene, which is captured at 90 FPS.
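For a rough sense of scale, the numbers above can be turned into a back-of-the-envelope estimate. This helper and its mid-range default are mine, not part of the player:

```python
def estimated_clip_size_mb(duration_s, bitrate_mb_per_s=30):
    # Rough media-size estimate using a mid-range value from the
    # quoted 20-40 MB/s bitrate; the real size depends on scene
    # complexity (occlusions, dynamic objects, frame rate).
    return duration_s * bitrate_mb_per_s
```

For example, a one-minute clip at the mid-range bitrate works out to roughly 1.8 GB.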
The capture suffers from some limitations:
- only computer-generated (CG) graphics are supported.
- screen-space effects are not captured. They are generally not well suited to being captured in 3D and played back in VR.
- transparency is not supported, but it could be addressed later.
- any camera-dependent geometry (like billboards) should be avoided (LOD is not a problem).
- the captured resolution could be improved.
I am looking for partners to develop this technology. Please contact me if you are interested in collaborating.