
Rendering webcam images in the Rift using OpenCV and OpenGL

jherico Posts: 1,419
Nexus 6
edited March 2016 in PC Development
I've been asked about OpenCV with the Rift a few times, so I've made the example code for doing Rift rendering of captured OpenCV images the subject of my latest video.



A walkthrough of an application that pulls images from a live Rift mounted webcam and renders them to the display.

Example source code

Full example repository

Book Website
Brad Davis - Developer for High Fidelity
Co-author of Oculus Rift in Action

Comments

  • pixelminer Posts: 177
    Art3mis
    Great video!

    Love the idea with the skybox as background. Have to add it to my own application.

    Regarding the lens distortion: I use a simple fragment shader to take care of that:
    #version 110
    // Camera rectification fragment shader
    
    uniform sampler2D textureSampler; // Texture to be rectified
    uniform vec2 imageSize; // Used to re-project pixel space to uv-space
    
    // Camera calibration information
    uniform vec2 opticalCenter; // Optical center in pixel space
    uniform vec2 focalLength; // Focal length in pixel space
    uniform vec2 radialDistortion; // Coefficients k1, k2
    uniform vec2 tangentialDistortion; // Coefficients p1, p2
    
    varying vec2 textureCoord; // From vertex shader
    
    void main()
    {
        vec2 opticalCenterUV = opticalCenter / imageSize;
        vec2 focalLengthUV = focalLength / imageSize;
        vec2 lensCoordinates = (textureCoord - opticalCenterUV) / focalLengthUV;
    
        float radiusSquared = dot(lensCoordinates, lensCoordinates);
        float radiusQuadrupled = radiusSquared * radiusSquared;
    
        float radialCoeff = radialDistortion.x * radiusSquared + radialDistortion.y * radiusQuadrupled;
    
        float dx = tangentialDistortion.x * 2.0 * lensCoordinates.x * lensCoordinates.y 
                 + tangentialDistortion.y * (radiusSquared + 2.0 * lensCoordinates.x * lensCoordinates.x);
        float dy = tangentialDistortion.x * (radiusSquared + 2.0 * lensCoordinates.y * lensCoordinates.y) 
                 + tangentialDistortion.y * 2.0 * lensCoordinates.x * lensCoordinates.y;
        
        vec2 tangentialCoeff = vec2(dx, dy);
        
        vec2 distortedUV = ((lensCoordinates + lensCoordinates * radialCoeff + tangentialCoeff) * focalLengthUV) + opticalCenterUV;
    
        gl_FragColor = texture2D(textureSampler, distortedUV);
    }
    
    
    OpenSceneGraph integration of the Oculus Rift:
    http://github.com/bjornblissing/osgoculusviewer
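For reference, the rectification math in the shader above is the standard OpenCV (Brown-Conrady) distortion model, and it can be sketched on the CPU as well. Below is a minimal Python port of that model; the function name is my own, and the inputs are the normalized lens coordinates the shader computes by shifting by the optical center and dividing by the focal length:

```python
def distort_point(x, y, k1, k2, p1, p2):
    """Apply the Brown-Conrady distortion model to a point given in
    normalized lens coordinates (pixel coordinates already shifted by
    the optical center and divided by the focal length).

    k1, k2 are the radial coefficients; p1, p2 the tangential ones.
    """
    r2 = x * x + y * y          # radius squared
    r4 = r2 * r2                # radius to the fourth power
    radial = k1 * r2 + k2 * r4  # radial distortion factor

    # Tangential distortion terms
    dx = 2.0 * p1 * x * y + p2 * (r2 + 2.0 * x * x)
    dy = p1 * (r2 + 2.0 * y * y) + 2.0 * p2 * x * y

    return x + x * radial + dx, y + y * radial + dy
```

With all coefficients at zero the mapping is the identity; in the shader the result is then scaled back by the focal length and shifted by the optical center before sampling the texture.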
  • cybereality Posts: 26,156 Oculus Staff
    Nice job.
    AMD Ryzen 7 1800X | MSI X370 Titanium | G.Skill 16GB DDR4 3200 | EVGA SuperNOVA 1000 | Corsair Hydro H110i
    Gigabyte RX Vega 64 x2 | Samsung 960 Evo M.2 500GB | Seagate FireCuda SSHD 2TB | Phanteks ENTHOO EVOLV
  • jherico Posts: 1,419
    Nexus 6
    pixelminer wrote:
    Regarding the lens distortion. I use a simple fragment shader to take care of that:

    Thanks for the pointer. I'll have to take a look at that, as I really only have a basic understanding of the way OpenCV represents lens distortion. If I can add it to my example I'll do so.

    On the other hand, I do kind of prefer the mesh based approach to correcting for distortion, since a shader based approach can push portions of the image off the geometry, or leave black borders at the edges, depending on the type of distortion. A mesh based distortion accounts for either case.
    Brad Davis - Developer for High Fidelity
    Co-author of Oculus Rift in Action
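The mesh-based approach mentioned above amounts to precomputing distorted texture coordinates per vertex of a regular grid, so the GPU's interpolation applies the correction for free and anything pushed outside the image simply falls off the mesh. A hypothetical sketch with radial terms only (`build_distortion_grid` is my own name, not from the example code):

```python
def build_distortion_grid(n, k1, k2):
    """Build an (n+1) x (n+1) grid of (position, uv) pairs over [0, 1]^2.

    Vertex positions stay on the regular grid; the texture coordinates
    have the radial distortion baked in, so rendering the grid with
    these UVs corrects the camera image during interpolation.
    """
    vertices = []
    for j in range(n + 1):
        for i in range(n + 1):
            x, y = i / n, j / n
            # Center the coordinates so distortion is radial about (0.5, 0.5)
            cx, cy = x - 0.5, y - 0.5
            r2 = cx * cx + cy * cy
            scale = 1.0 + k1 * r2 + k2 * r2 * r2
            u, v = 0.5 + cx * scale, 0.5 + cy * scale
            vertices.append(((x, y), (u, v)))
    return vertices
```

With k1 = k2 = 0 every UV equals its vertex position, i.e. the grid is an identity mapping; nonzero coefficients push the outer UVs outward or inward instead of stretching border pixels.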

  • cg439 Posts: 65
    This looks awesome! Can't wait to try this out, have a few small app ideas for video summarization using VR which would be based off this code! Can't wait to check it out later when(if) my DK2 arrives!
    Ordered: August 18, 2014
    Processing: October 19, 2014
    Shipped: ???
    Delivered: November 6, 2014
    Playing: NOW!
  • hkubota Posts: 7
    NerveGear
    Hi,

    This looks very close to what I intend to do: get 2 or more webcams, and either create a 3D picture or a wide-FOV 2D picture.

    But before the fun, there's the hard part of learning to code in Visual Studio... and that's where it currently stops:

    I did the Git download, ran CMake, found a project file, opened it with VS2012, and I can compile all the libraries. But I cannot compile the examples.

    1>c:\downloads\oculusvr\oculusriftinaction\examples\cpp\common\Font.h(139): error C2065: 'NAN' : undeclared identifier
    The other file which creates errors is GlMesh.h:
    1>c:\downloads\oculusvr\oculusriftinaction\examples\cpp\common\GlMesh.h(17): error C2473: 'color' : looks like a function definition, but there is no parameter list.

    when trying to build the Example_2_1_SDK.
    What (probable) noob error am I making? Or are Font.h and GlMesh.h broken?

    I might add: I usually program on Linux. Old style. With manually maintained Makefiles. I never programmed in Visual Studio before, but I could immediately compile the Rift examples. So I would not be surprised if I miss something really basic with Windows and/or Visual Studio here. In case anyone wonders why then I don't use Linux: it is because the Rift is connected to that Windows PC and I figured that the tool chain cannot be that bad and the programming itself will be very similar.

    Harald
  • hkubota Posts: 7
    NerveGear
    Replying to my own message:
    hkubota wrote:
    Hi,
    What (probable) noob error am I making? Or are Font.h and GlMesh.h broken?

    The noob error was using Visual Studio 2012, which does not seem to know NAN (in Font.h) or the newer C++ construct (in GlMesh.h). No issues with Visual Studio 2013.

    Harald
  • Great stuff!
    Are there performance differences between cameras? Are more expensive cameras "faster", to battle the latency? Or does it only depend on the resolution and the performance of your machine?
    Thanks,
    Daniel
  • Thanks for the video!

    I am looking to implement similar code to render my mobile camera's live feed to the Oculus. The live feed from my cellular device can be accessed through a local server IP address.

    Your code simply accesses the 0th webcam in the system. How can I modify that to access a feed from a live IP address? Also, how can the output of this app be accessed? Ideally, for me, I'd like the output of this app to be transmitted back to a server.
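For what it's worth, OpenCV's VideoCapture accepts a stream URL string in place of a device index, so switching the source is usually a one-line change. A hedged sketch (the helper names are my own, and the URL shown in the comment is a placeholder, not a real endpoint):

```python
def capture_source(spec):
    """Turn a user-supplied source string into a VideoCapture argument:
    a device index for plain integers, otherwise the string itself
    (e.g. an HTTP or RTSP stream URL)."""
    return int(spec) if spec.isdigit() else spec

def open_stream(spec):
    """Open a local camera or network stream with OpenCV.

    Assumes the opencv-python package is installed; returns None if
    it is absent so the pure helper above stays testable without it.
    """
    try:
        import cv2
    except ImportError:
        return None
    # "0" selects the first local webcam; a phone camera app typically
    # exposes something like "http://<phone-ip>:<port>/video" (placeholder).
    return cv2.VideoCapture(capture_source(spec))
```

Sending the rendered output back to a server is a separate problem: the usual route is reading back the framebuffer and pushing frames through a streaming pipeline (e.g. FFmpeg or GStreamer), which is beyond the scope of this example.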
  • Mecrof Posts: 31
    Thanks for the video! Have to add the concept in my own application.

    It may be a stupid question, but you flip the texture with OpenCV on the CPU. Why not just flip the OpenGL texture coordinates and let the fragment shader do the flipping "automatically"? It would be faster and reduce the camera latency. Am I wrong? :)
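The texture-coordinate flip suggested here can be illustrated without any GL at all: instead of flipping the pixels on the CPU every frame, sample the untouched image with the v coordinate inverted. A small illustration (function names are my own):

```python
def sample(image, u, v):
    """Nearest-neighbour sample of a row-major image at uv in [0, 1)."""
    h, w = len(image), len(image[0])
    return image[int(v * h)][int(u * w)]

def sample_flipped(image, u, v):
    """Sample with the v axis inverted -- the texture-coordinate
    equivalent of flipping the image vertically on the CPU."""
    h, w = len(image), len(image[0])
    return image[h - 1 - int(v * h)][int(u * w)]
```

In GL this amounts to swapping the v components of the quad's texture coordinates once (or using `1.0 - textureCoord.y` in the fragment shader), which costs nothing per frame.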
  • Hi, will you please tell me how you prepared the setup for this demo, "Oculus Rift in Action"?

    I have already cloned the git repository and created a build for it using CMake,

    but I still have problems building the solution.

    If you could provide the steps to run these examples, that would be very helpful for me.

    I am using VS 2012 as the platform.
    Looking forward to your kind help.


    Thank you
  • DoZo1971 wrote:
    Great stuff!
    Are there performance differences between cameras? Are more expensive cameras "faster", to battle the latency? Or does it only depend on the resolution and the performance of your machine?
    Thanks,
    Daniel


    Will you please tell me which Visual Studio version you used, 2012 or 2013? Or something else?
  • Oero Posts: 5
    Thanks for the guide and a well-written book (bought it last week). I'm new to Oculus programming as a whole, but for some reason I can't manage to get the FPS to show over the webcam stream. I'm able to get everything else to run. Any suggestions?
  • psvvardhan Posts: 1
    NerveGear
    edited November 2016
    I am trying to run this webcam streaming application on my Oculus DK2. But since the development was done on an older version of the Oculus SDK, the application only runs on the PC and NOT on the Oculus itself. What could I be doing wrong?

    Note: I installed the appropriate Oculus runtime (0.5 for this application) and included the Oculus SDK (v0.4). I see that after installing Oculus runtime v0.5, the Oculus tracker gets disabled and the HMD stops responding. Is there something I missed?
  • aktamaro Posts: 4
    NerveGear
    edited February 9
    I know it's an old thread, but I also bought this book last week and it's not even updated! It should not be sold anymore, imo. But business is business. I feel I have been scammed! Shame on you.
  • kojack Posts: 4,827 Volunteer Moderator
    I guess technically the book is still valid if you have a DK1 or DK2 headset and use an SDK version of 0.5 or below (the code examples were built against 0.5).

    But for CV1 owners and the current sdk, things are extremely different.