Split Post Processing

getnamo Posts: 85
Brain Burst
edited March 2015 in Unreal Development
I am working on integrating a Leap Motion pass-through effect into UE4, and I was wondering if anyone knows how to achieve a disparate view for post-processing so that I can feed left/right eye images from the Leap before geometry is rendered (or even after), or if anyone has had success putting two quads in front of the camera to pull this off accurately.
Current Project: Skycall

Comments

  • opamp Posts: 326
    Hiro Protagonist
    The trick is to realise that the CameraController/Camera keeps its own transform and the POV is updated directly by the Rift plugin relative to the camera location (the Rift never updates the camera's transform).

    So if you want the Rift's location in world space you add the two vectors together,
    i.e. cameramanager->camera->WorldLocation + HMDGetOrientationAndPosition->Location.

    I believe HMDGetOrientationAndPosition->Rotation is in world space.
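
    In code that's roughly the following (a minimal sketch against the UE 4.x HMD interface as I recall it; double-check the exact names against your engine version, and note that if the camera actor itself is rotated you'd want to rotate the HMD offset into world space first):

    #include "Engine.h" // GEngine, IHeadMountedDisplay

    // World-space location of the rift = camera world location + HMD-reported offset.
    FVector GetRiftWorldLocation(APlayerCameraManager* CameraManager)
    {
        FQuat HMDOrientation = FQuat::Identity;
        FVector HMDPosition = FVector::ZeroVector;

        if (GEngine->HMDDevice.IsValid())
        {
            // Position/orientation are reported relative to the tracking origin;
            // the camera transform itself is never touched by the plugin.
            GEngine->HMDDevice->GetCurrentOrientationAndPosition(HMDOrientation, HMDPosition);
        }

        return CameraManager->GetCameraLocation() + HMDPosition;
    }
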
    DK2. Phenom 2 x4 4.2GHz,Asrock Extreme 3 970,8GB DDR3 1600, R9 270x 1180/1400.
  • getnamo Posts: 85
    Brain Burst
    That makes sense. The question is now more along the lines of how to actually feed the two images as a render layer or a post process that is different for each eye.

    According to Epic this isn't currently possible, but I wonder if there isn't a way around it?
    Current Project: Skycall
  • opamp Posts: 326
    Hiro Protagonist
    As you've got the location and rotation of the Rift in world space, you could always have 2 quads positioned in front of the Rift
    at the correct relative locations and distances, and update their transforms every tick.
    A bit hacky, but it would work.
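
    Something along these lines in a pawn's Tick (a rough sketch only: LeftEyeQuad/RightEyeQuad are hypothetical StaticMeshComponents and the offsets are placeholder numbers to tune):

    void AMyPawn::Tick(float DeltaSeconds)
    {
        Super::Tick(DeltaSeconds);

        FQuat HMDOrientation = FQuat::Identity;
        FVector HMDPosition = FVector::ZeroVector;
        if (GEngine->HMDDevice.IsValid())
        {
            GEngine->HMDDevice->GetCurrentOrientationAndPosition(HMDOrientation, HMDPosition);
        }

        // Rift pose in world space, then park each quad a fixed distance in front of it.
        const FVector RiftLocation = GetActorLocation() + HMDPosition;
        const FVector Forward = HMDOrientation.RotateVector(FVector::ForwardVector);
        const FVector Right = HMDOrientation.RotateVector(FVector::RightVector);

        LeftEyeQuad->SetWorldLocationAndRotation(RiftLocation + Forward * 10.f - Right * 3.2f, HMDOrientation.Rotator());
        RightEyeQuad->SetWorldLocationAndRotation(RiftLocation + Forward * 10.f + Right * 3.2f, HMDOrientation.Rotator());
    }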

    I take it you've managed to figure out how to get these images updated to UTexture2Ds without stalling the game thread?

    I'm not too clued up on the Leap Motion, but wouldn't it be easier to just have a couple of your own skeletal meshes for the arms and update the bone transforms via an anim blueprint, rather than using the images generated by the Leap SDK?
    DK2. Phenom 2 x4 4.2GHz,Asrock Extreme 3 970,8GB DDR3 1600, R9 270x 1180/1400.
  • getnamo Posts: 85
    Brain Burst
    edited December 2014
    Thanks for the feedback opamp, though it's not finger/arm location tracking I'm going for; I've got that working quite well in the plugin

    just doing this
    5fJtEWY.gif

    gives you this
    HWVaeid.gif

    In addition the plugin already supports real-time image forwarding to UE's Texture2d.
    qRhPHDC.png

    What I'm trying to achieve is a pass-through effect (see initial video). Essentially it gives the Rift AR capabilities by allowing you to see the world around you in IR. It's absolutely good enough for typing on a keyboard or drinking with the Rift on. A very compelling experience, which currently works in Unity because they just render straight to the cameras and have easy render layer support, whereas the UE Rift implementation is much weirder (opaque).

    Maybe 2 well-placed 3D quads might work, if they can somehow be applied to only a specific eye and passed through rendering before everything else (so you can overlay 3D objects on top of the real world). I worry that this approach will be hard to calibrate for proper 1:1 control even if you can do split-eye feeding in the vanilla UE engine, which is why rendering a full 2D texture directly to each eye before the scene would be ideal.
    Current Project: Skycall
  • andrewtek Posts: 976
    Art3mis
    Is there any way to flag an object so that it is only rendered to the left or right eye? If not, does the rendering pipeline have any flags to indicate which eye is being rendered?
  • opamp Posts: 326
    Hiro Protagonist
    getnamo wrote:
    Thanks for the feedback opamp, though its not finger/arm location tracking I'm going for, I've got that working quite well in the plugin...

    As I've spent the evening bashing my head against a wall over one of my own projects, I thought I'd give myself a break and have a go at a per-eye post-processing shader.

    The below shader should get you started although it will only look correct in the rift.

    http://i.imgur.com/AlGoWIo.png
    AlGoWIo.png
    DK2. Phenom 2 x4 4.2GHz,Asrock Extreme 3 970,8GB DDR3 1600, R9 270x 1180/1400.
  • opamp Posts: 326
    Hiro Protagonist
    andrewtek wrote:
    Is there any way to flag an object so that it is only left/right eye rendered? If not, does the rendering pipeline have any flags to indicates which eye is being rendered?

    I don't believe so.

    There isn't any sort of camera layer system in UE4.
    The only thing that exists is bOwnerNoSee. I really think they need to implement something like that.
    I used to love the system they've got in Unity, as all sorts of cool tricks were possible.

    As for flags for eye rendering, you'd have to look into the Unreal plugin source code (something I really haven't had the time to do), but I doubt it.
    DK2. Phenom 2 x4 4.2GHz,Asrock Extreme 3 970,8GB DDR3 1600, R9 270x 1180/1400.
  • andrewtek Posts: 976
    Art3mis
    opamp wrote:
    The below shader should get you started although it will only look correct in the rift.
    ...

    Thanks for the blueprint. That is very cool! Something like this could definitely be used to do some interesting things. I tried this trick to create a material that would put a red texture in the right eye, and a gray-scale version of that texture in the left eye. The result was interesting. The brain combines the resulting colors for a grayish/red texture. I could definitely see some inversion puzzle mechanic uses for this.

    uc?id=0ByrtvXdsXpmcU3pJYzhJYzB2aTg&export=view
  • andrewtek Posts: 976
    Art3mis
    Getnamo, would you be willing to share the steps you took to integrate with Leap Motion? Thanks!
  • opamp Posts: 326
    Hiro Protagonist
    andrewtek wrote:
    Thanks for the blueprint. That is very cool! Something like this could definitely be used to do some interesting things. I tried this trick to create a material that would put a red texture in the right eye, and a gray-scale version of that texture in the left eye. The result was interesting. The brain combines the resulting colors for a grayish/red texture. I could definitely see some inversion puzzle mechanic uses for this.

    I didn't even think to try this on a surface material.
    I just tried it with 2 extremely different textures and the effect was very strange.
    Probably not a good thing to look at for too long!

    I wonder if anything interesting can be done with normal maps?

    P.S. Just be aware that you are only rendering the opposite half of each texture (that's why I tiled them horizontally in my example).
    To do the post process correctly I believe you would start with 2 eye textures of the correct size and use a ScreenAlignPixeltoPixelUV node.
    DK2. Phenom 2 x4 4.2GHz,Asrock Extreme 3 970,8GB DDR3 1600, R9 270x 1180/1400.
  • andrewtek Posts: 976
    Art3mis
    opamp wrote:
    I didn't even think to try this on a surface material.
    I just tried it with 2 extremely different textures and the effect was very strange.
    Probably not a good thing to look at for too long!

    Agreed. Very different textures are not comfortable. There are probably some interesting optical illusions you could do, but the resulting headache would not be worth it :D.
  • getnamo Posts: 85
    Brain Burst
    opamp wrote:
    getnamo wrote:
    Thanks for the feedback opamp, though its not finger/arm location tracking I'm going for, I've got that working quite well in the plugin...

    As I've spent the evening bashing my head against a wall over one of my own projects, I thought I'd give myself a break and have a go at a per-eye post-processing shader.

    The below shader should get you started although it will only look correct in the rift.

    http://i.imgur.com/AlGoWIo.png
    AlGoWIo.png

    That's amazingly simple! Will have a look at extending the shader with proper leap warping to see if I can get the images to be 1:1.

    I take it you would then set this as a custom blendable?
    andrewtek wrote:
    Getnamo, would you be willing to share the steps you took to integrate with Leap Motion? Thanks!

    Absolutely, the source has been available in the plugin since I forked it (around October) at https://github.com/getnamo/leap-ue4

    The readme is quite extensive on how to use it; browse that to understand how it works. The example LeapRiggedCharacter (shown above) is available as optional content in the plugin, found in the same repo. So it is as simple as downloading the plugin, dragging it into your project root, and setting your Pawn to LeapRiggedCharacter (and setting VRController if you're using an HMD).

    As for how it is done, UE plugin documentation can be found at this link. It may be sparse, but it explains the general plugin system used in UE.

    In more detail: UE uses C# to specify build rules, then typically you arrange and specify the additional code that gets included in the plugin source. A class sub-classed from IModuleInterface defines your plugin entry and exit points; this is typically where you link the libraries and clean up any memory (unless, like Leap, it has a million classes, in which case I set everything up directly in the class implementation).
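
    The entry/exit skeleton itself is tiny, something like this (sketch only; "LeapPlugin" is a placeholder module name):

    #include "ModuleManager.h"

    class FLeapPluginModule : public IModuleInterface
    {
    public:
        virtual void StartupModule() override
        {
            // Load the third-party .dll / initialise the SDK here.
        }

        virtual void ShutdownModule() override
        {
            // Release SDK handles and clean up any memory allocated above.
        }
    };

    IMPLEMENT_MODULE(FLeapPluginModule, LeapPlugin)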

    Typically you bind to the SDK library using headers, .libs and .dll files for Windows. Then, using UObjects with Blueprint-type categories, you can expose that through wrappers you write yourself. If you make an ActorComponent sub-class with that data, you can then provide this functionality to any blueprint where a developer might want the plugin behavior exposed, and with an interface and some smart ticking you can let that blueprint receive data in an event-driven fashion. All my plugins support this setup and you can browse their source code for specific examples of implementation.
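
    The component half of that pattern looks roughly like this (illustrative names only, not the plugin's actual API):

    #include "Components/ActorComponent.h"
    #include "LeapDataComponent.generated.h"

    UCLASS(ClassGroup = Input, meta = (BlueprintSpawnableComponent))
    class ULeapDataComponent : public UActorComponent
    {
        GENERATED_BODY()

    public:
        // Simple wrapper callable from any Blueprint that owns the component.
        UFUNCTION(BlueprintCallable, Category = "Leap Motion")
        FVector GetPalmLocation() const { return LastPalmLocation; }

        virtual void TickComponent(float DeltaTime, ELevelTick TickType,
                                   FActorComponentTickFunction* ThisTickFunction) override
        {
            Super::TickComponent(DeltaTime, TickType, ThisTickFunction);
            // Poll the SDK here, cache the data, and call an interface event on the
            // owner so Blueprints receive new frames in an event-driven fashion.
        }

    private:
        FVector LastPalmLocation;
    };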

    If you're looking for more examples of plugin integration, see the Hydra plugin or the Myo plugin. The Custom Input Mapping plugin gives an example of how to use blueprint libraries to expose functions globally, instead of the event-driven component + interface structure.
    Current Project: Skycall
  • andrewtek Posts: 976
    Art3mis
    getnamo wrote:
    Absolutely, the source has been available in the plugin since I forked it (around October) at https://github.com/getnamo/leap-ue4

    The readme is quite extensive on how to use it; browse that to understand how it works. The example LeapRiggedCharacter (shown above) is available as optional content in the plugin, found in the same repo. So it is as simple as downloading the plugin, dragging it into your project root, and setting your Pawn to LeapRiggedCharacter (and setting VRController if you're using an HMD).

    Very cool! Thanks!
  • getnamo Posts: 85
    Brain Burst
    @opamp Just wanted to say that your method worked beautifully. I currently have passthrough images perfectly scaled for each eye running at 75fps, and I can easily fade them in/out using a scalar parameter in the material.
    Just need to add some shader warping and see if that 1:1 can be achieved.

    Thanks again for your help, will post a gif when I have it working 1:1!
    Current Project: Skycall
  • opamp Posts: 326
    Hiro Protagonist
    getnamo wrote:
    @opamp Just wanted to say that your method worked beautifully. I currently have passthrough images perfectly scaled for each eye running at 75fps, and I can easily fade them in/out using a scalar parameter in the material.
    Just need to add some shader warping and see if that 1:1 can be achieved.

    Thanks again for your help, will post a gif when I have it working 1:1!

    Glad I could help, I was just as surprised at how simple it was.
    Epic/Oculus really need to explain clearly what's happening with the rendering pipeline at some point.

    The recent Leap game jam and your plugin are tempting me to buy a Leap, but I also need to upgrade my CPU due to bottlenecking, and I can only afford to buy one item this month and the other in a few months' time.
    I'd much rather get the Leap to play with over Xmas.
    But I'm concerned that the Leap is a bit of a CPU hog, which will make my bottleneck worse if I purchase it before the CPU upgrade.

    Could anyone with a Leap tell me if it's true that it will hog a whole 4 cores?
    DK2. Phenom 2 x4 4.2GHz,Asrock Extreme 3 970,8GB DDR3 1600, R9 270x 1180/1400.
  • getnamo Posts: 85
    Brain Burst
    This is my usage in editor, and it's the same story in play standalone or packaged. This is on an i5-3570K (4 cores, no HT) with UE 4.6 and Leap SDK 2.2.

    yFexzXj.png

    UE4 is single-threaded, but may have some side services on other threads; it will typically occupy 22-25% when standalone (roughly one core for me). The editor may have more services and can push one core's usage to something like 30%.

    Leap will use about 1% when idle, then typically 13-14% when tracking hands, with a temporary peak of up to 19% when using passthrough images. I have never noticed the impact on the system personally; there is more than enough headroom.

    On the passthrough, the math to properly warp it seems pretty intricate to me, so it may take some time for me to push an update with the feature enabled. I believe you first obtain the Leap distortion-free image (FOV 150), and then warp it to fit the Oculus FOV (110?). I would need to convert this beauty into HLSL or recreate it using material nodes.
    Current Project: Skycall
  • opamp Posts: 326
    Hiro Protagonist
    getnamo wrote:
    This is my usage in editor, and it's the same story in play standalone or packaged. This is on an i5-3570K (4 cores, no HT) with UE 4.6 and Leap SDK 2.2.

    Leap will use about 1% when idle, then typically 13-14% when tracking hands, with a temporary peak of up to 19% when using passthrough images. I have never noticed the impact on the system personally; there is more than enough headroom.

    Thanks for that, looks like I'll be getting a Leap for Xmas! ;-)
    getnamo wrote:
    On the passthrough, the math to properly warp it seems pretty intricate to me, so it may take some time for me to push an update with the feature enabled. I believe you first obtain the Leap distortion-free image (FOV 150), and then warp it to fit the Oculus FOV (110?). I would need to convert this beauty into HLSL or recreate it using material nodes.

    Looking at the shader, I'm guessing the largest part of it is trying to debayer a raw image into a color one?
    But I'm not entirely sure TBH.

    I had a little look at the basic Leap example link you posted previously, and it looks like they're using the red and green components of the Calibration/Distortion Map as a UV lookup table for the distorted image.

    So the distortion rectification part should go something like this...

    UZdqzJjl.png
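
    In CPU-side terms the lookup amounts to something like this (just to make the indexing explicit; SampleDistortionMap and SampleRawLeapImage are hypothetical helpers, the real thing stays in the material):

    FColor RectifyPixel(float U, float V)
    {
        // R/G channels of the calibration map hold the source UVs into the raw image.
        const FLinearColor Warp = SampleDistortionMap(U, V);

        // Values outside 0..1 have no valid source pixel.
        if (Warp.R < 0.f || Warp.R > 1.f || Warp.G < 0.f || Warp.G > 1.f)
        {
            return FColor::Black;
        }

        return SampleRawLeapImage(Warp.R, Warp.G);
    }
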
    DK2. Phenom 2 x4 4.2GHz,Asrock Extreme 3 970,8GB DDR3 1600, R9 270x 1180/1400.
  • getnamo Posts: 85
    Brain Burst
    opamp wrote:
    Looking at the shader, I'm guessing the largest part of it is trying to debayer a raw image into a color one?
    But I'm not entirely sure TBH.

    I had a little look at the basic Leap example link you posted previously, and it looks like they're using the red and green components of the Calibration/Distortion Map as a UV lookup table for the distorted image.

    So the distortion rectification part should go something like this...

    UZdqzJjl.png

    This is what I love about collaboration: you learn new ways to do things. Like your mixing and matching of HLSL code with graph nodes, which simplifies the logic greatly!

    I believe the color information is just there to show which pixels fall out of the corrected range, for reference. In my graph I have a corrected Leap image (I hope :D) using the distortion values as UVs like you did, and I just ignored pixels outside the range (which creates some interesting effects), but I like the cleanliness of your setup. Now the correct Leap->Oculus warp is the weird part for me; the final image should be a grayscale image warped from a 150 FOV pincushion to something Oculus-ready. I will try to reach out to someone from Leap to explain a bit of the logic behind the shader, as I think I'm missing some of the details.

    Interestingly enough, if you pass the raw images into the Rift, it appears inside like you have a wide FOV and it is already kind of usable, but quite nauseating after a while hehe. At least it's more bearable than a wildly spinning control input gone wrong; isn't VR dev fun? :lol:
    Current Project: Skycall
  • opamp Posts: 326
    Hiro Protagonist
    getnamo wrote:
    Now the correct Leap->Oculus warp is the weird part for me; the final image should be a grayscale image warped from a 150 FOV pincushion to something Oculus-ready.

    I might be wrong, but maybe it's just a case of cropping the images to the correct FOV?

    If it is then something like this function would crop the images by a percentage.

    NYgFAYX.png
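
    In plain maths the crop-by-percentage idea is just a scale of the UVs about the image center (a sketch of one way to read it, not necessarily exactly what the graph above does):

    FVector2D CropUV(const FVector2D& UV, const FVector2D& Center, float CropFraction)
    {
        // A CropFraction of 0.5 keeps the middle 50% of the image on each axis,
        // which effectively zooms in by 2x.
        return Center + (UV - Center) * CropFraction;
    }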

    If that's the case then you would end up with something like this,

    EzZzG3u.png

    But I might be barking up the wrong tree...
    DK2. Phenom 2 x4 4.2GHz,Asrock Extreme 3 970,8GB DDR3 1600, R9 270x 1180/1400.
  • getnamo Posts: 85
    Brain Burst
    You're right, maybe it is as simple as cropping to the right FOV, since that reduces the FOV. I'm currently traveling, but will hopefully have a chance to test this out soonish (tm). Either way, I will check with some of the Leap guys to get confirmation on how they have it implemented in Unity.
    Current Project: Skycall
  • opamp Posts: 326
    Hiro Protagonist
    You should PM leapmotion_alex over on the Oculus reddit subforum.
    He recently mentioned something about scaling up the image by 1.55 (which cropping would effectively do) in this thread:

    http://www.reddit.com/r/oculus/comments/2puass/random_guys_leap_motion_impressions/
    DK2. Phenom 2 x4 4.2GHz,Asrock Extreme 3 970,8GB DDR3 1600, R9 270x 1180/1400.
  • alexcolgan Posts: 162
    Art3mis
    Responded on reddit, but wanted to keep the discussion going here as well. The 1.55 scaling doesn't apply to the images, but to world objects like the hand. This ensures that the virtual hand aligns with the real hand shown in the images -- necessary because the Leap cameras are separated by 4cm, while the average human eyespan is 6.4cm.
    Head writer @ Leap Motion
  • getnamo Posts: 85
    Brain Burst
    @alex
    Thanks for the clarification. With artyom's Oculus changes we will finally be able to scale the camera if we want the VR geometry to fit the passthrough. I'm still wondering what crop factor is needed on the corrected image for the passthrough to appear correct; if any of your technical guys could get back to us on that, it would be very useful!

    @opamp
    Your cropping code almost worked out of the box, thanks for that! Had to fiddle with it for a bit so it would look right for the 200% zoom factor. I came up with this

    ilQ0PeZ.png
    (open image to see full graph, oculus should really resize images in their forums...)

    It uses the rendered scene as a final blend, which allows for intensity-parameter-based blending of the passthrough effect. The 0.25/0.75 center points for each image are only valid for the 0.5/0.5 crop factor; I'm unsure how to make a formula so that the centers of the images always scale correctly (I assume that is what your function was trying to do, but using TexCoord as UVs may have messed with that since we modify it for the 2 images).


    Then there is a separate problem with the way the distortion is calculated in the plugin at the moment, e.g.

    KJo6z7m.png

    The left image is the distortion map used for the lookup and the right is a regular grid texture after warping. It seems to bias towards the bottom-left corner.

    I wonder if this is due to downscaling the distortion from 32-bit to 8-bit (I couldn't get 32-bit-per-channel textures to work in UE), but I think it is much more likely due to scaling the float range to 0-255 when some values are above 1.0 and are supposed to be dropped. I will adjust the plugin code and see if I can get this to work right.
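
    Roughly the packing I have in mind, with out-of-range values dropped rather than scaled in (a sketch only; the actual copy loop in the plugin may end up different):

    uint8 PackDistortionValue(float Value)
    {
        // Anything outside 0..1 has no valid source pixel, so don't let it
        // compress the rest of the range -- just zero it out.
        if (Value < 0.f || Value > 1.f)
        {
            return 0;
        }
        return (uint8)FMath::RoundToInt(Value * 255.f);
    }
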
    Current Project: Skycall
  • getnamo Posts: 85
    Brain Burst
    After some pointers from Epic and Leap I finally got the 8-bit passthrough to look alright!

    Apparently textures use gamma correction by default, which is used to widen perceived colors. This applies a power curve to your values, which is why I got the UV center skew earlier. Simply calling
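     // disable gamma correction so the stored distortion values are sampled linearly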
    utexture2DPointer->SRGB = 0;
    
    will turn this off.

    With that correction done, the crop was based on the Oculus DK2 projection matrix (@artyom, btw, if we had access to this as a raw value, shaders that depend on HMD FOVs would be adaptable to any newer HMDs with different FOVs, e.g. DK1 vs DK2)

    678368813db4f9a2d0aee62ab27b17f0151b87f9fe76.png

    which gives

    H = 0.25/0.929789 = 26.89%
    V = 0.25/0.752283 = 33.23%

    where the percentages are the portion of the pre-warp image that should be retained around the image center on each axis.
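
    Or in code form (values hard-coded from the DK2 matrix above; a general version would read the scale terms from the HMD at runtime if they were exposed):

    const float ProjScaleX = 0.929789f;       // horizontal scale term quoted above
    const float ProjScaleY = 0.752283f;       // vertical scale term
    const float CropH = 0.25f / ProjScaleX;   // ~0.2689 -> 26.89%
    const float CropV = 0.25f / ProjScaleY;   // ~0.3323 -> 33.23%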

    With all that done, the 8-bit-per-channel passthrough was ready to use. Below is an example of it applied to a uniform grid source (NB: open these images in a separate tab to see them fully, some may be cropped due to how the Oculus forum works)

    JaZnO1y.png

    Some squiggles in the lines are apparent; these are likely due to the 8-bit downsampling, more on this later. At this stage I was curious about the performance impact of this setup, so I ran both the GPU and CPU profilers during rendering.

    LQ5uLFQ.png
    The GPU profiler showed that the LeapPassthrough shader took 0.19ms per eye to complete, a very tiny and acceptable fraction of the total post process cost.

    hhOvNV6.png
    From the CPU profiler you can see two timings, 2.54ms and 1.77ms; they correspond to the total time the CPU spent on the plugin tick, most of which goes to copying the image and distortion data. Initially I used a CPU function that culled out-of-range values and inverted channels. Simply by moving the channel inversion to the shader and ignoring values outside the range (they won't be shown anyway due to cropping), we saved about 0.8ms, which is more than the entire GPU cost of this post process.

    The final material (UE visual shader) ended up looking like this

    gzwJaGR.png

    with the always horizontally centered cropping material function given by
    hyJ6f4J.png

    which comes to 66 instructions for the optimized version. Since we're already in the post process, I went ahead and tried a simple depth blend.

    kMbxN8z.gif

    It's like replacing your office space with an open roof; it has the feeling of a future AR-style office, and there are quite a few interesting things you can do from here.

    The 8-bit distortion map still suffers from line warping, which causes some of the pixels to align incorrectly when using the HMD. The next step is to get 32 bits of distortion per channel working, and then maybe this will finally be incorporated into the plugin :smile:

    If you know anything about float-channel textures in UE, drop me a line; the formats are still pretty poorly documented :(
    Current Project: Skycall
  • getnamo Posts: 85
    Brain Burst
    Finally got the 32-bit-per-channel texture version working. Thanks a lot for your help, opamp!

    You can now try it by grabbing the plugin update and setting your character to the convenience content one included in the plugin

    D7ifDlj.gif

    and then you can transition from VR to AR with a simple gesture

    ozbhr3E.gif
    Current Project: Skycall