Split Post Processing

getnamo
Honored Guest
I am working on integrating a Leap Motion passthrough effect into UE4, and I was wondering if anyone knows how to achieve a separate view per eye in post-processing, so that I can feed in left/right eye images from the Leap before geometry is rendered (or even after), or if anyone has had success putting two quads in front of the camera to pull this off accurately.

opamp
Protege
"andrewtek" wrote:

Thanks for the blueprint. That is very cool! Something like this could definitely be used to do some interesting things. I tried this trick to create a material that would put a red texture in the right eye, and a gray-scale version of that texture in the left eye. The result was interesting. The brain combines the resulting colors for a grayish/red texture. I could definitely see some inversion puzzle mechanic uses for this.


I didn't even think to try this on a surface material.
I just tried this with 2 extremely different textures, and the effect was very strange.
Probably not a good thing to look at for too long!

I wonder if anything interesting can be done with normal maps?

P.S. Just be aware that you are only rendering the opposite half of each texture (that's why I tiled them horizontally in my example).
To do the postprocess correctly, I believe you would start with 2 eye textures of the correct size and use a ScreenAlignPixeltoPixelUV node.
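
For reference, a rough HLSL sketch of that trick as it might sit in a Custom material node, assuming the side-by-side stereo layout (left eye rendered to the left half of the target) and two hypothetical per-eye texture parameters, LeftTex and RightTex:

// Per-eye texture selection for a stereo post-process material.
// Assumes side-by-side stereo: screen-space U < 0.5 is the left eye,
// U >= 0.5 the right. UV is a screen-aligned UV input; LeftTex and
// RightTex are hypothetical per-eye texture parameters.
bool isLeftEye = UV.x < 0.5;
float2 eyeUV = UV;
eyeUV.x = isLeftEye ? UV.x * 2.0 : (UV.x - 0.5) * 2.0;  // half -> [0,1]

return isLeftEye
    ? Texture2DSample(LeftTex,  LeftTexSampler,  eyeUV)
    : Texture2DSample(RightTex, RightTexSampler, eyeUV);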

andrewtek
Expert Protege
"opamp" wrote:

I didn't even think to try this on a surface material.
I just tried this with 2 extremely different textures, and the effect was very strange.
Probably not a good thing to look at for too long!


Agreed. Very different textures are not comfortable. There are probably some interesting optical illusions you could do, but the resulting headache would not be worth it :D.

getnamo
Honored Guest
"opamp" wrote:
"getnamo" wrote:
Thanks for the feedback opamp, though it's not finger/arm location tracking I'm going for; I've got that working quite well in the plugin...


As I've spent the evening bashing my head against a wall over one of my own projects, I thought I'd give myself a break and have a go at a per-eye postprocessing shader.

The shader below should get you started, although it will only look correct in the Rift.

http://i.imgur.com/AlGoWIo.png


That's amazingly simple! I'll have a look at extending the shader with proper Leap warping to see if I can get the images to be 1:1.

I take it you would then set this as a custom blendable?

"andrewtek" wrote:
Getnamo, would you be willing to share the steps you took to integrate with Leap Motion? Thanks!


Absolutely, the source has been available in the plugin since I forked it (around October) at https://github.com/getnamo/leap-ue4

The readme is quite extensive on how to use it; browse that to understand how it works. The example LeapRiggedCharacter (shown above) is available as optional content in the plugin, found in the same repo. So it is as simple as downloading the plugin, dragging it into your project root, and setting your Pawn to LeapRiggedCharacter (and setting VRController if you're using an HMD).

As for how it is done, UE plugin documentation can be found at this link. It may be sparse, but it explains the general plugin system used in UE.

In more detail: UE uses C# to specify build rules, and then you typically arrange and specify the additional code that gets included in the plugin source. A class sub-classed from IModuleInterface defines your plugin's entry and exit points; this is typically where you link the libraries and clean up any memory (unless, like Leap, the SDK has a million classes, in which case I set everything up directly in the class implementation).

Typically you bind to the SDK library using headers, .libs, and .dll files on Windows. Then, using UObjects with Blueprint-typed categories, you can expose it through wrappers you write yourself. If you make an ActorComponent sub-class with that data, you can provide this functionality to any blueprint where a developer might want the plugin behavior exposed, and with an interface and some smart ticking you can let that blueprint receive data in an event-driven fashion. All my plugins support this setup, and you can browse their source code for specific examples of implementation.

If you're looking for more examples of plugin integration, see the Hydra plugin or the Myo plugin. The Custom Input Mapping plugin gives an example of how to use blueprint libraries to expose functions globally, instead of the event-driven component + interface structure.

andrewtek
Expert Protege
"getnamo" wrote:
Absolutely, the source has been available in the plugin since I forked it (around October) at https://github.com/getnamo/leap-ue4

The readme is quite extensive on how to use it; browse that to understand how it works. The example LeapRiggedCharacter (shown above) is available as optional content in the plugin, found in the same repo. So it is as simple as downloading the plugin, dragging it into your project root, and setting your Pawn to LeapRiggedCharacter (and setting VRController if you're using an HMD).


Very cool! Thanks!

getnamo
Honored Guest
@opamp Just wanted to say that your method worked beautifully. Currently have passthrough images perfectly scaled for each eye running at 75fps, and I can easily fade them in/out using a scalar parameter in the material.
Just need to add some shader warping and see if that 1:1 can be achieved.

Thanks again for your help, will post a gif when I have it working 1:1!

opamp
Protege
"getnamo" wrote:
@opamp Just wanted to say that your method worked beautifully. Currently have passthrough images perfectly scaled for each eye running at 75fps, and I can easily fade them in/out using a scalar parameter in the material.
Just need to add some shader warping and see if that 1:1 can be achieved.

Thanks again for your help, will post a gif when I have it working 1:1!


Glad I could help, I was just as surprised at how simple it was.
Epic/Oculus really need to clearly explain what's happening with the rendering pipeline at some point.

The recent Leap game jam and your plugin are tempting me to buy a Leap, but I also need to upgrade my CPU due to bottlenecking, and I can only afford one item this month and the other in a few months' time.
I'd much rather get the Leap to play with over Xmas.
But I'm concerned that the Leap is a bit of a CPU hog, which will make my bottleneck worse if I purchase it before the CPU upgrade.

Could anyone with a Leap tell me if it's true that it will hog a whole 4 cores?

getnamo
Honored Guest
This is my usage in the editor, and it's the same story in standalone play or packaged. This is on an i5-3570K (4 cores, no HT) with UE 4.6 and Leap SDK 2.2.



UE4 is single-threaded but may run some side services on other threads; it will typically occupy 22-25% when standalone (roughly one core for me). The editor may have more services and can push one-core usage up to something like 30%.

Leap will use about 1% when idle, then typically 13-14% when tracking hands, with temporary peaks up to 19% when using passthrough images. I have never noticed the impact on the system personally; there is more than enough headroom.

On the passthrough: the math to warp it properly seems pretty intricate to me, so it may take some time for me to push an update with the feature enabled. I believe you first obtain the distortion-free Leap image (150° FOV), and then warp it to fit the Oculus FOV (110°?). I would need to convert this beauty into HLSL or recreate it using material nodes.
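
If the FOV difference turns out to be the main part, a naive tangent-space remap might be a starting point. A sketch only, assuming both images behave like symmetric pinhole projections (which the raw Leap image doesn't until it has been rectified) and a hypothetical LeapTex texture parameter:

// Naive FOV remap: a pixel at a given view angle in the Rift's ~110 deg
// view samples the matching angle in the (already rectified) ~150 deg
// Leap image. Assumes symmetric pinhole-style projections on both ends.
const float halfRift = radians(110.0) * 0.5;
const float halfLeap = radians(150.0) * 0.5;

float2 centred = UV * 2.0 - 1.0;                      // [0,1] -> [-1,1]
float2 tanPos  = centred * tan(halfRift);             // tangent-plane position
float2 leapUV  = tanPos / tan(halfLeap) * 0.5 + 0.5;  // back to [0,1]

return Texture2DSample(LeapTex, LeapTexSampler, leapUV);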

opamp
Protege
"getnamo" wrote:
This is my usage in the editor, and it's the same story in standalone play or packaged. This is on an i5-3570K (4 cores, no HT) with UE 4.6 and Leap SDK 2.2.

Leap will use about 1% when idle, then typically 13-14% when tracking hands, with temporary peaks up to 19% when using passthrough images. I have never noticed the impact on the system personally; there is more than enough headroom.


Thanks for that, looks like I'll be getting a Leap for Xmas! 😉

"getnamo" wrote:

On the passthrough: the math to warp it properly seems pretty intricate to me, so it may take some time for me to push an update with the feature enabled. I believe you first obtain the distortion-free Leap image (150° FOV), and then warp it to fit the Oculus FOV (110°?). I would need to convert this beauty into HLSL or recreate it using material nodes.


Looking at the shader, I'm guessing the largest part of it is trying to debayer a raw image into a colour one?
But I'm not entirely sure, TBH.

I had a little look at the basic Leap example link you posted previously, and it looks like they're using the red and green components of the calibration/distortion map as a UV lookup table into the distorted image.

So the distortion rectification part should go something like this...
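
In rough Custom-node HLSL terms, with DistortionMap and RawImage as the two texture inputs (a sketch of the idea only):

// Distortion rectification: the calibration map's red/green channels
// act as a UV lookup table into the raw (distorted) sensor image.
// DistortionMap and RawImage are the two texture inputs; UV is the
// quad/screen UV.
float2 warpedUV   = Texture2DSample(DistortionMap, DistortionMapSampler, UV).rg;
float  brightness = Texture2DSample(RawImage, RawImageSampler, warpedUV).r;
return float4(brightness, brightness, brightness, 1.0);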

getnamo
Honored Guest
"opamp" wrote:

Looking at the shader, I'm guessing the largest part of it is trying to debayer a raw image into a colour one?
But I'm not entirely sure, TBH.

I had a little look at the basic Leap example link you posted previously, and it looks like they're using the red and green components of the calibration/distortion map as a UV lookup table into the distorted image.

So the distortion rectification part should go something like this...



This is what I love about collaboration: you learn new ways to do things. Like your mixing and matching of HLSL code with graph nodes; it simplifies the logic greatly!

I believe the color information is just there to show which pixels fall outside the corrected range, for reference. In my graph I have a corrected Leap image (I hope :D) using the distortion values as UVs like you did, and I just ignored pixels outside the range (which creates some interesting effects there), but I like the cleanliness of your setup. Now the correct Leap->Oculus warp is the weird part for me: the final image should be a grayscale image warped from a 150° FOV pincushion to something Oculus-ready. I'll try to reach out to someone from Leap to explain a bit of the logic behind the shader, as I think I'm missing some of the details.
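
For the out-of-range part, I just mask anything whose lookup UV leaves [0,1], along these lines (same hypothetical DistortionMap/RawImage inputs as above):

// Same lookup as before, but masking any pixel whose rectified UV falls
// outside [0,1] (i.e. outside the raw image) instead of letting it clamp.
float2 warpedUV = Texture2DSample(DistortionMap, DistortionMapSampler, UV).rg;
float  valid = (warpedUV.x >= 0.0 && warpedUV.x <= 1.0 &&
                warpedUV.y >= 0.0 && warpedUV.y <= 1.0) ? 1.0 : 0.0;
float  grey = Texture2DSample(RawImage, RawImageSampler, saturate(warpedUV)).r * valid;
return float4(grey, grey, grey, 1.0);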

Interestingly enough, if you pass the raw images into the Rift, it appears inside like you have a wide FOV, and it is already kind of usable, but quite nauseating after a while hehe. At least it's more bearable than a wildly spinning control input gone wrong; isn't VR dev fun? :lol:

opamp
Protege
"getnamo" wrote:

Now the correct Leap->Oculus warp is the weird part for me: the final image should be a grayscale image warped from a 150° FOV pincushion to something Oculus-ready.


I might be wrong, but maybe it's just a case of cropping the images to the correct FOV?

If it is, then something like this function would crop the images by a percentage.
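
In Custom-node HLSL the crop is just a UV rescale about the centre; a sketch, with CropPercent as a hypothetical scalar parameter and LeapTex the rectified image:

// Crop by a percentage: keep the central portion of the image and
// stretch it back over the full quad. A CropPercent of 0.25 keeps the
// central 75% of each axis.
float  keep      = 1.0 - CropPercent;
float2 croppedUV = (UV - 0.5) * keep + 0.5;
return Texture2DSample(LeapTex, LeapTexSampler, croppedUV);

For an ideal pinhole lens, cropping 150° down to 110° would put keep at roughly tan(55°)/tan(75°) ≈ 0.38, though the Leap image is anything but ideal before rectification.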



If that's the case, then you would end up with something like this:



But I might be barking up the wrong tree...