Achieving natural depth of field

uclatommy Posts: 89
edited October 2015 in Oculus Rift S/Rift Development
Hold a finger close to your face and look at it. We all know that you'll have double vision for distant objects behind your finger, but you should also notice that the background is blurred.

This effect does not occur in the Rift because everything is rendered in focus. Sure, you could add a depth-of-field effect to your render, but what happens when you actually look past your finger? The images would converge, but the background would still be blurry, since the computer doesn't know what you're trying to focus on.

I think depth of field is an important visual cue for a sense of space, and the Rift is lacking it.

But how do we achieve it? I'm no expert at optics, but I believe the perception of blurriness comes from the fact that light reflected from objects at different distances enters your eyes at different angles of incidence. So that got me thinking about whether it would be possible to manipulate the angles of light being emitted from individual pixels.

The first thought that comes to mind is to use prisms to do this.

Thoughts? Am I just totally ignorant here? Technically impossible?
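To put rough numbers on the effect, here is a minimal back-of-the-envelope sketch (my own approximation: the blur circle's angular size is taken as roughly the pupil diameter times the difference in vergence, and the pupil size and distances are just assumed values):

    import math

    # Rough geometric estimate of retinal defocus blur (thin-lens approximation).
    # Angular blur ~ pupil_diameter * |1/focus_dist - 1/object_dist| in radians,
    # with distances in metres. All values are illustrative assumptions.
    def defocus_blur_deg(focus_dist_m, object_dist_m, pupil_diameter_m=0.004):
        vergence_error = abs(1.0 / focus_dist_m - 1.0 / object_dist_m)  # dioptres
        return math.degrees(pupil_diameter_m * vergence_error)

    print(defocus_blur_deg(0.20, 3.0))   # finger at 20 cm, background at 3 m: ~1.1 degrees of blur
    print(defocus_blur_deg(3.0, 10.0))   # two distant objects: ~0.05 degrees, barely noticeable

That degree-scale blur on the background is exactly the cue a fixed-focus display can't reproduce.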

Comments

  • geekmaster Posts: 2,866
    Nexus 6
    It has been discussed. The options are not affordable with current technology just yet, but they will be one day.

    Electrically variable-focus lenses with high-speed eye tracking are one option. Holographic displays are another option. Other non-holographic light-field methods have also been discussed.

    With eye-tracking, you could partially simulate depth of field, but everywhere you look would still be at infinity focus.
  • uclatommy Posts: 89
    geekmaster wrote:
    With eye-tracking, you could partially simulate depth of field, but everywhere you look would still be at infinity focus.

    I guess eye-tracking might be a good stop-gap solution, but to me it doesn't seem like the right way to do it. You can look at something directly in front of you, then look past it into the background without really moving your eyes much. Eye tracking wouldn't catch this. I think you need the different angles of light so that when the lens in your eye changes shape, different things come into focus on the sensory nerves in the back of your eye.

    What about having little tiny compartments of fluid in front of each pixel and a curved clear surface on one end, then changing the amount of fluid in each compartment depending on the depth of the pixel?
  • geekmaster Posts: 2,866
    Nexus 6
    uclatommy wrote:
    ... What about having little tiny compartments of fluid in front of each pixel and a curved clear surface on one end, then changing the amount of fluid in each compartment depending on the depth of the pixel?
    Light field technology (simulating multi-axis "fly eye" lenses) is really the way to go. Holograms also provide light fields. The nice thing about light fields is that you can refocus your depth of field after-the-fact. Check out the Lytro cameras for an example. There is also a synthetic light field rendering project for the Rift posted at MTBS3D.

    It will be awhile before we have true light-field display technology though.
  • uclatommy Posts: 89
    Just read up on light field technology. Very cool! I'm glad it's being worked on.
  • ZeroWaitState Posts: 110
    Hiro Protagonist
    geekmaster wrote:
    ... The nice thing about light fields is that you can refocus your depth of field after-the-fact. Check out the Lytro cameras for an example. ... It will be awhile before we have true light-field display technology though.

    Lytros are amazing; I have played with one at a local tech space. I look forward to seeing how light field tech progresses. Currently the processing required to manipulate the data set is a bit of a deal breaker, but as GPU density increases over the next 12 to 18 months this may become less of an issue.
    "I love the French language. I have sampled every language, French is my favourite - fantastic language, especially to curse with. ..... It's like wiping your arse with silk, I love it." - The Merovingian
  • geekmaster Posts: 2,866
    Nexus 6
    geekmaster wrote:
    ... It will be awhile before we have true light-field display technology though.
    Actually, since I posted that, the new Nvidia announcement about their light-field HMD (!!!) made me do some more research on light-field cameras and displays. There is some amazing DIY info in these posts:
    viewtopic.php?f=20&t=2620&p=36049#p35577
    viewtopic.php?f=20&t=2620&p=36049#p35589

    It seems that light-field photos and light-field displays are just a grid of tiny lenses over a grid of tiny pictures, just like a fly's eye. Not very complex at all...
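    As a toy illustration of that fly's-eye idea (my own simplification, with made-up numbers for the lenslet pitch, focal length and pixel pitch), each lenslet turns a lateral pixel offset into a ray direction, so deciding which pixel to light under each lenslet is just a per-lenslet pinhole projection:

        import math

        # Toy 1-D light-field display: a row of lenslets over a high-res panel.
        # A pixel sitting one focal length behind a lenslet, offset laterally by
        # x, produces a collimated beam at angle atan(-x / f).
        LENSLET_PITCH_MM = 1.0     # spacing between lenslet centres (assumed)
        LENSLET_FOCAL_MM = 3.3     # lenslet focal length (assumed)
        PIXEL_PITCH_MM = 0.01      # panel pixel pitch, i.e. 100 pixels per lenslet

        def pixel_for_ray(lenslet_index, theta_deg):
            """Panel pixel index that makes this lenslet emit a ray at theta_deg."""
            lens_centre_mm = lenslet_index * LENSLET_PITCH_MM
            offset_mm = -LENSLET_FOCAL_MM * math.tan(math.radians(theta_deg))
            return round((lens_centre_mm + offset_mm) / PIXEL_PITCH_MM)

        # Each lenslet uses a different pixel to emit a ray in the same direction;
        # giving every lenslet its own tiny image is what encodes the light field.
        print(pixel_for_ray(0, 5.0), pixel_for_ray(1, 5.0))

    The obvious cost is resolution: every extra ray direction per lenslet eats panel pixels.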
  • aero Posts: 28
    I'm also really excited to see where light field technology can go. I posted about it back in June (https://developer.oculusvr.com/forums/viewtopic.php?f=33&t=1942) when I saw the SIGGRAPH emerging technologies preview video; it looks like a really promising technology.
  • geekmaster Posts: 2,866
    Nexus 6
    It seems that a lens barrel extension can achieve a 500x plenoptic resolution increase, according to that video.

    That makes me curious if such an adjustment can give a big perceived resolution boost for a plenoptic HMD too. Although having the lenses near the eyes is probably more important, if a choice needs to be made.

    Here is a link to the document at the end of the above video:
    http://www.tgeorgiev.net/FullResolution.pdf
  • sftrabbit Posts: 29
    What we really need is the ability to affect the incidence of light from each pixel. Unfortunately, the mapping from pixels on the display to positions on the lens is not one-to-one. It's not like we can just deform the lens at certain points to affect the incidence of light coming from certain pixels. This is because the light from a pixel is emitted in all directions and passes through the lens in all places. The angles of those rays of light as they enter the eye are what determines the focus for that pixel. At the moment, the lens causes all rays of light from a single pixel to enter the eye in parallel, so the eye needs to focus at infinity. If you somehow changed part of the lens, it would affect some of the light from all pixels.

    The next best thing we could do is have a lens with variable focal length coupled with eye tracking, but this wouldn't really solve the problem. It might prevent eye fatigue, because the focal length of the lens can be changed depending on what you are looking at, so your eye can focus naturally. However, everything on the display would be in focus once you have focused on the object. You could potentially simulate the out-of-focus blur in software, but I imagine that this solution wouldn't work very well. There would likely be an uncomfortable delay between looking at a new object and focusing on it, where the render and focal length are adjusting and your eye isn't quite sure where to focus.

    So I'm fairly certain that lenses won't be the solution here - perhaps, as others have mentioned, light field displays will save the day.
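    For what it's worth, here is a minimal sketch of that "next best thing" (eye-tracked focus plus software blur), just to show the arithmetic. It uses the standard thin-lens circle-of-confusion formula, and every constant (the simulated pupil, focal length and sensor scale) is an assumed value, not anything taken from the SDK:

        # Gaze-contingent depth-of-field sketch: given the depth the eye tracker
        # says the user is focused on and a per-pixel scene depth, compute a blur
        # radius from the thin-lens circle-of-confusion formula. A real renderer
        # would feed these radii to a blur shader; this only shows the math.
        def coc_radius_px(pixel_depth_m, focus_depth_m,
                          aperture_m=0.004,       # simulated pupil diameter
                          focal_length_m=0.017,   # simulated eye focal length
                          px_per_m=60000.0):      # virtual sensor resolution
            coc_diameter_m = (aperture_m * focal_length_m
                              * abs(pixel_depth_m - focus_depth_m)
                              / (pixel_depth_m * (focus_depth_m - focal_length_m)))
            return 0.5 * coc_diameter_m * px_per_m

        # Suppose the eye tracker reports the user converged on something 0.5 m away:
        for depth_m in (0.5, 1.0, 5.0):
            print(depth_m, round(coc_radius_px(depth_m, 0.5), 2))
        # 0.5 m -> 0.0 px (in focus); farther objects get progressively more blur.

    Even with perfect numbers, the delay problem described above remains: the blur can only be as fresh as the last tracker sample and rendered frame.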
  • Entroper Posts: 40
    Brain Burst
    I honestly prefer not to have depth of field simulated. You get better visual acuity and less eye strain when your eyes can just focus at infinity the entire time, no matter what you're looking at.
  • wyatt Posts: 3 Oculus Start Member
    uclatommy wrote:
    You can look at something directly in front of you, then look past it into the background without really moving your eyes much. Eye tracking wouldn't catch this.

    It would! Or at least in theory. Even though it would physically be on the same z-plane as everything else on screen, the point of convergence would be different (you cross your eyes more/less).
    geekmaster wrote:
    The options are not affordable with current technology just yet, but they will be one day.

    I have family who develop eye tracking, and this might not be an issue anymore. I'm trying to get a hold of someone at Oculus VR to talk about it.
  • Harley Posts: 130
    Check out NVIDIA Research's Near-Eye Light Field Display with Lens Array Optics, posted in this other thread:

    viewtopic.php?f=33&t=3015
    Harley wrote:
    NVIDIA Near-Eye Light Field Display with Lens Array Optics?

    NVIDIA Research are showing off a very promising Near-Eye Light Field Display prototype HMD at SIGGRAPH this year:

    http://www.engadget.com/2013/07/24/nvidia-research-near-eye-light-field-display-prototype/

    These Near-Eye Light Field Displays feature thin magnifying lens array optics that let the human eye focus and defocus at different depths within the same image, and this type of setup also allows them to correct for someone's glasses or contact lens prescription in software to enable sharp focus, which are two things that the LEEP optics in the current Oculus Rift DevKit do not offer.

    http://www.youtube.com/watch?v=deI1IzbveEQ


    Note that the prototype in these images and videos is just a rapid-prototyped model: basically a stripped Sony HMZ-T2 with everything but the OLED displays and controller boards removed, fitted with NVIDIA Research's own thin magnifying lens array optics for near-eye light field displays in a 3D-printed HMD glasses shell, plus the unique software rendering components needed to drive such a lens array.

    https://research.nvidia.com/publication/near-eye-light-field-displays
    http://research.nvidia.com/sites/default/files/publications/neld-abstract.pdf
    Near-Eye Light Field Displays

    Research Area: Stereoscopic 3D
    Author(s): Douglas Lanman (NVIDIA), David Luebke (NVIDIA)
    Date: July 2013

    This light-field-based approach to near-eye display allows for dramatically thinner and lighter head-mounted displays capable of depicting accurate accommodation, convergence, and binocular-disparity depth cues. The near-eye light-field displays depict sharp images from out-of-focus display elements by synthesizing light fields that correspond to virtual scenes located within the viewer's natural accommodation range. While sharing similarities with existing integral imaging displays and microlens-based light-field cameras, the displays optimize performance in the context of near-eye viewing. Near-eye light-field displays support continuous accommodation of the eye throughout a finite depth of field; as a result, binocular configurations provide a means to address the accommodation-convergence conflict that occurs with existing stereoscopic displays. This demonstration features a binocular prototype and a GPU-accelerated stereoscopic light field renderer.


    I think that it is optics engineers of this quality that Oculus VR should aspire to employ:
    https://research.nvidia.com/users/douglas-lanman
    https://research.nvidia.com/users/david-luebke
  • kf6kjg Posts: 5
    NerveGear
    NICE!!

    Ok, back from the excitement...

    I remember when I was first introduced to the Lytro and did some research into the technique. My gut told me that the results were reversible: that one could emit instead of just receive. It's good to know that someone with the tools to accomplish the task was able to start the ball rolling! (As has been stated before, ideas abound - practical application is where reality exists.)

    Doing some quick trig sketches (helped along by my favorite solver: SolidWorks - no kill like overkill!) I found some interesting numbers. I endeavored to determine what the rough pixel density would have to be to achieve various results on such a near-eye setup. Here are my results:
    Virtual image distance: 8 inches (203mm) from eye.
    Wanted virtual pixel density: 300 PPI.
    Eye to HMD display distance: 0.5 inch (13mm)
    Resulting PPI on HMD: 4808 PPI.

    Virtual image distance: 8 inches (203mm) from eye.
    Wanted virtual pixel density: 300 PPI.
    Eye to HMD display distance: 1.0 inch (25mm)
    Resulting PPI on HMD: 2398 PPI.

    Virtual image distance: 8 inches (203mm) from eye.
    Wanted virtual pixel density: 100 PPI.
    Eye to HMD display distance: 0.5 inch (13mm)
    Resulting PPI on HMD: 1600 PPI.

    Virtual image distance: 8 inches (203mm) from eye.
    Wanted virtual pixel density: 100 PPI.
    Eye to HMD display distance: 1.0 inch (25mm)
    Resulting PPI on HMD: 800 PPI.

    Note that what I'm terming "virtual image" is what you would get if you were to project the pixels being displayed on the HMD out to a matching surface at a given distance from the eye. The goal is that if I were to display a virtual computer screen in my "game world" and look at it like I would in real life, the virtual computer screen should have the same "pixel" (texel?) density as the equivalent screen in real life.

    Doing some quick research on the smallest LED displays, especially OLEDs, I found that the feature sizes being discussed were all in the 50 to 600 um range. A wide range, but now I wanted to know what my estimated maximum feature size would need to be for a grouping of 3 round RGB dots at the PPIs listed above. This estimate is on the large side because it doesn't account for gaps or the extra components needed to make a functional display:
    4808 PPI: 2.45um
    2398 PPI: 4.92um
    1600 PPI: 7.37um
    800 PPI: 14.74um

    Even if said feature size was attainable, the really awesome methods being linked to above would result in even less virtual (aka perceived) pixel density - or would require even finer feature sizes!
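    To save anyone else the CAD work, the same figures fall out (to within rounding) of plain similar triangles, with each pixel modelled as three mutually tangent RGB dots inscribed in a circle one pixel pitch across; treat that packing as an assumption, though it matches the dot sizes above:

        import math

        # Required panel PPI by similar triangles, and RGB dot size assuming
        # three mutually tangent dots inscribed in a circle one pixel pitch
        # across. Small differences from the figures above (e.g. 4800 vs 4808)
        # presumably come from the original CAD sketch.
        def required_hmd_ppi(virtual_ppi, virtual_dist_in, display_dist_in):
            return virtual_ppi * virtual_dist_in / display_dist_in

        def rgb_dot_size_um(hmd_ppi):
            pitch_um = 25400.0 / hmd_ppi                  # 25.4 mm per inch
            return pitch_um / (1.0 + 2.0 / math.sqrt(3.0))

        for virtual_ppi in (300, 100):
            for display_dist_in in (0.5, 1.0):
                ppi = required_hmd_ppi(virtual_ppi, 8.0, display_dist_in)
                print(virtual_ppi, display_dist_in, round(ppi), round(rgb_dot_size_um(ppi), 2))
        # -> 4800 / 2.46 um, 2400 / 4.91 um, 1600 / 7.37 um, 800 / 14.74 um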

    I wonder if raster isn't the best technique for displaying information in an HMD... Got to explore that thought.

    (2/10)
  • Could it be possible (assuming you could track the eyes rapidly) to find the object on screen that the eye is directly focused on, adjust the blur accordingly, then apply a sort of shader-driven focal shift to the object in focus to correct for the lens in the eye, based on its proposed distance? I'm unsure how feasible this would be, because I know very little about how shaders actually work, or whether rapid eye tracking is even something you can do affordably, but this is the only way I could think of that would adjust for the lens in the eyes. Tell me if you like the idea, or if I'm being dumb in some way. :)
  • Qosmius Posts: 22
    Are you talking about Panum's fusional area now? If you focus on a near point and try to look behind that point, you get double vision.

    It's because every point on your retina corresponds to a point on the other eye's retina, so if those points don't get activated at the same time your brain will not be able to fuse the images, and therefore you will see double beyond the focus point. I could explain this for hours, but my English is not the best and it would take some time to write here. This should not be a problem in the Oculus, since the screen should be at the focus point; if you had a screen behind the screen I could see this being a problem.
  • raidho36 Posts: 1,312
    I strongly disagree with software-blurred depth of field based on user focus. For only a slight increase in realism, it will seriously tamper with the picture. And it doesn't look as real in VR as it does on a 2D monitor: your eyes are still getting a converged image, just blurred in software.

    However, it is generally safe, and even encouraged, to forcibly blur objects that are closer to the viewer than 3": the human eye is physically incapable of focusing at such distances. Even if you personally can focus as close as 2", it will start to cause some eye strain even at 5", and below 3" it becomes really bothersome, so presumably you won't be doing that anyway.

    The only real option here is light field projection, somehow. Microlenses are obviously not an option: an all-around resource hog, and all of that for a slight divergence of the incoming light. There should be a better way.
  • Carolina Posts: 1
    How about displaying your next venture with state of the art 3 dimensional hologram displays instead of those boring banners? With hologram display, you can add a life to your product exhibit with these 3 dimensional displays. With Olomagic, this confusing tricky technology has taken a whole new turn. It is easy to install, use, and change the images in Olomagic. Our team can help you to install it wherever you want to in just a couple of minutes. With USB drives, you can insert pictures and details to show it in 3 dimensional aspects.
  • raidho36 Posts: 1,312
    Except that it's ungodly hard to generate a hologram because you literally must render it from every possible viewing position.
  • Hello, this issue is the one thing I immediately noticed when first seeing 3D cinema. Even with a camera or high-quality rendering, the DoF won't be accurate, since everybody looks at a different spot in every situation.
    Since our brain 'scans' the image with the help of the fovea, the image will be out of focus in a simulated DoF effect. And it gets tiresome for the eye to try to focus on something it can't (because the projection is blurry).

    So unless we really have eye tracking, DoF is kind of useless apart from the more obvious examples.

    Providing the eye with something other than an infinite point of focus could be fairly easy with new hardware.
    Just put piezo elements between the lenses and whatever they are mounted on. Since the focal distance of the lenses is fairly short, it might be sufficient to modulate the voltage the piezos get; I haven't done the calculation to check whether that gives a large enough range of variable focus.
    There are some products using piezoelectric actuators for camera autofocus, so I think it might be a thing.
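    As a rough feasibility check (plain thin-lens arithmetic; the 40 mm focal length is only a guess in the ballpark of current HMD magnifier optics, not a measured value), shifting the display a millimetre or two in from the focal plane already pulls the virtual image from infinity down to roughly arm's length, so the piezo travel needed would be small:

        # Thin-lens check: how far must the display move toward the lens (away
        # from the focal plane) to pull the virtual image in from infinity?
        LENS_FOCAL_MM = 40.0   # assumed magnifier focal length, not a spec

        def virtual_image_distance_mm(display_shift_mm):
            if display_shift_mm == 0:
                return float('inf')                        # display at focal plane
            object_dist = LENS_FOCAL_MM - display_shift_mm
            # 1/i = 1/f - 1/o gives a virtual image; return its magnitude.
            return LENS_FOCAL_MM * object_dist / display_shift_mm

        for shift_mm in (0.0, 0.5, 1.0, 2.0, 5.0):
            print(shift_mm, virtual_image_distance_mm(shift_mm))
        # 0 -> inf, 0.5 mm -> 3160 mm, 1 mm -> 1560 mm, 2 mm -> 760 mm, 5 mm -> 280 mm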

    What's more, our eyes are pretty much always focused on the area the fovea is pointed at, and, given eye tracking good enough to account for the fastest and most minuscule eye movements, we might not need to account for focus at all; we might not be able to tell the difference.
  • I've got a tech demo up on Share that has natural depth of field in it for live-action capture. You need to follow the ball all the way to the end. When the ball is in your face, focus on the center of the ball, then focus on the background. You will notice the background and foreground blur naturally, as appropriate.

    https://share.oculus.com/app/live-action-360-3d-tech-demo
  • MrMonkeybat Posts: 640
    Brain Burst
    I always turn off depth of field in games that have it. If the oculus came out with eye tracking that was used by games to create depth of field I would go meh and turn it off. Having sharper focus than a real life camera seems a plus not a negative to me.
  • ElectricMucus Posts: 378
    MrMonkeybat wrote:
    I always turn off depth of field in games that have it. If the oculus came out with eye tracking that was used by games to create depth of field I would go meh and turn it off. Having sharper focus than a real life camera seems a plus not a negative to me.

    That may seem so if there isn't any eye tracking in place.

    In reality depth of field is an important depth cue for your vision. Your brain knows that blurry objects have to be at another distance than the object you are focused on. Our eye distance is way too small for us to only rely on stereoscopic vision and so we have evolved the ability to judge distance with more physical phenomena than is initially obvious.

    So I'd say it depends on the application. In some cases it might be preferable to always have everything in focus. But if you are looking to provide a plausible virtual environment, or one where the user needs to judge distances, DoF is a key component.
    Of course, wrongly implemented or exaggerated DoF can be devastating to the experience, and in order to work well it must correspond to physics, so it's not always feasible to implement it in a way that is beneficial.
  • crim3 Posts: 385
    Nexus 6
  • MrMonkeybat Posts: 640
    Brain Burst
    ElectricMucus wrote:
    ... In reality depth of field is an important depth cue for your vision. Your brain knows that blurry objects have to be at another distance than the object you are focused on. ...

    If you don't have a true light field display, it's not worth it. Fake blurring of the screen, when it is really all at the same focal distance, is only going to exacerbate the accommodation-convergence conflict. Tracking the depth your eyes are converging at will require much more accurate eye tracking than what is needed for foveated rendering. From photographs you have learned that you can judge foreground and background from the focus, but in real life your eye refocuses very quickly onto whatever you are looking at; the exercise of the eye muscles provides some depth hints, but fake blurring will only confuse it. The human eye also has a pretty deep depth of field: between things beyond arm's length, the amount of blurring is less than the angular resolution of current HMDs. During the day, hold up your hand at arm's length against the horizon: the amount of blurring between the two is quite minor. And since you can only see clearly within the roughly 4 degrees of your fovea, situations where things at such different distances are both inside your fovea are really quite rare.
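    Rough numbers for that comparison (my own estimate: geometric blur taken as pupil diameter times the vergence difference, and an assumed DK1-class angular resolution of roughly 90 degrees across 640 pixels per eye, neither of which is an official spec):

        import math

        # Geometric defocus blur between two depths vs. HMD angular resolution.
        # Both the 3 mm pupil and the per-pixel figure are assumptions.
        PUPIL_M = 0.003
        HMD_ARCMIN_PER_PX = 90.0 * 60.0 / 640.0      # ~8.4 arcmin per pixel

        def blur_arcmin(focus_dist_m, other_dist_m):
            blur_rad = PUPIL_M * abs(1.0 / focus_dist_m - 1.0 / other_dist_m)
            return math.degrees(blur_rad) * 60.0

        print(blur_arcmin(0.6, 1e9))   # hand at arm's length vs horizon: ~17 arcmin (about 2 px)
        print(blur_arcmin(2.0, 1e9))   # 2 m vs horizon: ~5 arcmin, under one pixel
        print(HMD_ARCMIN_PER_PX)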
  • tonyLongson Posts: 6
    NerveGear
    It seems to me that there is some confusion about what is being discussed.

    A shallow depth of field (noticeable background blur) is a camera lens effect which occurs when the lens has a large aperture (usually because light levels are low).

    The apparent doubling of images at distances different from the focus of attention is a result of convergence. Anything in the field of view that we are not converging on will to some extent appear as a double image. This is very clearly a component of natural vision, and it is not replicated in any kind of stereo images; being flat, they have just a single focal plane.

    Neither phenomenon could be simulated with eye tracking that measures eye convergence: as the eyes are focused on a flat plane (the VR screen), the eye convergence is always the same.

    However, tracking to see what object the viewer is looking at, figuring out how far away it is, and changing the display of everything else (depth of field, and faking convergence artefacts) could work.

    Is anyone working on this?
  • tonyLongson Posts: 6
    NerveGear
    tonyLongson wrote:
    This is very clearly a component of natural vision, and it is not replicated in any kind of stereo images; being flat, they have just a single focal plane.

    Neither phenomenon could be simulated with eye tracking that measures eye convergence: as the eyes are focused on a flat plane (the VR screen), the eye convergence is always the same.

    Sorry... obviously wrong about this. I was forgetting the difference between the two images.