Hold a finger close to your face and look at it. We all know that you'll have double vision for distant objects behind your finger, but you should also notice that the background is blurred.
This effect does not occur in the Rift because everything is rendered in focus. Sure, you could add a depth-of-field effect to your render, but what happens when you actually look past your finger? The images would converge, but the background would still be blurry, since the computer doesn't know what you're trying to focus on.
I think depth of field is an important visual cue for a sense of space, and the Rift is lacking it.
But how do we achieve it? I'm no expert in optics, but I believe the perception of blurriness arises because light reflected from objects at different distances enters your eyes at different angles. That got me thinking about whether it would be possible to manipulate the angles of light emitted from individual pixels.
The first thought that comes to mind is to use prisms to do this.
Thoughts? Am I just totally ignorant here? Technically impossible?
Comments
Electrically variable-focus lenses with high-speed eye tracking are one option. Holographic displays are another option. Other non-holographic light-field methods have also been discussed.
With eye-tracking, you could partially simulate depth of field, but everywhere you look would still be at infinity focus.
I guess eye-tracking might be a good stop-gap solution, but to me it doesn't seem like the right way to do it. You can look at something directly in front of you, then look past it into the background without really moving your eyes much. Eye tracking wouldn't catch this. I think you need the different angles of light so that when the lens in your eye changes shape, different things come into focus on the sensory nerves in the back of your eye.
What about having tiny compartments of fluid in front of each pixel, with a curved clear surface on one end, then changing the amount of fluid in each compartment depending on the depth of that pixel?
It will be a while before we have true light-field display technology, though.
Lytros are amazing; I have played with one at a local tech space. I look forward to seeing how light-field tech progresses. Currently the processing required to manipulate the data set is a bit of a deal breaker, but as GPU density increases over the next 12-18 months this may become less of an issue.
viewtopic.php?f=20&t=2620&p=36049#p35577
viewtopic.php?f=20&t=2620&p=36049#p35589
It seems that light-field photos and light-field displays are just a grid of tiny lenses over a grid of tiny pictures, just like a fly's eye. Not very complex at all...
It seems that a lens barrel extension can achieve a 500x increase in plenoptic resolution, according to that video.
That makes me curious if such an adjustment can give a big perceived resolution boost for a plenoptic HMD too. Although having the lenses near the eyes is probably more important, if a choice needs to be made.
Here is a link to the document at the end of the above video:
http://www.tgeorgiev.net/FullResolution.pdf
The next best thing we could do is a lens with variable focal length coupled with eye tracking, but this wouldn't really solve the problem. It might prevent eye fatigue, because the focal length of the lens can be changed depending on what you are looking at, so your eye can focus naturally. However, everything on the display would be in focus once you have focused on the object. You could potentially simulate the out-of-focus blur in software, but I imagine that this solution wouldn't work very well: there would likely be an uncomfortable delay between looking at a new object and focusing on it, while the render and focal length adjust and your eye isn't quite sure where to focus.
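For what it's worth, that software blur is usually driven by the thin-lens circle-of-confusion formula. A minimal sketch, with the eye's focal length and pupil diameter below being assumed example values rather than measured numbers:
[code]
# Rough sketch of the software depth-of-field blur mentioned above, using the
# thin-lens circle-of-confusion formula. The eye focal length and pupil size
# are assumed example values.

def coc_diameter_mm(d_focus_mm, d_obj_mm, focal_len_mm=17.0, pupil_mm=4.0):
    """Blur-circle diameter on the retina for an object at d_obj_mm while the
    eye is accommodated to d_focus_mm (thin-lens approximation)."""
    return (pupil_mm * focal_len_mm * abs(d_obj_mm - d_focus_mm)
            / (d_obj_mm * (d_focus_mm - focal_len_mm)))

# Example: focused on a finger at 30 cm, background at 3 m.
print(round(coc_diameter_mm(300.0, 3000.0), 3), "mm blur circle on the retina")
[/code]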
So I'm fairly certain that lenses won't be the solution here - perhaps, as others have mentioned, light field displays will save the day.
It would! Or at least in theory. Even though it would physically be on the same z-plane as everything else on screen, the point of convergence would be different (you cross your eyes more/less).
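For anyone who wants numbers: the vergence angle toward a point straight ahead is roughly 2*atan(IPD/2 / distance). A quick sketch, with the 64 mm IPD being just an assumed average:
[code]
import math

# Vergence angle toward a point straight ahead at a given distance.
# The 64 mm interpupillary distance is an assumed average, not a measured value.
def vergence_deg(distance_m, ipd_m=0.064):
    return math.degrees(2.0 * math.atan((ipd_m / 2.0) / distance_m))

for d in (0.3, 1.0, 3.0, 10.0):
    print(f"{d:5.1f} m -> {vergence_deg(d):.2f} degrees of convergence")
[/code]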
I have family who develop eye-tracking technology, and this might not be an issue anymore. I'm trying to get hold of someone at Oculus VR to talk about it.
viewtopic.php?f=33&t=3015
Ok, back from the excitement...
I remember when I was first introduced to the Lytro and did some research into the technique. My gut told me that the results were reversible: that one could emit instead of just receive. It's good to know that someone with the tools to accomplish the task was able to start the ball rolling! (As has been stated before, ideas abound - practical application is where reality exists.)
Doing some quick trig sketches (helped along by my favorite solver: SolidWorks - no kill like overkill!) I found some interesting numbers. I endeavored to determine what the rough pixel density would have to be to achieve various results on such a near-eye setup. Here are my results:
Virtual image distance: 8 inches (203mm) from eye.
Wanted virtual pixel density: 300 PPI.
Eye to HMD display distance: 0.5 inch (13mm)
Resulting PPI on HMD: 4808 PPI.
Virtual image distance: 8 inches (203mm) from eye.
Wanted virtual pixel density: 300 PPI.
Eye to HMD display distance: 1.0 inch (25mm)
Resulting PPI on HMD: 2398 PPI.
Virtual image distance: 8 inches (203mm) from eye.
Wanted virtual pixel density: 100 PPI.
Eye to HMD display distance: 0.5 inch (13mm)
Resulting PPI on HMD: 1600 PPI.
Virtual image distance: 8 inches (203mm) from eye.
Wanted virtual pixel density: 100 PPI.
Eye to HMD display distance: 1.0 inch (25mm)
Resulting PPI on HMD: 800 PPI.
Note that what I'm terming "virtual image" is what you would get if you were to project the pixels being displayed on the HMD out to a matching surface at a given distance from the eye. The goal is that if I were to display a virtual computer screen in my "game world" and look at it like I would in real life, the virtual computer screen should have the same "pixel" (texel?) density as the equivalent screen in real life.
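For anyone who wants to play with the numbers: they follow (to within rounding) from simple similar triangles, i.e. required HMD PPI = wanted virtual PPI x (virtual image distance / eye-to-display distance). A quick sketch using the distances listed above:
[code]
# Similar-triangles estimate: a pixel on the HMD display, projected out to the
# virtual image surface, grows by (virtual distance / display distance), so the
# display needs that many times the wanted virtual pixel density.
def required_hmd_ppi(virtual_ppi, virtual_dist_in, display_dist_in):
    return virtual_ppi * virtual_dist_in / display_dist_in

for v_ppi, d_disp in [(300, 0.5), (300, 1.0), (100, 0.5), (100, 1.0)]:
    ppi = required_hmd_ppi(v_ppi, 8.0, d_disp)
    print(f'{v_ppi} PPI virtual image @ {d_disp}" eye relief -> {ppi:.0f} PPI on the HMD')
[/code]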
Doing some quick research on the smallest LED displays, especially OLEDs, I found that the feature sizes being discussed were all in the 50 to 600 um range. A wide range, but now I wanted to know the estimated maximum feature size for a grouping of 3 round RGB dots at the PPIs listed above. This estimate is generous because it doesn't account for gaps or the extra components needed to make a functional display (a quick check of the packing model follows the list):
4808 PPI: 2.45um
2398 PPI: 4.92um
1600 PPI: 7.37um
800 PPI: 14.74um
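These assume three equal round dots packed inside a circle one pixel pitch across - dot diameter = (2*sqrt(3) - 3) x pitch, about 0.464 x pitch - at least, that packing reproduces the list above:
[code]
import math

# Assumed model: three equal round RGB dots packed inside a circle whose
# diameter equals the pixel pitch, so each dot is (2*sqrt(3) - 3) ~ 0.464
# times the pitch.
PACKING = 2.0 * math.sqrt(3.0) - 3.0

def dot_diameter_um(ppi):
    pitch_um = 25400.0 / ppi          # 1 inch = 25,400 um
    return pitch_um * PACKING

for ppi in (4808, 2398, 1600, 800):
    print(f"{ppi} PPI -> {dot_diameter_um(ppi):.2f} um per dot")
[/code]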
Even if that feature size were attainable, the really awesome methods linked above would result in even lower virtual (i.e. perceived) pixel density - or would require even finer feature sizes!
I wonder if raster isn't the best technique for displaying information in an HMD... Got to explore that thought.
It is because every point on your retina corresponds to a point in the other eye. If those points don't get activated at the same time, your brain can't fuse the images, so you see double beyond the focus point. I could explain this for hours, but my English is not the best and it would take some time to write here. This should not be a problem in the Oculus, since the screen should be at the focus point; if you had a screen behind the screen, I could see this being a problem.
However, it is generally safe, and even advisable, to forcibly blur objects that are closer to the viewer than 3" - the human eye is physically incapable of focusing at such distances. Even if you personally can focus as close as 2", it will start to cause eye strain even at 5", and below 3" it becomes really bothersome, so presumably you won't be doing this anyway.
The only real option here is light-field projection, somehow. Microlenses are obviously not the way - an all-around resource hog, and all for a slight divergence of the incoming light; there should be a better approach.
Since our brain 'scans' the image with the help of the fovea, parts of the image will be out of focus in a simulated DoF effect, and it gets tiresome for the eye to try to focus on something it can't (because the projection is blurry).
So unless we really have eye tracking, DoF is kind of useless apart from the more obvious examples.
Providing the eye with something other than an infinite focus distance could be fairly easy with new hardware.
Just put piezo elements between the lenses and whatever they are mounted on. Since the focal length of the lenses is fairly short, modulating the voltage the piezos get might be sufficient. I haven't done the calculation to see whether that gives a large enough range of variable focus (a rough version of that calculation is sketched below).
There are some products using piezoelectric actuators for camera autofocus, so I think it might be feasible.
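A rough version of that calculation, using the thin-lens relation for a display sitting just inside the focal length of the HMD lens. The 40 mm focal length and the spacings are assumed example numbers, not actual Rift optics:
[code]
# Thin-lens sketch: with the display a distance d just inside the focal length
# f, the virtual image sits at d*f/(f - d) from the lens, so sub-millimetre
# actuator travel sweeps the focus over a wide range. All numbers are assumed.
def virtual_image_mm(display_dist_mm, focal_len_mm=40.0):
    if display_dist_mm >= focal_len_mm:
        return float("inf")           # at the focal length: image at infinity
    return display_dist_mm * focal_len_mm / (focal_len_mm - display_dist_mm)

for d in (40.0, 39.5, 39.0, 38.0, 36.0):
    v = virtual_image_mm(d)
    label = "infinity" if v == float("inf") else f"{v / 1000:.2f} m"
    print(f"display at {d:4.1f} mm -> virtual image at {label}")
[/code]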
What's more, our eyes are pretty much always focused on the area the fovea is pointed at, so - given eye tracking good enough to keep up with the fastest, most minuscule eye movements - we might not need to account for focus at all; we might not be able to tell the difference.
https://share.oculus.com/app/live-action-360-3d-tech-demo
That may seem so if there isn't any eye tracking in place.
In reality, depth of field is an important depth cue for your vision. Your brain knows that blurry objects have to be at a different distance from the object you are focused on. Our interpupillary distance is far too small for us to rely on stereoscopic vision alone, so we have evolved the ability to judge distance from more physical phenomena than is initially obvious.
So I'd say it depends on the application. In some cases it might be preferable to always have everything in focus, but if you are looking to provide a plausible virtual environment, or one where the user needs to judge distances, DoF is a key component.
Of course, wrongly implemented or exaggerated DoF can be devastating to the experience; to work well it must correspond to the physics, so it's not always feasible to implement it in a way that is actually beneficial.
http://www.gizmag.com/stanford-research ... eld/38825/
If you don't have a true light-field display, it's not worth it. Fake blurring of the screen, when everything is really at the same focal distance, is only going to exacerbate the vergence-accommodation conflict. Tracking the depth your eyes are converging at will require much more accurate eye tracking than is needed for foveated rendering.
In photographs you have learned to judge foreground and background from the focus, but in real life your eye refocuses very quickly onto whatever you are looking at; the exercising eye muscles provide some depth hints, and fake blurring will only confuse them. The human eye also has a pretty deep depth of field: between things beyond arm's length, the amount of blurring is less than the angular resolution of current HMDs. During the day, hold up your hand at arm's length against the horizon - the amount of blurring between the two is quite minor. And since you can only see clearly within the roughly 4 degrees of your fovea, situations where things at such different distances both fall inside your fovea are really quite rare.
Depth of field is a camera lens effect which occurs when the lens has a large aperture (usually because light levels are low).
The apparent doubling of images at distances different from the focus of attention is a result of convergence. Anything in the field of view that we are not converging on will to some extent appear as a double image. This is very clearly a component of natural vision, and it is not replicated in any kind of stereo images; being flat, they have just a single focal plane.
Neither phenomenon could be simulated with eye tracking that measures eye convergence: as the eyes are focused on a flat plane - the VR screen - the eye convergence is always the same.
However, tracking to see what object the viewer is looking at, figuring out how far away it is, and changing the display of everything else (depth of field, and faking convergence artefacts) could work.
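A rough sketch of what the gaze-to-focus part of that could look like; the depth-buffer lookup, smoothing constant, and class name are made up for illustration, not any real engine API:
[code]
# Hypothetical gaze-driven focus loop: read the tracked gaze point, look up the
# scene depth under it, smooth it, and hand the result to a depth-of-field pass.
class GazeFocusController:
    def __init__(self, smoothing=0.15):
        self.smoothing = smoothing    # exponential smoothing factor (assumed)
        self.focus_m = 2.0            # start focused a couple of metres out

    def update(self, gaze_px, depth_buffer):
        x, y = gaze_px
        target_m = depth_buffer[y][x]             # depth under the gaze point
        # Smooth so the focus doesn't snap on every tiny eye movement.
        self.focus_m += self.smoothing * (target_m - self.focus_m)
        return self.focus_m

# Toy usage: a 4x4 depth buffer where the gaze lands on a far (8 m) pixel.
depth = [[0.5, 0.5, 8.0, 8.0] for _ in range(4)]
ctrl = GazeFocusController()
for _ in range(30):
    focus = ctrl.update((2, 1), depth)            # would drive the DoF blur
print(f"settled focus distance: {focus:.2f} m")
[/code]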
Is anyone working on this?
Sorry... obviously wrong about this. I was forgetting the difference between the two images...