Hold a finger close to your face and look at it. We all know that you'll have double vision for distant objects behind your finger, but you should also notice that the background is blurred.
This effect doesn't occur in the Rift because everything is rendered in focus. Sure, you could add a depth-of-field effect to your render, but what happens when you actually look past your finger at the background? Your eyes would converge on it, but the image would still be blurry, because the computer has no way of knowing what you're trying to focus on.
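For what it's worth, the amount of blur a renderer would need to apply can be estimated with the standard thin-lens circle-of-confusion formula. Here's a rough sketch; the eye-like numbers (17 mm focal length, 4 mm pupil) are just my guesses, not anything from a real HMD pipeline:

```python
def circle_of_confusion(object_dist, focus_dist, focal_len, aperture):
    """Diameter of the blur spot (same units as the inputs) that a point
    at object_dist produces on the sensor/retina of a thin lens focused
    at focus_dist. Zero when the object sits exactly at the focus distance."""
    return (aperture * focal_len * abs(object_dist - focus_dist)
            / (object_dist * (focus_dist - focal_len)))

# Guessed eye-ish parameters (metres): 17 mm focal length, 4 mm pupil.
# Focused on a finger at 20 cm, how blurred is a wall 2 m away?
blur = circle_of_confusion(2.0, 0.2, 0.017, 0.004)
```

With those guessed numbers the background works out to a blur spot of roughly a third of a millimetre on the retina, while an object at the focus distance gets exactly zero, which matches the finger experiment. The catch, as above, is that the renderer would still need to know `focus_dist`, i.e. what you're looking at.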
I think depth of field is an important visual cue for a sense of space, and the Rift is lacking it.
But how do we achieve it? I'm no expert in optics, but I believe the perception of blurriness comes down to angles: light reflected from objects at different distances enters your eye at different angles, so only one distance can be in sharp focus on the retina at a time. That got me thinking about whether or not it would be possible to manipulate the angles of the light emitted from individual pixels.
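To put a number on that "different angles" idea: rays from a single point that still make it through your pupil fan out over an angle that shrinks with distance. A quick sketch (the 4 mm pupil is again just an assumed figure):

```python
import math

def ray_divergence_deg(distance_m, pupil_m=0.004):
    """Full angular spread, in degrees, of the bundle of rays from a
    point source at distance_m that enters a pupil of diameter pupil_m."""
    return math.degrees(2 * math.atan(pupil_m / (2 * distance_m)))

near = ray_divergence_deg(0.2)  # finger at 20 cm
far = ray_divergence_deg(2.0)   # background at 2 m
```

The near bundle spreads about ten times wider than the far one, which is exactly the cue a flat, everything-in-focus display throws away: every pixel emits essentially the same ray geometry regardless of the depth it's supposed to represent.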
The first thought that comes to mind is to use prisms to do this.
Thoughts? Am I just totally ignorant here? Technically impossible?