Foveated rendering promises higher performance and/or higher perceived visual quality by reducing the rendering resolution in the peripheral visual field, outside the fovea. At first glance this seems like a great idea, since the fovea is where our eyes perceive fine detail. Michael Abrash said during OC5 that the technique requires rendering only about 1/20th as many pixels as full resolution.
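To get a feel for where a number like that could come from, here is a minimal back-of-the-envelope sketch (my own illustrative model, not Abrash's actual math): a circular full-resolution foveal region, with everything outside it shaded at a reduced linear resolution. All the parameter values are hypothetical.

```python
import math

def pixel_fraction(fov_deg, fovea_deg, periphery_scale):
    """Fraction of full-resolution pixels actually shaded.

    fov_deg         -- total (square) field of view in degrees
    fovea_deg       -- diameter of the full-resolution foveal circle
    periphery_scale -- linear resolution scale outside the fovea
                       (0.25 means quarter resolution per axis,
                        i.e. 1/16th of the pixels)
    """
    total = fov_deg ** 2                    # square FOV area in deg^2
    fovea = math.pi * (fovea_deg / 2) ** 2  # circular foveal region
    periphery = total - fovea
    shaded = fovea + periphery * periphery_scale ** 2
    return shaded / total

# Illustrative numbers: 110 deg FOV, 10 deg foveal circle,
# quarter linear resolution in the periphery.
frac = pixel_fraction(110.0, 10.0, 0.25)
print(f"shaded fraction: {frac:.3f}")  # roughly a 15x pixel reduction
```

With a slightly smaller fovea or a coarser periphery, the same model gets into 1/20th territory, so the claimed savings are plausible; the question below is about what that coarse periphery does to perception.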
The problem is that we have specialized cells (e.g., starburst amacrine cells) outside the fovea that detect motion, and, as far as I understand it (I might be completely wrong), they require some sharpness to work. If that is the case, simply producing a "blurred" image around the fovea might greatly reduce motion perception in the peripheral visual field.
Maybe using deep learning to fill in the missing pixels (as Abrash suggested) could avoid this, as long as it doesn't generate flickering artifacts; otherwise it might induce the sensation of motion where there is none to be sensed.
Does anyone know if this is in fact a known problem for foveated rendering?