No, this is absolutely raytracing, just a very simple form of it. I used it in "dOculus" long before Oculus VR did. All you need is to trace a mathematically barrel-distorted ray directly to a quad, which gives you calculated UV coordinates. Then you can do texture picking directly at those UV coordinates.
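A minimal sketch of that barrel-distorted ray-to-quad lookup (all names, and the polynomial k1/k2 distortion model, are my assumptions, not from the original comment):

```c
#include <assert.h>
#include <math.h>

typedef struct { float x, y, z; } Vec3;

static float dot(Vec3 a, Vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

/* Apply a simple polynomial barrel distortion to normalized screen
 * coordinates (assumed model: scale = 1 + k1*r^2 + k2*r^4). */
static void barrel(float sx, float sy, float k1, float k2,
                   float *dx, float *dy)
{
    float r2 = sx*sx + sy*sy;
    float scale = 1.0f + k1*r2 + k2*r2*r2;
    *dx = sx * scale;
    *dy = sy * scale;
}

/* Intersect a ray from the origin with the plane of a quad given by a
 * corner p0, perpendicular edge vectors e1/e2 and normal n; returns 1
 * on a hit inside the quad and writes UV in [0,1]. */
static int ray_quad_uv(Vec3 dir, Vec3 p0, Vec3 e1, Vec3 e2, Vec3 n,
                       float *u, float *v)
{
    float denom = dot(n, dir);
    if (fabsf(denom) < 1e-6f) return 0;   /* ray parallel to quad */
    float t = dot(n, p0) / denom;         /* the dot product and divide */
    if (t <= 0.0f) return 0;              /* quad is behind the eye */
    Vec3 hit = { dir.x*t - p0.x, dir.y*t - p0.y, dir.z*t - p0.z };
    *u = dot(hit, e1) / dot(e1, e1);      /* project onto the edges */
    *v = dot(hit, e2) / dot(e2, e2);
    return (*u >= 0.0f && *u <= 1.0f && *v >= 0.0f && *v <= 1.0f);
}
```

Feed the output of `barrel()` into the x/y of the ray direction and the resulting UV is already the distorted texture lookup, with no intermediate buffer in between.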
https://www.youtube.com/watch?v=U1Xp_t9xKko With traditional scanline compositing, the quad would be drawn into the target buffer (and probably get a bit blurry through linear interpolation and possibly mipmapping or the like; also, if the buffer is not huge, the borders are resolution-bound), and then the compositor would do a barrel-distorted lookup from that buffer, introducing a second interpolation and making it even blurrier.
Since raytracing needs a chromatic-aberration lookup for each color channel anyway, the lookup can even be calculated with subpixel precision. And since the raytracing calculation for a quad (or a plane) boils down to not much more than a dot product and a divide per component, it is much simpler than the multiple drawing and picking passes needed with a scanlined intermediate buffer. That makes it more efficient, which means saving energy! A crisper, better picture is just a positive side effect.
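The per-channel lookup can be sketched like this: each channel gets its own radial distortion scale, so red, green and blue sample the quad's texture at slightly different UVs (the coefficients and the simple `1 + k*r^2` per-channel model are assumptions for illustration):

```c
#include <assert.h>

/* Hypothetical per-channel chromatic-aberration lookup: for normalized
 * screen coordinates (sx, sy), compute one distorted UV per color
 * channel using per-channel radial coefficients k[0..2] = R, G, B. */
static void chroma_uv(float sx, float sy,
                      const float k[3],
                      float u[3], float v[3])
{
    float r2 = sx*sx + sy*sy;
    for (int c = 0; c < 3; ++c) {
        float s = 1.0f + k[c]*r2;          /* channel's radial scale */
        /* map distorted coordinates into [0,1] texture space */
        u[c] = 0.5f + 0.5f * sx * s;
        v[c] = 0.5f + 0.5f * sy * s;
    }
}
```

Because the three UVs are computed analytically per pixel, nothing limits them to texel centers, which is where the subpixel precision comes from.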
🙂 PS: Thank you, John Carmack, for presenting this as your idea... fast sqrt, anyone?