One panorama is 18 billion pixels, and all of the other panoramas are over a hundred million pixels each. Many more have been created but not posted, because this is just our beta testing of the viewing process. It should work well with the Oculus, since it already works with 3D HDTVs.
We have a variety of viewing modes, including half-height over/under and half-width side-by-side. We use the mouse to control scrolling and our very deep zoom. I can see how turning the head to change the view would be a natural fit. Maybe one would just use a wireless mouse wheel to control the zoom.
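For anyone unfamiliar with those stereo formats, here is a minimal sketch of the frame packing being described, assuming each eye's frame is a row-major list of pixel rows. The function names are mine for illustration, not part of our viewer:

```python
def half_width_sbs(left, right):
    # Half-width side-by-side: drop every other column of each eye,
    # then place left and right halves in one frame of the original size.
    return [l[::2] + r[::2] for l, r in zip(left, right)]

def half_height_ou(left, right):
    # Half-height over/under: drop every other row of each eye,
    # then stack the left eye above the right eye.
    return left[::2] + right[::2]
```

Either packing keeps the combined frame at the original resolution, which is why a 3D HDTV (or any player that understands the layout) can unpack it on the fly.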
Anyhow, please give it a try if you are able and interested. The notion is that one could take virtual tours of a 3D-360 world. Let us know what you think and guide us if we are almost there.
"2EyeGuy" wrote: The Rift is less than 2560x2560 for a complete sphere, or 6.5 million pixels. For comparison, the VR920 is more like 7680x7680 for a complete sphere, or 60 million pixels.
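2EyeGuy's figures follow from extrapolating each display's pixel pitch out to a full 360° sphere. A quick sketch of that arithmetic; the roughly 640 horizontal pixels per eye over a ~90° field of view for the Rift DK is my assumption about his inputs, not a quoted spec:

```python
def full_sphere_pixels(h_pixels, h_fov_deg):
    # Take the display's pixels-per-degree and extrapolate to 360
    # degrees in both directions, as if the whole sphere were shown
    # at the same angular density.
    across = h_pixels * 360 / h_fov_deg
    return across, across * across

# Assumed inputs: ~640 horizontal pixels per eye over a ~90 degree FOV.
across, total = full_sphere_pixels(640, 90)
# across = 2560.0 and total = 6,553,600 -- the "2560x2560, 6.5 million" figure.
```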
The number of pixels in the sphere depends entirely on the virtual viewing distance to that sphere. I am using a series of many concentric spheres of varying diameter. The distant spheres are viewed with more virtual magnification (just like using binoculars). And just like binoculars, using more than about 7x magnification requires a steady viewpoint (like a virtual tripod) and beyond 20x telephoto view even a stable viewpoint is difficult to pan without overshooting. When viewing my distant spheres with virtual binoculars, I see FAR MORE than a mere 60 million pixels. I would estimate at least a bazillion pixels (and perhaps even a gazillion pixels). :lol:
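The "more pixels through magnification" point can be made concrete: at N× virtual zoom each display pixel subtends 1/N as much angle, so the panorama is sampled as if the display spanned N times as many pixels per 360°. A toy calculation with illustrative numbers, not measurements:

```python
def effective_sphere_pixels(base_pixels_per_360, magnification):
    # At Nx virtual magnification the panorama is sampled at N times
    # the display's native angular density in each direction.
    across = base_pixels_per_360 * magnification
    return across * across

# A display spanning 2560 pixels per 360 degrees, zoomed to 20x,
# samples the panorama at 51,200 pixels across -- about 2.6 billion
# pixels over the whole sphere, well past the display's native count.
```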
My closest sphere appears about one foot from the face. I control transparency of all the spheres, so I can view distant PixelBall bitmaps or nearby PixelBall bitmaps at my whim.
I can map my mind in these concentric PixelBalls viewable ONLY in my RiftDK, much better than my previous method of using a "memory wall" covered in post-it notes and reference material snippets. I call this virtual environment my "Mentarium" (Mental Planetarium). And I am using 3D-Panoramas to test it out.
I successfully built the Chromium Embedded Framework (CEF) embeddable web browser, which I plan to incorporate into my PixelBall Mentarium so that I can view 3D-360.com stereoscopic panoramas in my RiftDK.
Resolution upgrades are sure to come in the form of LCoS microdisplays. Our current viewing system should work with 4K displays now, as it is Flash-based. I guess we would be grossly oversampling for the Oculus Rift if we did not have zoom. I can understand why you need some sort of damping-like stabilization at higher zoom; it is the same with my cameras. I could see how a click on a subject could function as a "target designator" for image stabilization. Your "virtual tripod" is a good analogy, and I would add that it could also perform in ways that are better than a real fluid head.
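A "virtual tripod" could be as simple as exponential smoothing whose gain shrinks with zoom. A sketch under that assumption; the function and its gain constant are hypothetical, not taken from any shipping viewer:

```python
def damped_pan(current_deg, target_deg, zoom, base_gain=0.5):
    # Move a fixed fraction of the remaining angular distance each
    # frame; higher zoom -> smaller fraction -> a stiffer "fluid head".
    gain = base_gain / zoom
    return current_deg + gain * (target_deg - current_deg)
```

At 1× zoom this closes half the remaining gap each frame; at 20× it closes only 2.5%, which is what tames overshoot at telephoto magnifications.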
Relative motion of an image that differs from one's real motion can be disorienting if it does not have consistent game physics. I bet lag would also be a consideration, along with overshoot. I suspect it might be possible to use the rate of change to predict anticipated positions and display them in time for the head to arrive at the correct location.
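That rate-of-change idea is essentially constant-velocity extrapolation: estimate angular velocity from the last two head samples and project it forward by the render latency. A minimal sketch, with illustrative names and numbers:

```python
def predict_yaw(yaw_prev_deg, yaw_now_deg, dt_s, latency_s):
    # Estimate angular rate from the last two head samples, then
    # extrapolate forward by the display latency so the rendered view
    # matches where the head will be, not where it was.
    rate = (yaw_now_deg - yaw_prev_deg) / dt_s
    return yaw_now_deg + rate * latency_s

# For a head turning ~100 deg/s, sampled every 10 ms with 20 ms of
# latency, the predictor renders ~2 degrees ahead of the last sample.
```

Real trackers smooth the rate estimate over more than two samples; raw two-sample differencing amplifies sensor noise, which is one way overshoot creeps back in.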
If I were creating images only for the Oculus, I would use a simpler camera system and create more panoramas by moving from location to location. Alternatively, I can use stereo videos to connect locations, or even make panoramic stereo movies. Another method that I currently use is to integrate my panoramas with multiple walk-around viewpoints to create object models.
I am seeking a viewing system that would allow us to view 3D models in stereoscopic 3D on all platforms, especially 3D TVs. We learned years ago that posting in a format that converts to the desired viewing format on the fly is necessary. Otherwise, one stops taking photos and spends all one's time converting, posting, and linking.