360x180 CG Renders

ghostlyfu
Explorer
I'm facing a problem when rendering a pair of equirectangular images from inside a 3D modelling package (Maya in my case, but no doubt the same issues apply to many other packages). The attached (side-by-side format) image is an example. I'm hoping the good folk on this forum can help.

I render out my panorama (I use the Lat-Lon Mental Ray lens plug-in) and it looks as you'd expect. Looking at it through the excellent VRPlayer, it works well. I then shift my camera to another position approx 6.5 'virtual' cm to the side of the first position and re-render. Again this panorama works fine on its own.

I was hoping that I could just place these two renders side by side (as they are in the attached example) to make a 3D version of the scene. Not so. What's happening is that the 'forward' view works fine, but the depth gets reversed towards the peripheries of the image (which just looks bad) until the 'back' view is completely inverted (and so looks pretty much OK again).

Logically I can see that the reason is that the 'right' camera becomes the 'left' camera as it renders the part of the view directly behind itself. However, I haven't got a clue how to fix it and output an equirectangular panorama that works in 3D. Help!
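(For readers hitting the same wall: the fix the rest of this thread converges on is to rotate the stereo baseline with the view direction, rendering the panorama in narrow vertical slices. Here's a minimal Python sketch of the idea - the function name and the 6.5 cm IPD default are just illustrative assumptions, not anything from a particular renderer:)

```python
import math

def eye_positions(yaw_deg, ipd_cm=6.5):
    """Eye positions for the slice looking along `yaw_deg`.
    The left/right offset is taken perpendicular to the current view
    direction, so the stereo baseline rotates with the camera instead
    of staying fixed (a fixed baseline is what inverts the depth
    behind you)."""
    yaw = math.radians(yaw_deg)
    half = ipd_cm / 2.0
    # 'right' vector, perpendicular to the forward direction (sin, cos)
    rx, rz = math.cos(yaw), -math.sin(yaw)
    left = (-half * rx, -half * rz)
    right = (half * rx, half * rz)
    return left, right
```

At yaw 0° the right eye sits at +3.25 cm; at yaw 180° it sits at -3.25 cm, i.e. the baseline has flipped along with the view, which is exactly the swap the fixed-camera setup fails to make.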

cheerioboy
Explorer
mediavr! You're full of useful info! It just clicked for me, reading how you set up V-Ray with a spherical camera but keep the FOV at 1. This whole time I've been setting my camera to spherical with a 360 FOV and then cropping a strip of the view to render, which always meant I had to figure out the pixel overlap when using that stereo maker.

can't wait to try this method out :lol:

Although this is still a very cumbersome approach, I have renewed interest in using it since nothing else is out there. I have been using single spherical renderings along with a depth map in VR Player, which gives you a quasi-acceptable view - but it's still not ideal, and even more difficult when working with GI. If not using a brute-force approach you'd need to pre-calculate the movement of the cameras to avoid any issues, although I haven't extensively tested this. It also makes rendering 3D animation almost impossible... although now, with your 1-FOV trick, I might be able to find another way to stitch the batch of slices.

I'll keep prodding vray to update their stereo helper to work with spherical cameras 🙂

cheers

cheerioboy
Explorer
So with this 1 degree of FOV, what's a suggested resolution? I just tried 1 x 1080, which gave me a squeezed image (duh) - going to try 5 x 1080 to get something closer to 1920x1080.

Is there no other software that will stitch/line up a sequence of frames into a full frame? I'm thinking AE, Nuke, or some sort of compositing program - there ought to be a way to automate it to make this feasible for animation, at least for now.

cheerioboy
Explorer
2 tests, both rendered with a spherical camera & 1 degree of FOV:

the first was 1 pixel wide, 1080 high - the second was 5 pixels wide, 1080 high.
next round is changing the FOV to 5 for the 5-pixel-wide strips, to see if that works...

update: still struggling with how to upscale the 360x180 resolution

My thinking was that since the camera was rotating through 360 frames, 1 pixel wide each time, if I increased the number of frames around the circle I could increase the width. So currently I'm rendering 1920 frames in a circle, with 1080 height... but that just looks like 360x180 resized, so the content looks blurry.
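For what it's worth, the frame count, strip width, and per-step FOV are tied together, and a quick Python sketch makes the bookkeeping explicit (the function is purely illustrative). One possible explanation for the blur: if the per-frame FOV stays at 1° while the rotation step shrinks to 360/1920 = 0.1875°, each 1-pixel strip still spans a full degree, so horizontal detail tops out around 360 distinct samples no matter how many frames you render.

```python
def slice_plan(pano_width_px, strip_px):
    """How many renders around the circle, and what per-strip FOV,
    for a given equirectangular panorama width and strip width."""
    assert pano_width_px % strip_px == 0, "strip width must divide panorama width"
    steps = pano_width_px // strip_px            # renders around the full 360
    fov_deg = 360.0 * strip_px / pano_width_px   # horizontal FOV of each strip
    return steps, fov_deg

# mediavr's 1800x900 recipe: 5px strips -> 360 steps of 1 degree each
# a 1920-wide pano from 1px strips -> 1920 steps of 0.1875 degrees each
```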

Does anyone know the best resolution for a spherical image viewed inside the Oculus? Which I suppose will soon double when the new one arrives..

mediavr
Protege
Try rendering a 5-pixel-wide by 900-pixel-high strip at 1 degree wide (5 pixels = 1800/360); then when you join the strips the end result should be 1800 by 900.
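The joining step itself is plain left-to-right concatenation; here's a toy Python sketch with pixels as nested lists (no image library assumed - in practice you'd do the same thing with a compositing tool or an image library):

```python
def stitch_strips(strips):
    """Join vertical strips left-to-right into one image.
    Each strip is a list of rows; each row is a list of pixels.
    All strips must be the same height."""
    height = len(strips[0])
    assert all(len(strip) == height for strip in strips)
    # output row y = row y of every strip, concatenated in render order
    return [sum((strip[y] for strip in strips), []) for y in range(height)]
```

Note the strips must be concatenated in the same order the camera swept them, or you get exactly the mirrored/sawtooth artifacts described later in the thread.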

Nukemarine
Rising Star
I'm no expert, but your 5x360 render looks like each slice was mirrored prior to stitching. For a curved surface it's creating a sawtooth pattern. This may also explain the blurriness in the 1x360 image. Hopefully there's some setting that can make sure the slices are rendered left to right and the stitching is also left to right. For the 1x360, is there a way to increase the frames with an even smaller FOV?

By the way, I've viewed both images in the Oculus and the 360 feeling is great, but you left a 5-degree black strip that shows up behind me on the 5x360 render, so make sure that's cut off in the rendered frame (although this may be an artifact of the rendering technique). Resolution for a Dev Kit Oculus is around 6 pixels per degree of view, so a photo that's 1080x1920 should look fine for now if people keep a 110 FOV. For a 75 FOV that same photo will look blurrier because you're basically magnifying the image 1.5x or so.
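Taking that ~6 pixels-per-degree figure at face value, the equirect size it implies is easy to work out (a back-of-the-envelope sketch, not an official spec):

```python
def equirect_size(pixels_per_degree):
    """Equirectangular panorama size needed to hit a given
    pixels-per-degree density (360 degrees wide, 180 tall)."""
    return 360 * pixels_per_degree, 180 * pixels_per_degree

# ~6 px/degree (the Dev Kit figure quoted above) -> 2160 x 1080
```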

cheerioboy
Explorer
thanks for the responses guys, I'm going to test them!

Since that post I thought I had nailed the sizing issue by rendering 1920 frames at 1 FOV, 1 pixel wide x 1080 high, but it gave me quite a strange result. See attached - it looks like a resized version of the original small-resolution image 😛. The smaller two I did exactly as mediavr described, with the 360x180 render, and I think they came out right (haven't tested them).

Also the comment Nukemarine made about the direction of stitching got me thinking... and I realized my cameras were rotating counterclockwise! So I wonder what effect that had on the setup...

cheerioboy
Explorer
mediavr, you're too damn good! I followed your recipe and got some beautiful renders. Now I want to build a set of these that slowly gets more pixel-dense until we can find the best resolution for the current Oculus.

1080x1920 is a good place to start. or is it 1920x1080?

Nukemarine
Rising Star
Wow! Now that worked great. I even took your left-eye and right-eye 5x900 renders, combined them in Paint in SBS format, and got decent stereoscopic viewing in VR Player. What's interesting is I didn't realize the front character's hair was sticking out way in front until I saw it in 3D.

From testing, the spherical render's stereoscopic 3D starts to fail when you look up and down near the poles.

Here's the image I merged:

http://i.imgur.com/9Bj8L3a.jpg

Anyway, the 5x900 looks much better than the 1x900 for whatever reason. I'm really impressed with the results.

Now I'm wondering if such a technique can be used to take 360-degree screenshots in game. There's a big difference between seeing just one angle and being able to see every angle of a location. Of course, after that comes the natural question: can we record game video in 360 degrees? My guess is that it can be done at a lower frame rate and resolution, but from my experience that's not a bad thing, as viewing low-res, low-framerate video in VR Player is still an enjoyable experience.

cheerioboy
Explorer
@nukemarine

I'm going to guess it'll be difficult to take screenshots in-game at the moment, if it takes this long just to render a still image in a 3D program like Maya or 3ds Max - unless the game is rendering the full 360 image and not just what you're seeing on the screens in your headset. Otherwise, each time you hit a button to capture the image, a new camera would need to 'scan' the environment and save it out. What would be even cooler is if the screen capture wasn't just an image but captured the 3D environment, with baked textures and all.

But you know how these things work - all in due time, and we might be able to use some cool programming wizardry to automate this process into an image captured in an instant.

j1vvy
Honored Guest
It is a matter of incorporating panoramic screen capture or panoramic video capture into the game engine. Game engines can already provide instant replays or screen captures; it is just a matter of doing it with a different camera setup. The 360° S3D video would be too much to render in real time for most systems, but rendering it right afterwards would be fine.

I think the largest but still easy-to-encode video is over/under H.264 .mp4 at 4K video size. That would be 3072x3072 pixels - the same number of pixels as 4K's 4096x2304 - i.e. two panos of 360°x180° at 3072x1536 pixels each, in spherical format.
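A quick sanity check of that pixel budget (just the arithmetic from the paragraph above):

```python
# over/under 3072x3072 carries the same pixel count as 4K's 4096x2304
assert 3072 * 3072 == 4096 * 2304 == 9_437_184
# and it splits into two 2:1 equirect panos of 3072x1536, one per eye
assert 2 * (3072 * 1536) == 3072 * 3072
```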

Using physical cameras to produce a similar result, I would use as few cameras, or images, as possible to simplify the workflow. But doing it with a game engine I would use as many cameras as possible - change the camera for every column of pixels. Set the camera to one pixel wide with an FoV of 360/width, and rotate the camera by 360/width each time.
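That per-column loop can be sketched in a few lines of Python; here `render_strip` stands in for the engine's actual render call (an assumption for illustration, not a real API):

```python
def render_panorama(width, height, render_strip):
    """One 1px-wide camera per output column, rotated 360/width
    degrees per step. `render_strip(yaw_deg, fov_deg, height)` must
    return a column of `height` pixels rendered at that heading."""
    fov = 360.0 / width
    columns = [render_strip(x * fov, fov, height) for x in range(width)]
    # transpose the rendered columns into rows for the final image
    return [[columns[x][y] for x in range(width)] for y in range(height)]
```

Running it twice, once per eye with the eye offset rotated along with the yaw, gives the stereo pair.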

A prerendered 360° S3D video would not show correct 3D when the viewer rolls his head, and the zenith and nadir will show very little S3D.