Tuesday 19 March 2013

Using Blender with Google's "Photo Sphere" to Easily Create and Share 3-D Renders

Google's "Photo Sphere" for Android's camera is a neat feature that brings the ability to create 3-D photos to everyday mobile devices.  Why not use all the web applications available to easily create and share 3-D renders made in Blender?  Try it out: Cityscape and Basic Scene.

Here is a nice overview of Photo Sphere if you haven't seen it yet.

A rendered Photo Sphere of a Cityscape.  I found this scene on Blendswap by Dimmyxv.
Photo Spheres let the photographer take a 360-degree photo like a panorama, except that it also pans up and down, creating a full sphere around the user.  The Photo Sphere application saves this series of photos as a single equirectangular projection that captures the entire sphere.  The technology has been around for a long time, but until recently I'd never seen such an easy way to create and share this type of medium with others.

Today, most 3-D scenes are rendered to flat 2-D images; not every scene should be a photo sphere, but many could benefit from full user immersion.  The process on an Android phone (e.g. the Nexus 4) is extremely easy, but in Blender it isn't so straightforward.  This post walks you through that process (a small sketch of the projection itself follows the list):
  1. Exporting photo spheres to an image.
  2. Uploading a rendered photo sphere to Google+.
  3. Downloading photo spheres from Google+.
  4. Importing photo spheres in Blender.
  5. Future thoughts.
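
To make the projection concrete, here is a small sketch in plain Python (the helper name is just for illustration) of how an equirectangular image maps view directions to pixels: longitude and latitude scale linearly onto the image axes, which is why a full sphere covers 360x180 degrees and fits naturally into a 2:1 image.

    def equirect_pixel(lon_deg, lat_deg, width, height):
        """Map a view direction (longitude/latitude in degrees) to pixel
        coordinates in an equirectangular image. The mapping is linear on
        both axes: 360 degrees of longitude across the width, 180 degrees
        of latitude down the height."""
        x = (lon_deg + 180.0) / 360.0 * width    # -180..180 -> 0..width
        y = (90.0 - lat_deg) / 180.0 * height    # 90..-90  -> 0..height
        return x, y

    # Example: a point straight ahead on the horizon lands dead center.
    print(equirect_pixel(0, 0, 4096, 2048))  # (2048.0, 1024.0)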

Exporting Photo Spheres

I only know of two ways of doing this: baking a texture using reflection, or using the equirectangular camera.  The first can only be done with Blender's internal render engine, and the second only with Cycles.

Scene Setup

To change the perspective: with the camera object selected, turn on "Properties Panel -> Object Data -> Lens -> Panoramic".  The default type is Fisheye; change it to Equirectangular.
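
If you prefer to script this, here is a minimal bpy sketch that does the same thing.  It assumes the default camera data block named "Camera"; note that in older Blender releases the panorama type lives under the camera's Cycles settings instead.

    import bpy

    # Assumes the default camera data block, named "Camera".
    cam = bpy.data.cameras["Camera"]
    cam.type = 'PANO'  # switch the lens from Perspective to Panoramic

    # The panorama type is a Cycles feature; on older releases it is
    # exposed as cam.cycles.panorama_type rather than cam.panorama_type.
    try:
        cam.panorama_type = 'EQUIRECTANGULAR'
    except AttributeError:
        cam.cycles.panorama_type = 'EQUIRECTANGULAR'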


Move the camera to a good location like the center (tip: use Alt+G and Alt+R to quickly clear the location and rotation).  The camera needs to be at eye level; I just chose an arbitrary height of 2 meters (1 Blender unit is 1 meter).  To work in feet or meters explicitly, go to "Properties -> Scene -> Units".  After adding a subject around your camera, you should have a scene that looks something like this:
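
If you want to script the placement too, a small sketch, again assuming the default object name "Camera":

    import bpy

    cam_obj = bpy.data.objects["Camera"]

    # Center the camera and raise it to eye level (with default units,
    # 1 Blender unit = 1 meter).
    cam_obj.location = (0.0, 0.0, 2.0)
    cam_obj.rotation_euler = (1.5708, 0.0, 0.0)  # pi/2 on X: aim at the horizon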

Render the Scene

Switching to Camera View (Numpad 0) and to Rendered viewport shading will now show what the final flat photo sphere image is going to look like:
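
The render can also be kicked off from a script.  A minimal sketch - the output path is just an example, and the 2:1 resolution keeps the pixel distribution uniform over the full 360x180 sphere:

    import bpy

    scene = bpy.context.scene
    scene.render.engine = 'CYCLES'              # equirectangular needs Cycles
    scene.render.resolution_x = 4096            # 2:1 ratio for a full sphere
    scene.render.resolution_y = 2048
    scene.render.resolution_percentage = 100
    scene.render.image_settings.file_format = 'JPEG'  # Google+ rejects PNG
    scene.render.filepath = "//photosphere.jpg"       # relative to the .blend

    bpy.ops.render.render(write_still=True)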

Uploading Rendered Photo Spheres

Initially I tried to match the Nexus 4's output by setting the resolution to 2811x1118.  It turns out the size of the viewport (i.e. the size of your browser window) affects distortion a great deal.  I suspect making this more user friendly would require some work on how Google+ transforms the image to 2-D based on page size.  Also, there are occasional artifacts while rotating; these go away if you zoom in or refresh the page.

For Google+ to accept a photo as a photo sphere, it needs specific XMP metadata encoded in the file.  If you don't know how to add this yourself, Google provides a free online converter (typically used for Google Earth/Maps/Street View).  I found that PNG files don't work (the download comes back as a 0-byte file), but JPG files work fine.  The converter asks for a compass heading, horizontal FOV, and vertical FOV; make sure to set the vertical FOV to 180 and the horizontal FOV to 360.
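
If you would rather tag the file yourself instead of using the converter, the fields in question are Google's published GPano XMP properties.  Here is a sketch that writes them with exiftool from Python - it assumes exiftool is installed and a 4096x2048 render; the width/height values must match your actual image:

    import subprocess

    # Write the GPano XMP fields Google+ checks for before treating an
    # upload as a photo sphere.
    subprocess.run([
        "exiftool",
        "-XMP-GPano:ProjectionType=equirectangular",
        "-XMP-GPano:UsePanoramaViewer=true",
        "-XMP-GPano:FullPanoWidthPixels=4096",
        "-XMP-GPano:FullPanoHeightPixels=2048",
        "-XMP-GPano:CroppedAreaImageWidthPixels=4096",
        "-XMP-GPano:CroppedAreaImageHeightPixels=2048",
        "-XMP-GPano:CroppedAreaLeftPixels=0",
        "-XMP-GPano:CroppedAreaTopPixels=0",
        "photosphere.jpg",
    ], check=True)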

OK, now you have your image with the XMP data.  Simply upload it to Google+ and it will automatically detect it as a photo sphere.  Try it out: Cityscape and Basic Scene.

Downloading Photo Spheres From Google+

For Google+, just go to the photo you want to download - like this.  There should be a download link at the bottom left: "Options -> Download Full Size".

Importing Photo Spheres in Blender

For Cycles, go to "Properties -> World" and change the surface to an "Environment Texture" (you might need to enable nodes).  Open the image you want as the background.  You can set the projection to either Equirectangular or Mirror Ball, depending on the type of photo you have.

Background set to "Environment Texture" 
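
Scripted, the same node setup looks like this - a minimal bpy sketch assuming a Cycles scene and an example image path:

    import bpy

    world = bpy.context.scene.world
    world.use_nodes = True
    nodes = world.node_tree.nodes
    links = world.node_tree.links

    # Load the photo sphere and feed it into the Background shader.
    env = nodes.new("ShaderNodeTexEnvironment")
    env.image = bpy.data.images.load("/path/to/photosphere.jpg")
    env.projection = 'EQUIRECTANGULAR'  # or 'MIRROR_BALL' for mirror-ball shots

    links.new(env.outputs["Color"], nodes["Background"].inputs["Color"])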

Future Thoughts

Equirectangular Video Player

Google+, Photosynth, and other sites typically handle only 2-D photos.  It would be neat to make an "Equirectangular Video Player" that is cross-platform and easy to use - if it were me, I would probably write it in JavaScript and WebGL.  Players exist today, but they aren't what they should be... it is definitely possible (e.g. with Kolor Eyes and krpano) to render a series of equirectangular photos to video today.  Ultimately, it should be easily accessible to everyone, much like YouTube or Vimeo, and have a polished look and feel like Street View.

Emerging Technology

These types of 3-D images and videos are useful to me and others because emerging technology like the Oculus Rift keeps increasing in performance.  Watching an equirectangular video on a flat 2-D screen makes it hard to understand what is going on unless you can move the view around with your head.



4 comments:

  1. Hi, interesting article. I had no idea about the XMP data thing, thanks for sharing.

    "Blender won't let me set the focal length"
    Can you elaborate on that? Blender doesn't let you change the focal length for panorama renders because it makes no sense at all ;)

    I fail to see, however, how that relates to your need to set the ideal image size.

    Also, I would recommend working with 2:1 images.

    That gives the most uniform distribution of pixels. I find the claim that G+ can't handle this aspect ratio properly strange, given that it is the de facto ratio for equirectangular panoramas.

    cheers

    Replies
    1. "Blender won't let me set the focal length", that isn't wrong ;) But seriously, focal length is probably the wrong way to explain it or correct for it. I was trying to talk about the path that the objects take around the camera and how it is distorted. If you look at one of the objects in the scene, it doesn't take a circular path - it takes an elliptical one.

      Looking at the difference between the x:1 and 1:1 images I uploaded, there seemed to be more stitching issues in the horizon and objects with images that are not 1:1 - the pixel distribution wasn't distorted enough for me to tell the difference. I went back to take snapshots for you, but there really isn't that much difference in stitching. The horizon looks a tiny bit better in the 1:1, but that could be because of the number of samples in the render. I'll take this part out of the article because I think you're definitely right long term. Thanks.

  2. Hi Brian! Thanks for this great tutorial, it is just what I was trying to do.

    I have photo spheres taken with my Nexus, and I want to import them into Blender, sort of like onto the inside of a sphere, so you can recreate the feeling of being there. I am glad you did it first, because I am absolutely new to Blender.

    However, following the tutorial, I realized that my menu is different, although we have the same version of Blender! Do you have any idea what I could do?

    http://i.imgur.com/R9Y8UjY.png

    Replies
    1. Yeah, at the top of your screen you are probably using "Blender Render" as the rendering engine. To use this technique, try switching to "Cycles". Here is a good intro:
      http://www.youtube.com/watch?v=UTwXG3K4l2g

      Blender's internal render engine corresponds to the "baking a texture" approach mentioned in the post. I didn't cover it because I prefer Cycles, but the video above covers it in detail anyway.
