When in Dome…

The Morehead Planetarium & Science Center Production Blog

Author Archive | Peter Althoff

Working with the artists over at Paperhand Puppet Intervention to create The Longest Night was a real treat. Once we explained the possibilities of using their artwork in a digital realm, they started to rethink and adjust their process to create easily adaptable painted textures.

One asset in particular was a cabin in the woods that the “old woman” character lived in.

They approached the creation of the textures by building a small paper mock-up and then unfolding it before painting the individual pieces.


This template was used to create the textures, and even worked as a great reference point for building the final 3D model.


After incorporating it into the scene and lighting it, we were able to create a great mood and feel using these excellent painted textures and a little Photoshop manipulation.

Early on in the planning stages of “The Longest Night,” we realized there were going to be some big changes to how we approached production. Typically we work with animated CG characters and environments. We can dictate actions and are in charge of a camera that is essentially unlimited in its range of motion.

With this production, we knew we were going to need to capture live performances and integrate that footage into some sort of environment. An example of what we thought would work stylistically with the look and feel of a typical Paperhand Puppet Intervention show is the Modest Mouse video “Float On.”

The flat nature and stylistic treatment of the video seemed flexible in the sense that it didn’t have to make physical sense and could be executed with static-camera green-screen shots. The footage could be mapped to flat cards and moved around. We shot some test footage and made a quick proof of concept.

We found this technique to be functional but limiting.

We eventually realized that we could put movement into the green-screen footage and match-move it. Match-moving is a post-production technique in which tracking points are calculated from the footage, and software uses those points to digitally map out what the camera was doing in real life. We could then use that data to animate a fisheye camera in 3D space, making for more dynamic and realistic uses of footage in relation to the camera. This was a theory we decided to test.
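Once a solve exists, wiring it into Maya is mostly bookkeeping. As a minimal sketch, with a made-up file name and column layout since every tracker exports its own format, keyframing a camera from exported match-move data might look like this:

    # Hypothetical sketch: keyframing a Maya camera from exported
    # match-move data. "camera_track.txt" and its one-line-per-frame
    # layout (frame tx ty tz rx ry rz) are made up for illustration.
    import maya.cmds as cmds

    cam, cam_shape = cmds.camera(name="matchmoveCam")

    with open("camera_track.txt") as f:
        for line in f:
            frame, tx, ty, tz, rx, ry, rz = [float(v) for v in line.split()]
            for attr, value in zip(
                    ("translateX", "translateY", "translateZ",
                     "rotateX", "rotateY", "rotateZ"),
                    (tx, ty, tz, rx, ry, rz)):
                cmds.setKeyframe(cam, attribute=attr, time=frame, value=value)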

You can see the final result of our test below.

Below is an earlier version where you can see the original footage before it was keyed out.

We matched the real-world footage to a digital background. This rough test was the foundation for the work we would build upon in designing the rest of the show.

When we started producing dome content four years ago, we were working on two different 3D platforms, 3ds Max and Maya, and still doing a five-camera stitch with a hemi-cube. We used the five-camera stitch to create our first two productions, “Earth, Moon and Sun” and “Magic Tree House.” On our most recent production, “Solar System Odyssey,” we knew we wanted to try something different. Since we were doing a character-driven piece, I took it upon myself to learn Maya. One of the greatest achievements in our recent production was the proper implementation of the DomeAFL shader for mental ray, created by Daniel Ott.
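The hookup itself is small. Here is a rough sketch, assuming the plugin registers a node type named domeAFL_FOV and that the mental ray plugin (Mayatomr) is loaded:

    # Rough sketch: attaching the DomeAFL fisheye lens shader to a
    # render camera. Assumes a "domeAFL_FOV" node type from the plugin
    # and that Mayatomr (mental ray) is loaded.
    import maya.cmds as cmds

    cam, cam_shape = cmds.camera(name="domeCam")
    lens = cmds.createNode("domeAFL_FOV")

    # mental ray exposes a lens shader slot on the camera shape.
    cmds.connectAttr(lens + ".message", cam_shape + ".miLensShader", force=True)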

The fisheye lens shader opened up new doors for rendering and camera techniques. Cutting out the time spent manually stitching comps together freed us up to tackle more challenging aspects of production. One of the new features we were able to render was an ambient occlusion pass that gave our elements new depth.

We were no longer fighting to fit disjointed pieces together before running out of time; instead, we were able to refine our work from a rough state into a more polished product.


Recently we upgraded from Maya 2008 to Maya 2012, and in that upgrade the shader stopped working. Fortunately, I was able to locate an updated version via Roberto Ziche on http://fulldome.ning.com/forum. The work these fine folks are doing is taking the shader to new dimensions, using it to create stereoscopic imagery.


In a previous post, Jim talked about doing a believable shake on the 3D camera itself. With motion blur turned on, this can get a bit expensive in terms of render time. Sometimes we lean on After Effects to push a shake to even greater extremes.

In this example you’ll see a 2D shake added to enhance the launch sequence. On a flat screen the shake doesn’t seem all that extreme, but on a dome it feels much more intense. In the last shot of the sequence I did a 3D camera shake and felt it needed to be pushed further. Rather than re-animate, we used After Effects and did a 2D wiggle on top of the existing shake to get the desired look.

I do this using the wiggle expression in After Effects, wiggle(a,b), where a is the frequency of the wiggle per second and b is the amplitude, or how far it moves.


I link both values to Slider Control effects so I can animate how much wiggle I want. With the sliders renamed to, say, “Freq” and “Amp” (names of my choosing), the position expression becomes wiggle(effect("Freq")("Slider"), effect("Amp")("Slider")). Now that I have a wiggler ready to go, I wiggle a null; the location of the null will be the center point of the wiggle. Once you’re ready to go, parent your footage to the null.

Depending on how comfortable you are with After Effects, I might have lost you, so feel free to watch the following tutorial about wiggle and its various uses.

For a long time I was a 3ds Max user, and only in the last year have I switched to Maya. One tool 3ds Max had that was incredibly useful for building hard-surface objects or objects that repeat themselves was the array function. Thankfully, I found a script developed by Ed Caspersen that brings this functionality into Maya.

http://www.creativecrash.com/maya/downloads/scripts-plugins/utility-external/copying/c/array-for-maya

I used this tool to produce the following model of a launch tower in less than 2 hours.

With this you can build a small section of geometric detail and control how it is replicated in any direction, and even set it to get gradually smaller. Working in a 4K immersive format, you can only get so close to textures before you start to see individual pixels or the resolution soften. The extra geometry helps break up the visual landscape and makes up for those instances where textures start to fall apart. It’s perfect for building repeating shapes quickly and adding the much-needed detail that the fulldome format demands.
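To give a rough sense of what an array tool does under the hood, here is a minimal sketch of the concept in Maya Python. This is my own illustration, not Ed Caspersen’s script:

    # Rough illustration of a linear array: duplicate a piece of
    # geometry with an incremental offset and an optional scale
    # falloff so the copies taper.
    import maya.cmds as cmds

    def simple_array(obj, count=10, offset=(0.0, 2.0, 0.0), scale_step=0.95):
        """Duplicate obj `count` times, offsetting and shrinking each copy."""
        copies = []
        for i in range(1, count + 1):
            dup = cmds.duplicate(obj)[0]
            cmds.move(offset[0] * i, offset[1] * i, offset[2] * i,
                      dup, relative=True)
            s = scale_step ** i
            cmds.scale(s, s, s, dup)
            copies.append(dup)
        return copies

    # e.g. stack one truss section up a launch tower:
    # simple_array("trussSection", count=20, offset=(0.0, 3.0, 0.0))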


Designing models and assets for fulldome video requires you to weigh a combination of variables:

  1. How long will it be on screen?
  2. How fast is it moving?
  3. How close will it be to the camera?
  4. How many times will we use it?

We developed this method of evaluation after our first production, during which we learned the importance of pre-visualizing our 3D scenes before even building our models. The following is a lunar vehicle we designed and built custom for a sequence on the Moon. In the final shot, the vehicle was seen in a long-distance aerial flyover and was on screen for a short time.


Given the time and resources put into this model, we will repurpose it for another show should the need arise.

If there is one thing we’ve learned in the past, it’s that particles can be expensive in terms of development, implementation, and hardware resources. Many effects call for particles, though, and sometimes using them is unavoidable. A workflow we’ve come to rely on is to generate a particle system in After Effects using the Trapcode Particular plugin, then map that image sequence to a plane in 3D to get the look we need without spending hours tweaking a fluid or particle system in Maya.
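The Maya side of that workflow is small. Here is a minimal sketch (file and node names are made up) of mapping a rendered image sequence onto a flat card:

    # Map a pre-rendered effect sequence onto a flat plane.
    import maya.cmds as cmds

    plane = cmds.polyPlane(name="fxCard", width=10, height=10)[0]

    shader = cmds.shadingNode("lambert", asShader=True, name="fxCardMat")
    file_node = cmds.shadingNode("file", asTexture=True, name="fxCardFile")
    cmds.setAttr(file_node + ".fileTextureName",
                 "renders/explosion.0001.png", type="string")

    # Step through the image sequence as scene time advances.
    cmds.setAttr(file_node + ".useFrameExtension", 1)
    cmds.expression(string=file_node + ".frameExtension = frame")

    # Color drives the card; the sequence's alpha drives transparency.
    cmds.connectAttr(file_node + ".outColor", shader + ".color", force=True)
    cmds.connectAttr(file_node + ".outTransparency",
                     shader + ".transparency", force=True)
    cmds.select(plane)
    cmds.hyperShade(assign=shader)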

You could even use the same principle with stock footage of bullet hits and explosions. This process is best for systems that have limited interaction with their environment and that the camera sees from a distance. Typically we’ve used this for bursts, explosions, and a few eruptions.

For the flat screen, this concept of layering 2D effects over your comps isn’t very new. Applying the idea to the dome requires you to match the effect by hand using one of the dome plugins for After Effects, or to map it to geometry in a 3D scene and render it with a five-camera stitch or a fisheye lens. These passes generally take little to no time, even at 4K resolution, because you’re essentially rendering a simple piece of geometry with a single image texture. The texture files are generally 2K, unless the situation calls for more resolution.

When we started this project over a year ago, I had very limited knowledge of Autodesk Maya and was primarily a 3ds Max user. We made the commitment to do a character-driven show, so I had to learn this new tool, and I don’t think I could have without the training materials we got from Digital Tutors.


I started with their introductory Maya DVD set and then moved on into more complex and production-specific tutorials.


In “Solar System Odyssey” I had to make a spaceship, terrains, eruptions, rocket exhaust, explosions, and even a futuristic mega-city full of flying traffic.

I hope to address how most of these effects were created in more blog posts over the coming weeks. In the meantime, if you haven’t checked out www.digitaltutors.com, I highly recommend them. They offer training on a variety of software platforms.

Recent work on this project has produced a wonderful revelation. I’ve been working almost exclusively in Maya for this project, and this is a big deal for me because I’m traditionally a 3ds Max guy.

One of the hurdles we had created a workaround for was the toon shader and how it reacted to lighting scenarios. I discussed this in my previous blog post, “Have it both ways.” Since we don’t have an R&D team, we make our developments on the fly. I recently discovered that you could use one image file, duplicate it in Maya, and alter each copy’s color gain to create the various levels of shading in the toon shader.
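In rough terms, and with hypothetical file and node names, the trick looks like this in Maya Python:

    # One painted texture becomes the light, mid, and dark levels of
    # the toon shading by varying colorGain on duplicated file nodes.
    import maya.cmds as cmds

    src = cmds.shadingNode("file", asTexture=True, name="paintedTex")
    cmds.setAttr(src + ".fileTextureName", "cabin_paint.png", type="string")

    # Darker copies of the same painting serve as the mid and shadow levels.
    for suffix, gain in (("mid", 0.6), ("dark", 0.3)):
        dup = cmds.duplicate(src, name="paintedTex_" + suffix)[0]
        cmds.setAttr(dup + ".colorGain", gain, gain, gain, type="double3")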

Then I took it a step further. I started tinkering around and discovered something that simplified our production by reducing our render passes.

I realized you could take a toon shader and plug that in as the color value of a standard shader, thus having your cake and eating it too.
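Here is a minimal sketch of that connection, assuming the toon shading is built on Maya’s rampShader node:

    # Feed the toon shader's output into the color of a standard
    # shader so the result also receives lights and shadows.
    import maya.cmds as cmds

    toon = cmds.shadingNode("rampShader", asShader=True, name="toonRamp")
    standard = cmds.shadingNode("lambert", asShader=True, name="toonLit")

    cmds.connectAttr(toon + ".outColor", standard + ".color", force=True)

The standard shader now inherits the toon ramp as its base color while still responding to lights and shadows, which is what let us drop the extra render passes.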

Have it both ways

December 16, 2010

With this next project, we’re trying something a little unconventional, something to spice things up a bit. As if generating fulldome content weren’t difficult enough already.

One of the problems we ran into using a toon shader is that it doesn’t accept lighting color information or calculate shadows very well.

So, by taking a color pass and a greyscale lighting pass, you can combine the two and season to taste.
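We do the actual combine in After Effects, but the underlying math is just a multiply. As an illustrative sketch in Python with Pillow and NumPy (file names made up):

    # Multiply blend: the grey pass darkens the toon colors wherever
    # the lighting pass is dark.
    from PIL import Image
    import numpy as np

    color = np.asarray(Image.open("toon_color.png").convert("RGB"),
                       dtype=np.float32) / 255.0
    grey = np.asarray(Image.open("lighting_grey.png").convert("L"),
                      dtype=np.float32) / 255.0

    combined = color * grey[:, :, None]
    Image.fromarray((combined * 255).astype(np.uint8)).save("combined.png")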

The final test result is what you see here.