When in Dome…

The Morehead Planetarium & Science Center Production Blog

Author Archive | Jim Kachelries

When our current team solidified at the very beginning of Morehead Production, we came from very different backgrounds: Jay, a designer and journalist; Pete, a motion graphics artist and compositor; and myself, an animator. The one thing we all had in common, however, was that none of us had ever done dome work. We have since learned from our early mistakes and taken new steps to refine our process for making dome shows. Among those changes has been our storyboarding.

Earth, Moon, and Sun

This was the first show we made, so our storyboards were closer to those for flat screens than for domes. We recognized the challenge of drawing a storyboard accurately for the dome and tried to address it, but once production and animation started it was clear the boards were very limited and ultimately not very useful.

Magic Tree House

Our second show, Magic Tree House, is an adaptation of the original analog show we had here at Morehead Planetarium. With the Zeiss’ retirement drawing closer, we were tasked with making a digital version of the show. Although nearly every scene was simply updated with better visuals, there was one sequence we were able to re-imagine. This is where we thought to introduce the idea of drawing the storyboards in the dome itself. We found it very successful, and there were no surprises when it came time for actual production, because the composition you saw in the storyboards translated directly to what we did in production. It took a little getting used to drawing with the distortion, but it was ultimately worth it.

Solar System Odyssey

After the success of storyboarding in the dome for Magic Tree House, it made the most sense to continue that practice with Solar System Odyssey. A challenge presented itself, however: this would be a character-driven show. In all our previous productions there was either a brief appearance by a single character or no characters at all. Solar System Odyssey had a staggering three characters on screen at nearly all times. Because of this, we had to think about and draw not only where the characters are on the screen, but also where the camera should be to give the best staging for our characters. This was where we introduced the idea of our ‘dance chart’, which we would make for each scene with characters. I also called it the Law and Order effect (out of respect for one of my favorite shows on television).

The Longest Night

The advances and evolutions we’ve been making in storyboard development took a strange turn for this show, The Longest Night. Not only was it to be our first show with a significant amount of live action, but it would also be a collaboration between us and Paperhand Puppet Intervention. There were plenty of challenges already in storyboarding a hybrid of live action and digital environments, but added to that was taking the script and boards they drew and adapting them for the dome.

This is an example of one of the boards that Paperhand created before it was taken to the dome. We could get an idea of what they wanted to be seen, but we needed to see it on the dome to get a better sense of scale and of the placement of the characters and environments. Using what we learned from past shows and adapting the dance chart to include ‘real world’ and ‘digital’ camera placement, we came up with the example seen here.

As I said in the title, this is our evolution of storyboarding. There will never be an end to the changes we make in our process as we learn and grow in the dome field. We will just keep applying and adapting what we have learned in the past to our work in the future.

One of the dangers we run into during our productions has been object distortion. It’s most frequently seen when you fly towards or away from a moon or planet. That dreaded bulge is caused by the closest part of the sphere being much closer, and therefore much larger, than the farther parts of the surface. We have been actively trying to avoid these situations in our shows, as the effect tends to break the illusion of immersion. Sometimes, however, it is unavoidable, whether through demands of the script or the storyboards. In those cases we try to make these close-to-camera actions happen as quickly as possible so as not to let the mind start to think, “Boy, that really looks strange!”

Here’s an example I quickly threw together showing various distances.
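
For anyone who wants to put rough numbers on that bulge, here’s a quick back-of-the-envelope calculation (plain Python, not anything from our pipeline): the angular radius of a sphere of radius R, seen from a distance d to its center, is arcsin(R/d), so the apparent size balloons as the camera closes in.

    import math

    def angular_radius_deg(sphere_radius, camera_distance):
        """Angular radius (in degrees) of a sphere whose center sits
        camera_distance away from the camera."""
        return math.degrees(math.asin(sphere_radius / camera_distance))

    # How big a unit-radius moon looks as the camera moves in.
    radius = 1.0
    for distance in (10.0, 5.0, 2.0, 1.2):
        print("distance %4.1f -> angular radius %5.1f degrees"
              % (distance, angular_radius_deg(radius, distance)))

Going from ten radii out to just past the surface swells the apparent radius from under six degrees to more than fifty, which is exactly the bulge you see on the dome.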

One of the things I discovered is that when you want to have a camera shake, normal camera translation doesn’t really work. Hardly any motion is perceived, unless the camera moves enormous distances. What I found to be the most effective approach is to rotate the camera, rather than change its position. This really makes the audience feel uneasy and unbalanced, which is exactly what we want the camera shake to portray.

Here’s an example of it from our new show, Solar System Odyssey.
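
If you want to experiment with the idea yourself, here’s a minimal sketch in Python for Maya that keys small random rotations instead of translations. The camera name and amplitude are hypothetical placeholders, and this isn’t the exact setup from the show, just the general approach; in practice you’d key the jitter on a dedicated parent group so it layers on top of the main camera move.

    import random
    import maya.cmds as cmds

    def add_rotation_shake(camera, start=1, end=48, step=2, amplitude=1.5):
        """Key jittered rotateX/rotateY on `camera` every `step` frames.
        Amplitude is in degrees; small values go a long way on a dome."""
        for frame in range(start, end + 1, step):
            cmds.setKeyframe(camera, attribute='rotateX', time=frame,
                             value=random.uniform(-amplitude, amplitude))
            cmds.setKeyframe(camera, attribute='rotateY', time=frame,
                             value=random.uniform(-amplitude, amplitude))

    # Example usage (assumes a transform named 'shakeCam_grp' above the camera):
    # add_rotation_shake('shakeCam_grp')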

A few months ago I had the pleasure of going to Universal Studios in Orlando, Florida. The Simpsons Ride had recently taken the place of the old Back to the Future ride. Although it was sad to see Marty and Doc gone, The Simpsons Ride was a worthy replacement, and a great example of a fun dome show.

The animation was spectacular, paying close attention to making character actions and poses clear despite being a dome show. If you’re ever visiting Universal Studios, be sure to hop on this ride!

In an older post I talked about the camera rig we use for our planetarium shows. I also uploaded it in case there were people who wanted to use it as a springboard or to get ideas for how to make their own.

Well, in light of Pete getting a fisheye lens to work, I went ahead and updated our rig.

Here’s our brand new camera rig with all the bells and whistles. It can switch between the FK system from the old camera rig and an aiming constraint for the young at heart. A non-renderable dome sits on top so that you can get a sense of what will be visible to the audience. There’s also another camera that can be used to look around via an aiming constraint for playblasting animation.

These were made in Maya 2008, but could easily be brought into newer versions.

5Camera

Fisheye

And lastly, both of them zipped together.
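
If you’d rather see the ideas than dig through the files, here’s a stripped-down sketch in Python for Maya of the pieces described above: a non-renderable dome for previewing what the audience will see, plus an aim-constrained camera for playblasting. Object names and sizes are placeholders, and the downloadable rig is considerably more complete.

    import maya.cmds as cmds

    rig_grp = cmds.group(empty=True, name='domeRig_grp')

    # Preview dome (a full sphere here for simplicity); render stats are
    # turned off on the shape so it never shows up in renders.
    dome = cmds.polySphere(name='previewDome_geo', radius=10)[0]
    dome_shape = cmds.listRelatives(dome, shapes=True)[0]
    cmds.setAttr(dome_shape + '.primaryVisibility', 0)
    cmds.setAttr(dome_shape + '.castsShadows', 0)

    # A look-around camera driven by an aim constraint, for playblasts.
    aim_loc = cmds.spaceLocator(name='lookAt_loc')[0]
    cmds.setAttr(aim_loc + '.translateZ', 5)
    playblast_cam = cmds.camera(name='playblastCam')[0]
    cmds.aimConstraint(aim_loc, playblast_cam, aimVector=(0, 0, -1),
                       upVector=(0, 1, 0), worldUpType='scene')

    cmds.parent(dome, aim_loc, playblast_cam, rig_grp)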

The wrap deformer is something I didn’t learn about while I was in school, but instead something I only picked up recently. It is an incredibly useful tool that can help add an extra dimension to characters and animation.

What the wrap deformer essentially is (at least to my untrained eye in character rigging) is a way to have geometry influence an object separate from it. Its influence is based on proximity, and what it affects is not the object as a whole but the vertices, or in this particular case the lattice, of another object.

This is an example of an eye from a character in our new show. The details of exactly how wraps work are best read in the book I linked to in an earlier post:

Which you can find here

And here’s how it looks without a wrap:

What a difference!

Once you start to use the process it becomes quicker to set up. One important thing to remember is that the deformer object will not render, so it’s good to have a second head act as the deformer and have it drive a blendshape that feeds the real eye.

This of course isn’t limited to eyes; it also works for teeth and anything else you can think of that you want to add some motion to.
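
To make that order of operations a little more concrete, here’s a rough sketch in Python for Maya. All the object names are hypothetical, and the wrap itself is created through the same command the Create Deformers > Wrap menu item runs, so treat this as an outline rather than our exact setup.

    import maya.cmds as cmds
    import maya.mel as mel

    render_eye = 'eye_render_geo'    # the eye that actually renders
    deform_eye = 'eye_deform_geo'    # duplicate eye moved by the wrap
    deform_head = 'head_deform_geo'  # duplicate head acting as the wrap driver

    # 1. Blendshape: the wrapped duplicate drives the render eye, so the
    #    non-rendering wrap result still shows up at render time.
    cmds.blendShape(deform_eye, render_eye, name='eyeFollow_bs',
                    weight=[(0, 1.0)])

    # 2. Wrap: select the object to be deformed first, the driver last,
    #    then run the menu equivalent of Create Deformers > Wrap.
    cmds.select(deform_eye, replace=True)
    cmds.select(deform_head, add=True)
    mel.eval('CreateWrap;')  # runtime command name may differ between versions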

In our new show, which is currently under production, there are a lot of very deliberate camera moves and turns. Attaching our old 5-camera rig to a motion path, I found that a lot of problems popped up. Namely, whenever the camera needed to rotate, Maya would often do strange calculations to get from one key to another (at least with forward-kinematics keying). I needed to find a way to separate the different axes so I wouldn’t have to fight the rig.

What I came up with was our 5 camera rig built for a path!

What’s great about this for us is a couple of things. With the X and Y axes separated, we get a lot of control with no weird rotations. We have guides to show where the sweet spot is at both 45° and 60°. What I’m happiest with is the roll feature I put in: when you roll the camera, it stays centered on the sweet spot in both the 45° and 60° settings.

The only thing it’s missing is a look-at constraint, which I haven’t had the time to include. Here’s the hypergraph hierarchy:

And lastly the attributes as seen on the supermover:

The Mask attribute turns the useBackground shaders on and off, in case you want the render not to cut off the dome master.

What good is all this info, though, without the file itself to use and mess around with?

Well here it is!

http://www.moreheadplanetarium.org/whenindome/wp-content/uploads/2010/5camPath.ma
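
For anyone who would rather script a starting point than pick apart the .ma file, here’s a bare-bones sketch in Python for Maya of the core idea: separate groups for each rotation axis so the keys stay clean, with the top node riding a motion path. The names, the path curve, and the frame range are all placeholders, and the sweet-spot roll behavior from the real rig isn’t reproduced here.

    import maya.cmds as cmds

    # A placeholder path curve; swap in your own flight path.
    path = cmds.curve(name='flightPath_crv', degree=3,
                      point=[(0, 0, 0), (10, 2, 5), (20, 0, 15), (30, 5, 30)])

    cam = cmds.camera(name='domeCam')[0]

    # One node per job: roll on the camera itself, tilt (X) and pan (Y)
    # on their own parents, and the path motion on the top group.
    tilt_grp = cmds.group(cam, name='tilt_grp')        # key rotateX here
    pan_grp = cmds.group(tilt_grp, name='pan_grp')     # key rotateY here
    mover_grp = cmds.group(pan_grp, name='superMover_grp')

    # Attach the top group to the curve; follow=True orients it along the path.
    cmds.pathAnimation(mover_grp, curve=path, fractionMode=True, follow=True,
                       followAxis='z', upAxis='y', startTimeU=1, endTimeU=120)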

Being the small studio that we are, the members of our team are generalists by trade. Jacks of all trades, masters of none, we’ll wear whatever hats are needed to get production done. This can naturally cause some difficulty when we need to wear those hats that call for intensive scripting and planning, in this case ‘rigging’. In my last post I focused on facial rigging, but a talking head can only take us so far; we need the body to move as well. In order to save time and give us more control over our models, we use The Setup Machine 2.

Being able to take these rigs and apply them to multiple characters of different body types is extremely helpful and a huge time-saver. Of course TSM2 doesn’t replace a good TD, who could give you a very custom-made rig with controls you wouldn’t have otherwise, but for a low-budget production with a small team like ours, TSM2 gives us what we need.


http://www.anzovin.com/products/tsm2maya.html

One of the steps in building a character (in this case with Maya) is rigging: putting the bones inside the character and adding the controls to manipulate them. This is generally just for the body, and unfortunately it’s only half the battle. The remainder we have to worry about is the face, which is in itself a whole other beast.

A beast that is tamed much easier with this wonderful book at your side:

This is a book written by Jason Osipa, which details and teaches ways to rig the face for all the expressions you could think of. The best part is that on the CD that comes with the book, he provides the examples seen in the book, as well as the MEL scripts and resources to apply to your own character. You really only need to make the blendshapes, and the scripts he provides plug it all in for you. It’s a huge time-saver, and will really make your characters jump out.


You can pick it up through Amazon with this link:

http://www.amazon.com/Stop-Staring-Facial-Modeling-Animation/dp/0471789208/

ISBN: 978-0-471-78920-8

As the storyboarding phase is drawing to a close, I thought I’d touch on a couple things that I’ve learned.

Our new show, Solar System Odyssey, is heavily driven by story. Characters are continually interacting with each other, both in dialogue and action. This brings up challenges that haven’t been issues in shows prior to this one. In previous shows, interactions between characters had been strictly through dialogue, and many of the voices were heard without seeing exactly who was doing the talking. In our first show, Earth, Moon and Sun, the main character, Coyote, would converse with a narrator the audience never saw, so Coyote could simply reply by facing the audience. With more than two characters on screen, all interacting and talking to one another, things were a bit different and sacrifices needed to be made.

One of the great things about the dome is being able to be immersed in the environment, and current best practice would typically say that if you’ve got the dome real estate, use it. However, what we really want to showcase during these segments are the conversations between the characters. The audience needs a focus, and we want to discourage their eyes from wandering. Sure, the inside of the ship may look pretty and the viewer’s eye will want to explore it, but it’s truly a secondary element. We want their eyes focused on the characters’ faces and body movements.


What’s all this leading to?



This map I affectionately call the “Dance Chart”.

For each scene I’ve created, I’ve drawn a chart in which I map out the characters’ key points in the scene, as well as the location and facing direction of the camera. I’ve been very careful to allow for good staging of the characters at most times, and the immersion comes from the deliberate, timed movement of the camera through the environment. The audience becomes like a fly on the wall, watching the scene unfold in front of them. While the characters are generally in the front half of the dome the majority of the time, keeping the camera in near-constant motion and reframing develops and maintains an immersive sense.


When creating a Dance Chart, some key elements to think about are these:


•   An element of focus (Character, Object, Follow the bird)

•   Lots of play with foreground, midground, and background elements

•   Don’t move the camera too fast. Keeping a focus point helps, but only to a certain degree.


The element of focus allows the camera to move without causing much motion sickness. Having foreground, midground, and background elements shifting and moving helps to sell the environment and reinforce the camera movements. The camera’s motion should remain graceful and elegant.


When you want to encourage the audience to look around, keep the camera still or moving slowly and deliberately. It’s our experience that people favor movement that complements the direction they’re already looking.