I was recently at a non-planetarium, non-fulldome conference for science communicators called ScienceOnline. The attendees I met, who happened to be mostly scientists, science journalists, or PR people, generally didn’t know what I meant when I said I “produced fulldome video.” As many of us have experienced, saying that you make “planetarium shows” doesn’t quite work either, because most adults tend to think of pre-digital shows. It’s a good thing I attended a session called “Pimp Your Elevator Pitch” and decided to use it to work on a definition of fulldome that takes less than 45 seconds.
Here’s what I ended up with:
Fulldome videos are primarily science documentaries that are projected onto a domed surface, typically in a planetarium. Many fulldome videos deal with astronomy, but other subjects are appropriate for the dome, especially topics or environments that are difficult to experience as a human being, such as deep underwater, inside the human body, or in the future. We like to think of a flat-screen video as a window into another world, but with a fulldome video you can poke your head up inside that world and become immersed in it. Think of a 3D animated movie crossed with IMAX and put it in a planetarium.
Some feedback I got with my original pitch was that I started by saying that they’re “not planetarium shows,” which instantly put the idea of an analog show in people’s minds. I also originally described them as a combination of PIXAR and IMAX in a dome and was told that people in very rural areas might not know what PIXAR or IMAX are. Something to keep in mind.
Any other ideas out there? How have you described “fulldome” to others quickly?
When we started producing dome content 4 years ago, we were working on two different 3D platforms, 3ds Max and Maya, and still doing a 5-camera stitch with a hemicube. We used the 5-camera stitch to create our first two productions, “Earth, Moon and Sun” and “Magic Tree House.” On our most recent production, “Solar System Odyssey,” we knew we wanted to try something different. Since we were doing a character-driven piece, I took it upon myself to learn Maya. One of the greatest achievements of our recent production was the proper implementation of the DomeAFL shader for mental ray, created by Daniel Ott.
This opened up new doors for rendering and camera techniques. The time no longer spent manually stitching comps together freed us up to tackle more challenging aspects of production. One of the new passes we were able to render was an ambient occlusion pass, which gave our elements new depth.
We were no longer fighting to fit disjointed pieces together before running out of time; instead, we were able to refine our work from a rough state to a more polished product.
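I won’t reproduce the DomeAFL shader here, but the angular-fisheye (“domemaster”) mapping that this kind of lens shader is built around is easy to sketch. The following is my own illustration, not the shader’s code, and the function names are made up: given a 3D view direction, it finds where that direction lands on the square fisheye frame.

```javascript
// My own sketch of the angular-fisheye ("domemaster") mapping used by
// fisheye lens shaders like DomeAFL -- not the shader's actual code.
// The dome's zenith is taken to be the +z axis.
function directionToFisheyeUV(dir, fovDegrees = 180) {
  const [x, y, z] = dir;
  const len = Math.hypot(x, y, z);
  const theta = Math.acos(z / len);         // angle off the dome axis
  const phi = Math.atan2(y, x);             // azimuth around the axis
  const halfFov = (fovDegrees * Math.PI) / 180 / 2;
  const r = theta / halfFov;                // 0 at zenith, 1 at the dome's rim
  // u, v in [0, 1]; the zenith maps to the center of the frame
  return [0.5 + 0.5 * r * Math.cos(phi), 0.5 + 0.5 * r * Math.sin(phi)];
}
```

Looking straight up lands in the center of the frame, while a horizontal direction lands on the edge of the image circle; the 5-camera hemicube stitch approximates this same mapping by rendering five flat views and warping them together in post.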
In a previous post Jim talked about doing a believable shake on the 3D camera itself. With motion blur turned on, this can get expensive in terms of render time. Sometimes we lean on After Effects to push a shake to even greater extremes.
In this example you’ll see a 2D shake added to enhance the launch sequence. On the flat screen the shake doesn’t seem all that extreme, but on a dome it feels much more intense. In the last shot of the sequence I did a 3D camera shake and felt it needed to be pushed more. Rather than re-animate, we used After Effects and did a 2D wiggle on top of the existing shake to get the desired look.
I do this with the wiggle expression in After Effects: wiggle(a, b), where a is the frequency of the wiggle per second and b is the amplitude, i.e. how much it wiggles.
I link those values to sliders so I can animate how much wiggle I want. Once the wiggler is ready to go, I apply it to a null; the location of the null becomes the center point of the wiggle. When you’re ready, parent your footage to the null.
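After Effects drives wiggle with smooth noise, so I can’t reproduce it exactly outside AE, but here’s a rough stand-in (my own sketch, substituting two detuned sine waves for AE’s noise, with made-up function names) that shows the shape of what the expression does to a position:

```javascript
// Rough stand-in for After Effects' wiggle(freq, amp). The real expression
// uses smooth pseudo-random noise; two detuned sines are enough to see the
// behavior: a smooth offset that never exceeds the amplitude.
function fakeWiggle(timeSeconds, freq, amp) {
  const w = 2 * Math.PI * freq;
  // Each sine contributes at most amp/2, so the result stays within ±amp.
  return (amp / 2) *
    (Math.sin(w * timeSeconds + 1.3) + Math.sin(w * timeSeconds * 0.73 + 4.1));
}

// Applying it to a 2D position, the way the wiggled null offsets footage:
function wiggledPosition(base, timeSeconds, freq, amp) {
  return [
    base[0] + fakeWiggle(timeSeconds, freq, amp),
    base[1] + fakeWiggle(timeSeconds + 100, freq, amp), // offset seed for y
  ];
}
```

In AE itself, with two Slider Control effects on the null (named whatever you like), the expression on the null’s position ends up looking something like `wiggle(effect("Wiggle Freq")("Slider"), effect("Wiggle Amount")("Slider"))`.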
Depending on how comfortable you are with After Effects, I might have lost you. If so, feel free to watch the following tutorial about wiggle and its various uses.
We just rendered a flat screen version of the trailer for our newest show – Solar System Odyssey. Looks pretty good in a rectangular format, if I do say so myself. Check it out below. But you’ll have to check it out on a dome to get the full effect, obviously.
For a long time I was a 3ds Max user, and only in the last year have I switched to Maya. One incredibly useful tool 3ds Max had for building hard-surface objects, or objects that repeat, was the array function. Thankfully, I found a script developed by Ed Caspersen that brings this functionality into Maya.
I used this tool to produce the following model of a launch tower in less than 2 hours.
With this you can build a small section of geometric detail and control how it is replicated in any direction, even setting it to get gradually smaller. Working in a 4K immersive format, you can only get so close to textures before you start to see individual pixels or the resolution softens. The extra geometry helps break up the visual landscape and makes up for those instances where textures start to fall apart. It’s perfect for quickly building repeating shapes and adding the detail the fulldome format demands.
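I won’t paste Ed Caspersen’s script here, but the core of what an array function computes is simple to sketch. This is my own illustration, with made-up parameter names: one transform per copy, each offset by a step and optionally shrunk progressively, which the actual tool would then wire into duplicated geometry.

```javascript
// My own sketch of the bookkeeping behind an array tool (not the Maya
// script itself): N copies of a source transform, offset along a step
// vector, with an optional per-copy scale falloff.
function arrayTransforms(count, step, scaleFalloff = 1.0) {
  const copies = [];
  for (let i = 0; i < count; i++) {
    copies.push({
      position: [step[0] * i, step[1] * i, step[2] * i],
      scale: Math.pow(scaleFalloff, i), // e.g. 0.9 -> each copy is 90% of the last
    });
  }
  return copies;
}
```

A launch tower is exactly this pattern: model one truss segment, then stack copies vertically, tapering the scale as the tower rises.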
One of the dangers we run into during our productions has been object distortion. It’s most frequently seen when you fly towards or away from a moon or planet. That dreaded bulge is caused by the closest part of the sphere being much closer to the camera, and therefore appearing much larger, than the farther parts of the surface. We have been actively trying to avoid these situations in our shows, as the distortion tends to break the illusion of immersion. Sometimes, however, it is unavoidable, whether through demands of the script or the storyboards. In those cases we try to make these close-to-camera actions happen as quickly as possible, so as not to let the mind start to think, “Boy, that really looks strange!”
Here’s an example I quickly threw together showing various distances.
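For the curious, the geometry behind the bulge is simple to sketch (my own illustration, not production code): the angular radius a sphere subtends on the dome grows as asin(R/d), so it balloons non-linearly as the camera closes in, and the nearest point of the surface inflates much faster than the limb.

```javascript
// Apparent angular radius (in degrees) of a sphere of radius R seen from
// a camera at distance d from its center. Grows slowly at a distance and
// explodes as d approaches R -- the source of the fulldome "bulge".
function angularRadiusDeg(R, d) {
  if (d <= R) throw new Error("camera is inside the sphere");
  return (Math.asin(R / d) * 180) / Math.PI;
}
```

At ten radii out the sphere covers only a few degrees of the dome, but at 1.1 radii it swallows well over a hundred degrees across, which is why a close flyby feels so distorted.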
Designing models and assets for fulldome video requires you to think about a combination of variables:
How long will it be on screen?
How fast is it moving?
How close will it be to the camera?
How many times will we use it?
We developed this method of evaluation after our first production, during which we learned the importance of pre-visualizing our 3D scenes before even building our models. The following is a lunar vehicle we designed and built specifically for a sequence on the Moon. In the final shot, the vehicle was seen from a distant aerial flyover and was on screen for only a short time.
Given the time and resources put into this model, we will repurpose it for another show should the need arise.
Before diving in, I realize that some of you may not have heard the word “previz.” “Previz,” or “pre-visualization,” is a step in production after storyboarding and before final animation, where simple models are laid out in 3D space, basic animation is done, and camera moves are locked in place. This lets the director get a better idea of what the final shot will look like before any intensive work is done on the models or the scene. It also allows camera moves to be changed without needing to do extensive rendering.
Let’s back up a bit and put this in context.
Our production process has 4 major steps:
1. Scriptwriting & Concept Art
2. Animatics & Voice-over
3. Previz & Sound Effects
4. Final Animation & Score
The difficult moment in any film, TV, or dome production is moving from the animatics phase (essentially a flipbook storyboard with scratch audio) to final animation without really knowing what the shot will look like. A good example is a scene in our latest show, Solar System Odyssey, in which our two heroes are trying to escape from the radiation belt around Jupiter, which is wreaking havoc on their ship. This is what the original storyboard/animatic looked like:
As you can see, there was a lot of proposed camera movement in that shot. The difficulty was knowing how much movement would make the scene interesting and tense without leaving the audience confused or nauseated. So we took low-poly renders of the characters, did basic animation on them, and put them in a basic textured, low-poly environment. This is what it looked like:
By doing the previz stage, we got some great intel back. We realized the shot felt dead: there was very little tension with the current camera moves. And since it’s difficult to build tension through editing in the dome the way you can in a flat-screen film, we realized we’d have to make the camera moves more dynamic. We did this by speeding up the moves between rest points and adding Dutch angles at the pause points. This was the final product:
Previz is becoming very popular in Hollywood, typically for action shots. We already find it an integral part of our process. Not only does it let us more clearly visualize the final look of a shot, it actually speeds up production by keeping us from having to go back and re-tweak an already-rendered shot. For a great look at the importance of previz, check out this video about how it’s being used in Hollywood:
A few months ago, we officially retired the Zeiss Mark VI star projector at Morehead Planetarium. We’d had it for 42 years, and it served us well. But the ol’ Zeiss was showing its age, and despite the heroic efforts of our Chief Technician, Steve Nichols, to keep it going, the decision was made to put ’er down. And since we added a digital system to our planetarium over a year ago, we were able to roll right on forward. Before the Zeiss was dismantled, though, we thought we’d shoot some footage of it and put together a short tribute video. RIP Zeiss.