When in Dome…

The Morehead Planetarium & Science Center Production Blog

Author Archive | Peter Althoff

Deadline

Most network rendering engines can access the command-line functions of multiple software packages and run their render engines without requiring costly licensing (for example, After Effects, Combustion, Maya, 3ds Max, etc.). As long as you have a workstation license, you can install and operate these on as many render computers (slaves) as you'd like. Other software, namely plug-ins, requires licensing and makes a distinction between slave and workstation functionality. Plug-ins like Mental Ray, RenderMan, Final Render, Trapcode, Vue, and Dreamscape require licenses for each slave, and sometimes per processor in the slave. You can imagine that with 3 workstations and 23 slaves, each containing 2 cores and 8 processors, the cost can start to get a little extreme.


The above-mentioned software providers are leaders in the industry, and even with the proper licensing there can still be hiccups in installing the software onto the slaves and getting all of the tools to talk to each other and work properly. And that's if you're working on a flat screen: most of these tools were developed primarily for standard flat images. Add a level of complication like full dome rendering and you'll find yourself with an even larger set of hurdles to overcome. There is a wonderfully active group in the full dome community busy creating tools and plug-ins to make our workflow easier and more streamlined. The main issue is that these tools are free, generally work in workstation-only mode, and are quite often specific to a software version. Some major software providers are acknowledging the full dome application of their products and have looked at these tools and started to integrate them. Hopefully this trend will continue, and accelerate, as full dome video becomes more popular.

As a content creator on a small staff, one spends the vast majority of one's time making sure the slaves are doing their jobs. Sometimes it's easier to do things on a workstation, but that can take an awful lot of time.

I figured it's about time to reveal a few of the tricks we've picked up for doing simple effects on the dome that enhance the viewing experience and make production a little less 3D-heavy. Granted, some of these effects aren't physically accurate, and if pushed to certain extremes they can break the illusion of the dome space.

When using a panorama on the dome, or any type of still image, the viewer will become aware that it is a still and quickly grow bored of it once it hits a static position. So to prevent a sequence from becoming static, we add a slight rotation, roughly 5 degrees or more depending on length, to the entirety of a shot. This keeps the pixels moving on the dome surface and has a subliminal effect on the viewer: they won't notice it's there, but they would notice if it weren't.
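That slow drift is just a linear rotation spread across the length of the shot. A minimal sketch in Python (the function name and frame counts here are my own, purely for illustration):

```python
def drift_rotation(frame, total_frames, total_degrees=5.0):
    """Linear rotation keyframe value for a slow dome drift.

    Returns the rotation (in degrees) to apply at `frame` so that
    the shot rotates `total_degrees` end to end, keeping the pixels
    on the dome surface in constant, barely perceptible motion.
    """
    if total_frames <= 1:
        return 0.0
    return total_degrees * frame / (total_frames - 1)

# A 10-second shot at 30 fps drifting 5 degrees overall:
keys = [drift_rotation(f, 300) for f in range(300)]
```

The same ramp could be keyframed by hand; the point is simply that the rotation never stops over the duration of the shot.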


Another trick is using scale. Scaling a dome master or a panorama actually translates into movement on the dome. A scale with its anchor point at the zenith translates to vertical movement up or down through a panorama. Coupling this with some images of clouds on 3D layers in After Effects, you can give the illusion that you're rising off the ground and into space without having to create a 3D scene and camera move. Another scale trick is to use a null object on a dome master and scale toward the point of focus. Doing this by 3-10% over the course of several seconds gives a slight push that directs the viewer toward the source of the scale. True, you're scaling pixels and distorting the image over the dome surface, but by increasing it only a small amount over time you'll never notice any resolution loss or the angular distortion that takes place. Also, by offsetting your foreground and background images (i.e., your panoramic foreground and stars as your background), you can simulate the natural parallax that happens in a camera move.
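Scaling toward a point of focus is just uniform scaling about that point. A quick sketch of the underlying math in Python (the function name is hypothetical):

```python
def scale_about_point(px, py, fx, fy, scale):
    """Scale a pixel position (px, py) about the focus point (fx, fy).

    With scale values of roughly 1.03-1.10 keyframed over several
    seconds, every pixel drifts slightly toward or away from the
    focus, producing the gentle "push" described above.
    """
    return (fx + (px - fx) * scale,
            fy + (py - fy) * scale)
```

The focus point itself maps to itself, which is exactly why the viewer's eye settles there while the rest of the frame creeps outward.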


Keep in mind that nothing should ever really stop moving on the screen unless it's an intentional freeze frame to illustrate a point. Rather than taking an object's scale from 0 to 100 in 3 seconds when your sequence is 10 seconds long, consider taking it from 0 to 95 in those 3 seconds and then easing in the last 5% over the course of the remaining 7 seconds. The effect will look more polished and make your point of focus a touch more dynamic.
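That two-stage curve can be sketched as a function of time. A minimal version in Python, assuming a simple quadratic ease-out for the final 5% (the exact easing curve is my own choice; any ease-in-place of it works):

```python
def two_stage_scale(t, fast_end=3.0, total=10.0):
    """Scale value (0-100) at time t seconds.

    Runs 0 -> 95 linearly over the first `fast_end` seconds, then
    eases the last 5% in over the remaining time, so the object
    never fully stops moving before the shot ends.
    """
    if t <= 0.0:
        return 0.0
    if t <= fast_end:
        return 95.0 * t / fast_end
    if t >= total:
        return 100.0
    u = (t - fast_end) / (total - fast_end)       # 0..1 through the slow stage
    return 95.0 + 5.0 * (1.0 - (1.0 - u) ** 2)    # ease-out toward 100
```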

Enjoy these video examples.

We’re approaching a new show, with an even newer frame of mind. Currently our shows have had a bit of a mishmash style bordering cartoonish/realistic. The challenge being how does one do scientific content without reflecting some realism in the images that are created. Obviously, until live action video catches up to the dome, everything depicted will be a simulation, or digitally filtered into some variation of artistic interpretation. It would be nice to push a specific style forward, and try to really keep it consistent through the whole show. I’ve seen some other productions out there that use some stylization for portions of their show but end up falling back to the semi photo real content.

Domefest has always been a great place to check out more experimental art direction and visual concepts. I'd like to do a collection of music videos themed for the dome and explore the potential of certain styles on the dome, like the following examples.

Red Hot Chili Peppers – Otherside

The time has come for us to consider some software upgrades. Let me preface this with the fact that I'm generally a content generalist: the majority of my knowledge is in the design and creation of content in multiple software packages, but it requires me to know a little bit about the technical aspects of the software that we use.

Currently we run the following software packages:

  • 3ds Max 2008
  • Pixologic ZBrush 3
  • Maya Complete and Unlimited 2008
  • Mental Ray for Maya 3.6.1
  • Adobe After Effects CS3
  • Frantic's Deadline 2.7

When considering a software upgrade, there are multiple factors. One might be inclined to think that the cost per product would be the largest factor, but really the main concern is the cost in time to install, troubleshoot, and hope that the upgrade doesn't disrupt the current pipeline. Our hardware setup includes a mixture of PCs and Macs. On the PC side we have 2 primary workstations and 23 render machines supplied by BOXX Technologies.

On the Mac side of things we have 4 iMacs and 1 G5 workstation. They're generally used to create base assets that eventually get moved into the PC realm and finalized anyway.

We’ve got our eyes on upgrading the software to the following versions.

  • 3ds Max 2010
  • Maya 2010
  • Mental Ray 3.7
  • Deadline 3.1

We’ve also considered moving to cebas’s Final Render, but this change will be put on hold due in part to the fact we’re running 2 different 3d software packages. The support has arisen for both independently, but there is talk to have one standalone engine that both 3ds max, and Maya can use simultaneously.

When filming in a traditional flat screen medium, one may use a variety of lenses to create certain dramatic effects. These lenses can be combined with zooms and dolly moves to create the oh-so-dramatic zolly, where the character's world shifts around them. This video I found explains it pretty well, even though it's a bit cheesy in style.

What’s covered are some classic filming techniques, but how can we translate them to the dome.

Unlike a window, where you really only have one direction of z-space to sell, a dome is 360 degrees of z-space. The viewer is fixed in the middle of a scene. In order for the environment to be correctly projected on a dome, we're stuck using only one lens setting and can't exactly zoom, because that would actually translate into a camera move.


Everything is based upon the 3D camera's proximity and placement within a scene, and its field of view has to remain constant. The filming language we've grown to accept without realizing it is subtle and full of nuance. The dome world is still building a shooting vocabulary, let alone a well-developed visual language.

Oftentimes I find myself being asked, "What do you do for a living?" and it's never a short answer.

The response is usually, "I design planetarium shows." The conversation never stops there. It's usually followed by remarks of wonder and enthusiasm, but never a sense of comprehension. So of course an explanation is needed to fill out exactly what kind of planetarium shows I design.

Explaining that planetariums are no longer just planetariums is my first step, and introducing the new vocabulary of full dome video is what follows. I usually explain that if they've seen an IMAX show, they should imagine being inside the screen rather than looking at it.

We turn the entire surface of the dome into one large screen that uses modern animation techniques similar to those of the motion picture industry. Which of course makes for even more enthusiasm and excitement, and a little more sense of what it is I do.

This concept that planetariums are no longer just grounded in space science is something most science centers and planetariums have trouble explaining to the public. We can do anything now, and while this new spectrum of opportunities is wonderful, it's equally troubling because the public's expectations haven't caught up yet. They come to a planetarium to sit in the dark and see stars. So of course "branding" has become a central focus for newly converted domes. Terms like SciDome, DigiDome, and Dome Theater are being used to get people to understand it's not just a planetarium anymore, but instead a full dome experience. We here at Morehead are going through the same growing pains, and are currently in the process of discovering what our new theater will be called once it's upgraded.

As the medium continues to gain ground and become more widely recognized, this will of course become a problem of the past. I'm excited to think that one day people will happily go down to the science center not knowing what to expect, rather than expecting something they've seen before; that a show about biology or zoology will be just as accepted as a show on the constellations or our solar system. Who knows, maybe one day people will go to the planetarium looking forward to catching that new Hollywood blockbuster released on limited dome screens.

Maybe that last part is just a nerdy fantasy of being able to see Batman vs Superman on a dome, but a boy can dream.

In production we know that particles are just darn expensive. They require a lot of meticulous editing and a massive amount of time to render.

We had the challenge of visualizing a black hole, but had to do it in about a two-week period. So of course doing a scientifically accurate simulation, using particles and immensely complex equations to describe the physics of a theoretical object, was a little out of the question for a kids' show. I instead went with animated textures and alphas on solid geometry to create an artistic representation.


Working with our content expert, we reached a comfortable compromise and the final product is equally beautiful and terrifying as a result.

Morehead planetarium black hole test from Peter Althoff on Vimeo.

Now this is an example of keeping the target audience in mind. We know that this show is intended for children and families, so that gives us some flexibility. Generally the public isn't going to be all that concerned with, or more importantly notice, a difference between something artistically visualized and something accurately simulated. If we were trying to generate something for scientific minds to analyze, we might not have gone this route.

We’ve been running into some issues stitching together frames that have varying opacity. Namely, clouds and particle systems. Originally when using a sequence of PNG’s we’d find ourselves having a seam around the stitched boarder. This was due to the alpha being added together at the seam line creating a 1-2 pixel boarder that had a combined opacity greater than the pixels around it.


I realized the problem came from the stitching software not being able to understand the alpha channel, and that if I controlled that myself, rather than leaving it to the code, I could remove this variable from the equation. So by outputting an opaque color pass and an opaque alpha pass, I could use one to cut out the other as a luma matte in After Effects.





Thus removing the seam issues, and leaving an alpha channel that can be independently manipulated.


True, this creates more files, but it really doesn't increase render time, as the alpha information is calculated in a render anyway and either mixed into a 32-bit frame or simply discarded in a 24-bit frame. If you select Alpha Split in the .tga file setup when outputting, rather than discarding the information it will save it as "A_[filename].tga", giving you the two opaque frames you need for stitching.
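After stitching, the luma matte step boils down to using the matte pass's luminance as the color pass's alpha. A minimal per-pixel sketch in Python, assuming 8-bit passes (the function name and the Rec. 601 luma weights are my own choices; this mirrors what a luma matte does in After Effects):

```python
def apply_luma_matte(color_px, matte_px):
    """Combine an opaque color pass with an opaque alpha pass.

    color_px: (r, g, b) pixel from the stitched color render.
    matte_px: (r, g, b) pixel from the stitched alpha render; its
              luminance becomes the output alpha, cutting out the
              color pass just like a luma matte.
    """
    r, g, b = color_px
    mr, mg, mb = matte_px
    # Rec. 601 luma of the matte pass drives the alpha channel
    alpha = round(0.299 * mr + 0.587 * mg + 0.114 * mb)
    return (r, g, b, alpha)
```

Because both stitched passes are fully opaque, the stitcher never has to reason about alpha at the seam; the transparency is reintroduced only after stitching.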



Hope this is helpful. I know for us this was a great discovery, and kind of a "why didn't I think of that before?" moment. I also realize that stitching isn't the best solution, but it is sometimes necessary.

I’m going to discuss some potential issues I’ve been mulling over about blending live action and cg on a dome. Following links will discuss in further detail some of the terms I may be using.
Chroma Keys (Aka, Green Screen)
Match Moving

Generating live action footage for a dome has been an ongoing challenge for anyone producing content larger than 2K. The current resolution standards on most HD cameras only allow us to create the bottom half of a 4K fisheye master. This means, of course, that part, if not all, of the environment the live actors interact with will need to be computer generated. Also, when shooting live action, you're somewhat limited in how much motion you can incorporate into a shot.

The challenge of shooting a moving camera shot is the need to match that motion in the digital 3D world. You'll need to record the camera's position and orientation for each camera move and replicate it so that your filmed and separated actors are rooted to the scene. You could achieve this using a motion control rig that the camera sits on: with every take you can program the camera's move so that human error is removed from the situation. The downside is that the cost of renting and operating such equipment can be excessive.

Another option is to try syncing the camera up using match move software and tracking markers, though most of that software has been developed to track xyz positions in relation to a single plane of footage and has yet to be calibrated for the unique distortion of a fisheye lens. A workaround would be to lock down the camera during filming and then move the actor's image in 3D, but that would be limited in its ability to recreate complex camera moves.

Hopefully, as full dome video becomes more mainstream, camera companies will develop hardware that makes live action a more plausible solution for smaller studios. The benefits of using real actors and building on existing sets lead to a more believable experience for audiences. It also makes production a little simpler, because practical solutions can be generated rather than everything leaning on being created in post.

Lunar City

July 21, 2009 | No Comments | Production

We’re reworking a shot from an old show we’ve been commissioned to convert to the full dome platform. We’re seeing what a colony on the moon may look like. Rather than go with something that is the equivalent of the MIR space station on the moon. I thought something much more fantastic, like a full city on the moon may be more inspiring to the younger audiences who see the show. Its still a work in progress, but its come a long way.

The city is equipped with its own fleet of touring taxis.