When in Dome…

The Morehead Planetarium & Science Center Production Blog

Author Archive | Peter Althoff

With each production we learn a little more about the trends in the industry. One thing that has made itself clear is that tilted domes require consideration when picking the sweet spot for viewing fulldome video. You want the bulk of your content to show up in or around this sweet spot. If something is produced for a flat dome, the sweet spot sits about 45 degrees up from the spring line, with the horizon just visible around 5 degrees. This works nicely to create a natural feeling as viewers sit back and experience the content in a flat dome.


However, if you take that same content and place it in a tilted dome, the audience feels as though they’re perpetually looking down a hill, which creates a kind of mental confusion that breaks the immersion.

In order to resolve this problem, we shift everything up about 15-20 degrees. This creates a natural feeling for most tilted domes, and doesn’t disrupt the viewing experience of flat dome viewers when they sit back in tilted chairs.
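As a back-of-the-napkin sketch of that rule of thumb (the function name and the 17.5-degree default are just our shorthand for the 15-20 degree range above, not an industry standard):

```javascript
// Shift flat-dome framing up for a tilted dome. Angles are degrees of
// elevation above the spring line; the default shift is the midpoint of the
// 15-20 degree rule of thumb from this post.
function tiltAdjustedElevation(flatElevation, shift) {
  if (shift === undefined) shift = 17.5;
  return Math.min(flatElevation + shift, 90); // can't frame past the zenith
}

tiltAdjustedElevation(45); // sweet spot: ~62.5 degrees up in a tilted dome
tiltAdjustedElevation(5);  // horizon line: ~22.5 degrees up
```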

To help us keep this in mind, we created an overlay to use while viewing our animatics, to make sure we don’t stray too far from the ideal sweet spot and framing of content.

City Engine

November 19, 2010 | When in Dome

In the new show we’ve got in the pipe, one of the first elements on screen is a futuristic city. Flying through an unreal landscape of large, modern skyscrapers and a smog-filled city grid would only be possible if the city could be generated procedurally, that is, without modeling and meticulously placing every building and asset in the scene. After doing some research, I chose to use City Engine (http://www.procedural.com/).

The functionality of this software seemed promising. Be warned: there is a steep learning curve if you want to start really customizing things. Once you’ve tackled how to implement the design elements, and what is going on behind the scenes, you start to get a better understanding. Our hope is this will be helpful for other projects in the future, and since it’s capable of importing street map data, you could generate any real city you wanted. They offer a great set of educational discounts and plenty of out-of-the-box functionality. I will write another assessment once I’ve had more time and experience with it, but for now, if you’re interested in a dynamic city generator that is very robust, this is definitely worth a look.


The image is an example from www.procedural.com.
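City Engine’s real power lives in its CGA rule grammar, which is its own language; as a rough illustration of the procedural idea (and emphatically not City Engine’s actual API or rule syntax), this is the kind of rules-plus-randomness loop it replaces hand modeling with:

```javascript
// Illustrative only: a toy version of the procedural idea City Engine
// automates. Rules plus randomness stand in for hand-placed buildings.
function generateCityGrid(blocksX, blocksY, blockSize) {
  var buildings = [];
  for (var x = 0; x < blocksX; x++) {
    for (var y = 0; y < blocksY; y++) {
      buildings.push({
        posX: x * blockSize,
        posY: y * blockSize,
        footprint: blockSize * 0.8,       // leave room for streets
        height: 20 + Math.random() * 200, // rule: randomized tower heights
        style: Math.random() < 0.1 ? "landmark" : "generic"
      });
    }
  }
  return buildings;
}

var city = generateCityGrid(40, 40, 50); // 1,600 buildings from one rule set
```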

As most of you know, there are not very many production tools out there specifically designed for us fulldome folk, so oftentimes we have to get creative with what we have. Early on in my experiments with working in fulldome, I created this little project. (download AE project here)

It takes the 3D space of After Effects and funnels it through several different cameras, then stitches them together in a master comp using DomeXF, the After Effects plug-in from Sky-Skan. I’m sure it can be done with the AE FullDome plug-in too; just some of the settings will be different.

At the root is a 3D scene.

This one scene is placed into 5-6 comps, then each of those comps gets a different camera.

The key to remember is to check the “Collapse Transformations” switch (the minesweeper-looking button) on the layer in each camera comp.

These are the settings for each camera in each comp. Each comp’s dimensions should be 2048×2048 if you’re trying to create a full 4k final output. You could make them smaller, i.e., 512×512 to make a 1k master, as long as they keep a 1:1 ratio.
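One bit of camera math worth writing down: in After Effects the camera’s field of view comes from its Zoom value in pixels, via FOV = 2·atan((width/2)/zoom). Cube-style rigs typically want a 90-degree square view per face, though the exact FOV DomeXF expects is something to confirm in its documentation:

```javascript
// After Effects derives field of view from the camera's Zoom (in pixels):
//   horizontal FOV = 2 * atan((compWidth / 2) / zoom)
// 90 degrees per square face is the usual cube-rig assumption, not a
// documented DomeXF requirement - verify against the plug-in docs.
function zoomForFOV(compWidth, fovDegrees) {
  var fovRadians = fovDegrees * Math.PI / 180;
  return (compWidth / 2) / Math.tan(fovRadians / 2);
}

zoomForFOV(2048, 90); // 1024 - Zoom for a 2048x2048 comp (4k master)
zoomForFOV(512, 90);  // 256  - Zoom for a 512x512 comp (1k master)
```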

Now you just point the cameras in their respective comps based on these settings.
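The exact orientations should come from the DomeXF documentation; as a sketch of a typical five-view rig (the names and sign conventions here are our own; flip a sign if a face renders mirrored or points the wrong way):

```javascript
// A typical five-view rig, expressed as per-comp camera rotations in degrees.
// Treat these as a starting point, not gospel.
var cameraViews = [
  { comp: "front", xRotation: 0,   yRotation: 0   },
  { comp: "left",  xRotation: 0,   yRotation: -90 },
  { comp: "right", xRotation: 0,   yRotation: 90  },
  { comp: "up",    xRotation: 90,  yRotation: 0   },
  { comp: "down",  xRotation: -90, yRotation: 0   }
  // a sixth "back" view (yRotation: 180) is sometimes added for tilted domes
];
```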

Now that you’ve got your 5-6 different views of the AE 3D scene, you move those into one master stitch comp.

Each comp, now a layer, gets the DomeXF plug-in with its respective settings, and then you’ve got a fisheye image of the 3D space in After Effects.

One thing of note: I haven’t gotten the down camera to work with DomeXF, but I believe it works with the FullDome plug-in.

I’ve recently started playing with expressions so I can link the DomeXF settings on each layer and adjust the dome tilt from one place. If you’re using the FullDome plug-in, that’s pretty easy: you just link the dome tilt field on each of the layers to either a slider or to one common layer.
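As a minimal sketch of that expression approach: assuming a null layer named “Dome Control” carrying a Slider Control effect renamed “Dome Tilt” (both names are our own convention), each stitch layer’s tilt property gets the expression below. Whatever the plug-in’s tilt parameter is actually called in your Effect Controls, that’s the property to alt-click and paste onto.

```javascript
// Paste on the tilt property of the DomeXF (or FullDome) effect on each layer.
// "Dome Control" and "Dome Tilt" are our own layer/effect names - rename to taste.
thisComp.layer("Dome Control").effect("Dome Tilt")("Slider")
```

With that in place, dragging the one slider retilts every stitch layer at once.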

Hope this is helpful.



By the term scale, I am referring to the relative size of objects as they’re perceived on the dome in relation to each other.

Recently, while constructing the office of our story’s instigator, we noticed an interesting optical illusion. Though the two spaces were a rough test to try and sell the idea that this character’s office was massive, they didn’t feel very large once we looked at them on the dome. There were two versions of the space in question, and the images better illustrate the difference in size. The objects in the center represent a 6-foot-tall person next to a shorter person standing behind a desk.

The following videos are indeed two different versions of the room, but due to the uniform scale of the environment and the lack of visual cues, you’re unable to determine the scale of the space.
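The geometry explains why: the dome only hands your eye angular sizes, and multiplying every size and every distance by the same factor leaves every angle untouched. A quick sanity check:

```javascript
// Angular size is all the dome gives you:
//   theta = 2 * atan(size / (2 * distance))
// Scale an object and its distance by the same factor and the angle - and so
// the image on the dome - does not change, which is why the two rooms read
// the same without reference objects.
function angularSizeDegrees(size, distance) {
  return 2 * Math.atan(size / (2 * distance)) * 180 / Math.PI;
}

angularSizeDegrees(6, 20);         // a 6 ft person 20 ft away -> ~17.1 degrees
angularSizeDegrees(6 * 4, 20 * 4); // everything 4x bigger     -> ~17.1 degrees
```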



After this discovery, we went back to the drawing board. The issue is that we needed to establish scale, and a big part of that is incorporating more information.

We broke up the scene a bit more and added the guardrail element to give our eyes a reference, letting the mind fill in the missing details. In a traditional flat-screen format, we can use distorted perspective to help sell distance and depth. In photography there are ways to blur the focal range, or different lenses can be used to elongate a space. In the dome world that type of visual trickery isn’t as simple, as the perspective is already distorted to work on a curved surface.

This is by no means a new invention. Traditional painting and photography oftentimes work a person or some other reference object into an environment to help people establish a sense of scale.


Conjure the image of a cloud; shouldn’t be too hard… Now take that white, fluffy cloud and try to imagine it “cartoony”. That becomes a bit more of a challenge. We are going for a non-photorealistic rendering (NPR) style, aka toon rendering, for this next piece, which created a challenge for developing atmospheric effects such as clouds. Below are a few of the screen tests we’ve done to determine which look and feel we’ll go with.

More Realistic

This is a digitally created representation of a very real-looking set of clouds.

Realistic

This is a similar, yet slightly altered, take on realistic clouds. In it we pushed the fluffy aspects and made them more solid.

Painterly Clouds

This is what we are calling “painterly” clouds. Here we are playing with what feels like a cloud, even though they don’t look realistic.

Super Cartoony

This is a “cartoony” representation of clouds that is close in style to the other hard-surface elements of the show.


Although the “cartoony” clouds are the most consistent with the visual style, we are leaning toward the painterly set. Moving through the cartoon clouds would mean hitting the hollow space inside these empty objects. Not to mention they’re rather resource intensive: they contain millions of polygons that, when rendered, take up valuable processing time. We also realized that with the painterly clouds we want people to feel as though they’re traveling through the familiar into this flat “cartoony” world, and since they feel like a real cloud, they will be accepted as such.


We here at Morehead have been working with a talented local artist who’s been exploring the notion of projecting art onto surfaces. From my understanding, his goal is to take two things that were incomplete on their own and combine them. This process changes both pieces and creates something new. As a compositor who takes many things and figures out how to make them work together, it’s definitely something I can relate to.

Working with him and helping him figure out how images translate from flat to fulldome has been a journey for us both. He’s been taking concepts that have become somewhat standard in our productions and creating real-footage equivalents. The images that have been generated are certainly abstract. Seemingly simple concepts, like capturing food coloring in water, have produced vivid images with very natural and somewhat unexpected movements.


As he progressed in his visual experiments, the compositions and staging became a bit more deliberate, but still retained the organic nature of the subject matter.

The final piece is scheduled to be shown April 17th as a repeating 10-15 minute presentation for viewers to experience at their leisure.


The brand new projection system is installed, and now we’re ready to take some content and test it on the venue it’s intended for. I’ve got my 4k dome masters that I’ve so painstakingly slaved over, and I can’t wait to see them projected across a 60 ft surface… now what?!


The process for preparing a “dome master” for the dome is called “slicing”. This takes a large 4k image and breaks it apart for your particular projection system. For example, we’ve got two projectors; some people have 4, and I think they go up in even numbers from there. Whatever the configuration, you’ll need to feed each projector its own specially distorted version of the original dome master.
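Conceptually (the vendor’s actual slicer math is its own secret sauce), each feed is just a warp of the master: every output pixel knows which dome master pixel should land on it, via a per-projector lookup map. A sketch under that assumption:

```javascript
// Conceptual sketch of slicing - NOT the vendor's actual algorithm. Assumes a
// precomputed warp map per projector that says, for every output pixel, which
// dome master pixel (u, v) should land there.
function sliceFrame(domeMaster, warpMap, outWidth, outHeight) {
  var feed = [];
  for (var y = 0; y < outHeight; y++) {
    for (var x = 0; x < outWidth; x++) {
      var src = warpMap[y][x];             // {u, v} into the 4k dome master
      feed.push(domeMaster[src.v][src.u]); // nearest-neighbor sample for brevity
    }
  }
  return feed;
}
```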


For us, even though we have 2 projectors, each projector needs 4 feeds, so this results in 8 individual image feeds. The software provided by our fulldome facilitator takes our dome master image sequence and cuts it up into 8 individual image sequences. Those sequences then get encoded into MPEG videos with appropriate audio to be synced with a master audio track. This process is fairly lengthy, as the slicer can process about 1,000 frames per hour. Luckily you can use multiple computers to do the slice, but when it comes to processing the MPEGs you’re limited to 8 individual machines, one per feed.
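At that rate the arithmetic adds up fast. A rough estimate, assuming the slice splits cleanly across machines (real jobs rarely divide so neatly) and using 30 fps as an example frame rate:

```javascript
// Back-of-the-envelope slicing time, using the ~1,000 frames/hour figure above.
function sliceHours(seconds, fps, framesPerHour, machines) {
  var frames = seconds * fps;
  return frames / (framesPerHour * machines);
}

sliceHours(25 * 60, 30, 1000, 1); // a 25-minute show at 30 fps: ~45 hours
sliceHours(25 * 60, 30, 1000, 4); // the same slice on 4 machines: ~11.25 hours
```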


After that, you can take your 8 video files, push them to the slaves that run the projectors, get them all synced up, and finally watch your fulldome content… It’s amazing, it’s fantastic, it’s got an odd flicker happening on some geometry about 30 frames in that lasts the rest of the shot. On a 4k system you can see any technical blemish you may have missed. At that point you go back to your desk, cry a little, and start all over.




As we get deep into preproduction and asset creation for our next fulldome show, a key piece of equipment the characters will be using in the reality we create is their ship. After a long period of distilling down ideas, from classic rockets to the more realistic shuttles, we settled on something reminiscent of a fighter jet mixed with a flying saucer.

The big issue we’ve been considering is scale… Just how big should it be? Originally the thought was that it shouldn’t be much bigger than the existing shuttles that go into space, but proportionately the cockpit would have been just under 18 feet wide, as displayed in this first drawing.


We quickly blocked out the dimensions and started some camera move tests. It was just feeling too cramped. So we went bigger.


Changing the diameter of the cockpit to something closer to 30 feet gave us a lot more space for the characters to move around, and allowed our camera to move a little more gracefully through the scene, with larger, smoother motion paths. It also allows us to mix up where the characters will be standing in the environment, giving us a little more bang for our buck. Who says you can’t fly a skyscraper of a ship into space… it’s the future, right?

The following is a wonderful source for After Effects ideas and production assets. If you’re making fulldome content, chances are you’re using Adobe After Effects because of its support for 4k compositing and because it’s fairly easy to use. Andrew Kramer has a quirky sense of humor in all of his tutorials, but he also does a great job explaining process, along with some excellent design ideas and procedures. Anyone who asks me questions about understanding the program, I usually point to this link.

And for good measure: http://videocopilot.net/

The tutorials have been free, but some of his production bundles are worth taking a look at. For someone new to After Effects there are some great starter tutorials, and for some of us more advanced types he’s always coming out with some new and challenging material.


We’re working on an experimental short that is designed to help us break the daily grind of working strictly on planetarium shows. The short is intended to be entered at DomeFest. An interesting challenge has arisen as we try to generate a crowd that should ultimately cover the dome surface.

My number one priority was to make sure the object that was to be duplicated thousands of times was as low-poly as I could get it. “Low-poly” refers to the number of the shapes, known as polygons, that define a computer-generated model.

The resulting insect is based upon the anatomy of a pill beetle.


This doesn’t look like much up close, but it’s intended to be very small. I was able to make this bug for less than 1,000 polys.


But when you’ve got 5,000 on screen at one time, you’re talking about close to 5 million polygons, all being animated. The bugs’ legs and antennae had to move; we couldn’t just have them sliding around, pushed or dragged by some imaginary force. All of that information can cause some instability in the scene, so to make life even easier I worked with smaller crowds and plan to layer them up to appear as one large crowd in the final composite.
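The budget math behind that decision fits on a sticky note (the 5,000-bug and 1,000-poly figures come from above; the per-layer crowd size is our own illustrative choice):

```javascript
// Quick polygon-budget check using the numbers from this post.
function crowdPolyCount(instances, polysPerInstance) {
  return instances * polysPerInstance;
}

crowdPolyCount(5000, 1000); // 5,000,000 animated polys - too heavy for one sim
crowdPolyCount(500, 1000);  // 500,000 polys per layered sub-crowd, far safer
```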

This one bug was duplicated multiple times and driven by a crowd simulation.


The following animation is a short test showing a crowd of 500 bugs approaching a dome structure.