Monday, July 28, 2008

Six Tuts On Light And Shade, Part III, Moonlight

Sunny Afternoon · Twilight · Moonlight · Electrical · Candle Light · Underwater


MOONLIGHT


Hello and welcome to the third part of the environment lighting series for Autodesk Maya 8.5, where we will be discussing a very interesting lighting situation: natural moonlight. So let’s wait for a full moon and a cloudless sky, then we can turn off the lights and get started...

If you followed the preceding two tutorials (which I recommend), you will already be familiar with the scene (Fig. 1). Before we start placing lights and tuning parameters, we should take some time to think about what ‘moonlight’ actually is. If you are not interested in this background, you might want to skip the next two paragraphs or come back to them later, as they are not essential. They are, however, valuable for understanding why certain methods have been used in this moonlight setup.

So what is moonlight? First of all, by moonlight we mean a nighttime situation, and for the sake of convenience let’s say we have a full-moon/nighttime situation. There are several sources and components of illumination in this setting (in descending order of energy): the moon itself (scattering sunlight from its surface in all directions), the sun (light scattered around the edge of the Earth), planets and stars, zodiacal light (sunlight scattered by dust particles in the solar system), airglow (photochemical luminescence from atoms and molecules in the ionosphere), and diffuse galactic and cosmic light from galaxies other than the Milky Way. Each of these illumination sources has its own characteristics, and in order to simulate such a night sky super-realistically we would have to account for all of them. But please bear with me: we will only be concentrating on the moon itself, plus an atmospheric ‘soup’ containing all the other ingredients.


Fig. 1



Besides, and this is very interesting, even if we did build that super-realistic night-sky simulation, we would perhaps get a very photo-realistic rendering, but I am sure many people would be disappointed by it. This is for the simple reason that seeing a night-sky/moonlit photograph is fundamentally different from actually viewing such a scene with our own eyes. The photograph might be physically correct, yet completely different from what we are used to perceiving physiologically. In the end, we would most likely shift the photograph’s white balance heavily towards blue, because this is what we are used to seeing. As opposed to how a camera sensor works at dim lighting levels, the sensitivity of human vision shifts towards blue: the color-sensitive ‘cones’ in the eye’s retina are most sensitive to yellowish light, while the more light-sensitive ‘rods’ are most sensitive to green/blueish light. At low light intensities the rods take over perception, and eventually we become almost completely color blind in the dark; perceived colors therefore shift towards the rods’ peak sensitivity: green and blue. This physiological phenomenon is called the “Purkinje effect”, and it is the reason why blue-tinted images give a better feeling of night - even though this is not correct from a photographic point of view.

So we will rely on a hint of artistic freedom, rather than strict photo-realism, for this tutorial. To simulate the moon’s light I chose a simple directional light with the rotation: X -47.0 Y -123.0 Z 0.0 (Fig. 2).


Fig. 2



For the light color I decided to use mental ray’s mib_cie_d shader (Fig. 3). Its Temperature attribute defaults to 6500 K (Kelvin), which yields an sRGB ‘white’ for this so-called D65 standard illuminant (commonly used for daylight illumination): every temperature above 6500 K will appear blueish, and every temperature below 6500 K will appear reddish. The valid range is from 4000 K to 25000 K. Although the moon actually has a color temperature of around 4300 K, I chose a temperature of 7500 K. This is not necessarily correct from a physical point of view, for various reasons. Firstly, the moon is not a black body radiator, so its color can only be approximated, not precisely expressed, on the Kelvin scale. Secondly, the moon’s actual color is mainly a result of three things: the sunlight it reflects (with a temperature of around 5700 K - still lower than the white point of our D65 illuminant, or in other words more reddish when expressed against it), the slightly reddish albedo of the moon’s surface, and the reddening effect of Rayleigh scattering (blue light, i.e. shorter wavelengths, scatters more readily than red light with its longer wavelengths, so a greater amount of blue light gets scattered away in the atmosphere, leaving more red light from our perspective here on Earth). In photo-reality this would, surprisingly, yield quite a reddish moonlight, even if we chose a very low white balance for our photograph at maybe around 3200 K (which is considered ‘tungsten film’). However, for the physiological reasons described previously, I went for 7500 K on the D65 illuminant, as this gives a pleasing - not too saturated but still very natural - blueish light.
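To get a feeling for what the shader computes internally, here is a little Python sketch (not Maya code) of the standard CIE daylight-locus formula that the D-series illuminants are defined by; the function name is my own:

```python
def daylight_chromaticity(T):
    """Approximate CIE daylight-locus chromaticity (x, y) for a correlated
    color temperature T in Kelvin, valid for 4000 K to 25000 K - the same
    range the mib_cie_d shader accepts."""
    if not 4000 <= T <= 25000:
        raise ValueError("valid range is 4000 K to 25000 K")
    if T <= 7000:
        x = -4.6070e9 / T**3 + 2.9678e6 / T**2 + 0.09911e3 / T + 0.244063
    else:
        x = -2.0064e9 / T**3 + 1.9018e6 / T**2 + 0.24748e3 / T + 0.237040
    y = -3.000 * x**2 + 2.870 * x - 0.275
    return x, y

# D65 (6500 K) lands almost exactly on the sRGB white point
# (x = 0.3127, y = 0.3290); 7500 K drifts towards blue.
x65, y65 = daylight_chromaticity(6500)
x75, y75 = daylight_chromaticity(7500)
```

Running this confirms why 6500 K reads as ‘white’ in sRGB while our 7500 K choice comes out blueish: the chromaticity x coordinate drops as the temperature rises.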

To cut a long story short, if you wanted to go for photo-realism you would have to use a reddish light color, but you would most likely white balance everything towards blue afterwards to achieve the cool night feeling! And that’s basically what I did - only in a rush...


Fig. 3



For the same reasons I chose a turquoise (blue-greenish) color for the surrounding environment, which was simply applied as the camera’s background color (Fig. 4). Although this will only have a subtle effect, it makes sense for completeness, and after all we will see this color through the back windows. Note that the color we see in the Background Color swatch will (deliberately) be gamma corrected later on. To compensate for this, and to ensure that the color I choose is the color I will see later in the rendering, I use a simple gammaCorrect node with the inverse gamma applied. The gammaCorrect node is connected to the ‘Background Color’ slot via middle-mouse-button drag & drop.
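The inverse-gamma trick is pure math and easy to verify outside Maya. A small Python sketch (the turquoise values are made up for illustration):

```python
def encode(c, g=2.2):
    """What the later display/framebuffer gamma will do to rendered values."""
    return tuple(ch ** (1.0 / g) for ch in c)

def linearize(c, g=2.2):
    """The inverse correction we apply up front via the gammaCorrect node."""
    return tuple(ch ** g for ch in c)

picked = (0.10, 0.35, 0.40)            # a hypothetical turquoise from the swatch
roundtrip = encode(linearize(picked))  # matches `picked` again
```

Because the two power curves cancel out, the color picked in the swatch is exactly the color that ends up on screen after gamma correction.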


Fig. 4



Before we push the render button, let’s make sure we have something that takes care of our indirect illumination, and that we are rendering in an appropriate color space. For the sake of simplicity I chose final gathering with Secondary Diffuse Bounces for the indirect light contribution (Fig. 5). This is easy to set up, yet effective. As you can see I set low quality values, but since we are only doing a preview this will suffice.


Fig. 5



Because there is a little shortcoming with the Secondary Diffuse Bounces setting, I’m selecting the miDefaultOptions node (Fig. 6), which is basically the back-end of the render globals. There I set the FG Diffuse Bounces to 2, which is my desired value for the indirect illumination bounces. To select the miDefaultOptions node, simply type “select miDefaultOptions” (without the quote marks) into the MEL command line and hit Enter.


Fig. 6



I’m also setting the Ray Tracing depths to reasonable values - they seem very low, but are absolutely sufficient for our needs (Fig. 7).


Fig. 7



To take care of the desired color space (sRGB) we simply need to set a gamma curve in the Primary Framebuffer tab of the render globals (Fig. 8). Since a gamma curve of value 2.2 is similar to the actual sRGB definition, we only need to set the Gamma attribute to 1/2.2 ≈ 0.455, as this is how mental ray’s gamma mechanism works. For a basic understanding of why we should render in sRGB, I greatly encourage you to go through the “Note on Color Space” in the first tutorial of this series (Sunny Afternoon), if you haven’t already. As a general note, it has to do with the non-linearity of human light perception: any renderer usually works in a true linear space (gamma = 1.0) by default, and displaying that output uncorrected is the main reason for CG looking “CG-ish” (which we don’t want). Spread this knowledge to your buddies, and with this understanding you’ll be the cool dude at every party, trust me!
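If you want to see for yourself how close the plain 2.2 curve sits to the real sRGB definition, here is a quick Python comparison (again just illustration, not Maya code):

```python
def srgb_encode(u):
    """The exact sRGB transfer function (linear -> display),
    with its linear toe segment near black."""
    return 12.92 * u if u <= 0.0031308 else 1.055 * u ** (1 / 2.4) - 0.055

def gamma22_encode(u):
    """The plain power curve that a framebuffer gamma of 1/2.2 gives us."""
    return u ** (1 / 2.2)

# The two curves stay close over most of the range, which is why the
# simple gamma setting is a reasonable stand-in for true sRGB:
diffs = [abs(srgb_encode(u) - gamma22_encode(u)) for u in (0.05, 0.18, 0.5, 0.8)]
```

The differences stay below a few percent across typical scene values, so for our purposes the 0.455 framebuffer gamma is more than good enough.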


Fig. 8



So here is our first test render (Fig. 9). It looks a bit dark, and since we want a full moon, the shadow seems a bit too sharp.


Fig. 9



To soften the shadow, let’s increase the Light Angle of our directional light (Fig. 10). Because widening the light angle introduces artifacts, we should also increase the number of shadow rays to yield a smooth, pleasing shadow. I’m also increasing the intensity of the mib_cie_d a little.


Fig. 10



This is a good base (Fig. 11), and all we need to do now is increase the general quality settings for our final render.


Fig. 11



For better anti-aliasing and smoother glossy reflections we should crank up the global sampling rates (Fig. 12). Min/max values of 0/2 and a contrast threshold of 0.05 should suffice. I used a Gauss 2.0/2.0 filter for a sharp image.
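In case the min/max levels seem abstract: mental ray’s sampling levels are powers of four, which this tiny sketch spells out:

```python
def samples_per_pixel(level):
    """mental ray's adaptive sampling levels: level n corresponds to
    4**n samples per pixel (negative levels mean one sample shared by
    a block of pixels, i.e. infrasampling)."""
    return 4 ** level

# A min/max of 0/2 therefore means 1 to 16 samples per pixel, refined
# adaptively wherever neighboring samples differ by more than the
# contrast threshold (0.05 in our case).
low, high = samples_per_pixel(0), samples_per_pixel(2)
```

So the renderer only spends the full 16 samples where the image actually needs them - around edges and in the glossy reflections.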


Fig. 12



For the final gathering, this time I chose a fairly unorthodox method. Remember that the last couple of times we used the automatic mode, which in most cases does a really good job; in automatic mode, all we need to worry about are the Point Density and Point Interpolation values. Sometimes, however, the interpolation becomes quite obvious and displeasing, especially in corners, where you can usually spot a darker line where the interpolation is very dull. For a sharper interpolation, I decided to use the scene-unit dependent Radius Quality Control (Fig. 13).

It generally takes a little time to estimate the proper min/max values (in scene units), but as a guideline you might want to render a diagnostic automatic Final Gathering solution first (see Diagnostics in the render globals), to see its point densities, and then, step by step, approximate that density with the scene-unit Max Radius control. Note that the density is decided only by the Max Radius (the lower the Max Radius, the more Final Gathering points are generated); the Min Radius only controls the interpolation extents.

Once you are satisfied with this general density, you will usually want to raise the Point Density value. This Point Density is added to the density we estimated with the min/max radii; the interpolation extents, however, do not change, so we are basically only adding points to the interpolation - similar to raising the Point Interpolation in automatic mode (only more rigid, and it somewhat puts the cart before the horse this way). It’s always good to know how and why things happen, and this knowledge is useful if you ever want to use the Optimize for Animations feature. It’s also a bit easier if the View radii are used, since the min and max radii can then be generalised (min/max 25/25 or 15/15 in pixel units is a good starting point).
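As a very rough starting point before the diagnostic pass, a widely used rule of thumb (my own guideline here, not gospel) puts the Max Radius at around 10% of the scene’s largest extent, and the Min Radius at around 10% of that. Sketched in Python:

```python
def fg_radius_guess(scene_extent):
    """Hypothetical starting values for the scene-unit FG radii:
    Max Radius ~10% of the scene's largest extent,
    Min Radius ~10% of the Max Radius."""
    max_radius = 0.10 * scene_extent
    min_radius = 0.10 * max_radius
    return min_radius, max_radius

# e.g. for a room roughly 500 scene units across:
min_r, max_r = fg_radius_guess(500)  # ~5 and ~50 units
```

From there, shrink the Max Radius until the point density roughly matches what the diagnostic render showed.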


Fig. 13



As a little trick to enhance details in our scene, I turned Ambient Occlusion on in the mia_material shaders, in Details mode. Simply select them all and switch the Ao_on attribute to 1 (On) using the Attribute Spread Sheet (Fig. 14). The Details flag, in combination with Final Gathering, ensures that we don’t get that rather unpleasant, dark-cornered, strange-looking Ambient Occlusion.


Fig. 14



To prepare for the final render, I set the framebuffer to half floating point and the image format to OpenEXR (Fig. 15). Floating point means the image gets stored with a high dynamic range, as opposed to 8-bit or 16-bit integer images, which are clipped at RGB values greater than 1.0 (‘white’). With a floating point image we can map values greater than 1.0 back into the visible range in post-production (i.e. we will be able to rescue completely burnt-out areas). Half floating point means floating point at half precision (16 bits per channel), which takes less memory and bandwidth. To be able to render a floating point image right out of the GUI, we need to set Preview Tonemap Tiles to Off but keep Preview Convert Tiles On. The preview in the render view might look very dark and psychedelic, but the OpenEXR image written to disk in the images\tmp folder will be fine, and that’s the one we will be processing later in Photoshop (or any other HDRI editor of your choice). Mind that floating point images are stored without gamma correction (i.e. linearly); Photoshop, for example, (hopefully) applies the proper correction by itself. If the image looks incorrect when imported into Photoshop or anywhere else, you most likely have to apply the gamma correction yourself there. This does not relieve us from setting the proper gamma value in the render globals’ framebuffer menu, however, as the textures still need to be linearized before rendering!
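The difference between clipped integer channels and half floats is easy to demonstrate; here is a Python sketch (the highlight value is invented for illustration):

```python
import struct

def to_8bit(ch):
    """Quantize a channel to an 8-bit integer: anything above 1.0
    is irrecoverably clipped to 255 ('white')."""
    return min(255, max(0, round(ch * 255)))

hot_highlight = 3.7   # a hypothetical over-bright value in the linear render

clipped = to_8bit(hot_highlight)    # 255 - all detail above 1.0 is gone
# A half float survives the round trip to disk (within half precision),
# so the value can still be pulled back below 1.0 in post:
stored = struct.unpack('e', struct.pack('e', hot_highlight))[0]
recovered = stored * 0.25           # back in the displayable 0..1 range
```

This is exactly why an exposure pull in Photoshop can bring back a burnt-out moon from an OpenEXR, but not from an 8-bit file.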


Fig. 15



Here’s my final render without post-processing (Fig. 16).


Fig. 16



As with any photograph, we shouldn’t judge the raw shot; instead, let’s take it into the ‘darkroom’ and apply some color and contrast improvements here and there (Fig. 17).

I hope you’ve enjoyed following this little exercise as much as I have enjoyed writing it! Sadly this is the last part concerning natural exterior lighting, but the upcoming electric light tutorial will be no less challenging and just as much fun, I’m sure!

Discuss on cgtalk


Fig. 17

Credits:

Original concept and geometry - Richard Tilbury

Original idea - Tom Greenway

Editor - Chris Perrins

Tutorial - floze




4 comments:

  1. Can you explain the post-process in a little more depth? The final image looks 10 times better than the original render... Can you make a tutorial only about post-production?

    Anyway, great tutorial and great info. I read it just to understand basic concepts about light (which you explain very well) - I am a V-Ray user...

  2. I have to agree. The final post-processed image looks wonders better than the final render. It would be cool to learn how you go about this workflow.

    The tutorials are great, btw. Great work floze!

  3. The information about the mib_cie_d shader is nice but basic - it doesn't really explain where to go from here.

  4. Thanks! Can you send me the exercise file?

    My email id is: upendrak3d@gmail.com