In this article, I will introduce the different rendering components in Unity: the Camera component and the lighting components that are available. I will also talk about materials in Unity and introduce a few of the built-in shaders. Finally, I will introduce light-mapping in Unity.
Table of Contents
- Rendering Components
- Materials and Shaders
- Light Mapping
Lighting and special effects are an integral part of every polished game. Lighting is very important when setting the general mood or atmosphere of your game. Proper lighting can mean the difference between a game that looks like it was made by a group of programmers and a highly polished game made by professionals. For example, a mood of “fear” can be conveyed through sparse use of lighting in strategic locations, while bright lights that bring out the vivid colors of your game achieve the opposite mood.
Lighting can be achieved using a few dynamic light sources, or through a process known as lightmapping, where the lighting information is rendered into a texture that is applied to the static objects in your scene. The best results, however, usually come from combining dynamic light sources with baked lightmaps.
Special effects are used to add that “wow” factor to your game. When used correctly, they can add an extra boost to your gameplay that will keep the players coming back for more. In this article, I will introduce the Shuriken particle editor that was added to Unity in version 3.5. I will talk about the various modules that are used to make an individual particle system and how you can combine particle systems to make complex particle effects.
I will also introduce a technique called lightmapping. Unity includes the Beast lightmapper, which does not require a Pro license to use.
Rendering Components
The Rendering Components are the group of components that affect in-game rendering, such as cameras and lights.
The Camera component is used to capture a view of the world and display it to the player. Without at least one Camera component attached to a GameObject in the scene, you will simply see a gray screen. When you create a new scene, Unity will automatically place a single GameObject called “Main Camera” in your new scene by default.
You can have as many cameras in your scene as you want, but generally you will only have one main camera that is used to display your game.
The Camera determines what you see and how it appears on screen. The camera has several properties that are common to all camera types, whether the camera is used in Unity or in any other 3D application. It is important to understand these properties to properly configure your camera’s view. These properties include position and orientation, projection, near and far clipping planes, field of view (for perspective projection), size (width and height, for orthographic projection), and in some cases viewport size.
Position and Orientation
In Unity, the Camera component is attached to a GameObject that is placed in your scene. The GameObject’s position and orientation in the world will determine what the Camera is looking at. The Camera will always look in the direction of the GameObject’s positive Z axis.
As can be seen in the image, the camera is pointing in the positive Z axis (represented by the blue arrow in the image).
The Projection determines how objects are mapped from a 3D space (usually View Space) into a clipped camera space (called Clip Space). There are two common types of projections that can be applied to the camera: Perspective Projection or Orthographic Projection.
A Perspective Projection is the most common type of projection you will use in your game. Using this type of projection, objects that are close to the camera will appear larger than objects that are placed farther away from the camera. This is also how the human eye works.
Using a Perspective Projection, an additional property becomes available called the Field of View. This value measures the vertical viewing angle. Making the Field of View smaller (an angle closer to 0) makes objects appear zoomed-in, and making it larger (an angle closer to 180) makes objects appear zoomed-out.
Using this type of projection, the View Frustum resembles a pyramid with its top cut off (see the image above).
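To make the zoom effect concrete, here is a small illustrative sketch (Python, not Unity code; the function name is my own) of how an object's on-screen size falls out of its distance and the vertical Field of View:

```python
import math

def apparent_height(object_height, distance, fov_degrees):
    """Fraction of the screen height an object covers under a perspective
    projection: size shrinks with distance and with a wider Field of View."""
    half_extent = distance * math.tan(math.radians(fov_degrees) / 2.0)
    return object_height / (2.0 * half_extent)

# A 1-unit object, 10 units away, at the default 60-degree Field of View:
print(apparent_height(1.0, 10.0, 60.0))
```

Halving the Field of View roughly doubles the apparent size of everything on screen, which is exactly the zoom-lens behavior described above.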
An Orthographic Projection is more commonly used for 2D side-scrolling or platforming games. The Orthographic projection is also useful when displaying GUI elements on screen (although Unity handles this for you when you use GUITexture and GUIText and when using the GUI API to create menus). With an Orthographic Projection, objects will appear the same size regardless of how far they are from the camera.
Using the Orthographic Projection you will have an additional property called Size that determines the width and height (but not the depth) of the view frustum. Making the Size smaller will cause objects to appear zoomed-in and making the Size larger will cause objects to appear zoomed-out.
The shape of the View Frustum resembles an elongated box (see image below).
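The distance-independence of the orthographic view can be sketched the same way (illustrative Python; the function name is my own). Note that distance does not appear in the formula at all:

```python
def ortho_apparent_height(object_height, orthographic_size):
    """Fraction of the screen height an object covers under an orthographic
    projection. The Size property is half the vertical extent of the view
    volume, and the object's distance from the camera plays no part."""
    return object_height / (2.0 * orthographic_size)

# A 1-unit object with an orthographic Size of 5 covers a tenth of the
# screen height, no matter how far it is from the camera:
print(ortho_apparent_height(1.0, 5.0))
```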
The Camera component has the following properties:
- Clear Flags: Determines how the screen is cleared before the scene is rendered. The following options are available:
- Skybox: This is the default clear flag. If the GameObject the Camera component is attached to also has a Skybox component, the Skybox material will be used to clear the screen before rendering. If no Skybox is configured, the Background color property is used to clear the screen.
- Solid Color: Use the Background color to clear the screen.
- Depth Only: Do not clear the color buffer, only clear the depth buffer. This is useful for some post-processing effects where only the depth values are taken into consideration.
- Don’t Clear: Neither the color buffer nor the depth buffer are cleared.
- Background: Specifies a background color to use when either Solid Color is specified for the Clear Flags or when Skybox is specified for the Clear Flags but no Skybox has been configured.
- Culling Mask: The Culling Mask property allows you to specify which Layers should be rendered with this Camera component. By default, all layers are rendered.
- Projection: Specifies the type of camera view. Valid values are Perspective and Orthographic.
- Perspective: This is the default view type. With a Perspective projection objects closer to the Camera will appear larger than objects farther away. This is the default view and the one most commonly used to render 3D games.
- Orthographic: With an Orthographic projection, objects that are farther away from the Camera will not appear smaller than objects closer to the Camera. In other words, objects in your game will not demonstrate perspective. This view type is useful for 2D games, isometric games, and top-down views of your game world (for example, to generate a mini-map display).
- Field of View: This option is only available when the Projection property is set to Perspective. This option controls the zoom level of your Camera, similar to putting a zoom lens on a real camera or looking through a pair of binoculars. With a smaller field of view, objects will appear zoomed-in, but with a larger field of view, things will appear very far away. The default value is 60 degrees, but values in the range of 45 to 90 degrees will generally still look okay; smaller or larger fields of view may appear distorted.
- Size: This option is only available when the Projection property is set to Orthographic. This property determines the size of the box that is formed by the orthographic view. Making the orthographic size smaller will make the objects in your scene appear zoomed-in and making the size larger will make the objects in your scene appear zoomed-out.
- Clipping Planes: The clipping planes determine the range of your camera’s view. Objects outside of the range of the clipping planes will be clipped. It’s generally a good idea to keep the Near and Far clipping planes close together; setting them too far apart reduces depth-buffer precision and can result in a rendering artifact known as Z-Fighting.
- Near: The Near clipping plane determines how close an object can be to the Camera before it will be clipped from view. It is generally a good idea to keep this value in the range 0.1 to 0.5.
- Far: The Far clipping plane determines how far an object can be from the Camera before it will be clipped from view. This value should only be as large as absolutely necessary and not larger. For indoor environments, the Far clipping plane can be set fairly close to the Near clipping plane since objects far away will be occluded by the walls of your level. For outdoor scenes you may need to make Far clipping plane farther away but don’t make it larger than needed.
- Normalized View Port Rect: The View Port Rect determines where this Camera will be rendered on the screen. This parameter is useful for implementing split-screen gameplay.
- X: The normalized distance from the left side of the screen to display the view of the Camera.
- Y: The normalized distance from the bottom of the screen to display the view of the Camera.
- W: The normalized width of the view of the Camera.
- H: The normalized height of the view of the Camera.
- Depth: This property determines the order in which this Camera will be rendered. Valid values for the Depth of the Camera are -100 to 100. Camera components with a lower Depth value will be rendered before Camera components with a higher Depth value. The default value is 0.
- Rendering Path: This property determines the rendering method used for this Camera.
- Use Player Settings: The Camera will use whatever settings are configured in the Player Settings dialog.
- Vertex Lit: Lighting is computed per-vertex instead of per-pixel. This rendering method is faster than forward rendering but produces lower quality renders.
- Forward: This is the standard rendering method used. It produces high quality results but is performance bound by the number of lights used in your scene.
- Deferred Lighting (Pro Only): Deferred lighting means that lighting information will be computed after all other information (such as depth, screen-space normals, and specular contribution) has been rendered into several screen-space buffers (called the G-Buffer). Deferred lighting allows for many dynamic lights to be computed efficiently. Deferred lighting requires graphics hardware with support for Shader Model 3.0 (SM3.0) which is available on most desktop GPUs but currently not available on most mobile platforms.
- Target Texture (Pro Only): This allows you to specify a texture (actually a Render Target) where the view of the Camera will be rendered into. This texture can then be applied as a standard texture to other objects in your scene. This is useful for implementing things like a security camera in your scene that is displayed on a monitor somewhere else in the scene. It can also be used to create a “Picture-in-Picture” effect for your main GUI. If set to None (the default value) then the Camera will render to the screen.
- HDR (Pro Only): Enable High Dynamic Range rendering for this Camera. By default, the red, green, and blue channels have a value in the range 0 to 1.0 (with 256 shades per channel). With HDR enabled, colors are stored as 32-bit floating-point values that can exceed 1.0. This means that overly bright colors will be handled correctly and post-processing effects (like HDR bloom) can be applied correctly. Before being rendered on-screen, the HDR colors will be shifted back into the 0 to 1.0 range using a technique called HDR tonemapping. For more information about HDR rendering, please refer to the online documentation here: http://docs.unity3d.com/Documentation/Manual/HDR.html
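As a rough illustration of the tonemapping step, here is a sketch of the classic Reinhard operator. This is the general formula from the graphics literature, not Unity's own implementation:

```python
def reinhard_tonemap(hdr_value):
    """Compress an HDR intensity in [0, infinity) into [0, 1):
    bright values roll off smoothly instead of clipping at 1.0."""
    return hdr_value / (1.0 + hdr_value)

# Overly bright values stay ordered (brighter in, brighter out) but
# never exceed 1.0:
print(reinhard_tonemap(0.5), reinhard_tonemap(4.0), reinhard_tonemap(16.0))
```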
The Skybox component is used to draw an environment that appears infinitely far away from the camera. You can imagine it as an infinitely large box that is centered at the position of the Camera. The Skybox component has a single property called Custom Skybox which allows you to assign a skybox material.
A Skybox material uses the RenderFX/Skybox Shader which uses 6 textures to render the sides of the skybox cube (front, back, left, right, top, and bottom).
If you want to define a default Skybox that will be automatically applied to all cameras in your scene (whose Clear Flags parameter is set to Skybox) you can assign a Skybox material to the Skybox Material property in the Render Settings dialog (select Edit -> Render Settings from the main menu).
If you specify the Skybox Material in the Render Settings dialog, that Skybox will also appear in the Scene view.
Unity comes with a few Skybox materials that you can use in your own games. To import the Skyboxes package, select Assets -> Import Package -> Skyboxes from the main menu.
By default when you create a new scene, you will not have any lights in your scene. Without lights, all objects in the scene will have a dark-gray color. This dark-gray color is the default color of the Ambient Light property defined in the Render Settings dialog (select Edit -> Render Settings from the main menu).
Unity provides 3 light types (excluding the Area Light which is only available with the Pro license):
- Directional Lights
- Point Lights
- Spot Lights
Each light type is implemented with the Light component but the light type is chosen from the Type property of the light.
Directional Lights are most common in outdoor scenes. Directional Lights mimic the behavior of the Sun, that is, they will illuminate everything in the scene regardless of the position of the directional light relative to other objects in the scene. If you place an object behind a directional light, the object will be lit according to the direction of the light but the position of the light will be ignored. Directional Lights will always point in the direction of the GameObject’s Z axis.
As you can see from the image above, the directional light is placed above the sphere and pointing in the direction of the capsule, but all three shapes are lit as if the light was placed infinitely far away.
Directional Lights are also the only light type that can be used to generate shadows in the default rendering mode (forward rendering).
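The position-independence is easy to see in the Lambert diffuse term used for directional lights. This sketch (illustrative Python with my own helper names, not shader code) computes the lighting contribution from the light's direction alone; a position never enters the formula:

```python
def normalize(v):
    """Scale a 3D vector to unit length."""
    length = sum(c * c for c in v) ** 0.5
    return tuple(c / length for c in v)

def directional_diffuse(surface_normal, light_direction):
    """Lambert diffuse term for a directional light: the cosine of the
    angle between the surface normal and the direction toward the light,
    clamped to zero for surfaces facing away."""
    n = normalize(surface_normal)
    to_light = normalize(tuple(-c for c in light_direction))
    return max(0.0, sum(a * b for a, b in zip(n, to_light)))
```

A floor facing straight up under a light pointing straight down is fully lit; the underside of the same floor receives nothing.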
Point lights emit light evenly in all directions and unlike Directional Lights, the position of point lights is very important but their direction is irrelevant.
Point Lights are most commonly used as localized light sources such as light bulbs, illuminating explosions, or used to complement other particle effects like fire and flames.
As can be seen from the image, the Point Light has a maximum range indicated by the yellow sphere. You can click and drag the handles on the yellow sphere to adjust the range of the light.
You will also notice that the intensity of the light decreases as the point being lit moves farther away from the light. This effect is called Attenuation. The Attenuation of the light depends on both the Range and the Intensity of the light.
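The attenuation behavior can be sketched like this. Note that this uses a simple linear falloff for illustration; Unity's actual attenuation curve is different, but the overall behavior (full intensity at the light, zero at the edge of the Range) is the same:

```python
def point_light_attenuation(distance, light_range, intensity):
    """Illustrative attenuation for a point light: full intensity at the
    light's position, fading linearly to zero at the edge of the Range."""
    if distance >= light_range:
        return 0.0
    return intensity * (1.0 - distance / light_range)
```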
Spot Lights are a combination of Point Lights and Directional Lights. The Spot Light is the only light type for which both the position and direction of the light are taken into consideration when computing the contribution of the light. Just like Directional Lights, Spot Lights will always point in the direction of the GameObject’s positive Z axis.
Spot Lights are most commonly used as light sources for flashlights, headlights for a vehicle, or light emitting from a street lamp.
Spot Lights also exhibit Attenuation (as the point being lit is further away from the light, the intensity of the light decreases). Again, the range and intensity of the light will contribute to the Attenuation of the light.
The size of the Spot Light’s cone is controlled by the Spot Angle property. The intensity of the Spot Light is also affected by the angle between the direction to the point being lit and the direction of the Spot Light. As this angle increases, the intensity of the light decreases. This effect is called intensity fall-off. The fall-off effect produces a more realistic lighting result but unfortunately, the intensity fall-off factor is not configurable in Unity.
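The angular fall-off can be sketched like this. Since Unity's exact fall-off curve is fixed and not exposed, this linear version is only an illustration of the behavior, not the engine's actual formula:

```python
def spot_light_intensity(angle_from_axis_deg, spot_angle_deg, intensity):
    """Illustrative intensity fall-off for a spot light: full intensity
    along the cone axis, fading to zero at the edge of the cone (half the
    Spot Angle away from the axis)."""
    half_angle = spot_angle_deg / 2.0
    if angle_from_axis_deg >= half_angle:
        return 0.0
    return intensity * (1.0 - angle_from_axis_deg / half_angle)
```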
The Light component has the following properties:
- Type: Determines the type of the Light component.
- Spot: A Spot light which simulates a cone-shaped light pointing along the local positive Z axis.
- Directional: A Directional light which illuminates all objects in the scene from a single direction. Position is irrelevant.
- Point: A Point light shines light in every direction equally. Only objects within Range distance are affected by the light.
- Range: This property is only available if the Type is either Spot or Point. This value determines the distance (in world units) at which the light will affect an object. Objects beyond this distance from the light will not be illuminated by the light.
- Spot Angle: This property is only available if the Type is set to Spot. This value determines the cone angle (measured in degrees) of the spotlight cone.
- Color: The primary color of the light. This value determines both the diffuse and specular contributions of the light.
- Intensity: The brightness of the light.
- Cookie: The alpha channel of the Cookie texture is used to determine the brightness of the light. If the Light is of Type Spot or Directional, then this must be a 2D Texture. If the Light is of Type Point, then this must be a Cubemap.
- Cookie Size: This property is only available if the Type is set to Directional. This property allows you to scale the cookie texture that is applied to the Directional light.
- Shadow Type (Pro only): Shadows are only available with the Pro license. Using the Forward rendering path (see Camera Properties), only Directional lights produce shadows. Using the Deferred rendering path, Point and Spot lights also produce shadows.
- No Shadows: This light does not produce shadows.
- Hard Shadows: The light produces hard shadows. This is the preferred method for generating shadows because it is cheaper to compute.
- Soft Shadows: This method produces shadows with blurred edges. This method is slower than Hard Shadows, but it produces a better result.
- Strength: The darkness of the shadow will be multiplied by this value to reduce the intensity of shadows produced by this light.
- Resolution: This property determines the size of the shadow map texture used by this light. Very High Resolution will produce better shadows at the expense of speed and memory.
- Bias: If the light produces an effect called Shadow Acne then you can adjust the Bias parameter to resolve the artifacts. However, setting the Bias parameter too high may cause Peter Panning.
In the image above, the cylinder appears to be “floating” above the plane when in fact, the cylinder is placed exactly on the plane.
- Softness: This parameter is only available if Shadow Type is set to Soft Shadows. This parameter is used to scale the size of the soft area of the shadow (the outer edge of the shadow where it begins to soften). Valid values are in the range [1 .. 8]. With a value of 1, the shadow will appear to be a Hard Shadow. With a value of 8, the shadow will appear soft around the “penumbra”.
- Softness Fade: This parameter is only available when Shadow Type is set to Soft Shadows. This value determines the intensity of the shadow penumbra (the soft edge of the shadow that is partially in shadow) based on the distance from the camera.
- Draw Halo: Draws a Halo effect at the position of the light. Although this property can be assigned to Directional lights, it usually doesn’t make sense to do so because Directional lights don’t really have a position, only a direction. This property should only be set on positional lights like Point and Spot lights. Optionally, a Halo effect can also be achieved using a Halo component.
- Flare: Adds a Flare effect to the light. The Flare effect mimics a Lens Flare effect that is seen on real camera lenses. This parameter accepts a Flare texture. Optionally, this effect can be achieved using a Flare component.
- Render Mode: This property determines how and when the light is applied to objects.
- Auto: The quality of the light renderer is determined by the proximity and intensity of the light relative to the object being rendered.
- Important: This light will always be rendered using per-pixel lighting shaders.
- Not Important: This light will always be rendered using per-vertex lighting shaders.
- Culling Mask: Include or exclude objects assigned to different layers from being affected by the light.
- Lightmapping: This property determines how this light contributes to the Lightmapping process.
- Auto: Uses real-time dynamic lighting when the light is close to the camera and baked lighting information is used when the light is far away from the camera. This technique requires dual lightmaps to be baked for your scene.
- Realtime Only: Lighting information for this light will not be baked into the lightmap. This is useful if your light’s properties (Color or Intensity for example) will change at run-time.
- Baked Only: This light is not simulated at run-time. It is only used as a light source for the light map. This is useful for static lights that will not change at runtime. Do not use this setting on lights that should generate dynamic shadows at run-time.
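The Culling Mask mentioned above is a layer bitmask: each of Unity's 32 layers corresponds to one bit, and an object is affected by the light only if the bit for its layer is set. A small sketch of how such a mask works (illustrative Python, not the Unity API):

```python
def layer_mask(*layer_indices):
    """Build a culling mask with one bit set per layer index (0-31)."""
    mask = 0
    for layer in layer_indices:
        mask |= 1 << layer
    return mask

def affects_layer(mask, layer):
    """True if an object on the given layer is included by the mask."""
    return bool(mask & (1 << layer))

# A light that only affects the Default layer (0) and a custom layer 8:
print(affects_layer(layer_mask(0, 8), 8), affects_layer(layer_mask(0, 8), 3))
```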
Materials and Shaders
A Shader is the source code that defines how a particular effect is rendered. A Material uses a Shader and the Material exposes the properties defined in the Shader to the Inspector. Also, you cannot apply a Shader directly to a GameObject in the scene, but instead you must apply a Material to the GameObject.
Unity provides a set of built-in shaders that you can use in your games. The built-in shaders are split into groups called Shader Families. Each Shader Family defines several variants of the base shader type for that family.
The different Shader Families you have available in Unity are:
- Normal Shader Family: This is probably the most common shader family. These shaders are used on opaque objects that do not need to appear transparent or light-emitting.
- Transparent Shader Family: These shaders are used on transparent objects that are either fully, or semi-transparent.
- Transparent Cutout Shader Family: These shaders are used on objects that have either opaque or fully transparent pixels. This is useful for solid objects that contain holes, like a fence, a chain-link, or blades of grass.
- Self-Illuminated Shader Family: These shaders will simulate an object that emits light. In this case, the object will appear lit even if there are no lights illuminating the object. Objects with a Self-Illuminated shader applied to them will not illuminate other objects!
- Reflective Shader Family: These shaders allow you to apply a reflective cube map on the material. The alpha channel of the base texture is used to determine the amount of reflectivity that is applied at each pixel.
The Normal Shader Family defines the most basic shaders used in Unity. This shader type is used on opaque objects that do not have any transparency or emissive properties.
This shader family has several variants and each variant defines different properties that affect the way the shader is rendered.
The Vertex Lit variant performs only per-vertex lighting and does not perform any per-fragment lighting or lighting effects that require per-fragment processing, such as light cookies, normal mapping, or shadows.
This is a fast shader and should be used whenever correct lighting results are not very important. It is recommended you use this shader type on small objects or objects that are placed far away from the camera.
The Vertex Lit shader has the following properties that are exposed in the Material:
- Main Color: The Main Color property can be used to lighten or darken the Base texture. If the Main Color is white, then the texture will appear as it is in the Base texture. If it is black, then the texture will appear black.
- Spec Color: The Spec Color is used to determine the color of the specular highlight.
- Emissive Color: The Emissive Color is added to the base texture and can be used to brighten the object even in the absence of light.
- Shininess: The Shininess property determines how shiny the surface of the object appears. A low shininess value will result in a large specular highlight and a high shininess value will result in a small specular highlight.
- Base (RGB): The Base texture that is used to apply the image to the object. Textures can be scaled on the object by adjusting the Tiling property and the texture can be offset by adjusting the Offset parameter.
- Tiling: This property determines how many times the texture is repeated across the surface of the object in each direction.
- Offset: As the name suggests, the Offset parameter is used to shift the texture across the object.
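The Tiling and Offset properties map directly onto the texture-coordinate transform that Unity's built-in shaders apply before sampling (uv * tiling + offset). A quick sketch of that transform (illustrative Python, not shader code):

```python
def transform_uv(uv, tiling=(1.0, 1.0), offset=(0.0, 0.0)):
    """Apply a Material's Tiling and Offset to a texture coordinate:
    the shader samples the texture at uv * tiling + offset."""
    return (uv[0] * tiling[0] + offset[0], uv[1] * tiling[1] + offset[1])

# A Tiling of (2, 2) repeats the texture twice in each direction:
print(transform_uv((0.75, 0.75), tiling=(2.0, 2.0)))
```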
The following object properties are used in the lightmapping exercise later in this article. Each Position and Rotation is given as (X, Y, Z):
- Back Plane: Name: Back, Position (0, 0, 5), Rotation (270, 0, 0)
- Bottom Plane: Name: Bottom, Position (0, -5, 0), Rotation (0, 0, 0)
- Left Plane: Name: Left, Position (0, -5, 0), Rotation (0, 0, 270)
- Right Plane: Name: Right, Position (5, 0, 0), Rotation (0, 0, 90)
- Top Plane: Name: Top, Position (0, 5, 0), Rotation (0, 180, 180)
- Cube01: Position (2, -1.5, 1), Rotation (0, 45, 0)
- Cube02: Position (-2.5, -3, 0), Rotation (0, 0, 0)
- Point light: Position (0, 4.5, 0), Range: 12
- Static: True, Position (0, -2.5, -2.5)
- Point light: Position (0, 0, 0), Type: Point, Color: Green, Render Mode: Important, Lightmapping: Realtime Only
The Diffuse shader computes per-pixel lighting contribution using the Lambert reflectance lighting model. This shader will not produce specular highlights.
The Specular shader adds an additional parameter to control the specular shininess.
The Bumped Diffuse shader adds an additional texture parameter called the Normalmap. The Normalmap texture is used to add additional detail to the surface of the object without adding additional vertices to the model. This shader does not produce a specular highlight.
The Bumped Specular shader adds a specular highlight property to the material.
The Parallax Diffuse shader adds additional parameters for Height and Heightmap. The Height property is a scalar property that is used to adjust the apparent height of the surface of the object and the Heightmap texture property uses the Alpha channel of the texture to determine high and low parts of the surface. The opaque parts of the alpha map will appear extruded while transparent parts of the alpha map will appear recessed.
This shader is generally applied to brick and mortar textures where you want the bricks to appear higher than the mortar grout.
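The idea can be sketched with the classic parallax-offset formula from the graphics literature (this illustrates the textbook technique, not Unity's exact shader source): the texture coordinate is shifted along the view direction in proportion to the sampled height, so raised areas appear to occlude lower ones as the camera moves.

```python
def parallax_uv_offset(height_sample, height_scale, view_dir):
    """Offset to add to the texture coordinate. `view_dir` is the
    normalized tangent-space view vector (x, y, z); `height_sample` is the
    value read from the heightmap's alpha channel, scaled by the
    material's Height property."""
    h = height_sample * height_scale
    vx, vy, vz = view_dir
    return (h * vx / vz, h * vy / vz)
```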
The Parallax Specular shader is similar to the Parallax Diffuse shader except it has an additional property for the Specular Color and the Shininess values.
The Decal shader is similar to the Diffuse shader with an additional texture that is placed on top of the original texture. The Decal texture is not blended with the Base texture, it just replaces the pixel behind it.
The Diffuse Detail shader exposes an additional texture parameter that is used to add detail to a texture when the camera is close to the surface. The Detail texture will be faded in as the camera gets close to the surface. The purpose of this shader is to hide the filtering artifacts that become visible when you look closely at low-resolution textures.
Transparent Cutout Shaders
Light Mapping
Lightmapping is the process of computing high-quality lighting information and storing this information in a texture. This texture is “mapped” over the static objects in your scene which gives the impression of high-quality lighting at run-time. The process of generating the light maps is called baking the lightmaps.
Unity comes with a built-in lightmapping program called Beast that you can use in the free version of Unity to generate static lightmaps for your scenes.
To understand how lightmapping works in Unity, we will create a simple scene that uses lightmapping to generate direct and indirect lighting information for your scene.
Create a New Scene
Start Unity and create a new empty project. Let’s call this new project Lightmapping. For this example, you do not need to import any of the standard packages.
Unity will automatically create a new empty scene for you.
Save the scene and call it Lightmapping.
Create a Box
Create 5 planes in your scene using the plane properties (Back, Bottom, Left, Right, and Top) listed earlier in this article.
Your scene should look something similar to what is shown below.
Add Some Color
The scene looks pretty boring at this point with no colors. Let’s color two of the walls in our room. We’ll color the left wall red, and the right wall blue.
Create two new materials. Name the first material Red_Material and the second material Blue_Material.
Select the Red_Material and set its Main Color property to Red.
Select the Blue_Material and set its Main Color property to Blue.
Drag-and-Drop the Red_Material from the Project view onto the Left wall.
Drag-and-Drop the Blue_Material from the Project view onto the Right wall.
Place 2 cubes in the scene using the Cube01 and Cube02 properties listed earlier in this article.
Your scene should now look something like this:
Save your scene.
Create a yellow material (call it Yellow_Material) and place it on Cube02.
Now let’s put a light in the scene.
Create a Light
Create a Point light in the scene using the Point light properties listed earlier (Position (0, 4.5, 0), Range: 12).
Leave all other properties as the default.
Our scene is looking a little better now with this point light, but I think it can get better.
Create the Lightmap
Up to this point, we haven’t done anything that we haven’t seen before (if you have been following the previous articles, you will already have some experience placing lights in the scene). Now we are going to add some realism to the scene by pre-computing into a lightmap lighting information that would otherwise be very difficult to achieve at real-time frame rates.
Open the Lightmapping view (select Window -> Lightmapping from the main menu).
Click the Bake button at the top of the Lightmapping view to show the Bake settings.
Set the Mode to Single Lightmaps.
Single vs Dual Lightmaps
Single lightmaps will generate a single lightmap for the entire scene. Single lightmaps will generate full lighting information for the lights in your scene that are set to Auto or Baked Only. Single lightmaps take less time to bake because Beast only needs to generate one set of lightmaps instead of two. Single lightmaps are required when using the forward rendering mode.
Dual lightmaps are supported in the deferred rendering mode. Dual lightmaps will generate two sets of lightmaps: a Near set and a Far set. The Near lightmaps will store indirect lighting information for real-time lights, while the direct lighting contributions will be computed from the real-time lights in the scene (unless they are set to Baked Only). The Far lightmaps will be baked with both direct and indirect lighting contributions for lights in the scene that are marked Auto or Baked Only. Realtime Only lights will still be used to compute direct lighting on Far objects, but dynamic shadows will be disabled.
The transition between the Near and Far lightmaps is determined by the Shadow Distance property in the project’s Quality Settings window (select Edit -> Project Settings -> Quality from the main menu).
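The transition can be sketched like this. The exact blend Unity performs around the Shadow Distance is internal, so the fade band here is an assumption for illustration:

```python
def near_lightmap_weight(distance_to_camera, shadow_distance, fade_fraction=0.2):
    """Illustrative weight of the Near lightmap set for an object at the
    given distance. Inside the Shadow Distance the Near set is used;
    beyond it the Far set takes over, with a short linear transition band
    near the boundary (fade_fraction of the Shadow Distance)."""
    fade_start = shadow_distance * (1.0 - fade_fraction)
    if distance_to_camera <= fade_start:
        return 1.0
    if distance_to_camera >= shadow_distance:
        return 0.0
    return (shadow_distance - distance_to_camera) / (shadow_distance - fade_start)
```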
For the purpose of this exercise, we will only use Single lightmaps.
For more information on Single and Dual lightmaps in Unity, please refer to the Lightmapping In-Depth page on the Unity website here: http://docs.unity3d.com/Documentation/Manual/LightmappingInDepth.html.
Direct vs Indirect Lighting
Direct lighting is the light that comes directly from the light source. In reality, when light hits an object, the light will “bounce” (or reflect) off of an object’s surface and it may go on to hit something else. This “bounced” light causes the indirect illumination of surfaces around that object.
Indirect light is the contribution of light that is reflected from surfaces and contributes to the illumination of surfaces in close proximity.
Indirect light cannot be simulated using the standard rasterization techniques generally used to render your game, but the indirect lighting contribution from environment lighting, or from light reflected off nearby objects, can be simulated by baking this information into a lightmap. The lightmap generator uses a rendering technique that is well suited to computing indirect lighting contributions but not to real-time rendering, and therefore this information is pre-baked into the lightmaps for use at run-time.
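To make the idea of "baked indirect light" concrete, here is a toy, radiosity-style sketch of a single bounce between surface patches. Everything here (the patch lists, the precomputed form-factor table) is a simplification invented for illustration; Beast's actual algorithm is far more sophisticated:

```python
def bake_one_bounce(direct, albedo, form_factors):
    """Compute per-patch lightmap values with one bounce of indirect light.

    direct[i]          -- direct light arriving at patch i
    albedo[i]          -- fraction of incoming light patch i reflects
    form_factors[i][j] -- how much of patch j's reflected light reaches
                          patch i (assumed precomputed from the geometry)
    """
    n = len(direct)
    baked = []
    for i in range(n):
        # Gather the light every other patch reflects toward patch i.
        indirect = sum(form_factors[i][j] * albedo[j] * direct[j]
                       for j in range(n) if j != i)
        baked.append(direct[i] + indirect)
    return baked

# A patch in full light next to a patch in shadow: the shadowed patch
# still picks up some bounced light from its lit neighbour.
print(bake_one_bounce([1.0, 0.0], [0.5, 0.5], [[0.0, 0.2], [0.2, 0.0]]))
```

Running the gather step repeatedly would simulate additional bounces, which is essentially what raising the Bounces setting asks Beast to do.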
Open the Lightmapping view if it isn’t open already.
Set the Mode to Single Lightmaps.
Set the Quality to High. The quality setting will determine the overall quality of the generated lightmap. While you are trying to get the lighting right, it is a good idea to keep this setting at Low quality because the lightmap generation will take considerably less time to produce and give a good overall indication of how the high quality lightmaps will look. When you are satisfied with how the lightmaps look, don’t forget to change this setting back to High and rebake the lightmaps.
Set the Bounces to 1. Setting the Bounces property higher than 0 will enable indirect lighting to be computed on objects. Setting the Bounces property to 0 will cause only direct lighting contributions to be baked into the lightmap.
Set the Sky Light Intensity to 0. We don’t want to have any skylight contributions added to our lightmap. Setting this value to 0 effectively disables the Sky Light.
Set the Bounce Boost and the Bounce Intensity values to 2.0. This will increase the contribution of indirect light on objects.
Set the Final Gather Rays to 1000. This value controls the number of Final Gather Rays that are generated at each Final Gather Point (see Contrast Threshold below). Increasing this value will improve the quality of the final lightmap at the expense of computation time. Generally, 1000 is a good value to use, but if you find the quality of the final lightmap insufficient you can increase it; keep in mind this will have an adverse effect on the time it takes to generate the lightmap.
Set the Contrast Threshold to 0.05. The Contrast Threshold value tells Beast how often to generate Final Gather Points, and thus affects the total number of Final Gather Rays that need to be generated to lightmap the scene. Setting this value lower will cause more Final Gather Points to be generated (because less contrast variance is required to trigger a new Final Gather Point). Setting this value higher will cause fewer Final Gather Points to be generated, because more contrast variance is required. Higher values of Contrast Threshold will generate smoother results in the final lightmap at the expense of detail.
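The contrast-driven behaviour can be sketched in one dimension as a hypothetical refinement pass that inserts extra Final Gather Points wherever neighbouring samples differ by more than the threshold. This illustrates the principle only; it is not Beast's implementation:

```python
def place_gather_points(samples, contrast_threshold=0.05):
    """Decide where extra Final Gather Points are needed.

    samples -- list of (position, brightness) pairs taken along a surface.
    A new point is placed between two neighbours whenever their brightness
    differs by more than the threshold, so lowering the threshold creates
    more points (and therefore more Final Gather Rays to trace).
    """
    new_points = []
    for (p0, b0), (p1, b1) in zip(samples, samples[1:]):
        if abs(b1 - b0) > contrast_threshold:
            new_points.append((p0 + p1) / 2.0)  # refine the high-contrast span
    return new_points

# A sharp brightness change between positions 1.0 and 2.0 triggers refinement:
print(place_gather_points([(0.0, 0.1), (1.0, 0.12), (2.0, 0.5)], 0.05))
```

Dropping the threshold to 0.01 would also refine the subtle change between the first two samples, mirroring how a lower Contrast Threshold increases bake times.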
Set the Interpolation value to 0.5. The Interpolation value controls the interpolation method used to interpolate the colors between the Final Gather Points. A value of 0 means a simple linear interpolation between points will be used to determine the intermediate colors. A value of 1 will give smoother results but can also reduce the detail in the final result.
Set the Interpolation Points to 15. This property controls the number of Final Gather Points to interpolate to produce the color at the current pixel in the lightmap. Higher values can create smoother (more blurred) results at the cost of detail in the lightmap.
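A one-dimensional sketch of this blending, using simple inverse-distance weighting over the k nearest points, shows why a higher point count smooths (and blurs) the result. Beast's gradient-based interpolation is more elaborate; the names here are illustrative:

```python
def interpolate_color(pixel_pos, gather_points, k=15):
    """Estimate a lightmap pixel's colour by blending the k nearest
    Final Gather Points with inverse-distance weights.

    gather_points -- list of (position, colour) pairs; blending more
    points (higher k) averages over a wider area, giving smoother
    but blurrier results.
    """
    nearest = sorted(gather_points,
                     key=lambda p: abs(p[0] - pixel_pos))[:k]
    weights = [1.0 / (abs(pos - pixel_pos) + 1e-6) for pos, _ in nearest]
    total = sum(weights)
    return sum(w * c for w, (_, c) in zip(weights, nearest)) / total

# A pixel midway between a dark point (0.0) and a bright point (1.0)
# blends to roughly the average brightness.
print(interpolate_color(0.5, [(0.0, 0.0), (1.0, 1.0)], k=2))
```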
Set Ambient Occlusion to 0. Ambient occlusion is a rendering technique that approximates the brightness of a pixel based on nearby objects in the scene. It simulates the occlusion of environment lighting due to nearby geometry. Please refer to Chapter 17 of the GPU Gems text for a complete explanation of Ambient Occlusion (available online here: http://http.developer.nvidia.com/GPUGems/gpugems_ch17.html).
This image shows an application of Ambient Occlusion. In the left image, the model is shaded with simple diffuse shading. The model on the right is shaded with both diffuse shading and ambient occlusion. Notice the darker areas on the inner thighs of the model, and on the ground around its feet.
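The technique can be sketched as hemisphere sampling: cast short rays in random directions around the surface normal and measure what fraction escape into open space. The `is_occluded` scene query below is a hypothetical stand-in for a real ray cast against the scene geometry:

```python
import math
import random

def ambient_occlusion(is_occluded, samples=64, seed=1):
    """Approximate ambient occlusion at a single surface point.

    Casts `samples` rays in random directions over the hemisphere above
    the point; the result is the fraction of directions NOT blocked by
    nearby geometry (1.0 = fully open, 0.0 = fully occluded).
    is_occluded(theta, phi) -- hypothetical scene query returning True
                               when a ray in that direction hits geometry.
    """
    rng = random.Random(seed)          # fixed seed for reproducibility
    open_dirs = 0
    for _ in range(samples):
        theta = math.acos(rng.random())     # angle from the surface normal
        phi = 2.0 * math.pi * rng.random()  # angle around the normal
        if not is_occluded(theta, phi):
            open_dirs += 1
    return open_dirs / samples

# A point with no nearby geometry is fully unoccluded:
print(ambient_occlusion(lambda theta, phi: False))  # → 1.0
```

A point beneath an overhang that blocks half of its hemisphere would come out roughly half occluded, which is exactly the darkening visible in the figure's crevices.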
Set the LOD Surface Distance to 1. This property affects the LOD Group component that is used to replace high-poly models with low-poly models as the camera moves further away from the GameObject. Because LOD Groups are only available in the Pro version of Unity, this option may not be available to users of the Free version.
Make sure the Lock Atlas option is not checked. The Texture Atlas determines the tiling and offset of objects in the lightmap. Since many objects share the same lightmap, Unity needs to know how much area of an object’s surface is occupying the lightmap texture. The Texture Atlas is automatically generated every time you bake the lightmaps. Checking the Lock Atlas option will prevent the Texture Atlas from being modified and the current tiling and offset settings will be retained for each renderer object (GameObjects with a Mesh Renderer component attached to them). You probably want Unity to figure out the Texture Atlas for you so you can leave the Lock Atlas option unchecked.
Set the Resolution property to 50. This value determines the number of texels per world unit that will be used to generate the lightmaps. Higher values will generate more detailed lightmaps at the expense of texture memory. Lower values will generate less detailed lightmaps.
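The memory impact of the Resolution setting is simple arithmetic; the small helper below is just for illustration:

```python
def lightmap_texels(width_units, height_units, resolution):
    """Approximate lightmap area a flat surface consumes at a given
    texels-per-world-unit Resolution setting."""
    return (width_units * resolution, height_units * resolution)

# A 10 x 10 world-unit floor at Resolution 50 occupies a 500 x 500
# texel region of the lightmap atlas -- 250,000 texels.
print(lightmap_texels(10, 10, 50))  # → (500, 500)
```

Halving the Resolution quarters the texel count, which is why lowering it is the quickest way to shrink lightmap memory and bake times.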
Make Objects Static
Before we can bake the lightmaps, we need to make some geometry static. Only objects that are marked Lightmap Static will be considered during the lightmapping process.
In the Lightmapping view, select the Object button at the top of the window.
Select the Renderers button under Scene Filters to show only the GameObjects in the Hierarchy view that have a Mesh Renderer component attached to them.
Select the Back, Bottom, Left, Right, Top, Cube01, and Cube02 GameObjects from the Hierarchy view.
With all of the static renderers selected in the Hierarchy view, make sure the Static option is checked in the Inspector as shown in the screenshot above.
You should also confirm that the Lightmap Static option is checked for your selected object in the Lightmapping view, but this should be the case if the object is marked Static in the Inspector.
With the Lightmapping view open to the Object tab, select the Point light GameObject in the Hierarchy view.
Set the Lightmapping mode to Baked Only. This will ensure the light is considered during the light baking process but the light is ignored at run-time.
Set the Baked Shadows property to On. It doesn’t matter whether you choose the On (Realtime: Hard Shadows) option or the On (Realtime: Soft Shadows) option, because real-time lighting contributions are ignored on lights that are set to Baked Only.
Leave all other settings default and click the Bake Scene button on the bottom of the Lightmapping window.
The resulting view should look something like this:
This looks better than it did with just a simple point light, but there are still a few issues we can improve. For example, the shadow edges are very hard. There is no softening of the shadow edges as the shadowed surface gets further away from the object that is casting the shadow.
With the Point light selected in the Hierarchy view, open the Lightmapping view and click the Object tab again.
Set the Shadow Samples property to 20. This value determines the number of shadow rays that are cast to determine whether the pixel being rendered is actually in shadow. In reality, light scatters off an object’s surface in an infinite number of directions; some of these rays go on to hit objects that emit no light, while others reach something that does. Obviously, we can’t simulate an infinite number of scattered rays, but we can cast a fixed number of randomly directed shadow rays to estimate whether a point is lit or in shadow. With only one shadow ray per pixel, it is likely that the randomly generated ray will point in the wrong direction and produce an incorrect result; this is why we got blotchy shadows before. To resolve this, we just need to increase the Shadow Samples parameter so that more shadow rays are cast from each pixel in the lightmap, which increases the likelihood that the shadows will be computed correctly.
Set the Shadow Radius to 0.5. The Shadow Radius parameter adjusts the radius of the currently selected Point light. In reality, no light can possibly have a radius of 0; in a rendered scene, however, Point lights have a radius of exactly 0, which does not produce realistic shadows. Lights with a radius larger than 0 are actually called Area Lights because they have a surface area greater than 0. Area Lights are required to produce realistic shadows with a soft penumbra (the region at the edge of a shadow that is neither completely in shadow nor completely lit). Point lights with a radius of 0 will only produce hard shadows in the lightmap, even if you select Soft Shadows in the Inspector view.
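How Shadow Samples and Shadow Radius interact can be sketched as a Monte Carlo estimate: each sample aims at a random point on a disc of Shadow Radius around the light, and the fraction of unblocked samples gives the penumbra. All names below are illustrative, not Beast's internals:

```python
import random

def shadow_factor(is_blocked, shadow_samples=20, shadow_radius=0.5, seed=1):
    """Estimate how shadowed a lightmap pixel is.

    Casts shadow_samples rays toward random offsets on a square region
    of half-width shadow_radius around the light; the lit fraction of
    samples (1.0 = fully lit, 0.0 = fully shadowed) forms a soft edge.
    is_blocked(dx, dy) -- hypothetical occlusion query for a ray aimed
                          at offset (dx, dy) on the light's surface.
    """
    rng = random.Random(seed)  # fixed seed for reproducibility
    lit = 0
    for _ in range(shadow_samples):
        dx = (rng.random() * 2 - 1) * shadow_radius
        dy = (rng.random() * 2 - 1) * shadow_radius
        if not is_blocked(dx, dy):
            lit += 1
    return lit / shadow_samples

# An occluder covering half the light yields a partially lit pixel:
print(shadow_factor(lambda dx, dy: dx < 0.0, shadow_samples=200))
```

With `shadow_radius=0` every ray aims at the same point, so each pixel comes out either fully lit or fully dark, which is exactly the hard-edged result seen before raising these settings.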
Now bake the lightmaps again by pressing the Bake Scene button.
Now you see that the shadows on the walls and floors are smoother and they also get smoother when the surface in shadow is further away from the shadow caster.
Emissive Materials (Pro Only)
What if we want to make something appear to glow, like radioactive slime for example? For this purpose we can use a self-illuminating material, also known as an emissive material.
First, let’s create a self-illuminated material.
Create a new material in the project view and call it “Green_Emissive_Material”.
Select the new material in the project view and change the Shader parameter to “Self-Illumin/Diffuse”.
Change the Main Color to green.
Set the Emission (Lightmapper) value to 2. This value determines the intensity of the light being emitted from the object. A value of 0 will not contribute anything to the lightmapping computations.
Add a sphere to your scene with the following properties:
Drag-and-Drop the Green_Emissive_Material onto the Sphere in the Scene view.
Don’t forget to make sure the sphere is marked as Static!
Save your scene.
Open the Lightmapping view and click the Bake Scene button. When the bake is complete, you should see something similar to what is shown below.
Observe the green glow that is emitted from the sphere and illuminating the objects around it.
This works fine for static objects in our scene, but what if we wanted to achieve this glowing effect at run-time on dynamic objects?
Emissive materials (like the one used on the green sphere) cannot be used to emit light onto other objects in the scene dynamically at run-time. However, we can “fake” emissive materials with point lights.
Emissive Materials on Dynamic Objects
A similar effect can be achieved with dynamic objects by attaching a Point light to the emissive GameObject.
Un-check the Static flag on the green Sphere we just created. This will prevent the Sphere from being considered for lightmapping.
Attach a Rigidbody component to the sphere so that it will be physics controlled.
If the Sphere doesn’t have a Collider component, attach a Sphere Collider component and set its Material property to “Bouncy” (from the Physics Materials standard package).
Add a Point light GameObject as a child of the Sphere GameObject and set its properties to the following values:
Setting the light’s Render Mode property to Important will make sure it is rendered with per-pixel lighting, and setting the Lightmapping property to Realtime Only will ensure that it is not considered when generating the lightmap (which it wouldn’t be anyway, since we unchecked the Static flag).
You should see something similar to what is shown below.
In this article, you have been introduced to cameras, lights, materials and lightmapping in Unity. We have seen how we can use Self-Illuminating materials in our scene to create apparently glowing objects in the lightmap and I’ve also shown a technique that can be used to “fake” this effect in real-time (at the expense of GPU processing power).
Most of the content for this article has been derived from various sources in the Component Reference documentation on the Unity website: