Normal Mapping with Cg and OpenGL

# Introduction

Normal mapping is a shader technique used to add surface detail to the face of a polygon without requiring further tessellation of the mesh.

In some cases it may be impractical to tessellate a model to the degree the artist requires in order to capture its full detail. Shader programmers needed a solution that allows a model to be expressed with a very low polygon count yet still retain the detail that artists work so hard to achieve. That solution is normal mapping.

It works like this: the artist creates a highly detailed model in 3D modeling software, and from this high-polygon model a texture is generated that encodes the surface normal of the mesh at every point. The artist then creates a simplified, low-polygon version of the same model. The low-polygon version resembles the general shape of the high-polygon model, but it does not retain the fine detail.

Using the normal map that was generated from the high-polygon model, the lost detail can be simulated on the low-polygon model.

# Dependencies

The demo shown in this article uses several third-party libraries to simplify the development process.

• The Cg Toolkit (Version 3): The Cg Toolkit provides the tools and API needed to integrate the Cg shader programs in your application.
• Boost (1.58.0): Boost has some very useful libraries that I use throughout my demo applications. In this demo, I use the Signals, Filesystem, Function, and Bind Boost libraries to provide generic, platform-independent functionality that simplifies some of the features used in this demo.
• OpenGL Extension Wrangler (GLEW): The OpenGL extension wrangler is the API I chose to check for the presence of the required extensions and to use the extensions in the application program.
• Simple DirectMedia Layer (1.2.14): Simple DirectMedia Layer (SDL) is a cross-platform multimedia library that I use to create the main application window, initialize OpenGL, and handle keyboard, mouse and joystick input.
• OpenGL Mathematics (GLM): An OpenGL-centric mathematics library for 3D graphics applications.
• Simple OpenGL Image Library (SOIL): SOIL is a tiny C library used primarily for uploading textures into OpenGL.

All of the dependencies described here are included in the source code example included at the end of this article.

# The BumpMapping Demo

For the BumpMapping demo, I will set up the application and the shaders that will be used to render the effect. I will be using the EffectFramework library that I used in the previous article titled [Environment Mapping with Cg and OpenGL].

Let’s start by taking a look at the application code.

The first thing we do in any application source code is include all the headers, define any global variables, and declare any methods that are defined later in the source file.

These headers that come after the required pre-compiled header include the classes from the EffectFramework library. Details of the EffectFramework library were demonstrated in the article titled [Introduction to Cg Runtime with OpenGL] so I will not go into any further detail here about them.

Next we’ll define a few global variables.

The Application variable will be used to create the main application window, initialize OpenGL, and process and translate events.

A pivot camera is also defined that will allow us to pan and pivot the view.

We will be rendering a torus to demonstrate the effect. We will use an OpenGL display list to generate a procedural torus and to render the torus every frame. The g_TorusDisplayList variable is used to store the ID of the display list that defines the procedural torus.

On lines 23-25, the OpenGL texture IDs are defined to reference the different textures that are used in this demo. The g_EnvCubeMap variable will be used to refer to the cube-map texture that will be used for the skybox that is rendered as the background of our scene. This cube map will also be used as the environment map that is applied to the model to simulate reflection on the torus. For a full explanation of environment maps, you can refer to my previous article titled [Environment Mapping with Cg and OpenGL]. The g_NormalMap variable is used to reference the normal map that is applied to the torus to generate the detail of the stone pattern, and the g_DiffuseMap variable is used to apply a color texture to the torus. In this application the same texture coordinates are used for both the diffuse map and the normal map, so these two textures should match up with each other.

The g_ReflectiveMaterial variable is used to define the diffuse, specular and reflection coefficient values that will be used to render the torus.

On lines 29-32 a few variables are defined that will control the lighting for the scene.

The g_bAnimate variable is used to toggle the animation of the torus and the light in the demo. The [space] bar can be used to toggle the animation on and off, and the g_fRotatePrimitive variable determines how much rotation to apply to the light and torus primitive.

On lines 36-38 a few variables are defined to store mouse information that is used to control the pivot camera.

We also need to forward-declare the event callbacks that will be registered with the application class.

As well as a few additional function declarations.

The InitGL method is used to initialize a few OpenGL state parameters before we start rendering and the InitGlew method will initialize the OpenGL extensions that are used in this demo.

Any texture or model resources that are used in this demo will be loaded in the LoadResources method.

The DrawCubeMap method will render the skybox model that is used as the background for the scene.

## The Main Method

The main method will simply initialize a few parameters and register the callbacks for the application class.

The camera is initialized using the parameters that were defined in the global scope and the callback functions are registered with the application class.

On line 88, the main application window is created and the application loop is started by calling the Application::Run method.

## The OnInitialize Method

The OnInitialize method is invoked after the application has created the OpenGL context and the main application window. We can use this method to load graphics resources and initialize the resources that are used for the demo. We will also load the effect shaders that are used to render the torus.

Before we can use any OpenGL extension features, we have to initialize the OpenGL states and GLEW extension support.

On line 233, the EffectManager instance is created and initialized and a reference to that instance is stored in the local variable called effectMgr.

On line 236, the torus display list is generated that defines the torus primitive that is used to demonstrate the bump mapping effect.

On lines 240 and 241 we register event callbacks for when an effect is loaded and when a Cg runtime error occurs.

On line 244 the shader effect that performs the bump-mapping (normal-mapping) is loaded by the effect manager.

The effect manager class defines a few shared parameters that can be applied to all shaders that define effect parameters with a particular semantic (see EffectManager::CreateSharedParameters to see which shared parameters are created by the effect manager class).

On line 247, the global ambient shared parameter is set.

On lines 250-253, a material that attempts to mimic the surface properties of a gold brick is defined in the g_ReflectiveMaterial variable.

## Initialize the OpenGL context

Before we do any rendering, we should make sure that the OpenGL context is setup correctly. We will also be relying on the existence of some of the OpenGL 2.0 functions so we will use GLEW to setup the extensions and check for OpenGL 2.0 support.

The only state that necessarily needs to be initialized is the value the depth buffer is cleared to when we call glClear with the GL_DEPTH_BUFFER_BIT flag, and depth testing must be enabled.

Since I am using functionality introduced in OpenGL 2.0, I need to initialize GLEW and check for version support. If GLEW fails to initialize or the required extension support is missing, an error message is displayed and the program exits.

## Creating the Procedural Torus

The torus object is created procedurally in the CreateTorusDisplayList method. This method is identical to the one shown in the previous article titled [Environment Mapping with Cg and OpenGL]. The only difference in this demo is that in addition to the vertex normal, we also need to generate the tangent vector and optionally the binormal vector for every vertex of the torus object. Together with the vertex normal, the tangent and binormal vectors are needed to generate the tangent-space basis vectors that are used to transform the surface normals that are encoded in the normal map into object space. The creation of the tangent-space basis vectors will be discussed in more detail later in this article.

The TorusVertex method is used to compute the tangent and binormal vectors for each vertex of the torus.

The u and v parameters, passed from the CreateTorusDisplayList method, are in the range [0..1].

The sTexCoord and tTexCoord static variables allow the texture coordinates to be stretched across the surface of the primitive in order to achieve a more desirable texture mapping. I chose values that produce a nice effect with the base texture and normal map I am using.

The POSITION, NORMAL, TEXCOORD0, TANGENT, and BINORMAL constants define the generic attribute IDs for those corresponding types. In the shader program, we can bind the input streams for the tangent and binormal vectors to the TANGENT and BINORMAL semantics in order to receive this data from the application. This will be shown in the section about the shaders.

On lines 158-160, the X, Y, and Z vertex positions are computed from the parametric equation of a torus:

$\begin{array}{rcl}x(u,v) & = & (R+r\cos(2\pi{v}))\cos(2\pi{u}) \\ y(u,v) & = & (R+r\cos(2\pi{v}))\sin(2\pi{u})\\z(u,v) & = & r\sin(2\pi{v})\end{array}$

Where

• $u, v$ are in the interval $[0, 1]$.
• $R$ is the outer radius of the torus.
• $r$ is the inner radius of the torus.

The normal is computed from the partial derivatives of the parametric equation: it is the cross product of the tangent and binormal vectors described below.

The tangent is computed by taking the partial derivative of the parametric equation with respect to u, and the binormal (also known as the bitangent) is computed by taking the partial derivative with respect to v.

The texture coordinates for the torus are simply the passed-in u and v parameters scaled by the values of the sTexCoord and tTexCoord parameters. Scaling the texture coordinates allows us to repeat the texture across the face of the torus.
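The derivation above can be sketched in standalone C++. This is a hypothetical helper mirroring what the demo's TorusVertex method computes; the Vec3 type and the cross and normalize helpers are assumptions, not the demo's actual code:

```cpp
#include <cassert>
#include <cmath>

// Minimal 3-component vector (assumed helper type).
struct Vec3 { float x, y, z; };

static Vec3 cross(const Vec3& a, const Vec3& b)
{
    return { a.y * b.z - a.z * b.y,
             a.z * b.x - a.x * b.z,
             a.x * b.y - a.y * b.x };
}

static Vec3 normalize(const Vec3& v)
{
    float len = std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z);
    return { v.x / len, v.y / len, v.z / len };
}

// Evaluate the torus at (u, v) in [0..1] and derive the tangent-space
// vectors. R is the outer radius, r is the inner radius.
void TorusVertex(float u, float v, float R, float r,
                 Vec3& position, Vec3& tangent, Vec3& binormal, Vec3& normal)
{
    const float twoPi = 6.283185307179586f;
    float cu = std::cos(twoPi * u), su = std::sin(twoPi * u);
    float cv = std::cos(twoPi * v), sv = std::sin(twoPi * v);

    position = { (R + r * cv) * cu, (R + r * cv) * su, r * sv };

    // Tangent: partial derivative with respect to u (the constant 2*pi
    // factor is dropped because the vector is normalized afterwards).
    tangent = normalize({ -(R + r * cv) * su, (R + r * cv) * cu, 0.0f });

    // Binormal (bitangent): partial derivative with respect to v.
    binormal = normalize({ -r * sv * cu, -r * sv * su, r * cv });

    // The normal is perpendicular to both: cross(tangent, binormal).
    normal = cross(tangent, binormal);
}
```

At u = v = 0 with R = 1 and r = 0.25, this yields position (1.25, 0, 0), tangent (0, 1, 0), binormal (0, 0, 1), and normal (1, 0, 0), pointing outward from the torus's outer ring as expected.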

In most cases, you won’t have to be concerned with the generation of the tangent and binormal vectors but since we are procedurally generating the torus geometry, we need to do this ourselves. Generally, the 3D modeling package can generate these vectors for you and export them together with the rest of the geometry information.

If you are interested in learning how to compute the tangent space basis vectors for an arbitrary mesh, you could refer to the article titled [Computing Tangent Space Basis Vectors for an Arbitrary Mesh] located here: http://www.terathon.com/code/tangent.html

On lines 181-185, the vertex attributes are written to the display list. Using the glVertexAttrib[N]f methods is equivalent to using the corresponding glVertex3f, glNormal3f, and glTexCoord2f methods to commit vertex data to the GPU, but there is no (standard) method in the OpenGL SDK for sending tangent and binormal vertex data to the GPU. So, to be consistent, I use the glVertexAttrib[N]f methods to send all vertex data to the GPU.

For the best resource explaining which vertex attributes are associated with which IDs, refer to the NVIDIA developer documentation for the GPU profiles supported by Cg (http://http.developer.nvidia.com/Cg/vp40.html; see the section titled "Varying Input Semantics"). Although this documentation is woefully incomplete, it is almost the only source that documents which shader semantics are associated with which OpenGL attribute IDs. If you can provide a better source that explains how to use the vertex attributes to send vertex data to the GPU depending on the profile, then please leave a comment and let me know.

The LoadResources method is used to load the cube map texture, the diffuse texture, and the normal map texture that will be used to render our scene.

This method is almost identical to the same method in the previous article titled [Environment Mapping with Cg and OpenGL] so I won’t go into any detail here. Basically we load the three textures used by the demo. First, the cube map texture is loaded that will be used to render a scenic background for our scene as well as used as the environment map to apply a reflective surface effect on the bumped torus.

On line 115, the normal map texture is loaded and on line 121, the diffuse map texture is loaded.

Whenever an effect is loaded by the effect manager, the OnEffectLoaded method will be invoked passing as a parameter a reference to the effect that was loaded.

We can use this method to set some of the constant parameters that are used by the effect (such as the textures).

Three texture parameters are set in this function. First, the cube map is assigned to the "evnSampler" parameter. Then the normal map is assigned to the "normalSampler" effect parameter, and finally the diffuse map is assigned to the "diffuseSampler" parameter.

## The OnUpdate Method

For every frame, the application class will invoke the OnUpdate method.

On lines 394-399, we check to see if any of the loaded shader effects have been modified on disk, and if so, the effect manager class reloads the effects automatically by calling the EffectManager::ReloadEffects method.

On lines 401-411 the rotation parameter is updated and the light position is rotated around the scene in a circle.

## The OnPreRender Method

Before we can render the scene, we need to make sure that the effect manager’s shared parameters are set correctly.

On line 417, we get a reference to the EffectManager singleton. The EffectManager defines a few shared parameters that shader parameters can be connected to, so that any effect that defines a parameter with a particular semantic will automatically have that parameter updated when the effect manager's shared parameters are updated. You can refer to the EffectManager::CreateSharedParameters method and the EffectManager::UpdateSharedParameters method in the sample code to find out which shared parameters are supported by the effect manager.

On line 427 the view transform is applied so that objects that are rendered using the fixed-function pipeline are rendered correctly.

## The OnRender Method

The OnRender method will render a single torus in the center of the world with a rotation about the Y-axis. It will also render the skybox that is used as the background for our scene. A sphere that matches the color of the light is also rendered to represent the single light source that illuminates the scene.

We start by getting a reference to the EffectManager instance and setting a parameter called eyePos, which represents the position of the viewer in world space.

The depth buffer is cleared using the glClear method. Notice that we don't need to clear the color buffer because the skybox that we render every frame will always overdraw the entire color buffer anyway.

The skybox is rendered at the position of the camera by calling the DrawCubeMap method and an axis that represents the focal point for the pivot camera is rendered using the DrawAxis method.

The position and orientation of the torus primitive are determined by the world transform associated with the object. This transformation is stored in the worldMatrix parameter defined on line 479.

Next we’ll draw the torus using the “C8E6_bump_mapping” shader effect that was loaded in the OnInitialize method.

On line 485, the shader parameter that represents the world position of the viewer is queried and set to the eyePos parameter that was defined earlier.

The gLight shader parameter is a struct parameter with members Light.position and Light.color. These are set to the global parameters for the light's position and color, respectively.

Then we define the world transform of the torus by building a rotation matrix that rotates the torus about the Y-axis.

On line 495, the uniform shader parameters are set, and on line 498 they are updated and committed to the GPU.

On line 501, the only technique defined in the shader is queried and for each pass in the technique, the torus geometry is rendered using the display list that was created using the CreateTorusDisplayList method earlier.

Then we also want to draw a sphere that has the color and position of the light that is used to illuminate the torus.

On lines 511-518, a sphere is rendered with the same color, and at the same position, as the light that is used to illuminate the torus.

And finally, the back buffer is presented on the screen by calling the Application::Present method.

We are only using the “C8E6_bump_mapping.cgfx” shader that was loaded in the OnInitialize method shown earlier.

## Structs and Globals

First we will define a few structs and some global variables that are used in the vertex program and the fragment program.

The Material struct stores the material properties that are required by the fragment program. The Ke parameter defines the emissive color component and the Ka parameter defines the ambient color component. Diffuse and specular values are defined in the Kd and Ks properties respectively, and the shininess parameter defines the specular power of the material, which controls how shiny the material appears. In order to implement the environment effects on our primitives, we also need to know how reflective the material is. For that we define the reflection property, which should be assigned a value between 0 and 1, where 0 indicates the material is not reflective and 1 indicates the material is completely reflective.

The Light struct defines the properties of the light that are used in the fragment shader. For this demo we are only interested in the position and general color of the light. You might see some lighting models split the light color into ambient, diffuse, and specular colors, but for simplicity, I don't do that.

For a complete discussion on implementing lighting in Cg, you can refer to my previous article titled [Transformation and Lighting in Cg].

Then we define three samplers. The diffuseSampler is used to sample the base texture of the primitive and the normalSampler is used to sample the normal map. The evnSampler is the environment cube map texture that will be used as the reflection map that is blended with the object’s base color. The same environment map is also used to render the background of our scene.

Then we also define three matrices which are used to transform the object-space vertices and normals into clip space and world space. The gModelToWorldIT matrix is used to transform the vertex normal and tangent vectors into world space correctly even if the object has non-uniform scaling applied to its world transform.

Multiplying the vertex normal by the inverse transpose of the world matrix ensures that the normal vector stays perpendicular to the surface. That is, the orientation is preserved while the scaling is removed. If we simply multiply the normal by the world matrix, and the world matrix contains a non-uniform scale (different scale factors on the 3 axes), then the normal becomes skewed. At least, that's the short answer. For a longer answer that may or may not make sense, you can refer to "Subject 5.27: How do I transform normals?" of the comp.graphics.algorithms frequently asked questions.
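The effect is easy to demonstrate with a diagonal (non-uniform) scale matrix, whose inverse transpose is just the reciprocal of each scale factor. The snippet below is a sketch with hypothetical helpers, not the demo's code: a tangent lying in the surface stays perpendicular to the normal only when the normal is transformed by the inverse transpose.

```cpp
#include <cassert>
#include <cmath>

struct Vec3 { float x, y, z; };

static float dot(const Vec3& a, const Vec3& b)
{
    return a.x * b.x + a.y * b.y + a.z * b.z;
}

// Transform a direction by the world matrix diag(sx, sy, sz).
static Vec3 transform(const Vec3& v, float sx, float sy, float sz)
{
    return { v.x * sx, v.y * sy, v.z * sz };
}

// Transform a normal by the inverse transpose of diag(sx, sy, sz),
// which for a diagonal matrix is simply diag(1/sx, 1/sy, 1/sz).
static Vec3 transformNormal(const Vec3& n, float sx, float sy, float sz)
{
    return { n.x / sx, n.y / sy, n.z / sz };
}
```

With tangent (1, -1, 0) and normal (1, 1, 0) on a 45-degree surface, a non-uniform scale of (2, 1, 1) skews the naively transformed normal (2, 1, 0) so it is no longer perpendicular to the transformed tangent (2, -1, 0), while the inverse-transpose result (0.5, 1, 0) remains perpendicular.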

On lines 48-51, additional global variables are defined for the position of the viewer in world space, the material, the light, and the global ambient term that are used to properly calculate the lighting in the fragment shader.

## The Vertex Program

The vertex program will be used to transform the object-space vertex positions into clip space. Also, the vertex normals and tangents will be transformed to world-space and passed to the fragment program.

The first few parameters to the function are the position, texCoord, normal, and tangent values expressed in object space; these are passed from the application program. The next set of out parameters are computed in the vertex program and passed as input parameters to the fragment program. The uniform parameters do not change during a single pass; they are assigned to the effect parameters in the application before the geometry is rendered.

On line 69, the clip-space position of the vertex is computed and assigned to the output parameter with the POSITION semantic. This is the only out parameter that is not an in parameter of the fragment program.

The object-space position and normal vectors are transformed into world space because I chose to perform all of the lighting calculations in world space instead of object space (as was done in the previous article titled [Transformation and Lighting in Cg]). My reason for this is that I wanted to combine the environment mapping technique demonstrated in [Environment Mapping with Cg and OpenGL] with normal mapping. In order to sample the correct color from the environment map, the reflected ray must be computed in world space, because the environment cube map also exists in world space. I could perform the basic lighting equations in tangent space (the accepted way of doing lighting when applying normal maps), or I could do the lighting calculations in object space or world space. If I did the lighting in tangent space, I would need to convert the reflected ray from tangent space into world space in the fragment program anyway in order to do the cube map lookup. I would also need to transform the light vectors from world space into tangent space in the vertex program and pass the tangent-space light vectors as parameters to the fragment program, which would limit the program to one or two lights per pass.

So I decided to compute only the tangent-space basis vectors in the vertex program and do all of the lighting calculations in the fragment program in world space. This only requires that the surface normal from the normal map be transformed from tangent space into world space, which can be done simply by multiplying the tangent-space normal by the transpose of the matrix formed from the world-space tangent basis vectors. This will be shown in the fragment program.

Before we can perform correct lighting based on a surface normal that is defined in a texture, we need to be able to transform the normal in the normal map into world space (the same space in which the light position and vectors are expressed). To do this, we create a set of basis vectors that describe the correct space. These basis vectors are defined for every vertex of the mesh. We have already seen one of the required vectors in other vertex programs: the normal. In order to create an orthonormalized rotation matrix, we need at least one more vector, called the tangent vector, which is supplied by the application when rendering the model. The third required vector, called the binormal (or bitangent), can be computed simply by taking the cross product of the normal and tangent vectors. This is shown on line 77 in the code sample above.

## The Fragment Program

The fragment program will do the final lighting calculations and blend the base color of the torus together with a reflected color from the environment map.

Since the surface normal stored in the normal map is an RGB value with each component in the range [0..1], we need to convert it into a normalized surface normal with each component in the range [-1..1]. We use the expand method defined on line 82 to convert the range-compressed surface normal into a 3D normal we can use.
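The range conversion is just a scale and bias per component. Here is a standalone C++ sketch of the same expand operation (the Vec3 type is an assumption, not the shader's float3):

```cpp
#include <cassert>
#include <cmath>

struct Vec3 { float x, y, z; };

// Map a range-compressed normal component from [0..1] back to [-1..1]:
// n = 2 * c - 1, applied per component.
static Vec3 expand(const Vec3& c)
{
    return { 2.0f * c.x - 1.0f,
             2.0f * c.y - 1.0f,
             2.0f * c.z - 1.0f };
}
```

The texel (0.5, 0.5, 1.0), which is the color a flat region of a normal map stores, expands to (0, 0, 1): the unperturbed tangent-space normal.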

The first thing we do in the fragment program is build a rotation matrix from the tangent-space basis vectors that were computed in the vertex program. Since the vectors are passed as texture coordinates, we need to re-normalize them to remove any scaling that may have been accidentally introduced. The tangent-space matrix is built by setting the tangent vector as the X-vector, the binormal as the Y-vector, and the normal vector as the Z-vector of the transform matrix.

It’s important to understand that this Tangent, Binormal, Normal (TBN) matrix will transform a vector from world-space to tangent-space. So in order to transform the tangent-space normal to world-space, we must multiply by the inverse of the tangent-space matrix.

The tangent basis matrix can be used to convert the view (eye) position, the light position, and the light direction vectors from world space into tangent space, but it can also be used to convert the normal from the normal map from tangent space into world space (it makes more sense to transform one vector into world space than to transform three vectors into tangent space). So on line 111, we transform the tangent-space surface normal into world space by multiplying the normal vector (N) by the tangent matrix on the right, which is equivalent to multiplying the normal vector by the transpose of the tangent matrix on the left.

Since we know the tangent matrix is orthonormalized, we can compute its inverse simply by taking its transpose, which saves us the trouble of inverting the tangent matrix. For a review of matrix transposes and inverses, you can refer to my previous article titled [3D Math Primer for Game Programmers (Matrices)].
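Because the inverse of an orthonormal TBN matrix is its transpose, transforming a tangent-space normal to world space reduces to a linear combination of the basis vectors. A minimal C++ sketch of this idea (hypothetical types and helper, not the shader code):

```cpp
#include <cassert>
#include <cmath>

struct Vec3 { float x, y, z; };

// The rows of the TBN matrix (T, B, N in world space) take a world-space
// vector into tangent space. Multiplying by the transpose instead maps a
// tangent-space normal n back to world space: n.x*T + n.y*B + n.z*N.
static Vec3 tangentToWorld(const Vec3& n,
                           const Vec3& T, const Vec3& B, const Vec3& N)
{
    return { n.x * T.x + n.y * B.x + n.z * N.x,
             n.x * T.y + n.y * B.y + n.z * N.y,
             n.x * T.z + n.y * B.z + n.z * N.z };
}
```

As a sanity check, an unperturbed tangent-space normal (0, 0, 1) maps exactly onto the interpolated world-space vertex normal N, as expected.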

Now that we have the surface normal in world-space, we can use it to calculate the regular lighting contributions as shown in the previous article titled [Transformation and Lighting in Cg], and to compute the reflected color from the environment map as shown in the previous article titled [Environment Mapping with Cg and OpenGL].

The final fragment color is simply a blend between the lit texture color of the fragment with the reflected color of the environment based on the material’s reflection factor.
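That blend is a standard linear interpolation by the reflection coefficient (equivalent to Cg's lerp). A C++ sketch with a hypothetical Color type and helper name:

```cpp
#include <cassert>
#include <cmath>

struct Color { float r, g, b; };

// Linearly interpolate between the lit base color and the reflected
// environment color: reflection = 0 keeps the lit color, 1 is a mirror.
static Color blendReflection(const Color& lit, const Color& reflected,
                             float reflection)
{
    return { lit.r + (reflected.r - lit.r) * reflection,
             lit.g + (reflected.g - lit.g) * reflection,
             lit.b + (reflected.b - lit.b) * reflection };
}
```

A reflection factor of 0.5, for example, yields the midpoint of the two colors.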

## Techniques and Passes

In this effect, we only define a single technique and a single pass.

Both the vertex program and the fragment program specify the keyword "latest" as the profile to compile for. This causes the Cg runtime to compile the vertex and fragment programs using the latest profile supported on the platform the application is running on.

If everything goes well, you should see something similar to what is shown below.

# References

• The Cg Tutorial: The Definitive Guide to Programmable Real-Time Graphics (2003). Randima Fernando and Mark J. Kilgard. Addison Wesley.
• The cube map images used for this demo were downloaded from http://www.hazelwhorley.com/textures.html. Special thanks to Hazel Whorley for creating these great cube maps.
• The brick texture and normal map were downloaded from http://www.tutorialsforblender3d.com/Textures/Bricks-NormalMap/Bricks_Normal_3.html.

The source code including dependencies used to create this demo can be downloaded from the link below.

NormalMapping.zip

## 9 thoughts on “Normal Mapping with Cg and OpenGL”

• This is a file that is required by SDL to provide a sound implementation on the Windows operating system. It is included in the DirectX SDK, and if your search paths are configured correctly, the compiler should automatically find this file.

Make sure you configure your Win32 search paths in Visual Studio 2008 & 2010 (not just the x64 search paths!)

1. The texture coordinates of the torus are wrong/not set.

You can see this if you only apply a texture to the torus. If you replace glCallList( g_TorusDisplayList ) with glutSolidTeapot(1.0f) you can see the shader is working though 🙂

• Not sure why the textures aren’t working for you. When you run the demo on your PC, you don’t see the same torus that is shown in the YouTube video that is embedded at the end of this article? If you don’t see the textured torus when you run this demo then I would have to guess there is something unique with your hardware. Perhaps the attribute ID I am using for the texture coordinates (first stage) is something other than ‘8’?

• No, I don’t see the torus shown in the vid, I only see the reflection of the skybox.

I fixed it by replacing TEXCOORD0, on line 54 in C8E6_bump_mapping.cgfx, with ATTR8. Don’t know if it still works on other machines though.

PS: I tested this on a ATI Mobility Radeon HD 4570.

• I had exactly the same problem with the same video card series: ATI Mobility Radeon HD 5400

• This only happens on ATI cards, and it's because the glVertexAttrib3f() function is used to set the attribute ID. For ATI you have to use ATTR0-15 when setting the vertex attribute manually like this. NVIDIA works fine with the ATTR semantics as well, so to be ATI-safe, use those.

2. Great tutorial, keep it up! I really like your tutorial style: it doesn't just give us the solution, it also gives us the knowledge we need to create our own engine!

I also found a bug 🙂 In the cgfx file this line makes no sense; I mean, it's pointless to multiply a vector with a matrix this way (it also causes invalid normals).
N = mul( N, tangentMatrix );

Solution:
N = mul( tangentMatrix, N );

PS: sorry for my English.