# Transformation and Lighting in Cg 3.1


In this article I will demonstrate how to implement a basic lighting model using the Cg shader language. If you are unfamiliar with using Cg in your own applications, then please refer to my previous article titled Introduction to Shader Programming with Cg 3.1.

# Introduction

Lighting is one of the most important features in games. If used correctly, lighting in games can cause the participant to experience the desired emotions at the intended moments. This is an important factor when one is trying to create a game that completely envelops the participant. For me, one of the most enveloping games I’ve played was Doom 3. Its masterful use of light and shadows sometimes caused me to scream like a little girl when baddies suddenly jumped out of the darkness.

In this article, I will demonstrate how the basic lighting model can be implemented using shaders. If you’ve only used the fixed-function pipeline of Direct3D or OpenGL, you have probably taken advantage of the built-in lighting system that is implemented in the rendering pipeline. If you want to write your own programmable vertex or fragment shaders, then you need to bypass the built-in transformation and lighting calculations and implement them yourself. In this article, I will show you how to do that.

# Dependencies

The demo shown in this article uses several 3rd party libraries to simplify the development process.

Any dependencies used by the demo are included in the source distribution (source code is available upon request).

# The Basic Lighting Model

In computer graphics, there are several lighting models that can be used to simulate the reflection of light off materials with specific properties. The Lambertian reflectance model simulates light that is diffused equally in all directions. The Gouraud shading model calculates the diffuse and specular contributions at every vertex of a model, and the color of each pixel is interpolated between vertices; this is also known as per-vertex lighting. Phong shading, often referred to as per-pixel lighting, is a lighting method that interpolates the vertex normal across the face of a triangle in order to calculate per-pixel (or per-fragment) lighting. Using the Phong lighting model, the final color of the pixel is the sum of the ambient (and emissive), diffuse, and specular lighting contributions. A variation of the Phong shading model is the Blinn-Phong lighting model, which calculates the specular contribution slightly differently than the standard Phong lighting model. In this article, I will use the Blinn-Phong lighting model to compute the lighting contribution of the object’s materials.

The general formula for the lighting model is:

$\mathbf{c}=\mathbf{ambient}+\mathbf{emissive}+\mathbf{diffuse}+\mathbf{specular}$

Where $\mathbf{c}$ is the final output color of the pixel, $\mathbf{ambient}$ is the ambient term, $\mathbf{emissive}$ is the emissive (the color the material emits itself without any lights), $\mathbf{diffuse}$ is the light reflected evenly in all directions based on the angle to the light source, and $\mathbf{specular}$ is the concentration of light that is reflected based on both the angle to the light and the angle to the viewer.

## Ambient and Emissive Terms

The ambient term can be thought of as the global contribution of light to the entire scene. Even if there are no lights in the scene, the global ambient contribution can be used to create some global illumination. If there are lights in the scene, each light can contribute to the global ambient term if it is in the line-of-sight of the object it is illuminating. The global ambient illumination is then the sum of each individual light’s ambient contribution.

The emissive term is the amount of light the material emits without any lights or ambient contribution. This value can be used to simulate materials that glow in the dark. It does not automatically create a light source in the scene. If you want to simulate an object that actually emits light, you have to create a light source at the object’s location, and everything in the scene must take that light source into consideration when rendering (except the object itself).

If we make the following definitions:

• $\mathbf{i}_{a}$ is the global ambient contribution.
• $\mathbf{k}_{a}$ is the ambient reflection constant of the material.
• $\mathbf{k}_{e}$ is the emissive reflection constant of the material.

Then the final ambient contribution of the fragment will be:

$\mathbf{ambient}=(\mathbf{k}_{a} \mathbf{i}_{a})$

And the emissive contribution will be:

$\mathbf{emissive}=\mathbf{k}_{e}$

## Lambert Reflectance

Lambert reflectance is the diffusion of light that occurs equally in all directions, more commonly called the diffuse term. The only thing we need to consider to calculate the Lambert reflectance term is the angle of the light relative to the surface normal.

If we consider the following definitions:

• $\mathbf{p}$ is the position of the surface we want to shade.
• $\mathbf{N}$ is the surface normal at the location we want to shade.
• $\mathbf{L}_{p}$ is the position of the light source.
• $\mathbf{L}_{d}$ is the diffuse contribution of the light source.
• $\mathbf{L}$ is the normalized direction vector pointing from the point we want to shade to the light source.
• $\mathbf{k}_{d}$ is the diffuse reflectance of the material.

Then the diffuse term of the fragment is:

$\mathbf{L}=normalize(\mathbf{L}_{p}-\mathbf{p})$

$\mathbf{diffuse}=\mathbf{k}_{d} * \mathbf{L}_{d} * max(\mathbf{L}\cdot\mathbf{N}, 0)$

The $max(\mathbf{L}\cdot\mathbf{N},0)$ ensures that for surfaces that are pointing away from the light (when $\mathbf{L}\cdot\mathbf{N} < 0$), the fragment will not have any diffuse lighting contribution.
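In Cg, this term can be sketched as follows (the variable names `P`, `N`, `LightPosition`, `LightColor`, and `Kd` are illustrative, not taken from the demo’s source):

```cg
// Illustrative Cg fragment for the Lambert diffuse term.
float3 L       = normalize( LightPosition - P );     // vector to the light
float  NdotL   = max( dot( N, L ), 0 );              // clamp back-facing surfaces to 0
float4 diffuse = Kd * LightColor * NdotL;
```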

## The Phong and Blinn-Phong Lighting Models

The final contribution to calculating the lighting is the specular term. The specular term is dependent on the angle between the normalized vector from the point we want to shade to the eye position (V) and the normalized direction vector pointing from the point we want to shade to the light (L).

Blinn-Phong vectors

If we consider the following definitions:

• $\mathbf{p}$ is the position of the point we want to shade.
• $\mathbf{N}$ is the surface normal at the location we want to shade.
• $\mathbf{L}_{p}$ is the position of the light source.
• $\mathbf{eye}_p$ is the position of the eye (or camera’s position).
• $\mathbf{V}$ is the normalized direction vector from the point we want to shade to the eye.
• $\mathbf{L}$ is the normalized direction vector pointing from the point we want to shade to the light source.
• $\mathbf{R}$ is the reflection vector from the light source about the surface normal.
• $\mathbf{L}_{s}$ is the specular contribution of the light source.
• $\mathbf{k}_{s}$ is the specular reflection constant for the material that is used to shade the object.
• $\alpha$ is the “shininess” constant for the material. The higher the shininess of the material, the smaller the highlight on the material.

Then, using the Phong lighting model, the specular term is calculated as follows:

$\mathbf{L}=normalize(\mathbf{L}_{p}-\mathbf{p})$

$\mathbf{V}=normalize(\mathbf{eye}_{p}-\mathbf{p})$

$\mathbf{R}=2(\mathbf{L}\cdot\mathbf{N})\mathbf{N}-\mathbf{L}$

$\mathbf{specular}=\mathbf{k}_{s} * \mathbf{L}_{s} * (\mathbf{R}\cdot\mathbf{V})^{\alpha}$

The Blinn-Phong lighting model calculates the specular term slightly differently. Instead of calculating the angle between the view vector (V) and the reflection vector (R), the Blinn-Phong lighting model calculates the half-way vector between the view vector (V) and the light vector (L).

In addition to the previous variables, if we also define:

• $\mathbf{H}$ is the half-angle vector between the view vector and the light vector.

Then the new formula for calculating the specular term is:

$\mathbf{L}=normalize(\mathbf{L}_{p}-\mathbf{p})$

$\mathbf{V}=normalize(\mathbf{eye}_{p}-\mathbf{p})$

$\mathbf{H}=normalize(\mathbf{L}+\mathbf{V})$

$\mathbf{specular}=\mathbf{k}_{s} * \mathbf{L}_{s} * (\mathbf{N}\cdot\mathbf{H})^{\alpha}$

As you can see, the Blinn-Phong lighting model requires an additional normalization for the half-angle vector, which involves a square-root that can be expensive on some profiles. The advantage of the Blinn-Phong model becomes apparent if we consider both the light vector (L) and the view vector (V) to be at infinity. For directional lights, the light vector (L) is simply the negated direction vector of the light. When this is the case, the half-angle vector (H) can be pre-computed for each directional light source and passed to the shader program as a parameter, saving the cost of computing the half-angle vector for each fragment.
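In Cg, the Blinn-Phong specular term can be sketched as follows (the names `P`, `N`, `LightPosition`, `EyePosition`, `LightColor`, `Ks`, and `Shininess` are illustrative):

```cg
// Illustrative Cg fragment for the Blinn-Phong specular term.
float3 L        = normalize( LightPosition - P );  // vector to the light
float3 V        = normalize( EyePosition - P );    // vector to the viewer
float3 H        = normalize( L + V );              // half-angle vector
float4 specular = Ks * LightColor * pow( max( dot( N, H ), 0 ), Shininess );
```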

The graph below shows the specular intensity for different values of $\alpha$.

Specular Intensity

By default (with a specular power of 0), the intensity of the specular highlight is always 1.0 regardless of the dot product of the half-angle vector and the surface normal ($\mathbf{N}\cdot\mathbf{H}$). This is shown on the blue line on the image above.

The red line shows a specular power of 1. In this case, the intensity increases linearly as $\mathbf{N}\cdot\mathbf{H}$ approaches 1. This is equivalent to the diffuse contribution.

As the specular power value increases, the specular highlight becomes more focused. In general, very shiny objects (like glass and polished metal) have a high specular power (between 50 and 128) and dull objects (like plastics and matte materials) have a low specular power (between 2 and 10).

## Attenuation

Attenuation is the gradual fall-off of the intensity of a light as it gets farther away from the object it illuminates.

The attenuation factor of a light is determined by the distance from the point being shaded to the light, and three constant parameters that define the constant, linear and quadratic attenuation of the light source.

If we define the following variables:

• $d$ is the scalar distance between the point being shaded and the light source.
• $k_{c}$ is the constant attenuation factor of the light source.
• $k_{l}$ is the linear attenuation factor of the light source.
• $k_{q}$ is the quadratic attenuation factor of the light source.

Then the formula for the attenuation factor is:

$attenuation=\frac{1}{k_{c}+k_{l}d+k_{q}d^{2}}$

For lights that are infinitely far away from the object being rendered, such as directional lights, we don’t want any attenuation to occur (if we are simulating the effects of the sun, for example). In this case, we set the constant attenuation factor to 1.0 and the linear and quadratic attenuation factors to 0.0. This will always result in an attenuation factor of 1.0, independent of the distance to the light source.

The result of the attenuation factor is a scalar that is applied to both the diffuse and specular terms.

The image below shows how the intensity of the light is affected as the light source moves further away from the point being lit.

Light Attenuation Graphs

The blue line in the image above represents the falloff of the light with constant attenuation set to 1.0 and both linear and quadratic attenuation set to 0. In this case, the attenuation stays constant regardless of how far away the light source is.

The green line represents the intensity of the light with the constant and linear attenuation set to 1.0 and quadratic attenuation set to 0. In this case, the light’s intensity is 50% when the light is only 1 unit away from the point being lit.

The red line represents the intensity of the light with the constant and quadratic attenuation set to 1.0 and the linear attenuation set to 0. In this case, the light’s intensity approaches 0 much faster. Quadratic attenuation should therefore be used with caution.

## Spotlights

Spotlights have the additional property that they only emit light within a directed cone in the direction of the light. The cone of the light can be defined by two cosine angles: the inner cone angle and the outer cone angle. The intensity of the resulting light is computed by taking the dot product between the normalized vector from the light position to the point being shaded and the light’s direction vector.

If we define the following variables:

• $\mathbf{p}$ is the position of the point we want to shade.
• $\mathbf{L}_{p}$ is the position of the light source.
• $\mathbf{L}_{d}$ is the direction of the light source.
• $\mathbf{L}^{\prime}$ is the normalized direction vector from the light source to the point we want to shade.
• $\theta_{d}$ is the cosine of the angle between $\mathbf{L}^{\prime}$ and the normalized direction vector of the light source.
• $\theta_{inner}$ is the cosine angle of the inner cone of the spotlight.
• $\theta_{outer}$ is the cosine angle of the outer cone of the spotlight.

Then the formula for calculating the intensity of the spotlight is:

$\mathbf{L}^{\prime}=normalize(\mathbf{p}-\mathbf{L}_{p})$

$\theta_{d}=(\mathbf{L}^{\prime}\cdot\mathbf{L}_{d})$

$spotlight=smoothstep(\theta_{outer}, \theta_{inner}, \theta_{d})$

The smoothstep function smoothly interpolates between two values. This function produces better results than simply doing a linear interpolation between the two values.

It is interesting to note that the cosine of the inner cone angle will be larger than the cosine of the outer cone angle. This is because we are measuring the dot product of a vector relative to the direction vector of the light: if the vectors are parallel (in the same direction), the dot product is 1.0, and if they are perpendicular, the dot product is 0.0. This means that if you want a wider spotlight cone, you have to decrease the cosine of the cone angles (move them closer to 0.0).
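A Cg sketch of this computation (the names `P`, `LightPosition`, `LightDirection`, `CosInnerCone`, and `CosOuterCone` are illustrative):

```cg
// Illustrative Cg fragment for the spotlight intensity.
float3 L2        = normalize( P - LightPosition );         // light to point being shaded
float  cosAngle  = dot( L2, LightDirection );              // cosine of the angle to the cone axis
float  spotlight = smoothstep( CosOuterCone, CosInnerCone, cosAngle );
```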

# Transformations and the Importance of Spaces

Now that we’ve seen the general equations for the basic lighting model, we need to apply them. Before we can apply these formulas, we need to know something about the spaces in which all of our objects live. In a previous article titled 3D Math Primer for Game Programmers (Coordinate Systems), I discussed some of the common spaces that exist in 3D graphics programs. Please refer to that article if you require an explanation of object space, world space, and view space.

The reason I am bringing up spaces in this article is that it is very important that before we can sensibly calculate the lighting that is applied to an object, we must ensure that all positions and directions of objects, lights, and the eye position are in the same space.

In general, we will transform positions and directions from one space to another by multiplying by the matrix that represents that space. For example, to get from object space to world space, we transform the vertex position and vertex normals by the world-space matrix. However, to get positions and normals from object space to view space (the space that has the camera position at the origin), we can’t simply multiply the object-space positions and normals by the view matrix, because the view matrix only transforms positions and normals from world space into view space.

Let’s consider this simple notation:
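For example, if $\mathbf{v}_{object}$ is a position in object space and $\mathbf{M}_{world}$ is the object’s world transform (these symbols are introduced here for illustration), then:

$\mathbf{v}_{world}=\mathbf{M}_{world}\mathbf{v}_{object}$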

If we have positions and directions already expressed in world space and we need to express those positions and directions in object space, we can just multiply by the inverse of the world transform to get the coordinates in object space.

This is a very important point to remember when creating your lighting shaders! If you only remember one thing after reading this article, it should be this: Everything must be in the same space!

We can also combine matrices in order to transform coordinates from object space directly into clip space.

Depending on the API, the order of the multiplication may be left-to-right (as used by DirectX) or right-to-left (as used by OpenGL).
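Written out (with $\mathbf{M}_{MVP}$ denoting the combined model-view-projection matrix, a symbol introduced here for illustration), the two conventions are:

$\mathbf{M}_{MVP}=\mathbf{M}_{model}\mathbf{M}_{view}\mathbf{M}_{projection}$ (left-to-right, DirectX)

$\mathbf{M}_{MVP}=\mathbf{M}_{projection}\mathbf{M}_{view}\mathbf{M}_{model}$ (right-to-left, OpenGL)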

It is usually advisable to pre-compute this combined matrix on the CPU and pass it as a parameter to the shader program so that you avoid performing several matrix multiplies for every vertex of the mesh.

In some cases it may appear that the GPU is better at performing the matrix multiplication than pre-computing it on the CPU. For small meshes with very few vertices this may be true, but as the mesh’s vertex count increases, pre-computing the combined matrix on the CPU becomes more efficient.

# The Simple Lighting Effect

In this article, I will show a simple Cg shader effect that performs the basic lighting equation described above. This effect performs per-fragment lighting for point lights. This effect could be easily extended to perform lighting for spot-lights and directional lights as well but this is not shown here. You can refer to my previous article titled Transformation and Lighting in Cg to see how to implement a spotlight effect.

Although Cg does allow you to define the vertex program and the fragment program in separate files, I’ve decided to use the CgFX file format which allows you to specify both the vertex and fragment programs in the same file. I will first show how to implement the vertex program and then I will show the fragment program.

## Data Structures

The Cg language is very similar to the C programming language. Just as in C, we can declare structures in Cg. These structures will be used to group the stream data coming from the application, as well as the data that is passed from the vertex program to the fragment program.

### Application Data

First, I’ll define a structure to group all of the vertex attributes that are sent from the application to the shader.
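A minimal sketch of this structure, consistent with the attributes described below (the member names match the text; the exact layout is an assumption):

```cg
struct AppData
{
    float3 Position : POSITION;  // object-space vertex position
    float3 Normal   : NORMAL;    // object-space vertex normal
};
```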

For this demo, I only require two stream attributes to be sent from the application.

• Position: The 3D position of the vertex being lit.
• Normal: The 3D normal vector of the vertex. The surface normal of the vertex being lit is required to perform lighting.

Each one of these parameters is also defined with a special notation called the semantic. The semantic tells the shader compiler how to bind these attributes from the application. In this case, I am using the predefined semantic “POSITION” to bind the Position variable to the vertex position attribute sent from the application. The variable named Normal is bound to the normal vertex attribute using the predefined semantic “NORMAL”. The name of the variable has no impact on which vertex attribute it is assigned to; only the semantic is used to connect the shader variable to the vertex attributes coming from the application.

The semantic notation will make more sense when I show the implementation of the demo in C++.

### Vertex to Fragment Data

I will also create a struct that is used for both the output from the vertex program and the input to the fragment program.
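Based on the parameter names and semantics described below, a sketch of this struct might look like (the exact types are an assumption):

```cg
struct v2f
{
    float4 PosH   : POSITION;   // vertex position in homogeneous clip-space
    float3 Normal : TEXCOORD0;  // vertex normal in world-space
    float3 PosW   : TEXCOORD1;  // vertex position in world-space
};
```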

There are three parameters defined in this struct. The first parameter, PosH, will be used to output the vertex position transformed to clip-space. It is the only one of the three parameters that is output from the vertex program but not used as input to the fragment program. Every vertex program must output at least one parameter that is bound to the HPOS (or POSITION) semantic, and this must always be the vertex position in homogeneous clip-space.

The other two parameters Normal and PosW are the vertex normal and vertex position transformed to world-space. These parameters are mapped to the TEXCOORD0 and TEXCOORD1 semantics. The choice of semantic used for these two parameters is arbitrary but the TEXCOORDn semantics are not being used by any other attribute and they support interpolation over the polygon face.

### Material Definition

I also want to send material information from the application to the vertex shader. Material information is specified for an entire mesh and not per-vertex. A variable that doesn’t change its value based on a vertex attribute is called a uniform variable. A uniform variable is set once by the application before the mesh is rendered and remains constant until the shader has finished processing all of the mesh’s vertices.

I’ll define a structure which will be used to group all of the material properties together.
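A sketch of the material structure, using the reflection constants defined earlier in the lighting equations (the member names are assumptions):

```cg
struct Material
{
    float4 Ke;         // emissive reflection constant
    float4 Ka;         // ambient reflection constant
    float4 Kd;         // diffuse reflectance
    float4 Ks;         // specular reflection constant
    float  Shininess;  // specular power (alpha in the specular term)
};
```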

The material’s different components were described earlier in the document, so I won’t describe them here again. You’ll notice that none of these parameters have semantics associated with them. In the case of uniform properties, the semantics are optional. You can use semantics if it helps you to identify the variable in the application. In this article, I will not use custom semantics, but if you are curious, you can read my article titled Transformation and Lighting in Cg. In this post, I do define custom material semantics and I use these semantics to automatically connect application parameters to shader parameters.

### Light Definition

We also need to define a struct to group light properties.
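A sketch of the light structure consistent with the properties described below (the member names and constant values are assumptions):

```cg
// Cg does not support the enum type, so the light types are plain constants.
const int POINT_LIGHT       = 0;
const int SPOT_LIGHT        = 1;
const int DIRECTIONAL_LIGHT = 2;

struct Light
{
    int    Type;                  // one of the light type constants above
    float4 Color;                 // single color used for ambient, diffuse, and specular
    float3 Position;              // world-space position (point and spot lights)
    float3 Direction;             // direction (spot and directional lights)
    float  ConstantAttenuation;   // constant attenuation factor
    float  LinearAttenuation;     // linear attenuation factor
    float  QuadraticAttenuation;  // quadratic attenuation factor
    float  CosInnerCone;          // cosine of the spotlight's inner cone angle
    float  CosOuterCone;          // cosine of the spotlight's outer cone angle
};
```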

Along with the light definition, I also need to define a few constant values for the light types, since Cg doesn’t support enumerated types. I will only show how to implement point lights in this tutorial, but implementing the other light types should be easy after reading it.

The Light structure defines only a single color parameter that is used for the light’s ambient, diffuse, and specular values. Some samples or tutorials may separate the light’s different color values but I prefer to use a single color component for the ambient, diffuse, and specular lighting values.

The position of the light is defined in world-space. I’ve chosen to implement the lighting equations in world-space because it is easy to understand not because it is the most optimized method. The important thing to remember is that everything must be in the correct space for the lighting equations to be correct, either object space, world space, eye space, or even in light space.

The direction component is only applicable to spot-lights and directional lights so it won’t be used here.

The attenuation factors control the intensity of the light as it moves away from the point being lit. If you have many lights in your scene, it makes sense to define your attenuation factors for each light to achieve the right lighting effects, but with only a single light, the default attenuation factors should be sufficient. You should keep in mind that it does not make sense to compute the attenuation of directional lights since they don’t have a position. Only point lights and spot lights have a position.

The spotlight cosine angles define the inner and outer cone angles for spotlights. These values are used to determine the intensity of the light based on the direction of the spotlight and the direction from the spotlight to the point being lit. If the spotlight is pointing directly at the point being lit, the dot product of the two vectors is 1.0 and the point receives full intensity. If the point is at an angle to the spotlight, it receives a fraction of the intensity determined by the angle between the spotlight’s direction and the point being lit, together with these cosine angles.

### Global Variables

Besides the light and material properties, we also need to define a few global variables that are used by the vertex and fragment shaders.
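Using the variable names the following paragraphs refer to, the global declarations might look like this (the g_Material and g_Light instance names are assumptions):

```cg
float4   GlobalAmbient;        // global ambient contribution
float4x4 ModelViewProjection;  // transforms positions from object space to clip space
float3x3 ModelMatrixIT;        // inverse-transpose of the model matrix (for normals)
float4x4 ModelMatrix;          // transforms positions from object space to world space
float3   EyePosition;          // camera position in world space

Material g_Material;           // material of the mesh being rendered
Light    g_Light;              // the single light in the scene
```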

The GlobalAmbient variable is used to modulate the material’s ambient contribution as described in the Ambient and Emissive Terms section above.

The ModelViewProjection variable is used to hold the 4×4 transformation matrix that will transform the vertex position from local space to clip-space. This matrix will be computed by the application by multiplying the model, view, and projection matrices into a single matrix.

The ModelMatrixIT is a 3×3 matrix that is used to transform the vertex normal to world-space. This matrix is the inverse-transpose of the model matrix. We only need to use the inverse-transpose matrix to transform the normal to world space if the model matrix contains a non-uniform scale in it. If we know the model matrix does not (or cannot) contain a non-uniform scale, then we can substitute this matrix for the standard world-matrix of the object being rendered. Note that this is a 3×3 matrix. This is required because we do not want to consider the translation vector that is in a 4×4 matrix when transforming the normals. This matrix is equivalent to the gl_NormalMatrix in GLSL. The best explanation for using this matrix to transform normals I could find is here: http://www.lighthouse3d.com/tutorials/glsl-tutorial/the-normal-matrix/.

The ModelMatrix is a 4×4 matrix that transforms the vertex position into homogeneous world-space. Unlike the inverse-transpose version of this matrix, we must consider the translation component of the matrix because we are transforming points instead of vectors.

The EyePosition represents the position of the camera in world-space. The eye position can also be obtained by taking the inverse of the view matrix and extracting the 4th row (for row-major matrices) or the 4th column (for column major matrices).

## The Vertex Program

We will first take a look at the vertex program. This simple program will take the vertex attribute data sent from the application and transform that data into the correct spaces. The transformed vertex attributes will then be sent to the fragment program.

The vertex program’s main entry point takes a parameter of type AppData and returns a variable of type v2f. We will call these variables IN and OUT respectively.

As I mentioned earlier, the minimum functionality of the vertex program must transform the vertex position from object space to clip-space. This is done by multiplying the vertex position by the model-view-projection matrix.

To perform the lighting equations in the fragment program, we also need to convert the object-space vertex position and normal to world-space. This is done by multiplying the vertex position by the model matrix and the vertex normal is multiplied by the inverse-transpose of the model matrix.
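Putting the steps above together, the vertex program might be sketched as follows (the entry point name mainVP is an assumption; the matrix and struct names follow the text):

```cg
v2f mainVP( AppData IN )
{
    v2f OUT;

    // Transform the vertex position into homogeneous clip-space.
    OUT.PosH = mul( ModelViewProjection, float4( IN.Position, 1 ) );

    // Transform the position and normal into world-space for lighting.
    OUT.PosW   = mul( ModelMatrix, float4( IN.Position, 1 ) ).xyz;
    OUT.Normal = mul( ModelMatrixIT, IN.Normal );

    return OUT;
}
```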

The OUT parameter is returned by this function and used as the IN parameter of the fragment program.

## The Fragment Program

The fragment program is responsible for computing the per-fragment lighting for our objects.

This function shows an example of using a binding semantic to map the return value of the function to an output register of the fragment shader. At a minimum the fragment program must return a color value that is bound to the COLOR output semantic. In more complex shaders, the fragment program can return several color values and optionally a depth value. This shader only outputs a primary color that is mapped to the currently bound color buffer.

The incoming vertex normal must first be re-normalized, because interpolating the surface normal as a texture coordinate can cause unwanted scaling of the vector.

Next, the emissive and ambient terms are computed according to the equations discussed earlier.

The Attenuation function computes the light’s intensity based on the distance the light is away from the point being shaded according to the function shown below.
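A sketch of such a function, implementing the attenuation formula from earlier (the Light member names are assumptions):

```cg
float Attenuation( Light light, float d )
{
    // attenuation = 1 / (kc + kl*d + kq*d^2)
    return 1.0 / ( light.ConstantAttenuation
                 + light.LinearAttenuation * d
                 + light.QuadraticAttenuation * d * d );
}
```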

The diffuse and specular terms are initialized to 0 in case the point being shaded is facing away from the light.

Next, the $\mathbf{N}\cdot\mathbf{L}$ value is computed. If this factor is greater than 0, the point being shaded is facing the light; otherwise it is facing away from the light and we don’t need to compute the diffuse and specular components.

If the point is facing the light, the diffuse term is computed from the color of the light and the material’s diffuse component, modulated by the value of $\mathbf{N}\cdot\mathbf{L}$.

Similarly, the specular component is computed based on $\mathbf{N}\cdot\mathbf{H}$, the specular shininess, the color of the light and the material’s specular component according to the formulas discussed earlier.

The final color value is the sum of all of the lighting contributions. The diffuse and specular components are the only components that should be modulated by the attenuation factor of the light.
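Combining the steps described above, the fragment program might be sketched as follows (the entry point name mainFP and the uniform instance names g_Material and g_Light are assumptions; Attenuation is the helper function described above):

```cg
float4 mainFP( v2f IN ) : COLOR
{
    // Interpolation can change the length of the normal, so re-normalize it.
    float3 N = normalize( IN.Normal );

    float4 emissive = g_Material.Ke;
    float4 ambient  = g_Material.Ka * GlobalAmbient;

    float3 L = normalize( g_Light.Position - IN.PosW );
    float3 V = normalize( EyePosition - IN.PosW );

    float attenuation = Attenuation( g_Light, distance( g_Light.Position, IN.PosW ) );

    // Default to no contribution for points facing away from the light.
    float4 diffuse  = 0;
    float4 specular = 0;

    float NdotL = dot( N, L );
    if ( NdotL > 0 )
    {
        diffuse = g_Material.Kd * g_Light.Color * NdotL;

        // Blinn-Phong specular term using the half-angle vector.
        float3 H = normalize( L + V );
        specular = g_Material.Ks * g_Light.Color
                 * pow( max( dot( N, H ), 0 ), g_Material.Shininess );
    }

    // Only the diffuse and specular terms are attenuated.
    return emissive + ambient + ( diffuse + specular ) * attenuation;
}
```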

## Techniques and Passes

The final thing that must appear in the CgFX source file is at least one technique that defines all of the passes that are required to implement the effect. For this simple effect, I will only define a single technique which defines only a single pass.

The only technique in this effect defines a single pass. The pass can contain any number of state assignments (for a complete list of state assignments that can appear in a pass block, refer to the CgFX State Documentation). For this simple shader, I will only assign the VertexProgram and FragmentProgram state assignments.

The profiles used to compile the vertex program and the fragment program are gp4vp and gp4fp. For a list of the different profiles you can use to compile your shaders, please refer to the profile documentation on the NVIDIA website (http://http.developer.nvidia.com/Cg/index_profiles.html).
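A minimal technique block matching this description might look like the following (the technique, pass, and entry point names are assumptions):

```cg
technique t0
{
    pass p0
    {
        VertexProgram   = compile gp4vp mainVP();
        FragmentProgram = compile gp4fp mainFP();
    }
}
```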

# The Cg Lighting Demo

Now that we’ve seen how to create the CgFX shader source file, let’s see how we can use this shader in a C++ application.

In this article, I will not show an extensive description of the source code. The code in this sample is based on the Cg template described in the Introduction to Shader Programming article. Since most of the code is explained in full detail there, I will only show the additional code that was added to this demo.

## Data Structures

Similar to the CgFX source file, we need to define a few structures to group material and light properties. I will also add an additional structure that is used to store mesh information.

### The Material Structure

The Material structure is used to group properties that belong to a material definition and initialize the materials with some default values.

This structure should be self-explanatory. Since we may want to define several materials, we declare a vector of materials that is populated when the scene data is loaded.

I’m not using textures in the demo, but if I was, I would probably also want to define a list of texture object ID’s in the material struct (one texture object ID for each texture stage).

### The Light Structure

The Light structure is used to group properties that define a point-, spot-, or directional-light.

The Light structure defines the type, color, position, direction, attenuation factors, and spotlight cone angles. This structure matches the one defined in the CgFX file.

In this demo, I will only simulate a single light. I’ll use the g_Light global variable to store the properties of that light.

### The Mesh Structure

The Mesh structure is used to group properties that are required to draw a mesh in the scene.

The VertexXYZNorm structure defines the vertex attributes that are streamed from the application to the vertex shader. At a minimum, the vertices of our mesh must define a position and a vertex normal in object-space.

The Mesh structure defines the m_Vertices and m_Indices variables which are used to store the mesh information in system memory.

The m_vboVertices and the m_vboIndicies variables are used to identify the vertex buffer objects that store the vertex data in graphics memory. If you are unsure how to use VBOs, I recommend you refer to my previous article titled Using OpenGL Vertex Buffer Objects.

The m_VAO member variable is used to refer to the Vertex Array Object that is used to group all of the vertex attribute streams that are required to render this mesh with a single binding call.

The m_Material member variable defines the mesh’s material properties.

## Cg Parameters

Just like OpenGL, we must first define a context before we can use Cg in our application. The reference to the Cg context is stored in a CGcontext variable. The Cg context is required to load shader programs and effect files.

We will also define variables to hold the CgFX effect and the technique that is defined in the effect file.

In order to set the values of the uniform properties defined in the shader, we must define a reference to those properties in the shader effect file. We do that with the CGparameter type.

We also need to define a structure to hold a reference to the material properties defined in the effect file.

As well as a reference to the light properties defined in the effect file.

## Initialize Cg

The initialization routine is similar to the method used in the Introduction to Cg article previously posted. So I will not go into much detail about it here. In summary, we must create a Cg context, register OpenGL state assignments, load the CgFX effect, get a valid technique defined in the effect file, and get the references to the uniform parameters defined in the effect file.

The first thing we’ll do is register the error handler and create the Cg context.

We also need to register the OpenGL state assignments. This is required for the effect to work because the VertexProgram and FragmentProgram keywords used in the pass block in our effect file are OpenGL state assignments.

The cgGLSetManageTextureParameters function is not strictly necessary in this demo because I’m not using textures. Setting this parameter to CG_TRUE ensures that the texture objects that are assigned to texture samplers in an effect are automatically bound and enabled when an effect that uses them is bound.

The next step is to load the effect file.

The effect file is loaded using the cgCreateEffectFromFile function, which will load, compile, and link the shader programs defined in the effect file. If the effect fails to compile, then the error handler that was previously registered will be invoked and the compiler error will be displayed. If that happens, this function returns an invalid handle and the program exits.

If the CgFX file loaded, then we can query for a valid technique defined in the file. We cannot use the effect unless there is at least one valid technique for the current platform defined in the effect file.

So far this is no different than the InitCG method described in the Introduction to Cg article.

Next, we need to query the uniform effect parameters.

You may notice that we can use the dot-notation to refer to member variables of the struct properties defined in the effect file. Similarly, if we had array parameters, we could refer to them using array index operators: “Lights[0].Color” or “Lights[8].Position”.
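If you do have an array of lights, it can be convenient to build those fully qualified parameter names with a small helper before passing them to cgGetNamedEffectParameter. The helper below is hypothetical (it is not part of the demo, and it only formats the string; the actual Cg call is not made here):

```cpp
#include <cstdio>
#include <string>
#include <cassert>

// Builds the fully qualified name of a struct member inside an array
// element, e.g. MemberName("Lights", 0, "Color") -> "Lights[0].Color".
// The resulting string is what would be passed to
// cgGetNamedEffectParameter to query the parameter handle.
std::string MemberName(const std::string& arrayName, int index,
                       const std::string& member)
{
    char buf[256];
    std::snprintf(buf, sizeof(buf), "%s[%d].%s",
                  arrayName.c_str(), index, member.c_str());
    return buf;
}
```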

If everything went okay, then we can start rendering our scene objects using this effect.

For this demo, I am using the Open Asset Import Library to read a scene file. Since this article is not about the Open Asset Import Library, I will not go into much detail about it here. What I want to show is how I create the vertex buffer objects (VBOs) and vertex array objects (VAOs) so that I can efficiently render the mesh on the GPU.

To convert a mesh from the Open Asset Import Library format to a format that can be used by my application, I will use the ConvertMesh function. This function takes a pointer to an aiMesh and returns a pointer to a newly created Mesh object. The Mesh structure has been described earlier.

We first check the preconditions in this function. The aiMesh must at least have position and normal attributes. If we’ve confirmed that the incoming aiMesh parameter contains these vertex attributes, we can create a new Mesh object and extract these attributes.

On line 530, the material struct is copied to the new mesh. This material is copied by value so we can modify this mesh’s material after import without affecting any other mesh’s material.

The vertex positions and normals are copied from the aiMesh’s mVertices and mNormals member variables to the Mesh’s m_Vertices vector.

We also need to copy the index data from the aiMesh to our Mesh’s index buffer.

The aiMesh has an array of aiFace values. Each face structure defines a single polygon face of the mesh. If the mesh has been triangulated, then all faces in the mesh should contain only 3 indices. Since I don’t want to support non-triangulated mesh data in this demo, I will only copy the faces that contain exactly 3 indices.
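The filtering logic is simple enough to sketch in isolation. In the snippet below, `aiFace` is a minimal stand-in for the Assimp type (the real one stores a raw pointer and a count rather than a vector), so the copy loop can be shown on its own:

```cpp
#include <vector>
#include <cassert>

// Minimal stand-in for the Assimp face type used during import.
struct aiFace { std::vector<unsigned int> mIndices; };

// Copy only triangulated faces (exactly 3 indices) into a flat index
// buffer, skipping points, lines, and any non-triangulated polygons.
std::vector<unsigned int> CopyTriangleIndices(const std::vector<aiFace>& faces)
{
    std::vector<unsigned int> indices;
    indices.reserve(faces.size() * 3);

    for (const aiFace& face : faces)
    {
        if (face.mIndices.size() != 3) continue; // not a triangle: skip it

        indices.insert(indices.end(),
                       face.mIndices.begin(), face.mIndices.end());
    }
    return indices;
}
```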

Now that we have the vertex data and index data in system memory, we can use it to create our buffer data in graphics memory. First, we’ll create the Vertex Array Object (VAO) and the Vertex Buffer Object (VBO).

First, we generate unique IDs for the VAO and VBO that are associated with this mesh. Then we load the vertex attributes into the vertices VBO.

The VBO ID is bound to the GL_ARRAY_BUFFER target. This tells OpenGL that we want to use this VBO for operations that use this target.

The VBO data is filled with the glBufferData (or glBufferSubData) function. If the use of VBOs is still unclear, then please refer to my previous article titled Using OpenGL Vertex Buffer Objects.

We also need to bind and enable the vertex attribute streams that will be used to render the mesh. The vertex attribute stream data will be associated with the currently bound VAO if there is one. If you are not using VAOs, then you will have to bind and enable the attribute arrays before you can render the mesh using glDrawArrays or glDrawElements in the render function. In this demo, I will use VAOs.

The glVertexAttribPointer function allows us to bind the vertex attribute arrays to the generic vertex attributes. CgFX uses the binding semantics associated with the vertex data in the CgFX shader file to determine which attribute stream is bound to which parameter in the shader.

If you recall, the AppData in the CgFX file looks like this:

The parameter that is bound with the POSITION semantic is always associated with the generic attribute with ID 0, and the parameter that is bound with the NORMAL semantic is always associated with the generic attribute with ID 2. For a list of the standard semantics and the associated generic attribute IDs used by Cg, you can refer to the Cg Semantics table in my previous article titled Introduction to Shader Programming with Cg 3.1.

In the application code, the POSITION_ATTRIBUTE macro resolves to 0 and the NORMAL_ATTRIBUTE macro resolves to 2.
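The mapping can be captured in a small lookup, sketched below. The macro names match the ones mentioned above; the helper function itself is hypothetical and only covers the two semantics this demo uses:

```cpp
#include <cstring>
#include <cassert>

// Cg's standard binding semantics map to fixed generic vertex
// attribute IDs: POSITION -> 0, NORMAL -> 2.
#define POSITION_ATTRIBUTE 0
#define NORMAL_ATTRIBUTE   2

// Given a Cg binding semantic, return the generic attribute index it
// is associated with (only the subset used by this demo; -1 = unknown).
int AttributeIndexForSemantic(const char* semantic)
{
    if (std::strcmp(semantic, "POSITION") == 0) return POSITION_ATTRIBUTE;
    if (std::strcmp(semantic, "NORMAL")   == 0) return NORMAL_ATTRIBUTE;
    return -1;
}
```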

We also need to enable the attribute arrays with glEnableVertexAttribArray, otherwise the data will not be sent to the rendering pipeline when our mesh elements are drawn.

And finally, when we are done defining the attribute arrays for the VAO, we can unbind the VAO and VBO objects.

We also need to populate the VBO for our indices.

The index buffer must be defined separately from the vertex attribute arrays because the index data defines the order in which vertices are sent to the rendering pipeline but it does not define vertex attributes.

If everything is correct then we return the pointer to the new mesh.

## The OnDisplay Method

The OnDisplay method is invoked whenever the screen needs to be redrawn. This function will populate the Cg parameters that are constant for that frame, such as camera and light properties.

Every frame starts by clearing the color and depth buffers.

The next few lines (lines 919-922) just draw an “axis” widget at the camera’s pivot point. These functions use the deprecated fixed-function pipeline of OpenGL, so I encourage you to ignore them 😉

We need to extract the camera’s position in world space. To do that, we must invert the view matrix and take the 4th column (or the 4th row for row-major matrices). This gives the “eye” position in world space. For an explanation of the camera transform and the view matrix, you can refer to my article titled Understanding the View Matrix. The EyePosition parameter in the CgFX file is set using the cgSetParameter4fv function.
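Because a view matrix is a rotation R followed by a translation t (no scale), its inverse has the closed form [Rᵀ | -Rᵀt], so there is no need for a general matrix inverse. The sketch below computes the eye position directly from a column-major (OpenGL-convention) view matrix; the function name is my own:

```cpp
#include <cassert>

// For a rigid view matrix V = [R | t], inverse(V) = [R^T | -R^T * t],
// so the camera ("eye") position in world space is -R^T * t.
// The matrix is column-major, as in OpenGL: element (row, col) is
// stored at view[col * 4 + row].
void EyePositionFromViewMatrix(const float view[16], float eye[3])
{
    const float tx = view[12], ty = view[13], tz = view[14]; // 4th column

    for (int row = 0; row < 3; ++row)
    {
        // Row 'row' of R^T is column 'row' of R.
        eye[row] = -(view[row * 4 + 0] * tx +
                     view[row * 4 + 1] * ty +
                     view[row * 4 + 2] * tz);
    }
}
```

For example, a camera sitting at (0, 0, 5) looking down the negative z-axis produces a view matrix that translates everything by -5 in z; running that matrix through this function recovers the original eye position.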

The member variables of the Light parameter in the CgFX file are set in the same way, according to the parameter type.

With all of the per-frame parameters set, we can render our scene. The RenderNode function will recursively render the nodes of a very simple scene graph implementation.

## The RenderNode Function

The RenderNode function will render all of the meshes associated with a particular node in the scene graph. A scene node consists of a transformation matrix that is used to position and orient a mesh in the scene. Each scene node can contain zero or more meshes, and a scene node can also contain child nodes that are positioned and oriented relative to its parent node. A scene node without a parent node is called the Root node, and a scene node that does not have any child nodes is called a Leaf node.
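The node layout and the recursion order can be sketched as below. This is not the demo's actual code: the real RenderNode sets the matrix parameters and issues draw calls, while this version just counts meshes so the traversal can be shown (and tested) on its own:

```cpp
#include <vector>
#include <cstddef>
#include <cassert>

// A minimal scene-node sketch: a local transform, zero or more mesh
// indices, and zero or more child nodes (member names are illustrative).
struct SceneNode
{
    float m_LocalTransform[16];        // relative to the parent node
    std::vector<int>       m_Meshes;   // indices into the scene's mesh list
    std::vector<SceneNode> m_Children;
};

// Render this node's meshes, then recurse into the children.
// Here "rendering" is replaced by counting, purely for illustration.
std::size_t RenderNode(const SceneNode& node)
{
    std::size_t drawn = node.m_Meshes.size(); // draw this node's meshes
    for (const SceneNode& child : node.m_Children)
        drawn += RenderNode(child);           // then its descendants
    return drawn;
}
```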

To render a node, we first set the per-node shader parameters. These are the node’s matrix parameters that determine the position and orientation of the rendered mesh.

Each time a node is rendered, there are 3 parameters that need to be set:

• g_cgModelViewProjection: A 4×4 matrix that is used to transform the vertex position from object-space to clip-space.
• g_cgModelMatrix: A 4×4 matrix that is used to transform the vertex position from object-space to world-space.
• g_cgModelMatrixIT: A 3×3 matrix that is used to transform the vertex normal from object-space to world-space.
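The third matrix deserves a comment: normals must be transformed by the inverse-transpose of the model matrix's upper-left 3×3, because under non-uniform scale the model matrix itself would skew normals off the surface. Since inverse(M) = cofactor(M)ᵀ / det(M), the inverse-transpose is just the cofactor matrix divided by the determinant, as this sketch (my own helper, using row-major storage for readability) shows:

```cpp
#include <cassert>

// Compute the inverse-transpose of a 3x3 matrix, i.e. the normal
// matrix. Row-major storage: m[row * 3 + col].
// inverse(M) = cofactor(M)^T / det(M), so the inverse-transpose is
// simply the cofactor matrix scaled by 1 / det(M).
void InverseTranspose3x3(const float m[9], float out[9])
{
    // Cofactors of m.
    const float c00 =   m[4] * m[8] - m[5] * m[7];
    const float c01 = -(m[3] * m[8] - m[5] * m[6]);
    const float c02 =   m[3] * m[7] - m[4] * m[6];
    const float c10 = -(m[1] * m[8] - m[2] * m[7]);
    const float c11 =   m[0] * m[8] - m[2] * m[6];
    const float c12 = -(m[0] * m[7] - m[1] * m[6]);
    const float c20 =   m[1] * m[5] - m[2] * m[4];
    const float c21 = -(m[0] * m[5] - m[2] * m[3]);
    const float c22 =   m[0] * m[4] - m[1] * m[3];

    // Determinant via cofactor expansion along the first row.
    const float det = m[0] * c00 + m[1] * c01 + m[2] * c02;
    const float inv = 1.0f / det;

    out[0] = c00 * inv; out[1] = c01 * inv; out[2] = c02 * inv;
    out[3] = c10 * inv; out[4] = c11 * inv; out[5] = c12 * inv;
    out[6] = c20 * inv; out[7] = c21 * inv; out[8] = c22 * inv;
}
```

For a pure rotation (or rotation with uniform scale 1) the result equals the input, which is why the distinction is easy to miss until a non-uniformly scaled mesh shows lighting artifacts.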

Then we need to loop through the meshes that are associated with the node and render each mesh.

For each mesh, we first set the Cg parameters that define the mesh’s material properties.

On line 856, the mesh’s VAO and VBO IDs are bound for rendering.

Then, for each pass in the technique the mesh is rendered with glDrawElements.

The cgSetPassState function will set all of the state assignments defined in the pass. For our shader, only the VertexProgram and FragmentProgram state assignments are defined in the pass.

The cgResetPassState function will reset only the state assignments that are defined in the pass. It is important to note that resetting the pass’s state assignments does not restore the values they had before cgSetPassState was called; instead, each state assignment is reset to its default value. For example, resetting the pass state in this example will set the currently bound vertex program and fragment program to NULL (even if a vertex program and fragment program were bound when cgSetPassState was first called). You should always check the CgFX state assignments documentation to verify what value a state will be set to when the pass is reset.

We shouldn’t forget to unbind the VAO and VBO objects when we are done rendering the meshes of the node.

After this node has been rendered, we need to recursively render the children of this node.

If everything works well, we should see something similar to what is shown below.