Environment Mapping with Cg and OpenGL


In this article I will demonstrate an effect called Environment Mapping. Environment mapping attempts to simulate reflective and refractive surfaces in a shader-based rasterizer. I assume the reader has a basic understanding of OpenGL and Cg. If you need an introduction to OpenGL, you can refer to my article titled [Introduction to OpenGL for Game Programmers], and for an introduction to Cg, you can refer to my article titled [Introduction to Cg Runtime with OpenGL].


If you have ever tried to create a ray tracer or a path tracer, then you should be familiar with the concept of reflections and refractions. Rendering methods such as ray tracing and path tracing can simulate these effects naturally; however, GPU rasterization cannot. It is the job of the shader programmer to come up with a method that can somehow simulate the same effect that can be achieved using a global illumination rendering method such as ray tracing or path tracing. The image below shows an example render using ray tracing.

Arauna - Realtime Ray Tracing

The reflection and refraction technique displayed in the image above is an example of the effect that can be achieved from global illumination rendering algorithms. Our goal is to reproduce this effect as closely as possible in real-time.

Using a global illumination rendering algorithm (ray tracing or path tracing), you can achieve effects such as self-reflection and self-refraction. Self-reflection is when parts of the same object are reflected onto itself (for example, the handle of a teapot is reflected in the body of the teapot), and self-refraction is when parts of the object that appear behind the object can be accurately refracted. In GPU shaders, these effects are harder to reproduce.

For an article that describes how to perform multiple reflections and refractions in GPU shaders, I recommend you read chapter 17 of the “GPU Gems 3” book (“Robust Multiple Specular Reflections and Refractions” by Tamás Umenhoffer, Gustavo Patow, and László Szirmay-Kalos).

In this article, I will demonstrate a simple method to simulate environment reflection and refraction techniques using GPU shaders.


The demo shown in this article uses several 3rd party libraries to simplify the development process.

  • The Cg Toolkit (Version 3): The Cg Toolkit provides the tools and API needed to integrate the Cg shader programs in your application.
  • Boost (1.46.1): Boost has some very useful libraries that I use throughout my demo applications. In this demo, I use the Signals, Filesystem, Function, and Bind Boost libraries to provide generic, platform-independent functionality that simplifies some of the features used in this demo.
  • Simple DirectMedia Layer (1.2.14): Simple DirectMedia Layer (SDL) is a cross-platform multimedia library that I use to create the main application window, initialize OpenGL, and handle keyboard, mouse and joystick input.
  • OpenGL Mathematics (GLM): An OpenGL-centric mathematics library for 3D graphics applications.
  • Simple OpenGL Image Library (SOIL): SOIL is a tiny C library used primarily for uploading textures into OpenGL.

All of the dependencies described here are included in the source code example included at the end of this article.

The EnvironmentMapping Demo Application

I will first show how we can set up the application to use the shaders that will be shown later. I am using an effect framework which I created to simplify loading shaders, accessing and updating shader parameters, and iterating through the passes of a technique. For a detailed explanation of this framework and the underlying Cg code, you can refer to my article titled [Introduction to Cg Runtime with OpenGL].

Globals and Headers

Let’s start by including the headers and defining the global variables that are used for this demo.

I’m using the effect framework library to create the initial OpenGL application window as well as to load shader effects and access shader parameters. These first headers (with the exception of the precompiled header) are all from the effect framework.

Then we will define a few global parameters that are used throughout the demo.

The g_App parameter is used to start up and run the application. To handle the application logic and rendering, callback functions will be registered with the application that will be invoked in the application's update loop.

The g_Camera parameter implements a simple arc-ball camera class that also supports zooming and panning of the camera. Following the camera parameter are several parameters that define an initial view that will be applied to the camera when the application starts.

Initially I was using glutSolidTorus to render a torus that can be used to demonstrate the effects used in this demo, but glutSolidTorus doesn’t generate texture coordinates. I needed to generate texture coordinates for the geometry so the alternative was to generate a procedural torus with correct texture coordinates and normals. The g_TorusDisplayList is used to store the ID of a display list that can be used to render the same torus multiple times without having to generate the vertex position, texture coordinate, and surface normal for every render frame.

We also need to define a few texture object IDs for the textures that will be used in this demo. The g_EnvCubeMap parameter stores the ID of the 6-sided cube map texture. The g_BrushedMetalTexture parameter defines an ID for a brushed-metal 2D texture that will be applied to the reflective material, and the g_GlassTexture parameter defines a 2D texture ID for a glass texture that will be applied to the refractive material.

The g_bAnimate and g_fRotatePrimitive parameters store some animation data that is used to rotate the scene objects. The animation can be toggled by pressing the [space] bar.

And the final three parameters define information about the mouse position and the state of the mouse buttons. These are used to pan and rotate the view of the camera.

Forward Declarations

The functions that will be used as callbacks for the application class must be forward declared before they can be used.

The first group of methods will be registered as callbacks for the application to invoke when certain events occur.

In InitGL static OpenGL states will be configured.

The LoadResources method is used to load any textures and models that are used by the demo.

The DrawCubeMap method will draw the skybox using the cube map texture that is passed as an argument.

The Main Method

The entry point for our application is the main method. We will use it to register the callback functions that will be used by the application and to start the application's update loop.

The camera’s initial position and orientation are initialized and the callback functions are registered with the application class.

The application's processing loop is initialized by calling the Application::Run method.

The OnInitialize Method

The first thing that happens after the application has initialized is that the Application::Initialized event is raised, causing the OnInitialize method to be invoked.

The first thing we do in this function is create and initialize the EffectManager singleton instance and get a reference to that instance.

I decided to use a torus to demonstrate this effect so the CreateTorusDisplayList method will create an OpenGL display list that can be used to quickly render the torus geometry in the scene.

The LoadResources method is responsible for loading the texture and model resources used by the demo. It will load the cube map texture and the 2D texture resources.

On line 192 we register a callback method with the effect manager which will be invoked when an effect is loaded. We will use this event to set the static parameters of the effect after they are loaded (or reloaded).

On lines 196-198 the effect files are loaded by the effect manager. The first effect loaded on line 196 is the reflection effect, the refraction shader effect is loaded on line 197, and a shader that combines both the reflection and refraction effect is loaded on line 198.

For this demo, I’ve added a few new parameters to the Material type to define the reflection, refraction, and diffusion parameters that are used to simulate the reflection and refraction phenomenon. I will discuss how these parameters are used in the section where the shader programs are discussed.

On line 212, the InitGL method will initialize a few OpenGL states and parameters.

Let’s first take a look at how the torus display list is generated.

The CreateTorusDisplayList Method

The CreateTorusDisplayList method will generate a display list that can be used to render a torus without having to compute the vertex positions, texture coordinates and normals each frame.

If you would like to know more about the math behind the torus, you can refer to the [Torus] Wikipedia page (http://en.wikipedia.org/wiki/Torus).

First a new display list is created using the glGenLists method. The list definition is started with the glNewList method and finalized with the glEndList method.

This function will simply loop through each side of the torus and generate a “ring” that loops a full circle ([math]2\pi[/math]) around the torus body.

Torus Cycles

As shown in the image, the “sides” of the torus wrap around the outside of the torus (shown by the magenta ring on the torus body). The “rings” wrap around the inside of the torus body (shown by the red ring on the torus loop).

For each step on the torus, a vertex of the torus is plotted using the TorusVertx inline function.

This function is based on the parametric equation of a torus:

[math]\begin{array}{l} x(u,v)=(R + r\cos(v))\cos(u) \\ y(u,v)=(R + r\cos(v))\sin(u) \\ z(u,v)=r\sin(v) \end{array}[/math]


  • [math]u, v[/math] are in the interval [math][0, 2\pi][/math].
  • [math]R[/math] is the outer radius of the torus (indicated by the magenta line in the torus image above).
  • [math]r[/math] is the inner radius of the torus (indicated by the red line in the torus image above).

The texture coordinate is determined from an object-planar texture coordinate generation where the x, y, and z components of the position are directly mapped to the texture coordinate. The sTexCoord and tTexCoord static constant arrays determine the axis and scaling of the texture coordinates.
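As a rough sketch of the parametric equation above (the helper names here are hypothetical, not necessarily the ones used in the demo's display list code), a torus vertex position and its surface normal can be computed like this:

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };

// Parametric torus position: R is the outer (ring) radius, r is the
// inner (tube) radius, and u, v are angles in [0, 2*pi].
Vec3 TorusPosition(float R, float r, float u, float v)
{
    return { (R + r * std::cos(v)) * std::cos(u),
             (R + r * std::cos(v)) * std::sin(u),
             r * std::sin(v) };
}

// The surface normal points from the centre of the tube ring to the
// vertex, normalised to unit length.
Vec3 TorusNormal(float R, float u, float v)
{
    Vec3 p = TorusPosition(R, 1.0f, u, v);            // point at unit tube radius
    Vec3 c = { R * std::cos(u), R * std::sin(u), 0.0f }; // centre of the ring
    Vec3 n = { p.x - c.x, p.y - c.y, p.z - c.z };
    float len = std::sqrt(n.x * n.x + n.y * n.y + n.z * n.z);
    return { n.x / len, n.y / len, n.z / len };
}
```

Stepping u over the “sides” and v over the “rings” and emitting one such vertex per step reproduces the torus shown in the image.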

To enable spherical texture coordinate mapping, un-comment lines 142-144.

The LoadResources Method

The LoadResources method is used to load any textures or geometry that is used by the application. Texture loading is a large topic in itself, but I’m using the “Simple OpenGL Image Library” (SOIL) to simplify the image loading. SOIL can be used to load standard texture formats as well as cube maps.

Our cube map texture that will contain the environment that will be reflected and refracted in our shader is stored in the g_EnvCubeMap parameter. The cube map consists of six sides. The sides are named with the assumption that the cube map is rendered at the center of the viewer and the sides are relative to the axis of the world with no rotation.

  • The North side is the side that is in the positive Z axis.
  • The South side is the side that is in the negative Z axis.
  • The East side is the side that is in the positive X axis.
  • The West side is the side that is in the negative X axis.
  • The Up side is the side that is in the positive Y axis.
  • The Down side is the side that is in the negative Y axis.
Depending on the handedness of the coordinate system the definition of positive Z and negative Z may be reversed.

On lines 96 and 97 the cube map's texture wrapping mode is set to GL_CLAMP_TO_EDGE. This reduces the appearance of seams at the edges of the cube map.

Two 2D textures are also loaded. These textures are applied to the tori, and we want the textures to repeat if the texture coordinates fall outside the range 0 to 1.

When we load a texture using SOIL, it keeps the texture bound to the first texture stage. So before we leave the function, we have to unbind the texture so that we don’t accidentally texture something we didn’t mean to texture.
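The cube map loading described above can be sketched as follows. This is a hedged sketch, not the demo's exact code: the file names and the helper name are placeholders, but SOIL_load_OGL_cubemap takes the faces in exactly this +X/-X, +Y/-Y, +Z/-Z order, which matches the East/West, Up/Down, North/South naming above.

```cpp
#include <GL/gl.h>
#include <SOIL.h>

// Hypothetical helper: load the six cube map faces with SOIL and set
// the wrap modes discussed above. File names are placeholders.
GLuint LoadEnvironmentCubeMap()
{
    GLuint cubeMap = SOIL_load_OGL_cubemap(
        "east.png",  "west.png",    // +X, -X
        "up.png",    "down.png",    // +Y, -Y
        "north.png", "south.png",   // +Z, -Z
        SOIL_LOAD_AUTO, SOIL_CREATE_NEW_ID, SOIL_FLAG_MIPMAPS );

    // Clamp to the edge to reduce visible seams between the faces.
    glBindTexture( GL_TEXTURE_CUBE_MAP, cubeMap );
    glTexParameteri( GL_TEXTURE_CUBE_MAP, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE );
    glTexParameteri( GL_TEXTURE_CUBE_MAP, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE );
    glTexParameteri( GL_TEXTURE_CUBE_MAP, GL_TEXTURE_WRAP_R, GL_CLAMP_TO_EDGE );

    // SOIL leaves the texture bound to the first texture stage, so
    // unbind it before returning.
    glBindTexture( GL_TEXTURE_CUBE_MAP, 0 );
    return cubeMap;
}
```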

The InitGL Method

We’ll use the InitGL method to initialize the OpenGL states that are used for this demo.

Since we will be drawing the cube map texture over the entire screen, we don’t need to define a clear color. And since we won’t be using any lights (because the fragment shader will be calculating all the colors we will see) we don’t need to initialize any lights or materials or anything of that sort.

We only need to set the value the depth buffer is cleared to and enable depth testing in the rendering pipeline.
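Assuming nothing else needs configuring, InitGL reduces to something like this minimal sketch:

```cpp
#include <GL/gl.h>

void InitGL()
{
    glClearDepth( 1.0 );        // value the depth buffer is cleared to
    glEnable( GL_DEPTH_TEST );  // enable depth testing in the pipeline
}
```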

The OnEffectLoaded Method

The OnEffectLoaded method is the event callback that gets invoked when an effect has been loaded. All of the effects used in this demo define a cube map sampler called “envSampler”. We’ll query the effect parameter and set the parameter to the value of the cube map texture that was loaded in the LoadResources method.

The effect that generated the event is accessible from the event parameters. The “envSampler” parameter is queried from the effect and assigned the environment cube map texture object ID.

That should be everything we need to do to initialize the demo. Let’s now take a look at the update and render methods.

The OnUpdate Method

The OnUpdate method will be used to simply update the angle of rotation that will be used to rotate the tori.

Every two seconds the EffectManager will be asked to check all of its loaded effects to see if the effect file has changed on disk. If so, the effect will be reloaded.

The fAnimTimer variable is used to keep track of how long the animation has been running thus far and the fRotationRate variable is used to control the speed of rotation of the torus objects.

On line 358, the g_fRotatePrimitive parameter is updated based on the fRotationRate variable and will be used later to rotate the primitives before they are rendered.

The OnPreRender Method

The OnPreRender method is used to update any effect variables that can be shared with all other effects. The EffectManager defines a few predefined shared parameters and when an effect is loaded all effect parameters that have a semantic that matches the shared parameter semantic will be automatically connected to that shared parameter.

On lines 367 and 368 the camera's view matrix and projection matrix parameters are set. The EffectManager will automatically calculate any matrices that are dependent on the view and projection matrices (including the inverse, transpose, and inverse-transpose versions of those matrices) and if the effect assigns a matrix parameter with a matching semantic, that value will be automatically updated via the shared parameter.

Other shared parameters include the elapsed time since the previous frame, the total time the application has been running, the current position of the mouse, and the current state of the mouse buttons.

The OnRender Method

The OnRender method will render a sky box using the cube map texture that was loaded earlier and it will also render the three tori that demonstrate the reflection, refraction, and reflection-refraction effects.

The first thing we’ll do is initialize some parameters and draw the sky box using the cube map we loaded earlier.

The eye position in world space is derived from the camera’s view matrix and stored in the eyePos variable.

Since the sky box will be overdrawing the entire screen, there is no benefit to clearing the color buffer so on line 420 we only need to clear the depth buffer.

The DrawCubeMap method will draw a sky box around the viewer. If we disable lighting and disable writing to the depth buffer, we can draw a unit cube around the origin of the model-view matrix using the cube map as the texture for the cube. The effect is a view that appears to be infinitely far away from the viewer.
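A DrawCubeMap-style routine might be sketched like this. This is a hedged sketch rather than the demo's exact code: the cube geometry and state handling are simplified, and the key points are the disabled depth writes, the disabled lighting, and the use of the vertex position itself as the cube map sampling direction.

```cpp
#include <GL/gl.h>

// Draw a unit cube centred on the viewer, sampling the cube map with
// the vertex position as the texture direction. Lighting and depth
// writes are disabled so the sky box appears infinitely far away.
void DrawCubeMap( GLuint cubeMap )
{
    static const float v[8][3] = {
        {-1,-1,-1}, { 1,-1,-1}, { 1, 1,-1}, {-1, 1,-1},
        {-1,-1, 1}, { 1,-1, 1}, { 1, 1, 1}, {-1, 1, 1} };
    static const int face[6][4] = {
        {0,1,2,3}, {5,4,7,6}, {4,0,3,7}, {1,5,6,2}, {3,2,6,7}, {4,5,1,0} };

    glPushAttrib( GL_ENABLE_BIT | GL_DEPTH_BUFFER_BIT );
    glDisable( GL_LIGHTING );
    glDepthMask( GL_FALSE );                 // don't write to the depth buffer
    glEnable( GL_TEXTURE_CUBE_MAP );
    glBindTexture( GL_TEXTURE_CUBE_MAP, cubeMap );

    glBegin( GL_QUADS );
    for ( int f = 0; f < 6; ++f )
        for ( int i = 0; i < 4; ++i )
        {
            const float* p = v[ face[f][i] ];
            glTexCoord3fv( p );              // direction into the cube map
            glVertex3fv( p );
        }
    glEnd();

    glBindTexture( GL_TEXTURE_CUBE_MAP, 0 );
    glPopAttrib();                           // restore lighting and depth mask
}
```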

The DrawAxis method will just draw some lines at the origin of the pivot camera’s view. This is useful to align the origin of rotation of our view.

On line 427, we also define a matrix parameter that will be used to position our objects in the world.

Next we’ll draw the three tori. The first torus is a reflective torus with the brushed-metal texture applied.

First we get the C7E1_reflection effect that was loaded earlier and set some of the non-shared parameters like the eye position (which could be shared, but there is currently no method for determining which space the eye position should be expressed in) and the base texture of the object.

On lines 438 and 439 the world matrix is built, which places the torus 6 units to the left of the origin and rotates it about the X axis.

The world matrix is assigned to the EffectManager which ensures that any shared parameters that rely on the world matrix are updated (like the parameter defined with the WORLDVIEWPROJECTION semantic for example).

The EffectManager also defines a series of shared parameters that define the different components of the material that should be applied to the object. On line 442 the reflective material is assigned to the shared parameters that are managed by the EffectManager class.

On lines 444 and 445, the effect parameters and shared parameters are sent to the GPU using the EffectManager::UpdateSharedParameters and Effect::UpdateParameters methods.

To render the geometry using the effect, we first need to query the technique that is associated with the effect and for each pass defined in the effect, we render the geometry.

We can render the torus display list using the glCallList OpenGL method passing as a parameter the ID to the torus display list we created earlier.
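Stripped of the EffectManager wrapper, the technique/pass loop uses the standard CgFX runtime calls. A sketch (the function name here is hypothetical, and the effect and display list are assumed to be valid):

```cpp
#include <Cg/cg.h>
#include <Cg/cgGL.h>
#include <GL/gl.h>

// Render the torus display list once per pass of the effect's technique.
void RenderTorusWithEffect( CGeffect effect, GLuint torusDisplayList )
{
    CGtechnique technique = cgGetFirstTechnique( effect );
    CGpass pass = cgGetFirstPass( technique );
    while ( pass )
    {
        cgSetPassState( pass );          // bind the pass's programs and state
        glCallList( torusDisplayList );  // draw the pre-built torus geometry
        cgResetPassState( pass );        // restore the previous state
        pass = cgGetNextPass( pass );
    }
}
```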

The next torus will be rendered using the reflection/refraction effect. There isn’t much difference in this code block so I won’t outline each step but I will just highlight the changed code.

The only difference here is the effect that is used to render the torus and the world transform of the torus object.

And the third torus is rendered using a purely refractive shader.

Again, very little change from the previous object except the effect that is used to render the torus and the world transform of the object.

And finally, to swap the front and back render buffers and present the view we need to inform the application to present the back buffer.

Now we’ve seen the application side of rendering our scene. Let’s take a look at the shader effect files.

The Shader Effects

This demo uses three different shaders.

  1. C7E1_reflection.cgfx: A reflection shader.
  2. C7E3_refraction.cgfx: A refraction shader.
  3. C7E3_refract_reflect.cgfx: A combination of refraction and reflection effect.

The Reflection Shader

The reflection shader works by calculating an incident vector ([math]\mathbf{I}[/math]) which is the vector from the viewer (the eye position) to the point we are shading. The incident vector is reflected from the surface we are shading about the surface normal ([math]\mathbf{N}[/math]) at the point we are shading.

Computing the Reflected Ray

In the image above, the red vector represents the incident vector ([math]\mathbf{I}[/math]) which is the vector from the viewer to the point we are shading. The green vector represents the surface normal ([math]\mathbf{N}[/math]) at the point we are shading. The yellow vector is the projection of [math]\mathbf{I}[/math] onto [math]\mathbf{N}[/math] and is computed by scaling the surface normal [math]\mathbf{N}[/math] by the dot product of [math]\mathbf{N}[/math] and [math]\mathbf{I}[/math]. The blue vector ([math]\mathbf{R}[/math]) is computed by subtracting the yellow vector twice from [math]\mathbf{I}[/math].

If we make the following definitions:

  • [math]\mathbf{P}_{w}[/math] is the world position of the point we are shading.
  • [math]\mathbf{E}_{w}[/math] is the world position of the camera (or eye position).

Then the general formula for calculating the reflection vector is:

[math]\begin{array}{l} \mathbf{I}=\mathbf{P}_{w}-\mathbf{E}_{w} \\ \mathbf{R}=\mathbf{I}-2\mathbf{N}(\mathbf{N}\cdot\mathbf{I}) \end{array}[/math]

Fortunately, you don’t have to compute this reflection vector yourself in the shader program because Cg provides the reflect function to you.

  • float3 reflect( I, N ): Returns the reflected vector ([math]\mathbf{R}[/math]) from an incoming incident ray ([math]\mathbf{I}[/math]) and the surface normal ([math]\mathbf{N}[/math]). The resulting vector has the same length as [math]\mathbf{I}[/math].

Now let’s take a look at the shader code.

The Vertex Program

The reflection vector is computed per-vertex instead of per-fragment. There is nothing preventing you from moving the reflection calculation to the fragment program but the difference in quality in doing the reflection calculation per-fragment might not be very noticeable if your model is highly tessellated.

First let's define a few global structs and parameters.

For this demo, I’ve added the reflection, refraction, and transmittance parameters to the material struct. For reflection, only the reflection parameter is used and it is a scalar value in the range 0 to 1 which indicates the intensity of the reflection. A value of 0 means the material is not reflective at all while a value of 1 indicates the material is completely reflective.

The baseSampler parameter is used to apply a diffuse texture to the object and the envSampler parameter is used to store the cube map that is used as the environment sampler that will be applied to the objects. These parameters were set in the application before the objects are rendered.

And we also need to store the matrices that are used to transform our object space position into clip space and world space.

The gEyePos vector defines the position of the viewer in world space.

If you followed the previous articles about lighting, you may have noticed that the light positions, eye positions, and vertex positions were all defined in object space. For the cube map implementation, the vectors need to be in world space because the environment cube map is also defined in world space. If the reflection or refraction vectors are not in the same space as the environment map, then the wrong color value will be returned when sampling the cube map texture.

The incoming parameters supplied by the application are the position parameter with the POSITION semantic, the texCoord parameter with the TEXCOORD0 semantic, and the normal with the NORMAL semantic.

The out parameters which are passed to the fragment program are the oTexCoord parameter with the TEXCOORD0 semantic and the reflection vector R parameter with the TEXCOORD1 semantic.

The oPosition parameter with the POSITION semantic is the clip-space position of the vertex; it must be computed in the vertex program but it is not bound to any input parameter in the fragment program.

On line 48, the vertex position is transformed from object space to clip space by multiplying the vertex position by the combined world, view, and projection matrix. The texture coordinate is simply passed through to the fragment program.

On lines 52 and 53 the world-space position and surface normal are computed by multiplying by the world matrix of the object. This is a necessary step before the reflection vector can be computed, because the cube map sampler requires that the direction vector used to sample it is expressed in the same space in which the map is defined.

On line 57 the incident direction vector is computed and passed to the reflect function to produce the reflection vector.
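Putting these steps together, the vertex program has roughly the following shape. This is a hedged sketch modeled on the description above; apart from gEyePos, the parameter and matrix names are my guesses, and the demo's actual code may differ:

```cg
void C7E1v_reflection( float4 position  : POSITION,
                       float2 texCoord  : TEXCOORD0,
                       float3 normal    : NORMAL,

                   out float4 oPosition : POSITION,
                   out float2 oTexCoord : TEXCOORD0,
                   out float3 R         : TEXCOORD1,

               uniform float4x4 gWorldViewProj,
               uniform float4x4 gWorld,
               uniform float3   gEyePos )
{
    // Transform the vertex into clip space and pass the texture
    // coordinate through unchanged.
    oPosition = mul( gWorldViewProj, position );
    oTexCoord = texCoord;

    // Both the position and the normal must be in world space because
    // the environment cube map is defined in world space.
    float3 positionW = mul( gWorld, position ).xyz;
    float3 N = mul( (float3x3)gWorld, normal );

    // Incident vector from the eye to the vertex, reflected about the
    // world-space surface normal.
    float3 I = positionW - gEyePos;
    R = reflect( I, N );
}
```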

The Fragment Program

The only responsibility of the fragment program is to sample the base texture and the cube map and compute the final color of the fragment based on the amount of reflection defined in the material.

The first two parameters texCoord and R are input parameters passed from the vertex shader.

The only output parameter that the fragment program must compute is the color parameter with the COLOR semantic. This is the final color that will be blended with the current fragment in the framebuffer.

On line 73 the environment map is sampled passing the reflection vector R as the second parameter. This direction vector does not need to be normalized before it is used in a cube sampler.

On line 76, the base texture is sampled using the texture coordinate passed from the vertex program.

The final color is a linear interpolation between the base texture and the value of the cube map in the direction R based on the reflection parameter defined in the Material struct. If the reflection parameter is 0, then only the decalColor will be visible and if the reflection parameter is 1, then only the reflectedColor will be visible.
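A fragment program matching that description might look like this (again a hedged sketch; the Material-derived reflection parameter is assumed to be passed as a uniform, and the demo's actual code may differ):

```cg
void C7E2f_reflection( float2 texCoord : TEXCOORD0,
                       float3 R        : TEXCOORD1,

                   out float4 color : COLOR,

               uniform float       reflection,   // Material.reflection, 0..1
               uniform sampler2D   baseSampler,
               uniform samplerCUBE envSampler )
{
    // Sample the environment in the direction of the reflected ray.
    // The direction does not need to be normalized for a cube lookup.
    float4 reflectedColor = texCUBE( envSampler, R );

    // Sample the base (decal) texture.
    float4 decalColor = tex2D( baseSampler, texCoord );

    // Blend: 0 = only the decal is visible, 1 = only the reflection.
    color = lerp( decalColor, reflectedColor, reflection );
}
```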

Techniques and Passes

Each effect only defines a single technique with a single pass.

We define the VertexProgram and the FragmentProgram by telling Cg to compile the two entry points for each program using the latest profile supported by the platform.

Now let’s take a look at refraction.

The Refraction Shader

Refraction occurs when light enters a medium that has a different density than the medium the light originated from. For example, when light passes through glass, the light will appear to “bend” at the boundary of the two mediums (air, and glass).

Refraction is defined by Snell’s law which states that the ratio of the sines of the angles of incidence and refraction is equivalent to the ratio of phase velocities in the two media, or equivalent to the opposite ratio of the indices of refraction.

If we define the following:

  • [math]v_{1}[/math] and [math]v_{2}[/math] are the wave velocities in the separate media.
  • [math]\eta_{1}[/math] and [math]\eta_{2}[/math] are the refractive indices of the two media.
  • [math]\theta_{1}[/math] and [math]\theta_{2}[/math] are the angles of incidence and refraction.

Then Snell's law can be written as:

[math]\frac{\sin\theta_{1}}{\sin\theta_{2}}=\frac{v_{1}}{v_{2}}=\frac{\eta_{2}}{\eta_{1}}[/math]

Fortunately, we don’t have to worry too much about how the refraction is calculated because Cg provides the refract function to calculate the refractive vector.

  • float3 refract( float3 i, float3 n, float eta ): Computes the refraction vector from the incident ray i, the surface normal n and the ratio of the indices of refraction between the two medium (eta). The incident vector (i) and the surface normal (n) should be normalized.

Although it is at the vendor's discretion how this function is implemented, a plausible implementation follows directly from Snell's law.
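For illustration, here is one possible software implementation of refract (a sketch based on Snell's law, not NVIDIA's actual code). Both input vectors are assumed normalized, and the zero vector is returned on total internal reflection:

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };

static float Dot( const Vec3& a, const Vec3& b )
{
    return a.x * b.x + a.y * b.y + a.z * b.z;
}

// Refract the normalized incident vector i about the normalized surface
// normal n, where eta is the ratio of the indices of refraction.
// Returns the zero vector on total internal reflection.
Vec3 Refract( const Vec3& i, const Vec3& n, float eta )
{
    float cosi  = Dot( { -i.x, -i.y, -i.z }, n );
    float cost2 = 1.0f - eta * eta * (1.0f - cosi * cosi);
    if ( cost2 <= 0.0f )
        return { 0.0f, 0.0f, 0.0f };     // total internal reflection

    float k = eta * cosi - std::sqrt( cost2 );
    return { eta * i.x + k * n.x,
             eta * i.y + k * n.y,
             eta * i.z + k * n.z };
}
```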

The Vertex Program

The only difference between the reflection shader and the refraction shader is the computation of the direction vector that is passed to the fragment program.

On lines 58 and 59 the refracted vector is computed from the normalized incident vector, the surface normal, and the ratio of the indices of refraction.

A table of refractive indices can be found at http://en.wikipedia.org/wiki/List_of_indices_of_refraction.

Air at sea level has an index of refraction of 1.000277 and water at room temperature has an index of refraction of 1.333. So if the incident vector ([math]\mathbf{I}[/math]) is going from air to water, the ratio of the indices would be:

[math]\frac{\eta_{1}}{\eta_{2}}=\frac{1.000277}{1.333}\approx 0.7504[/math]

Different types of glass have different index of refraction values, but generally a good index of refraction for glass is 1.5. So the ratio between air and glass would be:

[math]\frac{\eta_{1}}{\eta_{2}}=\frac{1.000277}{1.5}\approx 0.6669[/math]

And this is the value that is used for the refraction parameter.

The Fragment Program

The fragment program for the refractive material is also almost identical to that of the reflection program. The only difference is that instead of using the material's reflection parameter, the transmittance value determines how much of the light is transmitted through the object. You may have noticed that the amount of light that passes through a medium changes with the thickness of the material; we are not concerned with dispersion and attenuation factors in this simple shader program, so this is something that you can investigate for yourself.

If the transmittance value is 0, then no light is transmitted through the material and only the base texture is visible. If the transmittance value is 1, then the material is completely transparent and only refracted light is used to color the fragment.

There is one more shader that performs both the reflection and refraction effects and then blends the final color based on the values of the reflection and transmittance parameters, but I will leave it up to the reader to implement this.

If everything works well, then the resulting application should look something like this:


The Cg Tutorial

The Cg Tutorial: The Definitive Guide to Programmable Real-Time Graphics (2003). Randima Fernando and Mark J. Kilgard. Addison Wesley.

Special thanks to Hazel Whorley for creating these great cube maps.

Download the Source

The source code example for this article can be downloaded from the link below.

EnvironmentMapping.zip
