# Texturing and Lighting in OpenGL


In this article, I will demonstrate how to apply 2D textures to your 3D models. I will also show how to define lights that illuminate the objects in your scene.
I assume that the reader has a basic knowledge of C++ and how to create and compile C++ programs. If you have never created an OpenGL program, then I suggest that you read my previous article titled “Introduction to OpenGL” before continuing with this article.

# Introduction

Texturing an object is the process of mapping a 2D (or sometimes 3D) image onto a 3D model so that you can achieve much higher detail in the model without using a lot of vertices. Without textures, the only way to achieve the level of detail possible with texture mapping would be to specify a differently colored vertex for each pixel on the screen. As you can imagine, adding detail to a model in this way is very inefficient.

Lighting is required to set the mood for a scene. Lighting also provides visual cues about the direction and shape of an object. With the correct use of lighting we can make a scene dark and gloomy or bright and cheery.

Materials are used to determine how the light interacts with the surface of the model. A model can appear very shiny or very dull depending on the material properties of the model.

# Dependencies

For this article, I will rely on several 3rd party libraries to simplify programming.

• GLUT: GLUT is used to quickly setup a render window and to handle keyboard and mouse events. You can obtain pre-compiled GLUT libraries for Win32 here: http://user.xmission.com/~nate/glut.html.
• GLM: GLM is a math library that is designed with the OpenGL specification in mind. You can get the latest version of GLM from here: http://glm.g-truc.net/.
• SOIL: The Simple OpenGL Image Library is the easiest way I am currently aware of to load textures into OpenGL. With a single method, you can load textures and get an OpenGL texture object back. You can get the latest version of SOIL here: http://www.lonesock.net/soil.html.

# Texturing

Texture mapping is the process of applying a 2D (or 3D) image to a 3D model. Textures can provide much more detail to the model than would otherwise only be possible by creating models with a huge number of vertices. In order to correctly map the texture to a model, we must specify a texture coordinate for each vertex of the model. A texture coordinate can be a single scalar in the case of 1D textures (a single row of texels, which may represent a gradient, opacity, or weight), a 2-component value in the case of 2D textures, a 3-component value in the case of 3D textures, or even a 4-component value in the case of projected textures. In this article I will only show how to apply 2D textures using 2-component texture coordinates.

Computing the correct texture coordinate for a vertex can be a complex process, and it is usually the job of the 3D modeling artist to generate the correct texture coordinates in a 3D content creation program like 3D Studio Max, Maya, or Blender.

The texture coordinate determines what texel is sampled from the texture. When dealing with 2D textures, the axis along the width of the texture is usually referred to as the U (or s coordinate) axis and the axis along the height of the texture is usually referred to as the V (or t coordinate) axis. For this reason, texture mapping can also be called “UV mapping”.

Texture coordinates are usually normalized in the range 0 to 1 in each axis. Normalized texture coordinates are useful because we may not always know the size of the texture in advance (and when using MIP maps, the size of the texture will actually change depending on the view of the object being textured).

The image below shows the texture coordinates and the corresponding axes that are used to refer to the texels.

Texture Coordinates

Since there are so many different image formats, I will not go into much detail about how to load textures into memory. Instead, I will stick to the Simple OpenGL Image Library (SOIL) for loading textures into graphics memory. SOIL is a free, open source image library that can load images from multiple image formats.

To load a texture using SOIL requires very little code:

    GLuint textureObject = SOIL_load_OGL_texture(
        "textures/earth.jpg",   // illustrative path; substitute your own texture file
        SOIL_LOAD_AUTO,
        SOIL_CREATE_NEW_ID,
        SOIL_FLAG_MIPMAPS );

That’s it. The textureObject now refers to a texture that is loaded in graphics memory and can be used to apply textures to models.

This example shows loading a JPEG file from a mythical path, but SOIL has support for BMP, PNG, TGA, DDS, PSD, and HDR file types.

The function shown here will load an image directly from disk and create an OpenGL texture object and return the ID of that texture object, ready to be used by your application. If an error occurred while trying to load the texture (for example, the specified file was not found) this function will return 0.

This function has the following signature:

unsigned int SOIL_load_OGL_texture
(
const char *filename,
int force_channels,
unsigned int reuse_texture_ID,
unsigned int flags
);


Where:

• const char *filename: Is the relative or absolute file path to the texture to be loaded.
• int force_channels: Specifies the format of the image. This can be any one of the following values:
  • SOIL_LOAD_AUTO: Leaves the image in whatever format it was found.
  • SOIL_LOAD_L: Forces the image to be loaded as a luminance (grayscale) image.
  • SOIL_LOAD_LA: Forces the image to be loaded as a luminance image with an alpha channel.
  • SOIL_LOAD_RGB: Forces the image to be loaded as a 3-component (Red, Green, Blue) image.
  • SOIL_LOAD_RGBA: Forces the image to be loaded as a 4-component (Red, Green, Blue, Alpha) image.
• unsigned int reuse_texture_ID: Specify 0 or SOIL_CREATE_NEW_ID if you want SOIL to generate a new texture object ID, otherwise the texture ID specified will be reused replacing the existing texture at that ID.
• unsigned int flags: Can be a combination of the following flags:
  • SOIL_FLAG_POWER_OF_TWO: Forces the size of the final image to be a power of 2.
  • SOIL_FLAG_MIPMAPS: Tells SOIL to generate mipmaps for the texture.
  • SOIL_FLAG_TEXTURE_REPEATS: Specifies that SOIL should set the texture to GL_REPEAT in each dimension of the texture.
  • SOIL_FLAG_MULTIPLY_ALPHA: Tells SOIL to pre-multiply the alpha value into the color channel of the resulting texture.
  • SOIL_FLAG_INVERT_Y: Flips the image on the vertical axis.
  • SOIL_FLAG_COMPRESS_TO_DXT: If the graphics card supports it, SOIL will convert RGB images to DXT1 and RGBA images to DXT5.
  • SOIL_FLAG_DDS_LOAD_DIRECT: Specify this flag to load DDS textures directly without any additional processing. Using this flag will cause all other flags to be ignored (with the exception of SOIL_FLAG_TEXTURE_REPEATS).

## Mipmaps

A mipmap (also called a MIP map) is a method of storing smaller versions of a base image in a single texture. Using mipmaps increases the amount of memory required to store the texture by about 33% over the base image alone, but the benefits are increased rendering speed and reduced aliasing artifacts.

The base image has a mipmap index of 0, and each successive image in the mipmap chain is generated by halving the width and height of the previous mipmap level (a dimension that reaches 1 texel stays at 1 texel). The final mipmap level is reached when the texture is a single texel in both dimensions.
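The mipmap chain described above can be sketched in a few lines of C++. This is illustrative only; SOIL (or OpenGL itself) generates the actual mipmap levels for you:

```cpp
#include <algorithm>
#include <cassert>
#include <utility>
#include <vector>

// Build the chain of mipmap dimensions for a base image, halving each
// dimension per level (clamped to 1 texel) until the 1x1 level is reached.
std::vector<std::pair<int, int>> MipChain( int width, int height )
{
    std::vector<std::pair<int, int>> levels;
    levels.push_back( { width, height } );
    while ( width > 1 || height > 1 )
    {
        width  = std::max( width  / 2, 1 );
        height = std::max( height / 2, 1 );
        levels.push_back( { width, height } );
    }
    return levels;
}
```

For example, an 8×8 base image produces the levels 8×8, 4×4, 2×2, and 1×1.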

The image below shows a texture that has had the mipmap levels generated.

Mipmaps

The SOIL library can be used to generate the mipmap levels automatically if you supply the SOIL_FLAG_MIPMAPS flag to the load function.

## Texture Properties

The SOIL function described above can automatically set a few texture properties when a texture is loaded (for example, the texture wrap mode), but you can also modify the properties of the texture yourself after the texture is loaded.

Texture properties are specified using the glTexParameter family of functions. Before you can modify the properties of a texture, you must first bind the texture to an appropriate texture target. To bind a texture to a texture target, you use the glBindTexture method.

This method has the following signature:

void glBindTexture (GLenum target, GLuint texture);


Where:

• GLenum target: Used to specify the texture target to bind the texture to. Valid values are:
  • GL_TEXTURE_1D: A texture target that is used to bind to a 1D texture object.
  • GL_TEXTURE_2D: A texture target that is used to bind to a 2D texture object.
  • GL_TEXTURE_3D: A texture target that is used to bind to a 3D texture object.
  • GL_TEXTURE_CUBE_MAP: A texture target that is used to bind a cube map texture.
• GLuint texture: Specifies the texture object ID to bind to the current target.

After a texture object is bound to a texture target several functions can be used to read from the texture, or write to the texture, or modify the properties of the texture.

The texture can be unbound by binding the default texture object ID of 0 to the appropriate texture target.

How a texel is fetched from a texture resource is often dependent on the properties that are associated with the texture object. Textures have several properties that can be modified using the glTexParameter family of functions.

The glTexParameter function has the following signature:

void glTexParameter[f|i]{v}(GLenum target, GLenum pname, [GLfloat|GLint]{*} param);


Where:

• GLenum target: Specify the texture target. This can be any of the targets specified when a valid texture was previously bound using the glBindTexture method.
• GLenum pname: Specify the symbolic name for the texture parameter. This can be one of the following values:
  • GL_TEXTURE_MIN_FILTER: This parameter allows you to specify the function that is used to sample a texture when several texels from the texture fit within a single screen pixel (or pixel fragment).
  • GL_TEXTURE_MAG_FILTER: This parameter allows you to specify the function that is used to sample a texture when a single texel fits within multiple screen pixels (or pixel fragments).
  • GL_TEXTURE_MIN_LOD: This parameter allows you to specify a floating point value that determines the selection of the highest resolution mipmap (the lowest mipmap level). The default value is -1000.0f.
  • GL_TEXTURE_MAX_LOD: This parameter allows you to specify a floating point value that determines the selection of the lowest resolution mipmap (the highest mipmap level). The default value is 1000.0f.
  • GL_TEXTURE_BASE_LEVEL: This parameter allows you to specify an integer index of the lowest defined mipmap level. The default value is 0.
  • GL_TEXTURE_MAX_LEVEL: This parameter allows you to specify an integer index that defines the highest defined mipmap level. The default value is 1000.
  • GL_TEXTURE_WRAP_S: This parameter allows you to determine how a texture is sampled when the S texture coordinate is out of the [0..1] range. By default this value is set to GL_REPEAT.
  • GL_TEXTURE_WRAP_T: This parameter allows you to determine how a texture is sampled when the T texture coordinate is out of the [0..1] range. By default this value is set to GL_REPEAT.
  • GL_TEXTURE_WRAP_R: This parameter allows you to determine how a texture is sampled when the R texture coordinate is out of the [0..1] range. By default this value is set to GL_REPEAT.
  • GL_TEXTURE_BORDER_COLOR: This 4-component color parameter allows you to specify the color that is used when the texture is sampled on its border.
  • GL_TEXTURE_COMPARE_MODE: This parameter allows you to specify the texture comparison mode for the currently bound depth texture. This parameter is useful when you want to perform projected shadow mapping or other effects that require depth comparison.
  • GL_TEXTURE_COMPARE_FUNC: This parameter allows you to specify the depth comparison function that is used when the GL_TEXTURE_COMPARE_MODE parameter is set to GL_COMPARE_R_TO_TEXTURE.
  • GL_GENERATE_MIPMAP: This boolean parameter specifies if all levels of a mipmap array should be automatically updated when any modification to the base level mipmap is made. The default value is GL_FALSE.
• [GLfloat|GLint] param: Specifies the value for the parameter.
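As an illustration (not taken from any particular sample program), the following hypothetical snippet binds a previously loaded texture and sets typical filtering and wrap parameters; `textureObject` is assumed to be a valid 2D texture ID, for example one returned by SOIL:

```cpp
// Assumes textureObject is a valid 2D texture ID (e.g. returned by SOIL).
glBindTexture( GL_TEXTURE_2D, textureObject );

// Trilinear filtering when minified, bilinear filtering when magnified.
glTexParameteri( GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR );
glTexParameteri( GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR );

// Tile the texture in both the S and T directions.
glTexParameteri( GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT );
glTexParameteri( GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT );

// Unbind by binding the default texture object ID of 0.
glBindTexture( GL_TEXTURE_2D, 0 );
```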

### GL_TEXTURE_MIN_FILTER

The GL_TEXTURE_MIN_FILTER texture parameter allows you to specify the minification filter function that is applied when several texels from the texture fit within the same pixel (or fragment) that is rendered to the current color buffer. The image below shows an example of when this happens:

GL_TEXTURE_MIN_FILTER

The GL_TEXTURE_MIN_FILTER parameter can have one of the following values:

• GL_NEAREST: Returns the texel that is nearest to the center of the pixel being rendered.
• GL_LINEAR: Returns the weighted average of the four texture elements that are closest to the center of the pixel being rendered.
• GL_NEAREST_MIPMAP_NEAREST: Choose the mipmap that most closely matches the size of the pixel being textured and then apply the GL_NEAREST method to produce the sampled texture value.
• GL_LINEAR_MIPMAP_NEAREST: Choose the mipmap that most closely matches the size of the pixel being textured and then apply the GL_LINEAR method to produce the sampled texture value.
• GL_NEAREST_MIPMAP_LINEAR: Choose the two mipmaps that most closely match the size of the pixel being textured. Each of the two mipmap levels is sampled using the GL_NEAREST method, and the weighted average of the two samples is used to produce the final value.
• GL_LINEAR_MIPMAP_LINEAR: Choose the two mipmaps that most closely match the size of the pixel being textured. Each of the two mipmap levels is sampled using the GL_LINEAR method, and the weighted average of the two samples is used to produce the final value.

### GL_TEXTURE_MAG_FILTER

The GL_TEXTURE_MAG_FILTER texture parameter allows you to specify the magnification filter function that is applied when one texel from the sampled texture is mapped to multiple screen pixels (or pixel fragments). The image below shows this phenomenon.

GL_TEXTURE_MAG_FILTER

Since this particular parameter only affects sampling from the base mipmap level (the highest resolution), the only valid values for the GL_TEXTURE_MAG_FILTER parameter are GL_NEAREST and GL_LINEAR. These values have the same meaning as for GL_TEXTURE_MIN_FILTER explained above.

### GL_TEXTURE_WRAP

The GL_TEXTURE_WRAP_S, GL_TEXTURE_WRAP_T, and GL_TEXTURE_WRAP_R texture parameters allow you to control how out-of-range texture coordinates are treated. Texture coordinates in the range [0..1] are interpreted as-is, but in some cases you may want to define texture coordinates outside of this range. This may even happen unintentionally when a transformation is applied to the texture matrix (texture scaling could push texture coordinates out of range).

The GL_TEXTURE_WRAP parameters can have the following values:

GL_REPEAT: This is the default texture wrap mode for all 3 coordinates. Using this mode will tell OpenGL to ignore the integer part of the texture coordinate and only use the fractional part to determine the sampled texel. For example, a texture coordinate of 2.05 will simply be interpreted as 0.05 and a texture coordinate of -3.5 will be interpreted as 0.5.

The image below shows an example of using GL_REPEAT for both GL_TEXTURE_WRAP_S and GL_TEXTURE_WRAP_T texture parameters:

GL_REPEAT
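The fractional-part rule behind GL_REPEAT can be sketched as a small helper function (an illustration of the rule above, not part of the OpenGL API):

```cpp
#include <cassert>
#include <cmath>

// GL_REPEAT keeps only the fractional part of a texture coordinate,
// measured from the next-lower integer, so the result is always in [0, 1).
double WrapRepeat( double coord )
{
    return coord - std::floor( coord );
}
```

With this rule, 2.05 wraps to 0.05 and -3.5 wraps to 0.5, matching the examples above.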

GL_CLAMP: Will cause the texture coordinate to be clamped in the range [0..1]. The image below shows the result of using GL_CLAMP for both GL_TEXTURE_WRAP_S and GL_TEXTURE_WRAP_T texture parameters:

GL_CLAMP

## Texture Environment

The glTexEnv family of functions allows you to specify the texture environment parameters that will affect all texture operations when a fragment is textured. The currently bound texture does not have any influence on the texture environment settings.

The glTexEnv function has the following signature:

void glTexEnv[f|i]{v}(GLenum target, GLenum pname, [GLfloat|GLint]{*} param);


Where:

• GLenum target: Specifies the texture environment that is affected by the parameter setting. The texture environment can be either GL_TEXTURE_ENV or GL_TEXTURE_FILTER_CONTROL.
• GLenum pname: Specifies the texture environment parameter to set.
• [GLfloat|GLint]{*} param: Used to specify the value of the parameter being set.

### GL_TEXTURE_ENV

When the target parameter is GL_TEXTURE_ENV then the valid values of the pname parameter are GL_TEXTURE_ENV_MODE and GL_TEXTURE_ENV_COLOR.

#### GL_TEXTURE_ENV_COLOR

If pname is GL_TEXTURE_ENV_COLOR then the params argument should point to a 4-component floating-point color value that will henceforth be referred to as the environment color. The default value for the GL_TEXTURE_ENV_COLOR parameter is transparent black (0, 0, 0, 0).

#### GL_TEXTURE_ENV_MODE

If pname is GL_TEXTURE_ENV_MODE, then params can be set to one of the five texture functions:

• GL_ADD: Adds the color of the incoming fragment with the color of the sampled texture. The alpha value of the incoming fragment is multiplied with the alpha value of the sampled texture.
• GL_MODULATE: Both the color and the alpha channel of the incoming fragment are multiplied by the sampled texture.
• GL_DECAL: The alpha value of the sampled texture is used to blend the incoming fragment color with the color of the sampled texture. The alpha value of the incoming fragment is used for the alpha value of the resulting fragment. This results in the texture replacing the original color of the fragment only if the alpha of the decal texture is not 0.
• GL_BLEND: The sampled texture color is used to determine the per-component blending weight between the incoming fragment color and the environment color (as determined by the value of the GL_TEXTURE_ENV_COLOR parameter). The final alpha value is determined by multiplying the incoming fragment’s alpha value and the sampled texture’s alpha value.
• GL_REPLACE: The sampled texture’s color and alpha value will replace the incoming fragment’s color and alpha values.

The tables below show the texture functions that are applied for each value of the GL_TEXTURE_ENV_MODE parameter.

For the following tables, we will make the following definitions:

• $C_p$ refers to the Red, Green, and Blue color components of the previous texture stage (or the incoming fragment if the current texture stage is 0).
• $C_s$ refers to the Red, Green, and Blue color components of the sampled texture (the texture source color).
• $C_c$ refers to the Red, Green, and Blue color components of the texture environment color. This is determined by the value of the GL_TEXTURE_ENV_COLOR parameter.
• $A_p$ refers to the Alpha component of the previous texture stage (or the incoming fragment if the current texture stage is 0).
• $A_s$ refers to the Alpha component of the sampled texture in the case of textures with GL_RGBA internal texture formats.

The functions for textures with GL_RGBA internal texture format:

| Function | Color Value | Alpha Value |
|---|---|---|
| GL_REPLACE | $C_s$ | $A_s$ |
| GL_MODULATE | $C_pC_s$ | $A_pA_s$ |
| GL_DECAL | $C_p(1-A_s)+C_sA_s$ | $A_p$ |
| GL_BLEND | $C_p(1-C_s)+C_cC_s$ | $A_pA_s$ |
| GL_ADD | $C_p+C_s$ | $A_pA_s$ |

For textures with GL_RGB internal texture format:

| Function | Color Value | Alpha Value |
|---|---|---|
| GL_REPLACE | $C_s$ | $A_p$ |
| GL_MODULATE | $C_pC_s$ | $A_p$ |
| GL_DECAL | $C_s$ | $A_p$ |
| GL_BLEND | $C_p(1-C_s)+C_cC_s$ | $A_p$ |
| GL_ADD | $C_p+C_s$ | $A_p$ |

The default value for the GL_TEXTURE_ENV_MODE parameter is GL_MODULATE.
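The GL_RGBA rows of the table above can be sketched as plain per-component arithmetic. The helper names below are illustrative, not OpenGL API:

```cpp
#include <cassert>
#include <cmath>

// One color component of the GL_RGBA texture-function table.
// Cp: previous stage (incoming fragment) color, Cs: sampled texture color,
// Cc: texture environment color, As: sampled texture alpha.
double Modulate( double Cp, double Cs )            { return Cp * Cs; }
double Decal   ( double Cp, double Cs, double As ) { return Cp * ( 1.0 - As ) + Cs * As; }
double Blend   ( double Cp, double Cs, double Cc ) { return Cp * ( 1.0 - Cs ) + Cc * Cs; }
double Add     ( double Cp, double Cs )            { return Cp + Cs; }
```

For example, GL_DECAL with a fully opaque texel ($A_s=1$) simply replaces the fragment color with the texture color.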

It should be noted that the glTexEnv family of functions has been deprecated in OpenGL 3.0 and removed in core OpenGL 3.1. An alternative to using the texture environment parameters is to implement the texture blending in fragment shaders.

### GL_TEXTURE_FILTER_CONTROL

When the target parameter is set to GL_TEXTURE_FILTER_CONTROL then pname must be GL_TEXTURE_LOD_BIAS. This is a floating-point parameter whose value is added to the chosen mipmap level depending on the selected value of the GL_TEXTURE_MIN_FILTER parameter.

For example, if GL_TEXTURE_LOD_BIAS is set to 1 and the GL_TEXTURE_MIN_FILTER parameter is set to one of the filtering methods that uses mipmaps (GL_NEAREST_MIPMAP_NEAREST, GL_LINEAR_MIPMAP_NEAREST, GL_NEAREST_MIPMAP_LINEAR, GL_LINEAR_MIPMAP_LINEAR), then 1 will be added to the chosen mipmap level, resulting in a lower-resolution mipmap being sampled.

# The Basic Lighting Model

When lighting is enabled, OpenGL applies a specific lighting model to compute the color of a vertex being lit. Keep in mind that when using the fixed-function pipeline, OpenGL does not perform per-fragment lighting. For this reason, shapes with visibly low polygon counts will not be shaded as nicely as objects that are highly tessellated (contain many vertices). The only way to resolve the lighting artifacts created by poorly tessellated objects is to generate enough vertices so that there is roughly one vertex per screen pixel when the geometry is rendered. The alternative is to implement your lighting equations in a fragment shader program; however, this is beyond the scope of this article.

The final color for the vertex being lit is computed as follows:

$Color_{final}=emissive+ambient+\sum_{i=0}^{Lights}(ambient+diffuse+specular)$

The final color is then clamped in the range $[0,1]$ before it is passed to the texture stage.

## The Emissive Term

The emissive term in the above formula refers to the current material’s emissive component. Material properties will be described later.

This component has the effect of adding color to a vertex even if there is no ambient contribution or any lights illuminating the vertex.

Keep in mind, this will not cause the vertex to emit light and no emissive light transfer will occur between objects.

This component can be written as:

$emissive=Material_e$

Where $Material_e$ refers to the material’s emissive color component.

## The Ambient Term

The ambient term is computed by multiplying the global ambient color (as determined by the GL_LIGHT_MODEL_AMBIENT parameter) by the material’s ambient property.

This component can be written as:

$ambient=(Global_a*Material_a)$

Where $Global_a$ refers to the global ambient term and $Material_a$ refers to the material’s ambient color component.

## The Light’s Contribution

Every light that is currently active will contribute to the final vertex color. The total light contribution is the sum of all active light contributions.

This formula can be written as such:

$contribution=\sum_{i=0}^{Lights}{Light_{attenuation}*(ambient+diffuse+specular)}$

### Attenuation Factor

The light’s attenuation factor defines how the intensity of the light gradually falls off as the light’s position moves farther away from the vertex position.

The light’s final attenuation is a function of three attenuation constants. These three constants define the constant, linear and quadratic attenuation factors.

If we define the following variables:

• $d$ is the scalar distance between the point being shaded and the light source.
• $k_{c}$ is the constant attenuation factor of the light source.
• $k_{l}$ is the linear attenuation factor of the light source.
• $k_{q}$ is the quadratic attenuation factor of the light source.

Then the formula for the attenuation factor is:

$attenuation=\frac{1}{k_{c}+k_{l}d+k_{q}d^{2}}$

The attenuation factor is only taken into account when we are dealing with positional lights (point lights and spot lights). If the light is defined as a directional light, then the attenuation factor is always 1.0 and no attenuation will occur.

We can also force the attenuation factor of positional light sources to 1.0 by setting the constant attenuation to 1.0 and the linear and quadratic attenuation factors to 0.0. The method to set these factors in OpenGL will be discussed in the following sections.
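The attenuation formula can be written directly as code (an illustrative helper, not part of the OpenGL API):

```cpp
#include <cassert>
#include <cmath>

// attenuation = 1 / (kc + kl*d + kq*d^2), matching the constant, linear,
// and quadratic attenuation factors described above.
double Attenuation( double kc, double kl, double kq, double d )
{
    return 1.0 / ( kc + kl * d + kq * d * d );
}
```

Note that with the default-style values kc = 1, kl = 0, kq = 0, the result is always 1.0 regardless of distance, which is how attenuation is effectively disabled.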

### Ambient

The light’s ambient contribution is computed by multiplying the light’s ambient value with the material’s ambient value.

$ambient=(Light_{a}*Material_{a})$

Where $Light_{a}$ is the light’s ambient color and $Material_{a}$ is the material’s ambient color.

### Diffuse

The Light’s diffuse contribution (also known as Lambert reflectance) is the light that is diffused evenly across the surface.

If we consider the following definitions:

• $\mathbf{p}$ is the position of the surface we want to shade.
• $\mathbf{N}$ is the surface normal at the location we want to shade.
• $Light_{p}$ is the position of the light source.
• $Light_{d}$ is the diffuse contribution of the light source.
• $\mathbf{L}$ is the normalized direction vector pointing from the point we want to shade to the light source.
• $Material_{d}$ is the diffuse reflectance of the material.

Then the diffuse term of the vertex is:

$\mathbf{L}=normalize(Light_{p}-\mathbf{p})$
$diffuse=max(\mathbf{L}\cdot\mathbf{N},0)*Light_{d}*Material_{d}$

The $max(\mathbf{L}\cdot\mathbf{N},0)$ ensures that for surfaces that are pointing away from the light (when $\mathbf{L}\cdot\mathbf{N}<0$), the fragment will not have any diffuse lighting contribution.
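The diffuse term can be sketched as follows, using a minimal hypothetical `Vec3` type (illustrative only, with scalar light and material intensities for brevity):

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>

struct Vec3 { double x, y, z; };

Vec3   operator-( const Vec3& a, const Vec3& b ) { return { a.x - b.x, a.y - b.y, a.z - b.z }; }
double Dot( const Vec3& a, const Vec3& b ) { return a.x * b.x + a.y * b.y + a.z * b.z; }
Vec3   Normalize( const Vec3& v )
{
    double len = std::sqrt( Dot( v, v ) );
    return { v.x / len, v.y / len, v.z / len };
}

// diffuse = max(L.N, 0) * Light_d * Material_d, where L points from the
// shaded point p toward the light position.
double Diffuse( const Vec3& p, const Vec3& N, const Vec3& lightPos,
                double lightDiffuse, double materialDiffuse )
{
    Vec3 L = Normalize( lightPos - p );
    return std::max( Dot( L, N ), 0.0 ) * lightDiffuse * materialDiffuse;
}
```

A surface facing directly at the light receives the full diffuse contribution, while a surface facing away from the light receives none.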

### Specular

The specular term is the effect that produces the shiny bright spot on the surface of a material. This term is not only dependent on the position of the light but also on the position of the viewer when observing the material.

If we consider the following definitions:

• $\mathbf{p}$ is the position of the vertex we want to shade.
• $\mathbf{N}$ is the surface normal at that vertex.
• $Light_{p}$ is the position of the light source.
• $Eye_p$ is the position of the eye (or camera’s position).
• $\mathbf{V}$ is the normalized direction vector from the point we want to shade to the eye. This is also called the view vector.
• $\mathbf{L}$ is the normalized direction vector pointing from the point we want to shade to the light source. This is also called the light vector.
• $\mathbf{H}$ is the half-angle vector between the view vector and the light vector.
• $Light_{s}$ is the specular contribution of the light source.
• $Material_{s}$ is the specular reflection constant for the material that is used to shade the object.
• $\alpha$ is the “shininess” constant for the material. The higher the shininess of the material, the smaller the highlight on the material.

Then, using the Blinn-Phong lighting model, the specular term is calculated as follows:

$\mathbf{L}=normalize(Light_{p}-\mathbf{p})$
$\mathbf{V}=normalize(Eye_{p}-\mathbf{p})$
$\mathbf{H}=normalize(\mathbf{L}+\mathbf{V})$
$specular=max(\mathbf{N}\cdot\mathbf{H},0)^{\alpha}*Light_{s}*Material_{s}$

Similar to the diffuse term, the specular term must also consider the direction of the normal vector. If the normal is pointing away from the light (when $\mathbf{N}\cdot\mathbf{L}<0$) then the specular term is 0.
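The Blinn-Phong specular term can be sketched the same way. `Specular` and its `Vec3` helpers are illustrative names; the light and view vectors are assumed to be already normalized, and scalar intensities are used for brevity:

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>

struct Vec3 { double x, y, z; };

double Dot( const Vec3& a, const Vec3& b ) { return a.x * b.x + a.y * b.y + a.z * b.z; }
Vec3   operator+( const Vec3& a, const Vec3& b ) { return { a.x + b.x, a.y + b.y, a.z + b.z }; }
Vec3   Normalize( const Vec3& v )
{
    double len = std::sqrt( Dot( v, v ) );
    return { v.x / len, v.y / len, v.z / len };
}

// Blinn-Phong: H = normalize(L + V), specular = max(N.H, 0)^alpha * Light_s * Material_s.
// N, L, and V are the unit normal, light, and view vectors at the shaded point.
double Specular( const Vec3& N, const Vec3& L, const Vec3& V,
                 double alpha, double lightSpec, double materialSpec )
{
    if ( Dot( N, L ) < 0.0 ) return 0.0;  // surface faces away from the light
    Vec3 H = Normalize( L + V );
    return std::pow( std::max( Dot( N, H ), 0.0 ), alpha ) * lightSpec * materialSpec;
}
```

When the light and the viewer are both aligned with the normal, the half-angle vector equals the normal and the highlight is at full intensity.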

# Materials

When lighting is enabled, the color vertex attribute that was specified with the glColor family of functions is ignored. This color by itself doesn’t provide enough information to determine the final color of a fragment.

Instead of defining the color of each vertex, we must define a material definition that will be used to render an object.

Material parameters are specified using the glMaterial family of functions, which has the following signature:

void glMaterial[f|i]{v}(GLenum face, GLenum pname, [GLfloat|GLint]{*} param);


Where:

• GLenum face: Specifies the faces that are being updated. This parameter must be one of:
  • GL_FRONT: The material parameter is being specified for front-facing polygons.
  • GL_BACK: The material parameter is being specified for back-facing polygons.
  • GL_FRONT_AND_BACK: The material parameter is being specified for both front- and back-facing polygons.
• GLenum pname: Specifies the material parameter being updated. This parameter can be one of the following values:
  • GL_AMBIENT: A 4-component RGBA vector that defines the ambient color component of the material. The default value is (0.2, 0.2, 0.2, 1.0).
  • GL_DIFFUSE: A 4-component RGBA vector that defines the diffuse color component of the material. The default value is (0.8, 0.8, 0.8, 1.0).
  • GL_SPECULAR: A 4-component RGBA vector that defines the specular color component of the material. The default value is (0, 0, 0, 1).
  • GL_EMISSION: A 4-component RGBA vector that defines the emissive color component of the material. The default value is (0, 0, 0, 1).
  • GL_SHININESS: A scalar integer or floating point value that defines the specular exponent of the material. Only values in the range [0, 128] are accepted. The default value is 0.
  • GL_AMBIENT_AND_DIFFUSE: A single parameter that allows you to specify both GL_AMBIENT and GL_DIFFUSE with the same color value.
• [GLfloat|GLint]{*} param: Specifies the value (or values) to set the pname parameter to.
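For example, a hypothetical shiny red material could be defined like this (the color values are arbitrary):

```cpp
// A hypothetical shiny red material for front-facing polygons.
GLfloat red[]   = { 1.0f, 0.1f, 0.1f, 1.0f };
GLfloat white[] = { 1.0f, 1.0f, 1.0f, 1.0f };
GLfloat black[] = { 0.0f, 0.0f, 0.0f, 1.0f };

glMaterialfv( GL_FRONT, GL_AMBIENT_AND_DIFFUSE, red ); // ambient and diffuse share one color
glMaterialfv( GL_FRONT, GL_SPECULAR, white );          // bright specular highlight
glMaterialfv( GL_FRONT, GL_EMISSION, black );          // no self-illumination
glMaterialf ( GL_FRONT, GL_SHININESS, 64.0f );         // specular exponent in [0, 128]
```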

# Lighting

By default, OpenGL does not perform lighting calculations in the scene. To enable lighting, you must perform at least three steps:

1. All geometry that will be lit with dynamic lights must define vertex normals. Lighting will not look correct on vertices that don’t define correct vertex normals.
2. The lighting calculations must be enabled by calling glEnable(GL_LIGHTING). If you want to render geometry that should not be lit (for example a skybox or user interface elements), you must disable lighting again using glDisable(GL_LIGHTING).
3. Enable at least one light source using glEnable(GL_LIGHT0+i) where i is within the range [0,GL_MAX_LIGHTS-1].

You can specify a number of dynamic lights in an OpenGL scene. The number of lights that you can specify is dependent on the capabilities of the graphics hardware. You can query the maximum number of dynamic lights by using the following function:

GLint maxLights = 0;
glGetIntegerv( GL_MAX_LIGHTS, &maxLights );


Your hardware implementation is required to have support for at least 8 lights.

In the section titled “The Basic Lighting Model” I discussed the different properties of light and how they affect the final color of the vertex being lit. The light properties for each active light can be specified using the glLight family of functions. This function has the following signature:

void glLight[f|i]{v}(GLenum light, GLenum pname, [GLfloat|GLint]{*} param);


Where:

• GLenum light: Specifies the light number. Light numbers are specified as offsets from the symbolic name GL_LIGHT0. For example, to set the properties for the third dynamic light, you would specify GL_LIGHT0+2.
• GLenum pname: Specifies the light parameter being set. This value must be one of the following parameters:
• GL_AMBIENT: This 4-component RGBA vector specifies the light’s ambient intensity. The default value is (0, 0, 0, 1).
• GL_DIFFUSE: This 4-component RGBA vector specifies the light’s diffuse intensity. The default value for GL_LIGHT0 is (1, 1, 1, 1) and (0, 0, 0, 1) for all other lights.
• GL_SPECULAR: This 4-component RGBA vector specifies the light’s specular component. The default value for GL_LIGHT0 is (1, 1, 1, 1) and (0, 0, 0, 1) for all other lights.
• GL_POSITION: This 4-component XYZW vector specifies the position of the light in homogeneous object coordinates. The light’s position is transformed by the current GL_MODELVIEW matrix when the glLight method is called. This implies that lights are transformed by the same matrix transformation that standard objects are transformed by. If the w-component of the GL_POSITION parameter is 0, then the light is treated as a directional light and it’s placed at an infinite distance away from the scene. When treating the light as a directional light, the attenuation factor is ignored. The default value of this parameter is (0, 0, 1, 0). Initially light sources are directional light sources whose direction points parallel to the z-axis.
• GL_SPOT_DIRECTION: This 3-component XYZ direction vector specifies the direction of a spotlight. This parameter is transformed by the current GL_MODELVIEW matrix when glLight is called. This parameter is ignored unless GL_SPOT_CUTOFF is not 180. By default, this parameter is set to (0, 0, -1).
• GL_SPOT_EXPONENT: The spotlight intensity is attenuated by the cosine of the angle between the direction of the light ($Light_{dir}$) and the direction from the position of the light ($Light_p$) to the position of the vertex ($\mathbf{p}$) being lit, raised to the power of the spot exponent.
• GL_SPOT_CUTOFF: This scalar value specifies the maximum spread angle of the spotlight measured in degrees. Valid values for this parameter are in the range [0,90] and 180. Lights with a cutoff angle of 180 degrees are point-lights and lights with a cutoff angle in the range [0,90] are computed as spotlights.
• GL_CONSTANT_ATTENUATION: This scalar parameter represents the constant attenuation factor of the light.
• GL_LINEAR_ATTENUATION: This scalar parameter represents the linear attenuation factor of the light.
• GL_QUADRATIC_ATTENUATION: This scalar parameter represents the quadratic attenuation factor of the light.
• GLfloat param (or GLfloat* params for the vector variants): The value, or array of values, to set the parameter to.

# Putting it All Together

Now that we’ve seen how we can use textures, materials and lighting to render the objects in our scene, let’s examine an example that renders the moon rotating around the earth. The objects are realistically lit by a point-light that represents the sun.

## Initialize OpenGL

Using GLUT we can easily initialize the OpenGL context and the rendering window.

We must also register the window callbacks so we can respond to keyboard events and window resizing events.

void InitGL( int argc, char* argv[] )
{
    std::cout << "Initialise OpenGL..." << std::endl;

    glutInit( &argc, argv );
    int iScreenWidth = glutGet( GLUT_SCREEN_WIDTH );
    int iScreenHeight = glutGet( GLUT_SCREEN_HEIGHT );

    glutInitDisplayMode( GLUT_RGBA | GLUT_ALPHA | GLUT_DOUBLE | GLUT_DEPTH );

    glutInitWindowPosition( ( iScreenWidth - g_iWindowWidth ) / 2,
                            ( iScreenHeight - g_iWindowHeight ) / 2 );
    glutInitWindowSize( g_iWindowWidth, g_iWindowHeight );

    g_iGLUTWindowHandle = glutCreateWindow( "OpenGL" );

    // Register GLUT callbacks
    glutDisplayFunc( &DisplayGL );
    glutIdleFunc( &IdleGL );
    glutMouseFunc( &MouseGL );
    glutMotionFunc( &MotionGL );
    glutPassiveMotionFunc( &PassiveMotionGL );
    glutKeyboardFunc( &KeyboardGL );
    glutReshapeFunc( &ReshapeGL );

    // Setup initial GL state
    glClearColor( 0.0f, 0.0f, 0.0f, 1.0f );
    glClearDepth( 1.0f );

    glEnable( GL_DEPTH_TEST );

    // Renormalize scaled normals so that lighting still works properly.
    glEnable( GL_NORMALIZE );

    std::cout << "Initialise OpenGL: Success!" << std::endl;
}


Most of this function is standard GLUT initialization code. If this code is unfamiliar to you, refer to my previous article titled Introduction to OpenGL.

After the window is created, a few OpenGL states are initialized. In particular, we enable the GL_NORMALIZE state. This is done because the objects in the scene will be scaled, which means that the objects’ vertex normals will also be scaled. Scaled normals will affect the lighting calculations and the lighting will not look right. Enabling the GL_NORMALIZE state instructs OpenGL to renormalize normal vectors before the lighting contribution is computed.

We’ll use the SOIL library to load the textures used by this demo. We’ll also specify a few texture properties after the texture has been loaded.

GLuint LoadTexture( const char* texture )
{
    GLuint textureID = SOIL_load_OGL_texture( texture, SOIL_LOAD_AUTO,
        SOIL_CREATE_NEW_ID, SOIL_FLAG_MIPMAPS );

    glBindTexture( GL_TEXTURE_2D, textureID );
    glTexParameteri( GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR );
    glTexParameteri( GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR );
    glTexParameteri( GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT );
    glTexParameteri( GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT );
    glBindTexture( GL_TEXTURE_2D, 0 );

    return textureID;
}


The SOIL_load_OGL_texture method can be used to load a texture and generate an OpenGL texture object that can then be used to texture the objects in our scene.

We also set the texture filtering mode to GL_LINEAR and the texture wrap mode to GL_REPEAT.

At the end of the function, the default texture object ID of 0 is bound to the GL_TEXTURE_2D texture target, thus un-binding any previously bound texture object.

## Creating a Sphere

Since all of the objects in our scene (the earth, the moon, and the sun) can be represented by a sphere, we will generate a single sphere display list and use it to render each object in the scene.

I will use a GLU Quadric to generate a sphere that contains the texture coordinates and the vertex normals that are necessary to render the objects. I haven’t discussed quadrics in this article but I’ll leave it up to the reader to investigate the usage of quadrics.

GLUquadric* pSphereQuadric = gluNewQuadric();
gluQuadricTexture( pSphereQuadric, GL_TRUE );    // Generate texture coordinates
gluQuadricNormals( pSphereQuadric, GLU_SMOOTH ); // Generate smooth vertex normals

g_SphereDisplayList = glGenLists( 1 );
glNewList( g_SphereDisplayList, GL_COMPILE );
{
    gluSphere( pSphereQuadric, 1.0, 360, 180 );
}
glEndList();

gluDeleteQuadric( pSphereQuadric );


First we define the quadric object. We must specify that we want the quadric to generate texture coordinates and smooth normals at each vertex.

Next, we create a new display list (display lists were discussed in a previous article titled Rendering Primitives with OpenGL) and render the sphere quadric into it.

Once we have captured the sphere definition into a display list, we don’t need the quadric shape any longer. We delete the quadric object but retain the display list so that we can quickly render the sphere again.

## Global Ambient

As discussed previously, the global ambient value is multiplied by the material’s ambient term to produce the ambient contribution. The global ambient value is specified with the glLightModelfv method passing GL_LIGHT_MODEL_AMBIENT as the name of the parameter.

// Specify a global ambient
GLfloat globalAmbient[] = { 0.2f, 0.2f, 0.2f, 1.0f };
glLightModelfv( GL_LIGHT_MODEL_AMBIENT, globalAmbient );


Specifying a global ambient value will allow materials with an ambient term (greater than 0) to show some detail even in the absence of any light source.

## Enable Lighting

Before any lighting calculations are performed, we must enable lighting during initialization. To do that, we call the glEnable method with GL_LIGHTING as the only argument.

    glEnable( GL_LIGHTING );


## Define a Light Source

For convenience, I will define a light source object that can be used to encapsulate all of the properties of lights discussed in this article.

struct Light
{
    Light( GLenum lightID = GL_LIGHT0
         , color4 ambient = color4( 0.0, 0.0, 0.0, 1.0 )
         , color4 diffuse = color4( 1.0, 1.0, 1.0, 1.0 )
         , color4 specular = color4( 1.0, 1.0, 1.0, 1.0 )
         , float4 position = float4( 0.0, 0.0, 1.0, 0.0 )
         , float3 spotDirection = float3( 0.0, 0.0, 1.0 )
         , float  spotExponent = 0.0
         , float  spotCutoff = 180.0f
         , float  constantAttenuation = 1.0
         , float  linearAttenuation = 0.0
         , float  quadraticAttenuation = 0.0 )
    : m_LightID( lightID )
    , m_Ambient( ambient )
    , m_Diffuse( diffuse )
    , m_Specular( specular )
    , m_Position( position )
    , m_SpotDirection( spotDirection )
    , m_SpotExponent( spotExponent )
    , m_SpotCutoff( spotCutoff )
    , m_ConstantAttenuation( constantAttenuation )
    , m_LinearAttenuation( linearAttenuation )
    , m_QuadraticAttenuation( quadraticAttenuation )
    {}

    void Activate()
    {
        glEnable( m_LightID );
        glLightfv( m_LightID, GL_AMBIENT, &(m_Ambient.r) );
        glLightfv( m_LightID, GL_DIFFUSE, &(m_Diffuse.r) );
        glLightfv( m_LightID, GL_SPECULAR, &(m_Specular.r) );
        glLightfv( m_LightID, GL_POSITION, &(m_Position.x) );
        glLightfv( m_LightID, GL_SPOT_DIRECTION, &(m_SpotDirection.x) );
        glLightf( m_LightID, GL_SPOT_EXPONENT, m_SpotExponent );
        glLightf( m_LightID, GL_SPOT_CUTOFF, m_SpotCutoff );
        glLightf( m_LightID, GL_CONSTANT_ATTENUATION, m_ConstantAttenuation );
        glLightf( m_LightID, GL_LINEAR_ATTENUATION, m_LinearAttenuation );
        glLightf( m_LightID, GL_QUADRATIC_ATTENUATION, m_QuadraticAttenuation );
    }

    void Deactivate()
    {
        glDisable( m_LightID );
    }

    GLenum m_LightID;
    color4 m_Ambient;
    color4 m_Diffuse;
    color4 m_Specular;

    float4 m_Position;
    float3 m_SpotDirection;
    float  m_SpotExponent;
    float  m_SpotCutoff;
    float  m_ConstantAttenuation;
    float  m_LinearAttenuation;
    float  m_QuadraticAttenuation;
};


The light is activated and positioned in the scene using the Light::Activate method and the light can be deactivated using the Light::Deactivate method.

We will define a single point-light object that will represent the sun.

Light g_SunLight( GL_LIGHT0, color4(0,0,0,1), color4(1,1,1,1), color4(1,1,1,1), float4(0,0,0,1) );


## Defining Materials

Similar to the light source, I will define a material object that can be used to describe the different materials used in the scene.

struct Material
{
    Material( color4 ambient = color4( 0.2, 0.2, 0.2, 1.0 )
            , color4 diffuse = color4( 0.8, 0.8, 0.8, 1.0 )
            , color4 specular = color4( 0.0, 0.0, 0.0, 1.0 )
            , color4 emission = color4( 0.0, 0.0, 0.0, 1.0 )
            , float shininess = 0 )
    : m_Ambient( ambient )
    , m_Diffuse( diffuse )
    , m_Specular( specular )
    , m_Emission( emission )
    , m_Shininess( shininess )
    {}

    void Apply()
    {
        glMaterialfv( GL_FRONT_AND_BACK, GL_AMBIENT, &(m_Ambient.r) );
        glMaterialfv( GL_FRONT_AND_BACK, GL_DIFFUSE, &(m_Diffuse.r) );
        glMaterialfv( GL_FRONT_AND_BACK, GL_SPECULAR, &(m_Specular.r) );
        glMaterialfv( GL_FRONT_AND_BACK, GL_EMISSION, &(m_Emission.r) );
        glMaterialf( GL_FRONT_AND_BACK, GL_SHININESS, m_Shininess );
    }

    color4 m_Ambient;
    color4 m_Diffuse;
    color4 m_Specular;
    color4 m_Emission;
    float  m_Shininess;
};


The material properties have been discussed already. The material is applied to all subsequent objects by calling the Material::Apply method.

We want to render three objects in the scene, each with its own material. The earth’s material should be bright and produce a strong specular shine, while the moon’s material should be dull with only a subtle specular shine. The object that represents the sun should just be an un-shaded white ball.

// Material properties
Material g_SunMaterial( color4(0,0,0,1), color4(1,1,1,1), color4(1,1,1,1) );
Material g_EarthMaterial( color4( 0.2, 0.2, 0.2, 1.0), color4( 1, 1, 1, 1), color4( 1, 1, 1, 1), color4(0, 0, 0, 1), 50 );
Material g_MoonMaterial( color4( 0.1, 0.1, 0.1, 1.0), color4( 1, 1, 1, 1), color4( 0.2, 0.2, 0.2, 1), color4(0, 0, 0, 1), 10 );


## Render the Scene

To render this scene, we’ll draw three spheres. The first sphere represents the sun. This is an unlit white sphere that orbits the center of the scene at a distance of about 90,000 km. The only light in the scene is positioned at the same location as the object that represents the sun. The sun’s light defines only a constant attenuation factor of 1.0, ensuring that no attenuation takes place and every object in the scene is rendered at the light’s full intensity regardless of its distance from the sun.

Placed at the center of the scene is the earth. The earth rotates around its axis, but its position stays fixed at the center of the scene.

The final object is the moon. The moon rotates around the earth at a distance of 60,000 km.

In this scene, we adopt the convention that 1 unit is approximately 1,000 km. Although the units are completely arbitrary, it makes sense to choose units that make it convenient to position and scale the objects in your scene.

void RenderScene1()
{
    glMatrixMode( GL_MODELVIEW ); // Switch to modelview matrix mode
    glLoadIdentity();             // Reset any transformations left on the stack

    // Move the scene back so we can see everything
    glTranslatef( 0.0f, 0.0f, -100.0f );

    // First draw the sun
    glPushMatrix();
    // In this simulation, the sun rotates around the earth!
    glRotatef( g_fRotate3, 0.0f, -1.0f, 0.0f );
    glTranslatef( 90.0f, 0.0f, 0.0f );

    g_SunLight.Activate();

    glDisable( GL_TEXTURE_2D );
    glDisable( GL_LIGHTING );
    glColor3f( 1.0f, 1.0f, 1.0f );
    glCallList( g_SphereDisplayList );
    glPopMatrix();
    glEnable( GL_LIGHTING );

    // Draw the earth
    glPushMatrix();
    glRotatef( 90.0f, 1.0f, 0.0f, 0.0f );       // Rotate the earth so the poles are on the top and bottom.
    glRotatef( g_fRotate1, 0.0f, 0.0f, -1.0f ); // Rotate the earth around its axis
    glScalef( 12.756f, 12.756f, 12.756f );      // The earth's diameter is about 12,756 km

    glEnable( GL_TEXTURE_2D );
    glBindTexture( GL_TEXTURE_2D, g_EarthTexture );
    g_EarthMaterial.Apply();
    glCallList( g_SphereDisplayList ); // Render a sphere with the earth texture applied

    glPopMatrix();

    // Draw the moon
    glPushMatrix();
    // Rotate the view around the earth's axis
    glRotatef( g_fRotate2, 0.0f, 1.0f, 0.0f );
    // Translate the moon away from the earth
    glTranslatef( 60.0f, 0.0f, 0.0f );
    glRotatef( 90.0f, 1.0f, 0.0f, 0.0f ); // Rotate the moon so its poles are in the right direction.
    // Rotate the moon around its axis
    glRotatef( g_fRotate2, 0.0f, 0.0f, -1.0f );
    glScalef( 3.476f, 3.476f, 3.476f );   // The moon's diameter is about 3,476 km

    glBindTexture( GL_TEXTURE_2D, g_MoonTexture );
    // Render the sphere with the moon texture.
    g_MoonMaterial.Apply();
    glCallList( g_SphereDisplayList );

    glPopMatrix();

    glBindTexture( GL_TEXTURE_2D, 0 );
}


The first thing we do before rendering the scene is reset the model-view matrix. Calling glLoadIdentity sets the model-view matrix to identity, clearing any transformations that may still be on the matrix stack.

We want to place the camera view sufficiently far enough away so that we can capture all of the scene elements in view. Translating the camera 100 units in the positive z-axis is equivalent to moving the world origin 100 units in the negative z-axis.

We first want to position our light correctly in the scene. If we don’t do this, then any object that is rendered before the light is positioned and activated will not be lit correctly. To do so, the model-view matrix is manipulated so that the origin is placed at the position where we want the light to appear.

Next, we draw a white sphere with lighting disabled at the exact position of the sun point-light that was just activated. This gives the impression that the point light is emitting from some object (in this case, the white sphere), and it also demonstrates that lights are positioned in the scene in the same way as regular geometry. Using this technique, we must ensure that lighting is disabled when drawing this sphere; otherwise it would be rendered completely black, because the light sits inside the sphere and the outside faces always face away from the light.

Next, the earth is rendered. Again, the origin of the world is positioned and rotated accordingly. This time, before we render the sphere display list again, we enable texturing and bind the earth texture that we loaded earlier. We also apply the material that we want to use to render the earth. After the lighting calculations are performed, the texture is modulated (multiplied) with the lit vertex colors.

The moon is drawn next. This is almost identical to the earth, except that the moon rotates around the earth, which requires an additional translation and rotation.

Finally, we should not forget to unbind the texture that we previously bound to the GL_TEXTURE_2D texture target.

What is not shown in this method is the final call that tells GLUT to swap the front and back buffers so that we can see the result of our render method. GLUT provides the glutSwapBuffers method for this purpose; don’t forget to call it after you have finished rendering your scene.

# View the Demo

The demo below demonstrates the example shown in this article. The demo uses WebGL to render the scene and will run in the latest Firefox and Chrome browsers. If you are using Internet Explorer, however, you will see the YouTube video instead of the live demo.

Texture and Lighting Demo
