Yury Samkevich
Posted on August 28, 2022
Welcome to the second part of Learn OpenGL with Rust tutorial. In the last article we learned a bit of OpenGL theory and discovered how to create a window, initialize OpenGL context and call some basic api to clear a window with a color of our choice.
In this article we'll briefly discuss modern OpenGL graphics pipeline and how to configure it using shaders. All the source code for the article you can find on my github.
Graphics pipeline
We consider the screen of the device as a 2D array of pixels, but usually we want to draw objects in 3D space, so a large part of OpenGL's work is about transforming 3D coordinates into 2D pixels that fit on the screen. This process of transforming 3D coordinates to 2D pixels is managed by the graphics pipeline.
The graphics pipeline can be divided into several steps, where each step takes the output of the previous step as its input. All of these steps have their own specific function and can be executed in parallel. Each step of the pipeline runs small programs on the GPU called shaders.
As input to the graphics pipeline we pass vertices, a list of points from which shapes like triangles will be constructed later. Each of these points is stored with certain attributes, and it's up to the programmer to decide what kind of attributes they want to store. Commonly used attributes are a 3D position and a color value.
The first part of the pipeline is the vertex shader, which takes a single vertex as input. The main purpose of the vertex shader is the transformation of 3D coordinates. It also passes important attributes like color and texture coordinates further down the pipeline.
The primitive assembly stage takes as input all the vertices from the vertex shader and assembles them into a primitive shape.
The output of the primitive assembly stage is passed to the geometry shader. The geometry shader takes the primitives from the previous stage as input and can either pass a primitive down to the rest of the pipeline, modify it, discard it entirely, or even replace it with other primitives.
After that, the final list of shapes is composed and converted to screen coordinates, and the rasterization stage turns the visible parts of the shapes into pixel-sized fragments.
The main purpose of the fragment shader is to calculate the final color of a pixel. In more advanced scenarios, this program can also perform calculations related to lighting, shadowing and special effects.
Finally, the end result is composed from all these shape fragments by blending them together and performing depth and stencil testing. So even if a pixel's output color is calculated in the fragment shader, the final pixel color can still be something different when rendering multiple triangles one over another.
The graphics pipeline is quite complex and contains many configurable parts. In modern OpenGL we are required to define at least a vertex and a fragment shader (the geometry shader is optional).
Shaders
As discussed earlier, in modern OpenGL it's up to us to instruct the graphics card what to do with the data, and we do so by writing shader programs. We will configure two very simple shaders to render our first triangle.
Shaders are written in a C-style language called GLSL (the OpenGL Shading Language). OpenGL compiles your program from source at runtime and copies it to the graphics card. Below you can find the source code of a vertex shader in GLSL:
#version 330
in vec2 position;
in vec3 color;
out vec3 vertexColor;
void main() {
    gl_Position = vec4(position, 0.0, 1.0);
    vertexColor = color;
}
Each shader begins with a declaration of its version. Since OpenGL 3.3, the GLSL version number matches the version of OpenGL.
Next we declare all the input vertex attributes of the vertex shader with the in keyword. We have two vertex attributes: one for the vertex position and another one for the vertex color. Apart from the regular C types, GLSL has built-in vector and matrix types: vec and mat, with a number at the end which stands for the number of components. The final position of the vertex is assigned to the special gl_Position variable.
The output from the vertex shader is interpolated over all the pixels on the screen covered by a primitive. These pixels are called fragments, and this is what the fragment shader operates on. The fragment shader requires only one output variable: a vector of size 4 that defines the final color output, FragColor in our case. Here is our simple fragment shader:
#version 330
out vec4 FragColor;
in vec3 vertexColor;
void main() {
    FragColor = vec4(vertexColor, 1.0);
}
Shaders specify their inputs and outputs using the in and out keywords. If we want to send data from one shader to another, we have to declare an output in the first shader and a matching input in the second shader. OpenGL will link those variables together and send data between the shaders. In our case we pass vertexColor from the vertex shader to the fragment shader, and this value will be interpolated among all the fragments of our triangle.
In order for OpenGL to use a shader, it has to dynamically compile it at runtime from its source code. But first we declare a shader struct which will store the shader object id:
pub struct Shader {
    pub id: GLuint,
}
To create a shader object we will use the gl::CreateShader function, which takes the type of shader (gl::VERTEX_SHADER or gl::FRAGMENT_SHADER) as its first argument. Then we attach the shader source code to the shader object and compile the shader:
let source_code = CString::new(source_code)?;
let shader = Self {
    id: gl::CreateShader(shader_type),
};
gl::ShaderSource(shader.id, 1, &source_code.as_ptr(), ptr::null());
gl::CompileShader(shader.id);
To check whether compilation was successful and to retrieve the compile log we can use gl::GetShaderiv and gl::GetShaderInfoLog respectively. The final version of the shader creation function looks like this:
impl Shader {
    pub unsafe fn new(source_code: &str, shader_type: GLenum) -> Result<Self, ShaderError> {
        let source_code = CString::new(source_code)?;
        let shader = Self {
            id: gl::CreateShader(shader_type),
        };
        gl::ShaderSource(shader.id, 1, &source_code.as_ptr(), ptr::null());
        gl::CompileShader(shader.id);

        // check for shader compilation errors
        let mut success: GLint = 0;
        gl::GetShaderiv(shader.id, gl::COMPILE_STATUS, &mut success);

        if success == 1 {
            Ok(shader)
        } else {
            let mut error_log_size: GLint = 0;
            gl::GetShaderiv(shader.id, gl::INFO_LOG_LENGTH, &mut error_log_size);
            let mut error_log: Vec<u8> = Vec::with_capacity(error_log_size as usize);
            gl::GetShaderInfoLog(
                shader.id,
                error_log_size,
                &mut error_log_size,
                error_log.as_mut_ptr() as *mut _,
            );
            error_log.set_len(error_log_size as usize);
            let log = String::from_utf8(error_log)?;
            Err(ShaderError::CompilationError(log))
        }
    }
}
To delete a shader once we don't need it anymore, we implement the Drop trait and call the gl::DeleteShader function with the shader id as an argument:
impl Drop for Shader {
    fn drop(&mut self) {
        unsafe {
            gl::DeleteShader(self.id);
        }
    }
}
Shader program
So far the vertex and fragment shaders have been two separate objects. We will use a shader program to link them together. When the shaders are linked into a program, the outputs of each shader are connected to the inputs of the next shader, so we can get program linking errors if the outputs and inputs do not match.
Similar to Shader, we will declare a ShaderProgram struct, which holds the program id generated by the gl::CreateProgram function. To link all the shaders together we first need to attach them with gl::AttachShader and then call gl::LinkProgram. As with shaders, we can check for and retrieve linking errors if we have any.
pub struct ShaderProgram {
    pub id: GLuint,
}

impl ShaderProgram {
    pub unsafe fn new(shaders: &[Shader]) -> Result<Self, ShaderError> {
        let program = Self {
            id: gl::CreateProgram(),
        };

        for shader in shaders {
            gl::AttachShader(program.id, shader.id);
        }

        gl::LinkProgram(program.id);

        let mut success: GLint = 0;
        gl::GetProgramiv(program.id, gl::LINK_STATUS, &mut success);

        if success == 1 {
            Ok(program)
        } else {
            let mut error_log_size: GLint = 0;
            gl::GetProgramiv(program.id, gl::INFO_LOG_LENGTH, &mut error_log_size);
            let mut error_log: Vec<u8> = Vec::with_capacity(error_log_size as usize);
            gl::GetProgramInfoLog(
                program.id,
                error_log_size,
                &mut error_log_size,
                error_log.as_mut_ptr() as *mut _,
            );
            error_log.set_len(error_log_size as usize);
            let log = String::from_utf8(error_log)?;
            Err(ShaderError::LinkingError(log))
        }
    }
}
To prevent resource leaks we implement the Drop trait for the shader program as well:
impl Drop for ShaderProgram {
    fn drop(&mut self) {
        unsafe {
            gl::DeleteProgram(self.id);
        }
    }
}
Finally we can use our declared types to compile shaders and link them into a program, which we will use during rendering:
let vertex_shader = Shader::new(VERTEX_SHADER_SOURCE, gl::VERTEX_SHADER)?;
let fragment_shader = Shader::new(FRAGMENT_SHADER_SOURCE, gl::FRAGMENT_SHADER)?;
let program = ShaderProgram::new(&[vertex_shader, fragment_shader])?;
To use the program while rendering we declare an apply function for the ShaderProgram type that calls gl::UseProgram under the hood:
pub unsafe fn apply(&self) {
    gl::UseProgram(self.id);
}
Every rendering call after apply will use this program object.
Summary
Today we've learned how the graphics pipeline of modern OpenGL works and how we can use shaders to configure it.
Next time we are going to learn what vertex buffer and vertex array objects are and how we can use the knowledge we've gained so far to render our first triangle.