Triangles On Web Ch1 Draw Something

fingerbone

Posted on November 24, 2024

This series introduces WebGPU, and computer graphics in general.

First, let's look at what we are going to build:

- Life Game
- 3D Rendering
- 3D Rendering, but with lighting
- Rendering 3D Model

Apart from a basic knowledge of JavaScript, no prior knowledge is needed.

The tutorial is already complete on my GitHub, along with the source code.

WebGPU is a relatively new API for the GPU. Despite its name, it can be considered a layer on top of native graphics APIs such as Vulkan, DirectX 12, and Metal, and a modern successor to OpenGL and WebGL. It is designed to be a low-level API and is intended for high-performance applications, such as games and simulations.

In this chapter, we will draw something on the screen. The first part follows the Google Codelabs tutorial: we will build a life game (Conway's Game of Life) on the screen.

Starting Point

We will create an empty vanilla JS project with Vite, with TypeScript enabled, and then clear out all the extra code, leaving only main.ts.

const main = async () => {
    console.log('Hello, world!')
}

main()

Before actual coding, please check if your browser has WebGPU enabled. You can check it on WebGPU Samples.

Chrome has WebGPU enabled by default. On Safari, you should go to the developer settings, then the feature flag settings, and enable WebGPU.

We also need to enable the types for WebGPU: install @webgpu/types, and in the TypeScript compiler options, add "types": ["@webgpu/types"].
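
For reference, the relevant part of tsconfig.json would look roughly like this (the rest of the options depend on what Vite generated for you):

{
    "compilerOptions": {
        // Make the WebGPU type definitions (GPUDevice, GPUBuffer, ...) globally available.
        "types": ["@webgpu/types"]
    }
}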

Furthermore, we replace the <div id="app"></div> with <canvas id="app"></canvas> in the index.html.

Drawing a Triangle

There is quite a lot of boilerplate code in WebGPU; here is what it looks like.

Requesting Device

First we need access to the GPU. In WebGPU, this is done through an adapter, which is a bridge between the browser and the GPU.

const adapter = await navigator.gpu.requestAdapter();

Then we need to request a device from the adapter.

const device = await adapter.requestDevice();
console.log(device);
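
Both requests can fail, for example when the browser does not support WebGPU, so in practice you may want a small guard inside our async main function. A minimal sketch:

// Minimal sketch: bail out early if WebGPU is unavailable.
const adapter = await navigator.gpu.requestAdapter();
if (!adapter) {
    console.error('WebGPU not supported');
    return;
}
const device = await adapter.requestDevice();
console.log(device);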

Configure the Canvas

We draw our triangle on the canvas. We need to get the canvas element and configure it.

const canvas = document.getElementById('app') as HTMLCanvasElement;
const context = canvas.getContext("webgpu")!;
const canvasFormat = navigator.gpu.getPreferredCanvasFormat();
context.configure({
    device: device,
    format: canvasFormat,
});

Here, we use getContext to get a rendering context for the canvas. By specifying webgpu, we get a context that is responsible for rendering with WebGPU.

The canvas format is essentially the color format of the canvas texture, for example bgra8unorm or an srgb variant. We usually just use the preferred format reported by the browser.

Lastly, we configure the context with the device and the format.

Understanding GPU Rendering Pipeline

Before diving further into the engineering details, we first must understand how the GPU handles rendering.

The GPU rendering pipeline is a series of steps that the GPU takes to render an image.

A program that runs on the GPU is called a shader. Shaders are written in a special programming language that we will discuss later.

The render pipeline has the following steps,

  1. The CPU loads the data into the GPU. The CPU may also remove some invisible objects to save GPU resources.
  2. The CPU sets all the colors, textures, and other data that the GPU needs to render the scene.
  3. The CPU triggers a draw call to the GPU.
  4. The GPU gets the data from the CPU and starts rendering the scene.
  5. The GPU enters the geometry processing stage, which processes the vertices of the scene.
  6. In the geometry stage, the first step is the vertex shader, which processes the vertices of the scene. It may transform the vertices, change their color, or modify them in other ways.
  7. The next step is the tessellation shader, which subdivides the geometry in order to increase the detail of the scene. It has several sub-stages, but they are too complex to explain here.
  8. The next step is the geometry shader. In contrast to the vertex shader, where the developer can only define how a single vertex is transformed, the geometry shader operates on multiple vertices at once. It can also create new vertices, which can be used to create new geometry.
  9. The last step of the geometry stage is clipping, which removes the parts that fall outside the screen, and culling, which removes the parts that are not visible to the camera.
  10. The next step is rasterization, which converts the primitives into fragments. A fragment is a candidate pixel that may end up on the screen.
  11. The rasterizer iterates over the triangles of the scene, producing the fragments that each triangle covers.
  12. The next step is the fragment shader, which processes the fragments. It may change the color of a fragment, sample a texture, or do other per-fragment work. In this stage, the depth test and stencil test are also performed. The depth test assigns each fragment a depth value, and only the fragment with the smallest depth value (the closest one) is kept. The stencil test assigns each fragment a stencil value, and only fragments that pass the test are kept; the stencil rule is defined by the developer.
  13. The next step is blending, which combines overlapping fragments. For example, if two fragments overlap, the blending stage blends the two together.
  14. The last step is output, which writes the fragments to the swap chain. The swap chain is a chain of images used for presentation; to put it simply, it is a buffer that holds the image that is going to be displayed on the screen.

Depending on the primitive, the smallest unit the GPU can render, the pipeline may have slightly different steps. Typically we use triangles, which tells the GPU to treat every group of 3 vertices as one triangle.

Creating Render Pass

A render pass is one step of the full GPU rendering process. When a render pass begins, the GPU starts rendering into its attachments, and it stops when the pass ends.

To create a render pass, we need to create an encoder, which is responsible for encoding the render pass into GPU commands.

const encoder = device.createCommandEncoder();

Then we create a render pass.

const pass = encoder.beginRenderPass({
  colorAttachments: [{
     view: context.getCurrentTexture().createView(),
     loadOp: "clear",
     storeOp: "store",
  }]
});

Here, we create a render pass with a color attachment. An attachment is a GPU concept that represents an image the pass renders to. An image may have several aspects that the GPU needs to process, and each of them is an attachment.

Here we only have one attachment, the color attachment. The view is the surface the GPU will render to; here we set it to a view of the canvas texture.

loadOp is the operation the GPU performs before the render pass; clear means the GPU first clears all the data left over from the last frame. storeOp is the operation the GPU performs after the render pass; store means the GPU stores the results to the texture.

loadOp can be load, which preserves the data from the last frame, or clear, which clears the data from the last frame. storeOp can be store, which stores the data to the texture, or discard, which discards the data.
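
For example, if we hypothetically wanted to keep whatever was rendered before and draw on top of it, the attachment could be configured with loadOp set to load instead:

// Hypothetical variant: preserve the previous contents instead of clearing them.
const pass = encoder.beginRenderPass({
    colorAttachments: [{
        view: context.getCurrentTexture().createView(),
        loadOp: "load",   // keep the existing texture contents
        storeOp: "store", // write the rendering results back to the texture
    }]
});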

Now, just call pass.end() to end the render pass. At this point, the commands are recorded in the command encoder, not yet executed by the GPU.
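
In code, that is simply:

pass.end();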

To get the encoded command buffer, use the following code,

const commandBuffer = encoder.finish();

And, finally, submit the command to the render queue of the GPU.

device.queue.submit([commandBuffer]);

Now, you should see an ugly black canvas.

Based on our stereotypical ideas about 3D scenes, we might expect the empty space to be blue. We can do that by setting the clear color.

const pass = encoder.beginRenderPass({
    colorAttachments: [{
        view: context.getCurrentTexture().createView(),
        loadOp: "clear",
        clearValue: { r: 0.1, g: 0.3, b: 0.8, a: 1.0 },
        storeOp: "store",
    }]
});

Drawing a Triangle Using Shader

Now, we will draw a triangle on the canvas. We will use a shader to do that. The shader language will be WGSL, the WebGPU Shading Language.

Now, suppose we want to draw a triangle with the following coordinates,

(-0.5, -0.5), (0.5, -0.5), (0.0, 0.5)

As we stated before, to complete a render pipeline, we need a vertex shader and a fragment shader.

Vertex Shader

Use the following code to create shader modules.

const shaderModule = device.createShaderModule({
  label: "shader",
  code: `
    // Shaders
  `
});

label here is simply a name, which is meant for debugging. code is the actual shader code.

The vertex shader is a function that takes per-vertex inputs and returns the position of the vertex. However, contrary to what we might expect, the vertex shader returns a four dimensional vector, not a three dimensional one. The fourth component is w, which is used for perspective division. We will discuss it later.

For now, you can simply regard a four dimensional vector (x, y, z, w) as the three dimensional vector (x / w, y / w, z / w). For example, (0.5, 1.0, 0.0, 2.0) corresponds to the point (0.25, 0.5, 0.0).

However, there is another problem: how to pass data to the shader, and how to get data out of the shader.

To pass the data to the shader, we use a vertex buffer, a buffer that contains the data of the vertices. We can create one with the following code,

const vertexBuffer = device.createBuffer({
  size: 24,
  usage: GPUBufferUsage.VERTEX | GPUBufferUsage.COPY_DST,
  mappedAtCreation: true,
});

Here we create a buffer with a size of 24 bytes: 6 floats of 4 bytes each, which is exactly the size of our vertex data.

usage describes how the buffer will be used, VERTEX for vertex data in this case. GPUBufferUsage.COPY_DST means this buffer is a valid copy destination; for any buffer whose data is written by the CPU, we need to set this flag.

Mapping here means mapping the buffer into CPU memory, so the CPU can read and write it. Unmapping means the CPU can no longer read or write the buffer, and the contents become available to the GPU.

Now, we can write the data to the buffer.

new Float32Array(vertexBuffer.getMappedRange()).set([
  -0.5, -0.5,
  0.5, -0.5,
  0.0, 0.5,
]);
vertexBuffer.unmap();

Here, the buffer is already mapped to the CPU because we created it with mappedAtCreation, so we write the data into it and then unmap it.

vertexBuffer.getMappedRange() will return the range of the buffer that is mapped to the CPU. We can use it to write the data to the buffer.
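
As an aside, WebGPU also provides device.queue.writeBuffer for uploading data without manual mapping. A minimal sketch, not used in this tutorial and assuming the buffer was created without mappedAtCreation:

// Sketch: upload vertex data via the queue instead of mapping the buffer.
const vertices = new Float32Array([
    -0.5, -0.5,
     0.5, -0.5,
     0.0,  0.5,
]);
device.queue.writeBuffer(vertexBuffer, /* bufferOffset */ 0, vertices);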

However, these are just raw data, and the GPU doesn't know how to interpret them. We need to define the layout of the buffer.

const vertexBufferLayout: GPUVertexBufferLayout = {
    arrayStride: 8,
    attributes: [{
        format: "float32x2",
        offset: 0,
        shaderLocation: 0,
    }],
};

Here, arrayStride is the number of bytes the GPU needs to skip forward in the buffer when it's looking for the next input. For example, if the arrayStride is 8, the GPU will skip 8 bytes to get the next input.

Since we use float32x2 here, the stride is 8 bytes: 4 bytes per float, and 2 floats per vertex.
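
To illustrate the layout further, if each vertex hypothetically also carried an RGBA color after its position, the layout would grow a second attribute at a new shader location:

// Hypothetical layout: 2 floats of position followed by 4 floats of color per vertex.
const positionAndColorLayout: GPUVertexBufferLayout = {
    arrayStride: 24, // (2 + 4) floats * 4 bytes each
    attributes: [
        { format: "float32x2", offset: 0, shaderLocation: 0 }, // position
        { format: "float32x4", offset: 8, shaderLocation: 1 }, // color
    ],
};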

Now we can write the vertex shader.

const shaderModule = device.createShaderModule({
  label: "shader",
  code: `
    @vertex
    fn vertexMain(@location(0) pos: vec2f) -> @builtin(position) vec4f {
        return vec4f(pos, 0, 1);
    }
  `
});

Here, @vertex means this is a vertex shader. @location(0) refers to the location of the attribute, which is 0, as previously defined in the buffer layout. Please note that in the shader language you are dealing with the layout of the buffer, so whenever you pass a value, you need to pass either a struct whose fields have @location attributes, or a single value annotated with @location.

vec2f is a two dimensional float vector, and vec4f is a four dimensional float vector. Since the vertex shader is required to return a vec4f position, we annotate the return value with @builtin(position).
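
For example, a vertex shader could hypothetically return a struct instead of a bare vec4f, as long as the position field is marked @builtin(position) and every other field gets a @location. A rough sketch:

struct VertexOutput {
    @builtin(position) pos: vec4f,
    @location(0) color: vec4f,
}

@vertex
fn vertexMain(@location(0) pos: vec2f) -> VertexOutput {
    var out: VertexOutput;
    out.pos = vec4f(pos, 0, 1);
    // Extra per-vertex data, here a constant red color, which would be
    // interpolated and passed to the fragment shader.
    out.color = vec4f(1.0, 0.0, 0.0, 1.0);
    return out;
}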

Fragment Shader

The fragment shader, similarly, takes the interpolated vertex output and outputs the attachments, a color in this case. Interpolated means that although only the pixels exactly at the vertices have values decided by the vertex shader, for every other pixel the values are interpolated, linearly, by averaging, or by other means. The color of a fragment is a four dimensional vector containing its red, green, blue, and alpha components.

Please note that the color components are in the range of 0 to 1, not 0 to 255. In addition, colors are defined per vertex, not per triangle; the color across the triangle is determined by interpolating the colors of its vertices.

Since we currently do not need to control the color per fragment, we can simply return a constant color.

const shaderModule = device.createShaderModule({
  label: "shader",
  code: `
    @vertex
    fn vertexMain(@location(0) pos: vec2f) -> @builtin(position) vec4f {
        return vec4f(pos, 0, 1);
    }
    @fragment
    fn fragmentMain() -> @location(0) vec4<f32> {
        return vec4<f32>(1.0, 1.0, 0.0, 1.0);
    }
  `
});

Render Pipeline

Then we define our custom render pipeline, plugging in the vertex and fragment shaders.

const pipeline = device.createRenderPipeline({
    label: "pipeline",
    layout: "auto",
    vertex: {
        module: shaderModule,
        entryPoint: "vertexMain",
        buffers: [vertexBufferLayout]
    },
    fragment: {
        module: shaderModule,
        entryPoint: "fragmentMain",
        targets: [{
            format: canvasFormat
        }]
    }
});

Note that for the fragment shader, we need to specify the format of the target, which is the format of the canvas.

Draw Call

Before the render pass ends, we add the draw call.

pass.setPipeline(pipeline);
pass.setVertexBuffer(0, vertexBuffer);
pass.draw(3);

Here, in setVertexBuffer, the first parameter is the index of the buffer in the pipeline's buffers field, and the second parameter is the buffer itself.

When calling draw, the parameter is the number of vertices to draw. Since we have 3 vertices, we draw 3.

Now, you should see a yellow triangle on the canvas.

Draw Life Game Cells

Now we tweak our code a bit: since we want to build a life game, we need to draw squares instead of triangles.

A square is actually two triangles, so we need to draw 6 vertices. The changes are simple and don't need a detailed explanation.

const main = async () => {
    const adapter = await navigator.gpu.requestAdapter();
    if (!adapter) {
        console.error('WebGPU not supported');
        return;
    }
    const device = await adapter.requestDevice();
    console.log(device);
    const canvas = document.getElementById('app') as HTMLCanvasElement;
    const context = canvas.getContext("webgpu")!;
    const canvasFormat = navigator.gpu.getPreferredCanvasFormat();
    context.configure({
        device: device,
        format: canvasFormat,
    });
    const vertices = [
        0.0, 0.0,
        0.0, 1.0,
        1.0, 0.0,

        1.0, 1.0,
        0.0, 1.0,
        1.0, 0.0,
    ]
    const vertexBuffer = device.createBuffer({
        size: vertices.length * 4,
        usage: GPUBufferUsage.VERTEX | GPUBufferUsage.COPY_DST,
        mappedAtCreation: true,
    });
    new Float32Array(vertexBuffer.getMappedRange()).set(vertices);
    const vertexBufferLayout: GPUVertexBufferLayout = {
        arrayStride: 8,
        attributes: [{
            format: "float32x2",
            offset: 0,
            shaderLocation: 0,
        }],
    };
    const shaderModule = device.createShaderModule({
        label: "shader",
        code: `
            @vertex
            fn vertexMain(@location(0) pos: vec2f) -> @builtin(position) vec4f {
                return vec4f(pos, 0, 1);
            }
            @fragment
            fn fragmentMain() -> @location(0) vec4<f32> {
                return vec4<f32>(1.0, 1.0, 0.0, 1.0);
            }
        `
    });
    vertexBuffer.unmap();
    const pipeline = device.createRenderPipeline({
        label: "Cell pipeline",
        layout: "auto",
        vertex: {
            module: shaderModule,
            entryPoint: "vertexMain",
            buffers: [vertexBufferLayout]
        },
        fragment: {
            module: shaderModule,
            entryPoint: "fragmentMain",
            targets: [{
                format: canvasFormat
            }]
        }
    });
    const encoder = device.createCommandEncoder();
    const pass = encoder.beginRenderPass({
        colorAttachments: [{
            view: context.getCurrentTexture().createView(),
            loadOp: "clear",
            clearValue: { r: 0.1, g: 0.3, b: 0.8, a: 1.0 },
            storeOp: "store",
        }]
    });
    pass.setPipeline(pipeline);
    pass.setVertexBuffer(0, vertexBuffer);
    pass.draw(vertices.length / 2);
    pass.end();
    const commandBuffer = encoder.finish();
    device.queue.submit([commandBuffer]);
}

main()

Now, you should see a yellow square on the canvas.

Coordinate System

We haven't yet discussed the coordinate system of the GPU. It is, fortunately, rather simple: the x-axis points to the right, the y-axis points up, and the z-axis carries depth, increasing into the screen away from the viewer.

The x and y coordinates range from -1 to 1, with the origin at the center of the screen. The z coordinate ranges from 0 to 1, where 0 is the near plane and 1 is the far plane; it is used for depth. When doing 3D rendering, you cannot just use the z value directly to position an object; you need perspective division, which we will cover later. This space is called NDC, normalized device coordinates.

For example, if you want to draw a square covering the top left quarter of the screen, its corners are (-1, 1), (-1, 0), (0, 1), and (0, 0), though you need two triangles to draw it, as sketched below.
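
A sketch of the corresponding vertex data, with two triangles and a name (topLeftSquare) chosen purely for illustration:

// Two triangles covering the top-left quarter: x in [-1, 0], y in [0, 1].
const topLeftSquare = new Float32Array([
    -1.0, 1.0,   -1.0, 0.0,   0.0, 1.0, // first triangle
     0.0, 1.0,   -1.0, 0.0,   0.0, 0.0, // second triangle
]);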
