Build your own Kubernetes - Pod creation
Jonatan Ezron
Posted on October 7, 2022
First, we will focus on the smallest unit of Kubernetes, the Pod. This chapter covers creating and running a pod. Each container will run in containerd (you need containerd installed in your environment) and will be managed through it.
At the start of this project we will implement basic tasks as CLI commands using Cobra; later we will move on to more advanced features.
We initialize the project using cobra-cli and add new commands: pod as the parent command for pod operations, list for listing existing pods, and create for creating and running new pods (for now).
cobra-cli init
cobra-cli add pod
cobra-cli add list
cobra-cli add create
We will move the subcommands into a single file, cmd/pod.go, inside the cmd directory, so that we can execute pod list or pod create:
// cmd/pod.go
package cmd
import (
"github.com/spf13/cobra"
)
var podCmd = &cobra.Command{
Use: "pod",
Short: "The command line tool to run commands on pods",
}
var createCmd = &cobra.Command{
Use: "create",
Short: "Create new pod",
// Run: ,
}
var listCmd = &cobra.Command{
Use: "list",
Short: "lists existing pods",
// Run: ,
}
func init() {
rootCmd.AddCommand(podCmd)
podCmd.AddCommand(listCmd)
podCmd.AddCommand(createCmd)
}
Let's implement pod creation in pkg/pod/pod.go.
For now we only support one container per pod.
We are using containerd, so make sure it is running on your system.
The containerd docs explain how to use the containerd Go package.
We will define a few structs. The first two are a Pod struct, which represents an existing pod instance in containerd, and a RunningPod struct, which represents a pod instance that is currently running.
type Pod struct {
Id string
client *containerd.Client
ctx *context.Context
container *containerd.Container
}
Id - a generated id for the pod
client - the containerd client to communicate with
ctx - the context to use in calls to client methods
container - the pod's container instance
type RunningPod struct {
Pod *Pod
task *containerd.Task
exitStatusC <-chan containerd.ExitStatus
}
Pod - the configured Pod instance this running pod was created from
task - the currently running process task
exitStatusC - the channel to receive the exit status from
Now that we have defined the structs, let's create a NewPod function for pod creation. In the following function we create a new containerd client and a new context, pull the image, generate a new id, and create a new container:
func NewPod(registryImage string, name string) (*Pod, error) {
client, err := containerd.New("/run/containerd/containerd.sock")
if err != nil {
return nil, err
}
ctx := namespaces.WithNamespace(context.Background(), "own-kubernetes")
image, err := client.Pull(ctx, registryImage, containerd.WithPullUnpack)
if err != nil {
return nil, err
}
id := generateNewID(name)
container, err := client.NewContainer(
ctx,
id,
containerd.WithImage(image),
containerd.WithNewSnapshot(id+"-snapshot", image),
containerd.WithNewSpec(oci.WithImageConfig(image)),
)
if err != nil {
return nil, err
}
return &Pod{
Id: id,
container: &container,
ctx: &ctx,
client: client,
}, nil
}
func generateNewID(name string) string {
id := uuid.New()
return fmt.Sprintf("%s-%s", name, id)
}
The id is generated with Google's uuid package, combined with the given name, as seen above.
Next, we implement the Run method. First we create a new task, which will be the process of the running pod; then we wait on the task (registering for its exit status) and start it:
func (pod *Pod) Run() (*RunningPod, error) {
task, err := (*pod.container).NewTask(*pod.ctx, cio.NewCreator(cio.WithStdio))
if err != nil {
return nil, err
}
exitStatusC, err := task.Wait(*pod.ctx)
if err != nil {
return nil, err
}
if err := task.Start(*pod.ctx); err != nil {
return nil, err
}
return &RunningPod{
Pod: pod,
task: &task,
exitStatusC: exitStatusC,
}, nil
}
Next, we implement the Kill method on the running pod: it kills the existing process, deletes the task, and returns the exit status code. We also implement the Delete method on an existing pod, which deletes the container and closes the client connection:
func (pod *RunningPod) Kill() (uint32, error) {
// kill the process and get the exit status
if err := (*pod.task).Kill(*pod.Pod.ctx, syscall.SIGTERM); err != nil {
return 0, err
}
// wait for the process to fully exit and print out the exit status
status := <-pod.exitStatusC
code, _, err := status.Result()
if err != nil {
return 0, err
}
if _, err := (*pod.task).Delete(*pod.Pod.ctx); err != nil {
return 0, err
}
return code, nil
}
func (pod *Pod) Delete() {
(*pod.container).Delete(*pod.ctx, containerd.WithSnapshotCleanup)
pod.client.Close()
}
In cmd/pod.go we implement a simple command for creating and running a pod, and we define the flags --registry and --name:
func init() {
rootCmd.AddCommand(podCmd)
podCmd.AddCommand(listCmd)
podCmd.AddCommand(createCmd)
createCmd.Flags().StringVar(&imageRegistry, "registry", "", "image registry to pull (required)")
createCmd.MarkFlagRequired("registry")
createCmd.Flags().StringVar(&name, "name", "nameless", "the pod name")
}
And we add a simple implementation that creates and runs the pod, and after 3 seconds kills and deletes it:
var (
imageRegistry string
name string
)
var createCmd = &cobra.Command{
Use: "create",
Short: "Create new pod",
RunE: func(cmd *cobra.Command, args []string) error {
pod, err := pod.NewPod(imageRegistry, name)
if err != nil {
return err
}
fmt.Printf("pod created: %s\n", pod.Id)
fmt.Printf("starting pod\n")
runningPod, err := pod.Run()
if err != nil {
return err
}
fmt.Printf("pod started: %s\n", pod.Id)
time.Sleep(3 * time.Second)
fmt.Printf("killing pod\n")
code, err := runningPod.Kill()
if err != nil {
return err
}
fmt.Printf("pod killed: %s\n", pod.Id)
fmt.Printf("%s exited with status: %d\n", runningPod.Pod.Id, code)
pod.Delete()
fmt.Printf("container deleted: %s\n", pod.Id)
return nil
},
}
So we have implemented everything necessary to create and run a pod; let's see it running! (Make sure containerd is running.)
In the terminal, build the Go project:
go build main.go
and let's create a new Redis pod:
❯ sudo ./main pod create --registry docker.io/library/redis:alpine --name redis
pod created: redis-73c4d234-2fe1-4b8f-bfe4-aa9044dc064a
starting pod
pod started: redis-73c4d234-2fe1-4b8f-bfe4-aa9044dc064a
1:C 07 Oct 2022 10:53:31.804 # oO0OoO0OoO0Oo Redis is starting oO0OoO0OoO0Oo
1:C 07 Oct 2022 10:53:31.804 # Redis version=7.0.5, bits=64, commit=00000000, modified=0, pid=1, just started
1:C 07 Oct 2022 10:53:31.804 # Warning: no config file specified, using the default config. In order to specify a config file use redis-server /path/to/redis.conf
1:M 07 Oct 2022 10:53:31.805 # You requested maxclients of 10000 requiring at least 10032 max file descriptors.
1:M 07 Oct 2022 10:53:31.805 # Server can't set maximum open files to 10032 because of OS error: Operation not permitted.
1:M 07 Oct 2022 10:53:31.805 # Current maximum open files is 1024. maxclients has been reduced to 992 to compensate for low ulimit. If you need higher maxclients increase 'ulimit -n'.
1:M 07 Oct 2022 10:53:31.805 * monotonic clock: POSIX clock_gettime
1:M 07 Oct 2022 10:53:31.805 * Running mode=standalone, port=6379.
1:M 07 Oct 2022 10:53:31.805 # Server initialized
1:M 07 Oct 2022 10:53:31.805 # WARNING overcommit_memory is set to 0! Background save may fail under low memory condition. To fix this issue add 'vm.overcommit_memory = 1' to /etc/sysctl.conf and then reboot or run the command 'sysctl vm.overcommit_memory=1' for this to take effect.
1:M 07 Oct 2022 10:53:31.806 * Ready to accept connections
killing pod
1:signal-handler (1665140014) Received SIGTERM scheduling shutdown...
1:M 07 Oct 2022 10:53:34.816 # User requested shutdown...
1:M 07 Oct 2022 10:53:34.816 * Saving the final RDB snapshot before exiting.
1:M 07 Oct 2022 10:53:34.895 * DB saved on disk
1:M 07 Oct 2022 10:53:34.895 # Redis is now ready to exit, bye bye...
pod killed: redis-73c4d234-2fe1-4b8f-bfe4-aa9044dc064a
redis-73c4d234-2fe1-4b8f-bfe4-aa9044dc064a exited with status: 0
container deleted: redis-73c4d234-2fe1-4b8f-bfe4-aa9044dc064a
It works!
In the next article we will implement listing and deleting pods by command.
The full source code can be found here; the changes were in pkg/pod/service.go and cmd/pod.go.