Mini gRPC Project (2): Deploying the gRPC API on k8s

Reishi Mitani (greenteabiscuit)

Posted on October 16, 2020

This is part 2 of the Mini gRPC Project series; part 1 is Mini gRPC Project (1): Creating a Simple Increment API on Go.

Prerequisites

  • macOS Catalina
  • GOPATH already configured
  • A GCP project of your own

Overview of the Directory

$ tree
.
├── README.md
├── infrastructure
│   ├── backend-deployment.yml
│   ├── backend-service.yml
│   ├── frontend-deployment.yml
│   └── frontend-service.yml
├── proto
│   ├── calc.proto
│   └── gen
│       └── calc.pb.go
└── src
    ├── backend
    │   ├── Dockerfile
    │   └── main.go
    └── frontend
        ├── Dockerfile
        └── main.go

6 directories, 11 files

Create Dockerfiles

src/frontend/Dockerfile

Make sure to push your generated proto code to your GitHub account so that you can import it.

FROM golang:1.15

ENV HOME /root
ENV PATH $PATH:/usr/local/go/bin
ENV GOPATH /go 
RUN echo $GOPATH
RUN go get -u github.com/golang/protobuf/protoc-gen-go
RUN go get -u github.com/grpc-ecosystem/go-grpc-middleware/logging/zap
RUN go get -u go.uber.org/zap

# Make sure to push your protos to your GitHub account
RUN go get -u github.com/YOURACCOUNT/micro-prac/proto/gen
WORKDIR /go/src/micro-sample-frontend
COPY . .

RUN go build -o /usr/local/bin/micro-sample-frontend

CMD ["micro-sample-frontend"]

src/backend/Dockerfile

FROM golang:1.15

ENV HOME /root
ENV PATH $PATH:/usr/local/go/bin
ENV GOPATH /go 
RUN echo $GOPATH
RUN go get -u github.com/golang/protobuf/protoc-gen-go
RUN go get -u github.com/grpc-ecosystem/go-grpc-middleware/logging/zap
RUN go get -u go.uber.org/zap

# Make sure to push your protos to your GitHub account
RUN go get -u github.com/YOURACCOUNT/micro-prac/proto/gen

WORKDIR /go/src/micro-sample-backend
COPY . .

RUN go build -o /usr/local/bin/micro-sample-backend

CMD ["micro-sample-backend"]

Create the Docker Images

To build the images, run the following commands from src/frontend and src/backend respectively.

docker build -t gcr.io/$PROJECT_ID/micro-sample-frontend:v0.1 .
docker build -t gcr.io/$PROJECT_ID/micro-sample-backend:v0.1 .
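
Here $PROJECT_ID is your GCP project ID. Assuming gcloud is already configured for that project (see the next section), one way to set it is:

export PROJECT_ID=$(gcloud config get-value project)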

Create GCP project

I created a project, and in GKE I created a cluster called micro-sample.


On your local machine, run the following:

$ gcloud init
Welcome! This command will take you through the configuration of gcloud.

# Choose the correct configuration for your project.
.......

$ gcloud container clusters get-credentials micro-sample --zone="asia-northeast1-a"
Fetching cluster endpoint and auth data.
kubeconfig entry generated for micro-sample.

$ kubectl config current-context
gke_$PROJECT_ID_asia-northeast1-a_micro-sample

Push the images to GCR

Now that the GCP project is created, we will push the built images to GCR.

docker push gcr.io/$PROJECT_ID/micro-sample-frontend:v0.1
docker push gcr.io/$PROJECT_ID/micro-sample-backend:v0.1
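
If the push is rejected with an authentication error, Docker may first need to be wired up to your gcloud credentials:

gcloud auth configure-docker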

You should be able to see your repositories in your GCR.


Create the deployment and service YAML files

Make sure to replace $PROJECT_ID with your own GCP project ID. We will create a new folder named infrastructure and store all the YAML files in it.

infrastructure$ tree
.
├── backend-deployment.yml
├── backend-service.yml
├── frontend-deployment.yml
└── frontend-service.yml

0 directories, 4 files
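
If you prefer not to edit the files by hand, one possible way to substitute the value (reusing the $PROJECT_ID exported earlier, and assuming the BSD sed that ships with macOS) is:

sed -i '' "s/\$PROJECT_ID/${PROJECT_ID}/g" infrastructure/*.yml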

infrastructure/backend-deployment.yml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: micro-sample-backend-deployment
spec:
  replicas: 2
  selector:
    matchLabels:
      app: micro-sample
      tier: backend
  template:
    metadata:
      labels:
        app: micro-sample
        tier: backend
        track: stable
    spec:
      containers:
      - name: micro-sample
        image: gcr.io/$PROJECT_ID/micro-sample-backend:v0.1
        ports:
        - containerPort: 8000

infrastructure/backend-service.yml

kind: Service
apiVersion: v1
metadata:
  name: micro-sample-service-backend
spec:
  selector:
    app: micro-sample
    tier: backend
  ports:
  - protocol: TCP
    port: 8000
    targetPort: 8000

infrastructure/frontend-deployment.yml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: micro-sample-frontend-deployment
spec:
  replicas: 2
  selector:
    matchLabels:
      app: micro-sample
      tier: frontend
  template:
    metadata:
      labels:
        app: micro-sample
        tier: frontend
        track: stable
    spec:
      containers:
      - name: micro-sample-frontend
        image: gcr.io/$PROJECT_ID/micro-sample-frontend:v0.1
        ports:
          - containerPort: 8080
            name: http
        env:
          - name: BACKEND_SERVICE_NAME
            value: micro-sample-service-backend.default
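
The BACKEND_SERVICE_NAME value (micro-sample-service-backend.default) is the backend Service's cluster-DNS name. For reference, a minimal sketch of how the frontend from part 1 might use it to dial the backend; the :8000 port and grpc.WithInsecure() are assumptions here, not taken from the original code:

package main

import (
	"fmt"
	"log"
	"os"

	"google.golang.org/grpc"
)

func main() {
	// BACKEND_SERVICE_NAME is injected by the Deployment above and resolves
	// through cluster DNS to micro-sample-service-backend in the default namespace.
	addr := fmt.Sprintf("%s:8000", os.Getenv("BACKEND_SERVICE_NAME"))

	conn, err := grpc.Dial(addr, grpc.WithInsecure())
	if err != nil {
		log.Fatalf("failed to dial backend: %v", err)
	}
	defer conn.Close()

	log.Printf("connected to backend at %s", addr)
}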

infrastructure/frontend-service.yml

kind: Service
apiVersion: v1
metadata:
  name: micro-sample-service-frontend
spec:
  type: LoadBalancer
  selector:
    app: micro-sample
    tier: frontend
  ports:
  - protocol: TCP
    port: 80
    targetPort: 8080

Apply the configurations to the cluster

From the infrastructure directory, apply each manifest:

$ kubectl apply -f frontend-service.yml
service/micro-sample-service-frontend created

$ kubectl apply -f frontend-deployment.yml
deployment.apps/micro-sample-frontend-deployment created

$ kubectl apply -f backend-service.yml
service/micro-sample-service-backend created

$ kubectl apply -f backend-deployment.yml
deployment.apps/micro-sample-backend-deployment created
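
Alternatively, assuming you are in the repository root, all four manifests can be applied in one go:

kubectl apply -f infrastructure/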

Check the pods in your cluster

On your local machine, run kubectl get pods; you should see that all the pods are up and running.

$ kubectl get pods
NAME                                                READY   STATUS    RESTARTS   AGE
micro-sample-backend-deployment-854c888c95-46dp5    1/1     Running   0          98s
micro-sample-backend-deployment-854c888c95-l4g48    1/1     Running   0          98s
micro-sample-frontend-deployment-5cf875f7c9-2htft   1/1     Running   0          3m33s
micro-sample-frontend-deployment-5cf875f7c9-g4kpn   1/1     Running   0          3m33s

Check the external IP of the frontend service and run curl against it. You should get a result back from the API.

$ kubectl get svc
NAME                            TYPE           CLUSTER-IP   EXTERNAL-IP    PORT(S)        AGE
kubernetes                      ClusterIP      10.4.0.1     <none>         443/TCP        15m
micro-sample-service-backend    ClusterIP      10.4.8.222   <none>         8000/TCP       51s
micro-sample-service-frontend   LoadBalancer   10.4.15.80   EXTERNAL_IP   80:31017/TCP   7m44s

$ curl "http://EXTERNAL_IP/increment?val=1"
{"val":2}
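
If you would rather not copy the IP by hand, kubectl's jsonpath output can pull it out directly:

EXTERNAL_IP=$(kubectl get svc micro-sample-service-frontend -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
curl "http://${EXTERNAL_IP}/increment?val=1"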

Cleaning Up

We will delete both the frontend and backend services.

$ kubectl delete svc micro-sample-service-frontend
service "micro-sample-service-frontend" deleted

$ kubectl delete svc micro-sample-service-backend
service "micro-sample-service-backend" deleted

# check that they are deleted
$ kubectl get services
NAME                           TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)    AGE
kubernetes                     ClusterIP   10.4.0.1     <none>        443/TCP    14h

Finally, we will delete the cluster.

$ gcloud container clusters delete micro-sample
The following clusters will be deleted.
 - [micro-sample] in [asia-northeast1-a]

Do you want to continue (Y/n)?  Y

Deleting cluster micro-sample...done.

In the GKE console, we should see that the cluster has been deleted.
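
Optionally, the pushed images can also be removed from GCR so they do not keep accruing storage costs; a possible cleanup, assuming the same tags as above:

gcloud container images delete gcr.io/$PROJECT_ID/micro-sample-frontend:v0.1 --force-delete-tags
gcloud container images delete gcr.io/$PROJECT_ID/micro-sample-backend:v0.1 --force-delete-tags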

References

(Japanese only) 雰囲気でgRPC,GKE+kubernetes使ってマイクロサービス作る (roughly: "Building microservices with gRPC and GKE + Kubernetes, by feel")
