Things I did to run Kubernetes Pods with Terraform
Yasunori Tanaka
Posted on November 17, 2019
Terraform
I created the infrastructure for my project with Terraform and set up the Kubernetes config for Google Kubernetes Engine (GKE):
gcloud container clusters list
gcloud config set project <your-project-name>
gcloud config list
gcloud container clusters list
gcloud container clusters get-credentials dev-cluster
kubectl config current-context
kubectl config view
kubectl config use-context gke_<your-project-name>_asia-northeast1-a_dev-cluster
kubectl apply -f ./manifests/generated/dev -R
We need to create credentials.json for the Cloud SQL Proxy and then apply the manifests that use it.
What I did to apply the manifests
- Create a service account key for the Cloud SQL Proxy credential and encode it with base64 (sketched below).
- Create a Secret in Kubernetes.
- Create dev.env.
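For the first two steps, a minimal sketch (the service-account name and the Secret name are assumptions, not necessarily what the project uses):

# Create a key file for the proxy's service account
gcloud iam service-accounts keys create credentials.json --iam-account=sql-proxy@<your-project-name>.iam.gserviceaccount.com
# kubectl base64-encodes the file when it stores the Secret
kubectl create secret generic cloudsql-credentials --from-file=credentials.json

The env file then becomes its own Secret: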
kubectl create secret generic env-config --from-env-file envs/dev.env
kubectl logs dev-delivery-74b5975c64-cr2bx -c delivery
kubectl logs dev-delivery-74b5975c64-cr2bx -c cloudsql-proxy
Change the Cloud SQL Proxy instances option to "-instances={{ .Values.project_name }}:asia-northeast1:{{ .Values.env }}-master01-4eacaf8e22e3aa79=tcp:3306". Use the instance connection name.
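The connection name can be read directly from gcloud (a sketch; dev-master01 is the instance name that appears later in this post):

gcloud sql instances describe dev-master01 --format='value(connectionName)'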
Migration
- Launch the Cloud SQL Proxy (see the sketch after this list).
- Change the data source in dbconfig.yml.
- Create the database rdn.
- Apply the migration files.
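A rough sketch of these steps, assuming the migrations are managed with sql-migrate (dbconfig.yml is its configuration file) and that the env name dev and the instance name are as above:

# Run the proxy locally with the same flags the manifests use
./cloud_sql_proxy -instances=<your-project-name>:asia-northeast1:dev-master01=tcp:3306 -credential_file=credentials.json &
# Point the datasource in dbconfig.yml at 127.0.0.1:3306, then:
mysql -h 127.0.0.1 -P 3306 -u root -p -e 'CREATE DATABASE rdn;'
sql-migrate up -env=dev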
Create a Docker image and push it:
CGO_ENABLED=0 GOOS=linux GOARCH=amd64 go build -a -installsuffix cgo -o dist/app ./microservices/device/main.go
docker build -t asia.gcr.io/<your-project-name>/device . --build-arg MICROSERVICE_NAME=device
docker push asia.gcr.io/<your-project-name>/device
The tag is used to determine which image the manifests pull from GCR.
I don't know a good way to determine values for CPU and memory resources. Set generous values for CPU and memory; if you skimp, you will run into problems like these:
occurred upload error when CreateResource call repository: Post https://www.googleapis.com/upload/storage/v1/b/dev-delivery-resources/o?alt=json&prettyPrint=false&projection=full&uploadType=multipart: oauth2: cannot fetch token: Post https://oauth2.googleapis.com/token: net/http: TLS handshake timeout
UploadGcs Error ; msg objectName: d55bddf8d62910879ed9f605522149a81569317440.mp4 errors: Post https://www.googleapis.com/upload/storage/v1/b/dev-delivery-resources/o?alt=json&prettyPrint=false&projection=full&uploadType=multipart: oauth2: cannot fetch token: Post https://oauth2.googleapis.com/token: net/http: TLS handshake timeout
I thought the errors above were authentication errors; however, they were actually caused by a lack of resources (CPU and memory).
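In other words, raising the requests and limits fixes them. As a hedged example, that can be done without editing the manifests (the deployment and container names come from the logs above; the values are placeholders, not recommendations):

kubectl set resources deployment dev-delivery -c delivery --requests=cpu=250m,memory=256Mi --limits=cpu=500m,memory=512Mi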
Build delivery, then docker build and push:
CGO_ENABLED=0 GOOS=linux GOARCH=amd64 go build -a -installsuffix cgo -o dist/app ./microservices/delivery/main.go
sh docker.sh register delivery
An unknown error occurred, please check the HTTP result code and inner exception for server response.
If this error shows up, make sure the Firebase Storage bucket and the GCS bucket are linked (Firebase Storage is backed by a GCS bucket, so both should point at the same one).
Tips
If we upgrade a version, we will need to change the templates accordingly, so we usually should not bump versions without a reason.
We use an Ingress to publish an IP outside the cluster. Behind it, either a Service of type NodePort or LoadBalancer can be used.
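For a quick check without writing a manifest, kubectl expose can create such a Service (the port numbers here are assumptions):

kubectl expose deployment dev-delivery --type=LoadBalancer --port=80 --target-port=8080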
Helm can use a conditional template, such as:
{{- if eq .Values.env "dev" }}
"-instances={{ .Values.project_name }}:asia-northeast1:{{ .Values.env }}-master01=tcp:3306,<your-project-name>:asia-northeast1:{{ .Values.env }}-slave01=tcp:3307",
{{- else }}
"-instances=<your-project-name>:asia-northeast1:{{ .Values.env }}-master01=tcp:3306,<your-project-name>:asia-northeast1:{{ .Values.env }}-slave01=tcp:3307,<your-project-name>:asia-northeast1:{{ .Values.env }}-slave02=tcp:3308",
{{- end }}
"-credential_file=/secrets/credentials.json"]
I used a random ID as a suffix on the Cloud SQL instance name. However, we don't usually apply the manifests repeatedly, so I removed the random ID. If we need it again, I will add it back.
Q: I want to install a specific version of Terraform.
A: I solved this with tfswitch.
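As a sketch (the install command and the version number are examples; check the tfswitch README for your platform):

brew install warrensbox/tap/tfswitch
tfswitch 0.12.13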
Error: Error, failed to create instance dev-master01: googleapi: Error 400: The incoming request contained invalid data., invalidRequest
on modules/google-cloud-sql/main.tf line 1, in resource "google_sql_database_instance" "master":
1: resource "google_sql_database_instance" "master" {
The Cloud SQL and Terraform documentation has no information about the minimum disk size for Cloud SQL Second Generation instances. It appears only in the Cloud SQL API documentation: the minimum is 10 GB. If we specify a disk size of less than 10 GB, the error above occurs. In Terraform this corresponds to disk_size in the settings block of google_sql_database_instance.
"dataDiskSizeGb": "A String", # The size of data disk, in GB. The data disk size minimum is 10GB. Not used for First Generation instances.
I often ran into the error "The incoming request contained invalid data." It was caused by invalid parameters in my request. First, check the points below against the GCP and GCP API documentation:
- Is the value in a valid range?
- Is the format correct?