Configuring an isolated network in Google Cloud

Chabane R.

Posted on June 23, 2021

In the first part, we introduced the security patterns that can be implemented to secure the connectivity between Google Kubernetes Engine and Cloud SQL. In this part, we will implement network isolation by deploying the following GCP resources:

  • VPC with 2 subnets
    • 1 web subnet for GKE.
    • 1 data subnet.
  • Cloud NAT attached to the web subnet.
  • Firewall rules to restrict access to subnets to only authorized networks.

It is recommended to group similar applications into fewer, larger, and more manageable subnets.

VPC isolated network

If you have multiple GKE clusters per environment, Google Cloud recommends using a Shared VPC to reduce management overhead and topology complexity.
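
We will not use a Shared VPC in this series, but for illustration, setting one up with Terraform mainly consists of designating a host project and attaching the service projects that consume its subnets. A minimal sketch, assuming hypothetical project IDs:

# Designate the project that owns the VPC as a Shared VPC host project.
resource "google_compute_shared_vpc_host_project" "host" {
  project = "my-host-project-id" # hypothetical host project ID
}

# Attach a service project (e.g. the project running GKE) to the host project.
resource "google_compute_shared_vpc_service_project" "gke" {
  host_project    = google_compute_shared_vpc_host_project.host.project
  service_project = "my-gke-project-id" # hypothetical service project ID
}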

VPC

Let's start with the Virtual Private Cloud.

Create a Terraform file infra/plan/vpc.tf with:

  • A simple VPC resource.
  • The web subnet, which will host our Google Kubernetes Engine cluster.
  • The data subnet, which could host your Cloud Dataflow jobs, Cloud Composer environments, etc.

resource "google_compute_network" "custom" {
  name                    = "custom"
  auto_create_subnetworks = "false" 
  routing_mode            = "GLOBAL"
}

resource "google_compute_subnetwork" "web" {
  name          = "web"
  ip_cidr_range = "10.10.10.0/24"
  network       = google_compute_network.custom.id
  region        = var.region

  secondary_ip_range  = [
    {
        range_name    = "services"
        ip_cidr_range = "10.10.11.0/24"
    },
    {
        range_name    = "pods"
        ip_cidr_range = "10.1.0.0/20"
    }
  ]

  private_ip_google_access = true
}

resource "google_compute_subnetwork" "data" {
  name          = "data"
  ip_cidr_range = "10.20.10.0/24"
  network       = google_compute_network.custom.id
  region        = var.region

  private_ip_google_access = true
}
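
The secondary ranges will be consumed by GKE in the next part. As an illustration only (not part of this deployment), a VPC-native cluster references them by name through ip_allocation_policy; a minimal sketch, assuming a hypothetical cluster name:

# Hypothetical sketch: how a VPC-native GKE cluster consumes the secondary ranges.
resource "google_container_cluster" "web" {
  name               = "web-cluster" # hypothetical name
  location           = var.region
  network            = google_compute_network.custom.id
  subnetwork         = google_compute_subnetwork.web.id
  initial_node_count = 1

  ip_allocation_policy {
    cluster_secondary_range_name  = "pods"     # pod IPs come from 10.1.0.0/20
    services_secondary_range_name = "services" # service IPs come from 10.10.11.0/24
  }
}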

Cloud NAT

To allow the web subnet used by GKE to access the internet, we need to create a Cloud NAT gateway. We associate the Cloud NAT gateway with the subnet through a Cloud Router. We also reserve a static external address, so that outbound traffic always leaves through a stable IP that external services can allowlist.

Create a Terraform file infra/plan/nat.tf:

resource "google_compute_address" "web" {
  name    = "web"
  region  = var.region
}

resource "google_compute_router" "web" {
  name    = "web"
  network = google_compute_network.custom.id
}

resource "google_compute_router_nat" "web" {
  name                               = "web"
  router                             = google_compute_router.web.name
  nat_ip_allocate_option             = "MANUAL_ONLY"
  nat_ips                            = [ google_compute_address.web.self_link ]
  source_subnetwork_ip_ranges_to_nat = "LIST_OF_SUBNETWORKS" 
  subnetwork {
    name                    = google_compute_subnetwork.web.id
    source_ip_ranges_to_nat = ["ALL_IP_RANGES"]
  }
  depends_on                         = [ google_compute_address.web ]
}

Firewall

Firewall rules allow us to restrict inbound and outbound network traffic to and from VM instances, selected by network tag or by service account. In our case, we implement the following rules:

  • A rule that restricts access to the Cloud SQL MySQL port (3306) to the GKE nodes only.
  • A rule that restricts all other network access to authorized networks only.

Create a Terraform file infra/plan/firewall.tf:

resource "google_compute_firewall" "mysql" {
  name    = "allow-only-gke-cluster"
  network = google_compute_network.custom.name

  allow {
    protocol = "tcp"
    ports    = ["3306"]
  }

  priority = 1000

  source_ranges = ["10.10.10.0/24"]
}

resource "google_compute_firewall" "web" {
  name    = "allow-only-authorized-networks"
  network = google_compute_network.custom.name

  allow {
    protocol = "tcp"
  }

  priority = 1000

  source_ranges = var.authorized_source_ranges
}

Let's configure our Terraform project.

Create a Terraform file infra/plan/variable.tf:

variable "region" {
  type = string
  default = "europe-west1"
}

variable "authorized_source_ranges" {
  type        = list(string)
  description = "Addresses or CIDR blocks which are allowed to connect to GKE API Server."
}

Add an infra/plan/version.tf file:

terraform {
  required_providers {
    google = {
      source = "hashicorp/google"
      version = "3.71.0"
    }
  }
}

Add an infra/plan/provider.tf file:

provider "google" {
  region  = "europe-west1"
}

And an infra/plan/backend.tf file:

terraform {
  backend "gcs" {
  }
}

Now, export the following variables and create a bucket to store your Terraform state.

export PROJECT_ID=<PROJECT_ID>
export REGION=<REGION>
export TERRAFORM_BUCKET_NAME=<BUCKET_NAME>
export AUTHORIZED_NETWORK=<AUTHORIZED_NETWORK> # CIDR block allowed to reach the network, used below

gcloud config set project ${PROJECT_ID}

gsutil mb -c standard -l ${REGION} gs://${TERRAFORM_BUCKET_NAME}
gsutil versioning set on gs://${TERRAFORM_BUCKET_NAME}

Create an infra/plan/terraform.tfvars file and deploy the infrastructure:

authorized_source_ranges = ["<AUTHORIZED_NETWORK>"]

cd infra/plan

sed -i "s,<AUTHORIZED_NETWORK>,$AUTHORIZED_NETWORK,g" terraform.tfvars

terraform init \
  -backend-config="bucket=${TERRAFORM_BUCKET_NAME}" \
  -backend-config="prefix=state"

terraform apply

Let's check that all the resources have been created and are working correctly.
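
You can verify everything in the Cloud Console, or from the command line with gcloud (assuming the CLI is authenticated against the same project); a quick check could look like this:

# VPC and subnets
gcloud compute networks describe custom
gcloud compute networks subnets list --filter="network:custom"

# Cloud Router and its NAT configuration
gcloud compute routers list
gcloud compute routers nats list --router=web --region=${REGION}

# Firewall rules attached to the VPC
gcloud compute firewall-rules list --filter="network:custom"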

VPC & Subnets

Cloud NAT

Firewall rules

Conclusion

Our network is now ready to host our GCP resources. In the next part, we will focus on setting up GKE Autopilot.
