Multi-region YugabyteDB deployment on AWS EKS with Istio


Vishnu Hari Dadhich

Posted on May 2, 2024


In today’s distributed cloud landscape, deploying applications across multiple regions and clusters is crucial for scalability, reliability, and performance. This blog post will guide you through setting up a multi-region, multi-cluster YugabyteDB deployment on AWS EKS with Istio service mesh.

WHY YUGABYTEDB?

YugabyteDB is a transactional database that brings together four must-have capabilities of cloud native apps: SQL as a flexible query language, low-latency performance, continuous availability, and globally distributed scalability. Other databases do not serve all four of these needs simultaneously.

  • Monolithic SQL databases offer SQL and low-latency reads, but they can neither tolerate failures nor scale writes across multiple nodes, zones, regions, and clouds.
  • Distributed NoSQL databases offer read performance, high availability, and write scalability, but give up on SQL features such as relational data modelling and ACID transactions.

WHY AWS EKS AND ISTIO?

AWS EKS provides a managed Kubernetes service, simplifying cluster management and deployment. Istio, an open-source service mesh, enables traffic management, security, and observability across microservices.

Combining YugabyteDB with AWS EKS and Istio creates a robust, scalable, and secure cloud-native architecture that spans multiple regions.

DEPLOYMENT OVERVIEW

Our deployment consists of:

  • Three AWS regions: Mumbai (ap-south-1), Singapore (ap-southeast-1), and Hyderabad (ap-south-2).
  • One EKS cluster in each region.
  • One YugabyteDB master and one YugabyteDB tserver in each EKS cluster.
  • Istio deployed in each cluster, with an east-west gateway providing a multi-cluster service mesh.

[Image: Multi-region deployment architecture, three EKS clusters connected through Istio east-west gateways]

DEPLOYMENT STEPS

Prerequisites

  • AWS account with at least three regions enabled
  • AWS user with access to create VPCs and EKS clusters using eksctl
  • eksctl, aws cli, kubectl, and git installed on your local system

Clone the following repo to follow along with this blog:

git clone https://github.com/vishnuhd/yugabyte-multiregion-aws-eks-istio.git
cd yugabyte-multiregion-aws-eks-istio

Deploy AWS EKS clusters

  • Deploy EKS clusters in three different regions (Mumbai, Singapore, and Hyderabad) using eksctl and the per-region cluster-config.yaml files from the repo:
{
  for region in mumbai singapore hyderabad; do
    echo -e "Creating EKS cluster in ${region}...\n"
    eksctl create cluster -f ${region}/cluster-config.yaml
    echo -e "-------------\n"
  done
}
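The cluster-config.yaml files in the repo are standard eksctl ClusterConfig manifests. If you are curious what such a manifest looks like, eksctl can generate an equivalent skeleton without creating anything; this is purely illustrative, the repo's files remain the source of truth:

# Prints a ClusterConfig skeleton for the Mumbai cluster without creating resources
eksctl create cluster --name yb-mumbai --region ap-south-1 --dry-run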
  • Rename the kube contexts to simplify the rest of this demo:
kubectl config rename-context 'yb-mumbai.ap-south-1.eksctl.io' mumbai
kubectl config rename-context 'yb-singapore.ap-southeast-1.eksctl.io' singapore
kubectl config rename-context 'yb-hyderabad.ap-south-2.eksctl.io' hyderabad

NOTE: By default, EKS does not grant the EBS permissions needed for dynamic PVC provisioning; follow this article to enable it. One way to do this with eksctl is sketched below.
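A sketch of the eksctl-based approach, shown for the Mumbai cluster (repeat per cluster and region; the role name is an arbitrary choice and <ACCOUNT_ID> is a placeholder for your AWS account ID):

# Associate an OIDC provider with the cluster (required for IAM roles for service accounts)
eksctl utils associate-iam-oidc-provider --cluster yb-mumbai --region ap-south-1 --approve

# Create an IAM role for the EBS CSI controller service account
eksctl create iamserviceaccount \
  --name ebs-csi-controller-sa \
  --namespace kube-system \
  --cluster yb-mumbai --region ap-south-1 \
  --attach-policy-arn arn:aws:iam::aws:policy/service-role/AmazonEBSCSIDriverPolicy \
  --role-only --role-name AmazonEKS_EBS_CSI_DriverRole-yb-mumbai \
  --approve

# Install the EBS CSI driver addon using that role (<ACCOUNT_ID> is a placeholder)
eksctl create addon \
  --name aws-ebs-csi-driver \
  --cluster yb-mumbai --region ap-south-1 \
  --service-account-role-arn arn:aws:iam::<ACCOUNT_ID>:role/AmazonEKS_EBS_CSI_DriverRole-yb-mumbai \
  --force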

Setup Istio

When configuring a production deployment of Istio, the key considerations are whether the mesh spans a single cluster or multiple clusters, how the Istio control plane is set up for high availability, and whether to run a single multi-cluster service mesh or a federated multi-mesh deployment. These are independent dimensions of an Istio deployment.

The Istio deployment models guide describes these options and considerations in detail. For this demo, we will install the Multi-Primary on different networks model.

Download Istio:

curl -L https://istio.io/downloadIstio | ISTIO_VERSION=1.21.0 sh -
cd istio-1.21.0
export PATH=$PWD/bin:$PATH

Plug in CA Certificates for Istio

In a multi-cluster environment, we want a single root CA that issues intermediate certificates to the Istio CA running in each cluster. This ensures that workloads across the clusters authenticate each other with mTLS certificates chained to a common root of trust.

  • Create a certs directory:
mkdir -p istio-1.21.0/certs
  • Generate the root CA certificate and key:
cd istio-1.21.0/certs
make -f ../tools/certs/Makefile.selfsigned.mk root-ca
  • For each cluster, generate an intermediate certificate and key for the Istio CA:
cd istio-1.21.0/certs

{
  for region in mumbai singapore hyderabad; do
    echo -e "Generating certs for cluster - ${region}...\n"
    make -f ../tools/certs/Makefile.selfsigned.mk yb-${region}-cacerts
    echo -e "-------------\n"
  done
}
  • In each cluster, create a secret named cacerts that includes the generated files ca-cert.pem, ca-key.pem, root-cert.pem, and cert-chain.pem:
{
  for region in mumbai singapore hyderabad; do
    echo -e "Creating namespace and secret for cluster - ${region}...\n"

    kubectl --context ${region} create namespace istio-system
    kubectl --context ${region} create secret generic cacerts -n istio-system \
          --from-file=istio-1.21.0/certs/yb-${region}/ca-cert.pem \
          --from-file=istio-1.21.0/certs/yb-${region}/ca-key.pem \
          --from-file=istio-1.21.0/certs/yb-${region}/root-cert.pem \
          --from-file=istio-1.21.0/certs/yb-${region}/cert-chain.pem

    echo -e "-------------\n"
  done
}

With the certificates in place, we are ready to install Istio in each cluster.

Install Istio

  • Install Istio in each cluster using istioctl and the per-region istio.yaml from the repo:
{
  for region in mumbai singapore hyderabad; do
    echo -e "Installing istio for cluster - ${region}...\n"

    istioctl install --context ${region} -f ./${region}/istio.yaml

    echo -e "-------------\n"
  done
}
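For a multi-primary, multi-network mesh, each region's istio.yaml is expected to set a shared mesh ID, a unique cluster name, and a unique network; the sketch below (for Mumbai) shows the kind of IstioOperator values involved, with names lining up with the flags passed to gen-eastwest-gateway.sh in the next step. The repo's istio.yaml files are authoritative.

# Prints a sketch of the expected per-region Istio configuration (Mumbai shown)
cat <<'EOF'
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
spec:
  values:
    global:
      meshID: mesh1
      multiCluster:
        clusterName: yb-mumbai
      network: network-mumbai
EOF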
  • Install a gateway in each cluster that is dedicated to east-west traffic. By default, this gateway will be public on the Internet; production systems may require additional access restrictions (e.g. firewall rules) to prevent external attacks. After installation, confirm that each gateway gets an external address, as checked below.
{
  for region in mumbai singapore hyderabad; do
    echo -e "Installing the east-west gateway for cluster - ${region}...\n"

    ./istio-1.21.0/samples/multicluster/gen-eastwest-gateway.sh \
        --mesh mesh1 --cluster yb-${region} --network network-${region} | \
        istioctl --context ${region} install -y -f -

    echo -e "-------------\n"
  done
}
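Before moving on, check that each east-west gateway has been assigned an external address (istio-eastwestgateway is the service created by gen-eastwest-gateway.sh):

{
  for region in mumbai singapore hyderabad; do
    echo -e "East-west gateway address for cluster - ${region}...\n"

    kubectl --context ${region} get svc istio-eastwestgateway -n istio-system

    echo -e "-------------\n"
  done
}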
  • Since the clusters are on separate networks, we need to expose all services (*.local) on the east-west gateway in all three clusters. While this gateway is public on the Internet, services behind it can only be accessed by services with a trusted mTLS certificate and workload ID, just as if they were on the same network. The manifest being applied is shown after the command.
{
  for region in mumbai singapore hyderabad; do
    echo -e "Exposing the services for cluster - ${region}...\n"

    kubectl --context ${region} apply -n istio-system -f \
        ./istio-1.21.0/samples/multicluster/expose-services.yaml

    echo -e "-------------\n"
  done
}
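For reference, at the time of writing the expose-services.yaml sample is essentially the following Gateway (shown here as an inline apply for a single cluster); AUTO_PASSTHROUGH on port 15443 means the east-west gateway forwards mTLS traffic untouched to the destination workload:

# Equivalent of the sample file, applied inline to the Mumbai cluster
kubectl --context mumbai apply -n istio-system -f - <<'EOF'
apiVersion: networking.istio.io/v1beta1
kind: Gateway
metadata:
  name: cross-network-gateway
spec:
  selector:
    istio: eastwestgateway
  servers:
    - port:
        number: 15443
        name: tls
        protocol: TLS
      tls:
        mode: AUTO_PASSTHROUGH
      hosts:
        - "*.local"
EOF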
  • Install remote secrets in each cluster so that each control plane can access the other clusters' Kube API servers; a quick verification follows the loop.
{
  for region1 in mumbai singapore hyderabad; do
    for region2 in mumbai singapore hyderabad; do
      if [[ "${region1}" == "${region2}" ]]; then continue; fi
      echo -e "Create remote secret of ${region1} in ${region2}...\n"

      istioctl create-remote-secret \
        --context ${region1} \
        --name=yb-${region1} | \
        kubectl apply -f - --context ${region2}

      echo -e "-------------\n"
    done
  done
}
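To verify that each control plane can now see the other clusters, list the registered remote clusters (istioctl remote-clusters is available in recent Istio releases) or the remote secrets themselves:

{
  for region in mumbai singapore hyderabad; do
    echo -e "Remote clusters registered in - ${region}...\n"

    istioctl remote-clusters --context ${region}
    kubectl --context ${region} -n istio-system get secrets -l istio/multiCluster=true

    echo -e "-------------\n"
  done
}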

Install YugabyteDB

  • To install YugabyteDB using helm charts, add the chart repo:
helm repo add yugabytedb https://charts.yugabyte.com
helm repo update
  • Create a YugabyteDB namespace (yb-demo) in each cluster and enable Istio injection:
{
  for region in mumbai singapore hyderabad; do
    echo -e "Creating namespace for cluster - ${region}...\n"

    kubectl --context ${region} create namespace yb-demo
    kubectl label --context ${region} namespace yb-demo istio-injection=enabled

    echo -e "-------------\n"
  done
}
  • Install YugabyteDB:
{
  for region in mumbai singapore hyderabad; do
    echo -e "Installing YugabyteDB in cluster - ${region}...\n"

    helm upgrade --install ${region} yugabytedb/yugabyte \
        --version 2.19.3 \
        --namespace yb-demo \
        -f ${region}/overrides.yaml \
        --kube-context ${region} --wait

    echo -e "-------------\n"
  done
}

This will install YugabyteDB in each EKS cluster with one master and one tserver, connected to each other through the Istio service mesh. At this point, it is important to understand the parameters set in each region's overrides.yaml file: every master and tserver pod needs to know all the master addresses in order to replicate data and elect leaders. You can read back the values that were applied, as shown below.
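A quick way to review those parameters per region is to ask Helm for the values it applied. Expect to see, per region, the full list of master addresses, one master and one tserver replica with a total of three masters, and placement settings for the cloud, region, and zone; treat this description as approximate, the chart's key names in the repo's overrides.yaml files are authoritative:

{
  for region in mumbai singapore hyderabad; do
    echo -e "Helm values applied in cluster - ${region}...\n"

    helm get values ${region} --namespace yb-demo --kube-context ${region}

    echo -e "-------------\n"
  done
}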

  • In addition to the Istio setup, one extra step is required: creating identical Kubernetes services in all clusters to enable DNS service discovery. More information can be found here. We therefore replicate the yugabyte master and tserver services of the Mumbai region into the Singapore and Hyderabad regions, and vice versa.
{
  for region1 in mumbai singapore hyderabad; do
    for region2 in mumbai singapore hyderabad; do
      if [[ "${region1}" == "${region2}" ]]; then continue; fi
      echo -e "Creating services of ${region2} in ${region1}...\n"

      kubectl --context ${region1} apply -f ${region1}/services-${region2}.yaml -n yb-demo

      echo -e "-------------\n"
    done
  done
}
  • Check the YugabyteDB pods and services; all of them should be up and running:
{
  for region in mumbai singapore hyderabad; do
    echo -e "Checking YugabyteDB pods and svcs for cluster - ${region}...\n"

    kubectl --context ${region} get pods,svc -A

    echo -e "-------------\n"
  done
}
  • Finally, we need to configure the placement information so that YugabyteDB distributes data correctly across the three regions; a quick check of the resulting config follows the command:
kubectl --context mumbai exec -n yb-demo mumbai-yugabyte-yb-master-0 -- bash \
-c "/home/yugabyte/master/bin/yb-admin --master_addresses mumbai-yugabyte-yb-master-0.yb-demo.svc.cluster.local,hyderabad-yugabyte-yb-master-0.yb-demo.svc.cluster.local,singapore-yugabyte-yb-master-0.yb-demo.svc.cluster.local modify_placement_info aws.ap-south-1.ap-south-1a,aws.ap-south-2.ap-south-2a,aws.ap-southeast-1.ap-southeast-1a 3"
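To confirm the placement took effect, you can read back the universe config with yb-admin (same master addresses and binary path as above):

kubectl --context mumbai exec -n yb-demo mumbai-yugabyte-yb-master-0 -- bash \
-c "/home/yugabyte/master/bin/yb-admin --master_addresses mumbai-yugabyte-yb-master-0.yb-demo.svc.cluster.local,hyderabad-yugabyte-yb-master-0.yb-demo.svc.cluster.local,singapore-yugabyte-yb-master-0.yb-demo.svc.cluster.local get_universe_config"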

Voilà, your multi-region YugabyteDB setup is now complete!

Access the YugabyteDB UI

  • Find the yb-master-ui service in the yb-demo namespace of any cluster and open it in a browser on port 7000. If the service is not exposed externally, you can port-forward it as shown below.
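A minimal way to do the port-forward (the exact service name may carry the release prefix, so look it up first; the placeholder below is to be substituted):

kubectl --context mumbai -n yb-demo get svc | grep master-ui
# Replace <master-ui-service> with the name printed above; 7000 is the master UI port
kubectl --context mumbai -n yb-demo port-forward svc/<master-ui-service> 7000:7000
# Then open http://localhost:7000 in your browser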

[Image: YugabyteDB master UI showing the masters spread across regions]

As we can see, the masters are spread across the regions, with the Hyderabad master as the leader.

  • Explore the tablet servers:

[Image: YugabyteDB master UI listing the tablet servers across regions]

Similarly, we can see the tablet servers distributed across multiple regions, each of them able to handle reads and writes.

  • Run a sample Yugabyte application in any of the clusters:
kubectl run yb-sample-apps \
    -it --rm \
    --image yugabytedb/yb-sample-apps \
    --namespace yb-demo \
    --context singapore \
    --command -- sh

java -jar yb-sample-apps.jar java-client-sql \
    --workload SqlInserts \
    --nodes yb-tserver-common.yb-demo.svc.cluster.local:5433 \
    --num_threads_write 1 \
    --num_threads_read 2

Here, we are targeting the yb-tserver-common service for reads and writes, which picks any tserver in any of the regions. This also helps load-balance traffic across regions. You can inspect the service as shown below.
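To see what the common service looks like in a given cluster (each cluster holds only its local endpoints; Istio merges the remote ones over the east-west gateways), inspect it along with the labels on the tserver pods it selects:

kubectl --context singapore -n yb-demo get svc yb-tserver-common -o yaml
kubectl --context singapore -n yb-demo get endpoints yb-tserver-common
kubectl --context singapore -n yb-demo get pods --show-labels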

  • We can also see the tables created by this sample app:

[Image: Tables created by the sample app, as seen in the master UI]

DISASTER RECOVERY

Disasters can happen at any time. YugabyteDB handles them through synchronous replication, governed by the Replication Factor (RF); configurations usually use an RF of 3. In this setup, a write to the leader requires an acknowledgement from one follower before being committed, as the leader and one follower together constitute a majority. In the event of a failure, replicas that can still form a Raft majority continue to serve consistent reads and writes, while replicas separated from the quorum cannot make progress. We will now see this in action; before triggering a failure, let's check the current leaders from the command line (below) and in the UI.
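A quick command-line check using the same yb-admin invocation pattern as earlier (list_all_masters prints each master's address, state, and role):

kubectl --context mumbai exec -n yb-demo mumbai-yugabyte-yb-master-0 -- bash \
-c "/home/yugabyte/master/bin/yb-admin --master_addresses mumbai-yugabyte-yb-master-0.yb-demo.svc.cluster.local,hyderabad-yugabyte-yb-master-0.yb-demo.svc.cluster.local,singapore-yugabyte-yb-master-0.yb-demo.svc.cluster.local list_all_masters"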

  • Current leaders for master and tservers:

[Image: Master UI showing the current master leader]

[Image: Master UI showing tablet server activity across regions]

Currently, the master pod in the Hyderabad region is the LEADER, while most of the transactions are handled by the tablet server in the Mumbai region.

  • Let’s simulate the Hyderabad region going down by scaling its pods to zero:
kubectl scale sts hyderabad-yugabyte-yb-master --replicas 0 -n yb-demo --context hyderabad
kubectl scale sts hyderabad-yugabyte-yb-tserver --replicas 0 -n yb-demo --context hyderabad

As soon as we scale the replicas down to 0 for both the master and the tserver, we can see errors in the YugabyteDB master UI:

[Image: Master UI showing errors for the Hyderabad region]

[Image: Master UI showing the Mumbai master elected as the new leader]

Here we can see that the master in the Mumbai region has been elected as the new leader, and transactions continue to be served even though an entire region is down.

  • YugabyteDB also keeps track of under-replicated tablets, so that they can be re-replicated to the region as soon as it comes back online:

[Image: Master UI showing under-replicated tablets]
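To bring the region back, scale the Hyderabad StatefulSets up again (mirroring the scale-down commands above):

kubectl scale sts hyderabad-yugabyte-yb-master --replicas 1 -n yb-demo --context hyderabad
kubectl scale sts hyderabad-yugabyte-yb-tserver --replicas 1 -n yb-demo --context hyderabad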

  • When the region comes online again, the data is replicated back to the Hyderabad region:

[Images: Master UI after the Hyderabad region comes back online, with masters and tablet servers healthy and data re-replicated]

Awesome! You now have a fault-tolerant, multi-region YugabyteDB setup on AWS EKS clusters with Istio as the service mesh.

BONUS: SETUP KIALI FOR OBSERVABILITY

The Istio download package ships with sample addons for Kiali and Prometheus; let's set them up to get a nice view of all the services in the mesh.

  • Install Kiali and Prometheus:
{
  for region in mumbai singapore hyderabad; do
    echo -e "Checking Kiali and Prometheus for cluster - ${region}...\n"

    kubectl apply -f istio-1.21.0/samples/addons/prometheus.yaml --context ${region}
    kubectl apply -f istio-1.21.0/samples/addons/kiali.yaml --context ${region}

    echo -e "-------------\n"
  done
}
  • Open the Kiali dashboard:
istioctl dashboard kiali --context singapore

[Image: Kiali service graph as seen from the Singapore cluster]

The above Kiali graph shows the clusters and services in the mesh from the point of view of the EKS cluster in the Singapore region.

CLEANING UP THE RESOURCES

  • Uninstall YugabyteDB:
{
  for region in mumbai singapore hyderabad; do
    echo -e "Un-installing YugabyteDB in cluster - ${region}...\n"

    helm uninstall ${region} --namespace yb-demo --kube-context ${region}
    kubectl delete pvc --namespace yb-demo \
      --selector component=yugabytedb,release=${region} \
      --context ${region}

    echo -e "-------------\n"
  done
}
  • Delete additional YB services:
{
  for region1 in mumbai singapore hyderabad; do
    for region2 in mumbai singapore hyderabad; do
      if [[ "${region1}" == "${region2}" ]]; then continue; fi
      echo -e "Deleting services of ${region2} in ${region1}...\n"

      kubectl --context ${region1} delete -f ${region1}/services-${region2}.yaml -n yb-demo

      echo -e "-------------\n"
    done
  done
}
  • Un-install Kiali and Prometheus:
{
  for region in mumbai singapore hyderabad; do
    echo -e "Checking Kiali and Prometheus for cluster - ${region}...\n"

    kubectl delete -f istio-1.21.0/samples/addons/prometheus.yaml --context ${region}
    kubectl delete -f istio-1.21.0/samples/addons/kiali.yaml --context ${region}

    echo -e "-------------\n"
  done
}
  • Uninstall Istio:
{
  for region in mumbai singapore hyderabad; do
    echo -e "Un-installing Istio in cluster - ${region}...\n"

    istioctl uninstall --purge -y --context ${region}

    echo -e "-------------\n"
  done
}
  • Delete EKS clusters:
eksctl delete cluster yb-mumbai --region ap-south-1
eksctl delete cluster yb-singapore --region ap-southeast-1
eksctl delete cluster yb-hyderabad --region ap-south-2

CONCLUSION

A multi-region, multi-cluster YugabyteDB deployment on AWS EKS with Istio provides a highly available, scalable, and secure architecture for distributed applications. By deploying YugabyteDB across multiple AWS regions and EKS clusters, this setup ensures redundancy and failover, minimizing downtime and supporting business continuity. Istio’s service mesh capabilities add advanced traffic management, security, and observability, allowing fine-grained control and monitoring of application traffic. This setup is ideal for organizations that need a robust and resilient infrastructure for their critical applications.

The original tech blog is here; please follow/subscribe to get notifications in your inbox when new content goes live. You can also find me on LinkedIn @ in/vishnuhd.
