Containers in the Cloud: What Are Your Options?


Kentaro Wakayama

Posted on June 30, 2021


The cloud has opened up new possibilities for deploying microservice architectures. Managed and unmanaged container services, along with serverless hosting options, have transformed how container workloads are deployed and operated in the cloud.

While an unmanaged or build-your-own approach for your container ecosystem in the cloud gives you greater control over the stack, you need to own the end-to-end lifecycle, security, and operations of the solution. Managed container services, on the other hand, are hassle-free and more popular because they integrate with your existing cloud ecosystem and come with built-in best practices, security, and modularity.

All three leading cloud providers—AWS, Azure, and GCP—have a strong portfolio of products and services to support containerized workloads for cloud-native as well as hybrid deployments. Kubernetes (K8s) remains the most popular container orchestration solution in the cloud. This enterprise-class solution is also the preferred platform for production deployments. Each of the major cloud service providers offers a native managed Kubernetes service as well as standalone container solutions. Both options integrate easily with the robust ecosystem of supporting services offered by your cloud platform, including container registries, identity and access management, and security monitoring.

In this article, we’ll explore some of the more popular options for deploying containers in the cloud and their typical use cases.

Popular Options for Container Workloads in the Cloud

Each of the major cloud service providers offers a number of available options for container workload hosting, which we’ll examine in the following sections.

Amazon Web Services

AWS provides a diverse set of services for container workloads, the most popular being EKS, ECS, and Fargate. It also offers an extended ecosystem of services and tools, like AWS Deep Learning Containers for machine learning, Amazon Elastic Container Registry, and EKS Anywhere (coming in 2021) for hybrid deployments.

1. Amazon Elastic Kubernetes Service

Amazon Elastic Kubernetes Service (EKS) can be used to create managed Kubernetes clusters in AWS, where the deployment, scaling, and patching are all managed by the platform itself. The service is Kubernetes-certified and uses Amazon EKS Distro, an open-source Kubernetes distribution.

Since the control plane is managed by AWS, the solution automatically gets the latest security updates with zero downtime and ensures a safe hosting environment for containers. The service also has assured high availability, with an SLA of 99.95% uptime—achieved by deploying the control plane of Kubernetes across multiple AWS Availability Zones. 

AWS charges a flat rate of $0.10 per hour for EKS clusters, as well as additional charges for the EC2 instances or EBS volumes used by the worker nodes. The cost can be reduced by opting for EC2 Spot Instances for development and testing environments, and Reserved Instances for production deployments.

EKS is most beneficial if you’re planning for a production-scale deployment of microservices-based applications on AWS, easily scalable web applications, integration with machine learning models, batch processing jobs, and the like.
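To make this concrete, here is a minimal sketch of creating an EKS cluster with the AWS SDK for Python (boto3). The IAM role ARN, subnet IDs, and Kubernetes version are placeholder values you would swap for your own, and worker nodes (managed node groups, EC2, or Fargate) are added separately.

```python
import boto3

# Assumes AWS credentials are configured and that an EKS cluster IAM role
# plus at least two subnets in different Availability Zones already exist.
eks = boto3.client("eks", region_name="us-east-1")

eks.create_cluster(
    name="demo-cluster",
    version="1.21",  # a Kubernetes version supported by EKS at the time
    roleArn="arn:aws:iam::123456789012:role/eks-cluster-role",  # placeholder
    resourcesVpcConfig={"subnetIds": ["subnet-aaa111", "subnet-bbb222"]},  # placeholders
)

# Block until the managed control plane is up; worker nodes are attached afterwards.
eks.get_waiter("cluster_active").wait(name="demo-cluster")
print("EKS control plane is active")
```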

2. AWS Fargate

AWS Fargate is a serverless compute service for containers that can be integrated with Amazon EKS and Amazon ECS. It reduces operational overhead, as you don’t have to deploy and configure the underlying infrastructure for hosting containers, and you’re charged only for the compute capacity being used to run the workloads.

The containers run in an isolated environment with a dedicated kernel runtime, thereby ensuring improved security for the workloads. You can also leverage Spot Instances (for dev/test environments) and compute savings plans for committed usage to reduce the overall cost. If you’re looking to switch from a monolithic to a microservices-based architecture, with minimal development and management overhead, AWS Fargate offers great benefits.
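As a rough illustration, launching a single container task on Fargate through the ECS API with boto3 might look like the following. The cluster name, task definition, and subnet are hypothetical placeholders, and the task definition is assumed to already be registered as Fargate-compatible (awsvpc network mode).

```python
import boto3

ecs = boto3.client("ecs", region_name="us-east-1")

# Run one task on Fargate: no EC2 instances to provision, and you pay only
# for the vCPU and memory the task consumes while it runs.
ecs.run_task(
    cluster="demo-cluster",        # placeholder cluster name
    launchType="FARGATE",
    taskDefinition="web-api:1",    # placeholder, Fargate-compatible task definition
    count=1,
    networkConfiguration={
        "awsvpcConfiguration": {
            "subnets": ["subnet-aaa111"],  # placeholder subnet
            "assignPublicIp": "ENABLED",
        }
    },
)
```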

3. Amazon Elastic Container Service

Amazon Elastic Container Service (ECS) can be used to host container services either in a self-managed cluster of EC2 instances or on a serverless infrastructure managed by Fargate. The former approach provides better control over the end-to-end stack hosting the container workloads.

In addition, ECS provides centralized visibility into your services and the ability to manage them via API calls. If you’re using EC2 instances for the underlying cluster, you retain control over those instances, while ECS still handles container scheduling, service scaling, and day-to-day operations, eliminating much of that overhead.

ECS is a regional service that is highly available across Availability Zones within an AWS region, ensuring the availability of your hosted container workloads.
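Here is a minimal sketch with boto3, assuming an ECS cluster with registered EC2 container instances already exists; the names and image are placeholders.

```python
import boto3

ecs = boto3.client("ecs", region_name="us-east-1")

# Describe the container(s) that make up one copy of the application.
ecs.register_task_definition(
    family="web-api",
    requiresCompatibilities=["EC2"],
    containerDefinitions=[
        {
            "name": "web",
            "image": "nginx:latest",
            "memory": 256,
            "portMappings": [{"containerPort": 80}],
        }
    ],
)

# Keep two copies of the task running on the (assumed) EC2-backed cluster.
ecs.create_service(
    cluster="demo-cluster",     # placeholder
    serviceName="web-api-svc",
    taskDefinition="web-api",
    desiredCount=2,
    launchType="EC2",
)
```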

Microsoft Azure

Azure offers a managed Kubernetes service as well as options for deploying standalone container instances. Azure Container Registry, integration with Security Center, and container image scanning are just a few other Azure value-added services to support container workload ecosystems.

1. Azure Kubernetes Service

The managed Kubernetes service, Azure Kubernetes Service (AKS), is one of the most popular container hosting services in the public cloud today. It consists of a control plane hosting the master nodes, managed by the Azure platform and exposing the Kubernetes APIs, and customer-managed agent nodes, where the container workloads are deployed.

The platform handles all cluster management activities, such as health monitoring and maintenance. It also offers easy integration with Azure RBAC and Azure AD for cluster management, built-in integration with Azure Monitor, and flexibility to use Docker Registry or Azure Container Registry for retrieving container images.

AKS can be used without a cluster management fee and comes with a 99.5% service-level objective for the control plane. You pay only for the VM instances, storage, and networking resources used by the AKS cluster, on a per-second billing model. There is also an option to purchase an Uptime SLA for $0.10 per cluster per hour. High availability can be configured by using Azure Availability Zones during cluster deployment. With the optional Uptime SLA, clusters that use Availability Zones are backed by a 99.95% SLA, while clusters that do not are backed by 99.9%. In both cases, you still pay for the agent nodes that host the workloads.
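For illustration, here is a minimal sketch of creating an AKS cluster with the Azure SDK for Python (azure-identity and azure-mgmt-containerservice); the subscription ID, resource group, and sizing values are placeholder assumptions.

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.containerservice import ContainerServiceClient

# Assumes an existing resource group and that DefaultAzureCredential can
# authenticate (e.g., via the Azure CLI or a service principal).
client = ContainerServiceClient(DefaultAzureCredential(), "<subscription-id>")

poller = client.managed_clusters.begin_create_or_update(
    "my-rg",      # placeholder resource group
    "demo-aks",   # placeholder cluster name
    {
        "location": "eastus",
        "dns_prefix": "demoaks",
        "identity": {"type": "SystemAssigned"},
        "agent_pool_profiles": [
            {
                "name": "nodepool1",
                "count": 2,
                "vm_size": "Standard_DS2_v2",
                "mode": "System",
            }
        ],
    },
)
cluster = poller.result()  # waits for the control plane and node pool
print(cluster.provisioning_state)
```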

2. Azure Container Instances

Azure Container Instances provides an easy-to-use solution for deploying containers in Azure without deploying an orchestration platform. Because Container Instances doesn’t require you to provision any VMs, instances start in a matter of seconds. The service also gives you the flexibility to configure the CPU cores and memory required for the workloads, and you’re charged only for what you allocate. The service can integrate with Azure Files for persistent storage, connect to Azure Virtual Network, and integrate with Azure Monitor for resource-usage monitoring.

Azure Container Instances is best suited for deploying isolated container instances for simple applications that don't require advanced capabilities such as on-demand scaling or multi-container service discovery.
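A rough sketch using the Azure SDK for Python (azure-mgmt-containerinstance); the resource group, container group name, and sample image are placeholders.

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.containerinstance import ContainerInstanceManagementClient

client = ContainerInstanceManagementClient(DefaultAzureCredential(), "<subscription-id>")

# Create a single-container group with 1 vCPU and 1.5 GB of memory; billing
# is per second for the resources requested here, while the group runs.
poller = client.container_groups.begin_create_or_update(
    "my-rg",      # placeholder resource group
    "hello-aci",  # placeholder container group name
    {
        "location": "eastus",
        "os_type": "Linux",
        "containers": [
            {
                "name": "hello-aci",
                "image": "mcr.microsoft.com/azuredocs/aci-helloworld",
                "resources": {"requests": {"cpu": 1.0, "memory_in_gb": 1.5}},
                "ports": [{"port": 80}],
            }
        ],
        "ip_address": {"type": "Public", "ports": [{"protocol": "TCP", "port": 80}]},
    },
)
group = poller.result()
print(group.ip_address.ip)
```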

3. Azure Web App for Containers 

Web App for Containers, part of Azure App Service, allows you to deploy containers using container images from Docker Hub or Azure Container Registry. The backend OS patching, capacity management, and load balancing of services are handled by the platform, and the service enables on-demand scaling, either through scale-up or scale-out options based on configured scaling rules. This also helps with cost management, since costs are automatically reduced during off-peak hours. The service ensures high availability as well, since the containers can be deployed across multiple Azure regions.
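As a purely illustrative sketch with the Azure SDK for Python (azure-mgmt-web), assuming a Linux App Service plan already exists; every name and the image here are placeholders.

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.web import WebSiteManagementClient

client = WebSiteManagementClient(DefaultAzureCredential(), "<subscription-id>")

# Create a web app whose runtime is a container image rather than a code stack.
# server_farm_id must point at an existing Linux App Service plan (placeholder below).
poller = client.web_apps.begin_create_or_update(
    "my-rg",               # placeholder resource group
    "demo-container-app",  # placeholder app name
    {
        "location": "eastus",
        "server_farm_id": "/subscriptions/<subscription-id>/resourceGroups/my-rg"
                          "/providers/Microsoft.Web/serverfarms/my-linux-plan",
        "site_config": {
            # "DOCKER|<image>" tells App Service to run a custom container image.
            "linux_fx_version": "DOCKER|mcr.microsoft.com/azuredocs/aci-helloworld",
        },
    },
)
site = poller.result()
print(site.default_host_name)
```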

Google Cloud Platform

GCP, where Kubernetes originated as an in-house Google project, offers a strong suite of products for managed container hosting both in the cloud and on-premises.

1. Google Kubernetes Engine 

Google Kubernetes Engine (GKE) is the managed Kubernetes service from GCP that can be used to host highly available and scalable container workloads. It also provides a GKE Sandbox option if you need to run untrusted or risk-prone workloads in an isolated environment.

GKE clusters can be deployed as either multi-zonal or regional clusters to protect workloads from zone outages. GKE also comes with many out-of-the-box security features, such as data encryption and vulnerability scanning for container images through integration with the Container Analysis service.

As with the managed Kubernetes services provided by the other cloud service providers, GKE offers automated repair of faulty nodes, upgrades, and on-demand scaling. It can also be integrated with GCP monitoring services for in-depth visibility into the health of deployed applications. 

If you’re planning to host graphics-intensive, HPC, or ML workloads, you can augment GKE with specialized hardware accelerators like GPUs and TPUs during deployment. Finally, GKE offers per-second billing and, for development environments that can tolerate downtime, an option to use preemptible VMs for your cluster to further reduce costs.
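As a minimal sketch, creating a small regional cluster with the google-cloud-container Python client might look like this; the project, location, and machine type are placeholder assumptions, and the preemptible flag mirrors the cost-saving option mentioned above.

```python
from google.cloud import container_v1

client = container_v1.ClusterManagerClient()

# A regional parent (e.g., us-central1) spreads nodes across zones;
# a zonal parent (e.g., us-central1-a) keeps them in a single zone.
cluster = container_v1.Cluster(
    name="demo-cluster",
    initial_node_count=2,
    node_config=container_v1.NodeConfig(
        machine_type="e2-standard-2",  # placeholder machine type
        preemptible=True,              # dev/test cost saving; not for production
    ),
)

operation = client.create_cluster(
    parent="projects/my-project/locations/us-central1",  # placeholder project
    cluster=cluster,
)
print(operation.name)  # long-running operation to poll for completion
```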

2. Cloud Run

If you’re looking to run containerized applications in GCP without the overhead of managing the underlying infrastructure, Cloud Run is one option. With this fully managed serverless hosting service for containers, you’re charged only for the resources consumed by the containers.

Cloud Run for Anthos also lets you run the same workloads on Anthos GKE clusters, whether in the cloud or on-premises. Well integrated with other GCP services such as Cloud Code, Cloud Logging, Cloud Monitoring, Artifact Registry, and Cloud Build, Cloud Run covers most containerized application development needs.
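A rough sketch of deploying a service to fully managed Cloud Run with the google-cloud-run Python client (run_v2); the project, region, and sample image are placeholders.

```python
from google.cloud import run_v2

client = run_v2.ServicesClient()

# Describe a service from a container image; Cloud Run provisions and scales
# the underlying infrastructure and charges only for the resources consumed.
service = run_v2.Service(
    template=run_v2.RevisionTemplate(
        containers=[run_v2.Container(image="us-docker.pkg.dev/cloudrun/container/hello")]
    )
)

operation = client.create_service(
    parent="projects/my-project/locations/us-central1",  # placeholder project/region
    service=service,
    service_id="hello-service",
)
response = operation.result()  # wait for the first revision to become ready
print(response.uri)
```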

Hybrid Deployments

For many organizations, the first step of their cloud adoption journey is implementing hybrid deployments, where some of the components for their containerized application remain on-premises and others are moved to the cloud. There are several popular tools and services available to help meet the needs of hybrid and multicloud deployments, and all leading cloud providers have focused investments in this space.

1. Azure Arc

Azure Arc provides a unified Azure-based management platform for servers, data services, and Kubernetes clusters deployed on-premises as well as across multicloud environments.

Azure Arc-enabled Kubernetes clusters support multiple popular distributions of Kubernetes that are certified by the Cloud Native Computing Foundation (CNCF). The service allows you to list Kubernetes clusters across heterogeneous environments in Azure for a unified view and enables integration with Azure management capabilities, such as Azure Policy and Azure Monitor.

2. Google Anthos 

Google Cloud’s Anthos is a comprehensive and advanced solution that can be leveraged to deploy managed Kubernetes clusters in the cloud and on-premises. Anthos provides a GKE on-premises option you can use to deploy new GKE clusters into your private cloud on-premises. It's also possible to register existing non-GKE clusters with Anthos. GKE on AWS helps in multicloud scenarios, where a compatible GKE environment in AWS can be created, updated, or deleted using a management service from the Anthos UI. Meanwhile, Anthos Config Management and Service Mesh solutions help with policy automation, security management, and visibility into your applications deployed across multiple clusters for a unified management experience.

3. AWS Outposts

AWS Outposts is a hybrid cloud service that brings AWS services, including container services like EKS, to your on-premises data center. Currently shipped as a 42U rack unit, this service is installed, updated, and fully managed by AWS. The solution can be connected to a local AWS Region for a hybrid experience, where the services in Outposts can connect directly to the services in the cloud.

AWS Outposts targets customers who prefer to deploy containerized workloads on-premises for data residency, local processing, and more, while having the flexibility to use AWS Cloud’s supporting services for their applications. 

The recently announced EKS Anywhere is another option, designed to deliver the same experience as Amazon EKS for hybrid deployments. EKS Anywhere is expected to be available in 2021.

Choosing the Right Container Service

With the wide spectrum of services available in the cloud for hosting containerized workloads, the first step in choosing the right service for your requirements is to map your specific application requirements to features of a given service. Managed container services from the major cloud service providers are recommended, as they offer better integration with their own cloud platform services.

When choosing the best container service for your application, opt for production-ready services while avoiding potential vendor lock-in. You should also take into consideration the solution’s future roadmap as well as ease of monitoring, logging, availability, scalability, security management, and automation.

Starting with a managed Kubernetes service is a great option, as Kubernetes is best suited for scalable, secure, and highly available production deployments. If there are clear requirements for hybrid integration, where some of your containerized workloads might remain on-premises, opt for hybrid solutions like Azure Arc or Google Anthos. And finally, if you are looking for simple isolated container deployments, serverless solutions may be the best fit.
