AWS CDK - use your favorite language to define cloud infrastructure

kokospapa8

Jinwook Baek

Posted on November 12, 2020


Introduction

Infrastructure as code (IaC) is a popular practice in cloud environments. I have been using CloudFormation to automate AWS infrastructure deployment, but I always had a hard time writing the configuration files, since they are plain JSON or YAML. There are best practices for writing CloudFormation out there, but I still had to validate and revalidate the stacks and double-check whether the infrastructure was deployed as I intended. Recently I stumbled upon AWS CDK and fell in love with it. Here are some of the advantages of using CDK.

  • Organize your project into logical modules
  • Use logic (if statements, for-loops, etc) when defining your infrastructure
  • Use object-oriented techniques to create a model of your system
  • Type-safety, code-completion, and open-source
  • Define high level abstractions, share them, and publish them to your team, company, or community
  • Share and reuse your infrastructure as a library
  • Code completion within your IDE → This was a huge plus for me!
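To make the "use logic" bullet concrete, here is a toy sketch in plain Python (not real CDK code: it builds raw CloudFormation JSON by hand, which is roughly what CDK's constructs do for you behind the scenes). A loop and an if-expression replace three hand-written resource blocks.

```python
# Toy illustration of "use logic to define infrastructure": generate one
# CloudFormation S3 bucket resource per environment with a loop, and
# enable versioning only for prod with a plain if-expression.
environments = ["dev", "staging", "prod"]

template = {"Resources": {}}
for env_name in environments:
    versioned = env_name == "prod"
    template["Resources"][f"AssetsBucket{env_name.capitalize()}"] = {
        "Type": "AWS::S3::Bucket",
        "Properties": {
            "VersioningConfiguration": {
                "Status": "Enabled" if versioned else "Suspended"
            }
        },
    }

print(sorted(template["Resources"]))
```

In raw CloudFormation you would copy-paste this resource three times and tweak each copy by hand; with a real programming language the variation lives in one place.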

In short, the AWS CDK provides a high-level abstraction over CloudFormation's JSON/YAML syntax. In this blog post, I will rewrite my previous ecs-sample architecture (beta stage) in CDK. Please refer to my previous blog post

for infrastructure that I will be deploying with CDK.

kokospapa8/ecs-fargate-sample-app

Before diving into writing CDK code, let's take a quick look at CDK.

CDK Concepts


There are three main components in CDK: constructs, stacks, and apps. Everything in the AWS CDK is a construct. You can think of constructs as cloud components that can represent architectures of any complexity: a single resource, such as an S3 bucket or an SNS topic; a static website; or even a complex, multi-stack application that spans multiple AWS accounts and regions. To foster reusability, constructs can include other constructs. You compose constructs into stacks, which you can deploy into an AWS environment, and apps, a collection of one or more stacks.


App

An app is written in TypeScript, JavaScript, Python, Java, or C# and uses the AWS CDK to define AWS infrastructure.

Stack

  • Equivalent to AWS CloudFormation stacks
  • Contains constructs
  • Defines one or more concrete AWS resources

Constructs

  • Represented as types in your programming language
  • Three fundamental flavors:
    • L1 (AWS CloudFormation only) - map directly to resource types defined by AWS CloudFormation
      • Always have names that begin with Cfn
    • L2 - encapsulate L1 modules, providing sensible defaults and best-practice security policies, plus the supporting resources needed by the primary resource
    • L3 - patterns declare multiple resources to create entire AWS architectures for particular use cases. All the plumbing is already hooked up, and configuration is boiled down to a few important parameters
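The relationship between L1 and L2 can be pictured with a toy model (plain Python, not the real aws_cdk classes; the names and defaults below are made up for illustration): an L1 construct mirrors the raw CloudFormation resource 1:1, while an L2 construct builds on it and fills in sensible defaults.

```python
class CfnBucket:
    """Stand-in for an L1 construct: a 1:1 mirror of AWS::S3::Bucket,
    so every property must be spelled out by the caller."""
    def __init__(self, **cfn_props):
        self.cfn_props = cfn_props


class Bucket:
    """Stand-in for an L2 construct: wraps the L1 construct and applies
    sensible defaults that the caller can still override."""
    DEFAULTS = {"BucketEncryption": "SSE-S3", "Versioning": "Suspended"}

    def __init__(self, **overrides):
        # L2 builds on L1 underneath, merging defaults with overrides
        self.cfn = CfnBucket(**{**self.DEFAULTS, **overrides})


explicit = CfnBucket(BucketEncryption="SSE-S3")  # L1: you write everything
sensible = Bucket()                              # L2: defaults applied
tweaked = Bucket(Versioning="Enabled")           # L2: override one default
```

This is why the real L2 VPC construct later in this post can produce a full network in a few lines: the defaults are already baked in.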

Core module

  • Imported into code as core or cdk
  • Contains constructs used by the AWS CDK itself as well as base classes for constructs, apps, resources, and other AWS CDK objects.

Supported Languages

The AWS CDK is developed in one language (TypeScript), and language bindings are generated for the other languages with a tool called jsii. In this post, I will be using Python.

  • TypeScript
  • JavaScript
  • Python
  • Java
  • C#

Getting started with CDK

Set up AWS credentials

aws configure
cat ~/.aws/credentials
---
[default]
aws_access_key_id=AKIAI44QH8XXXXXXXX
aws_secret_access_key=je7MtGbClwBFXXXXXXXXXXXXXXXXXXXXX


Install on Mac OSX

Let's use brew 🍺 to install on Mac OSX.

brew update
brew doctor
brew install aws-cdk
cdk --version

If you are using another OS, please refer to the following link.

Getting started with the AWS CDK

Python environment

You need Python 3.6 or later, including pip and virtualenv.

PyCharm as IDE


I use PyCharm for Python development with the AWS Toolkit. I strongly recommend using an IDE instead of good old vim or a plain text editor, especially when you are developing CDK code: you don't need to go back and forth digging through the huge list of API references.

AWS Toolkit for PyCharm

Alternatively you can use VS Code with AWS toolkit.

AWS Toolkit for Visual Studio Code

Project Setup

git clone https://github.com/kokospapa8/ecs-fargate-sample-app.git
cd ecs-fargate-sample-app/config/cdk

# create the virtualenv first if it does not already exist
python3 -m venv .env
source .env/bin/activate

pip install -r requirements.txt
# set env with CDK_DEFAULT_ACCOUNT
export CDK_DEFAULT_ACCOUNT=123456789
# set env with CDK_DEFAULT_REGION
export CDK_DEFAULT_REGION=us-east-1

cdk synth

(Alternatively) set up the sample app instead of pulling the GitHub project

cdk init sample-app --language python

cdk synth generates the CloudFormation stack files in the cdk.out folder. You could take these CloudFormation files and deploy them manually, but there is an easier way to deploy with CDK. We will get to that later in the post.


App structure

Before deploying the stack, let's review the infrastructure and walk through the stacks one by one.

We will build the following resources:

  • VPC with public and private subnet
  • API EC2 instance on ASG (in public subnet)
  • Worker EC2 instance on ASG (in public subnet)
  • ALB in front of API ASG
  • RDS mysql (in private subnet)
  • ElastiCache - Redis (private subnet)


Now let's look at the file structures.

app.py - application wrapper

requirements.txt - required CDK pip libraries

The ecs_sample_cdk folder consists of the following stack files. sample_stack is a wrapper for the whole infrastructure; alb_stack, rds_stack, redis_stack, and vpc_stack are nested stacks of sample_stack.
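The layout looks roughly like this (the individual stack file names are my reading of the structure described above, so check the repository itself):

```
config/cdk/
├── app.py
├── requirements.txt
└── ecs_sample_cdk/
    ├── sample_stack.py
    ├── vpc_stack.py
    ├── alb_stack.py
    ├── rds_stack.py
    └── redis_stack.py
```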

Deep-dive into the code

app.py

Nothing special here. If you want to know what's going on behind the scenes, please refer to the following document.

Apps

#!/usr/bin/env python3
import os

from aws_cdk import core
from ecs_sample_cdk.sample_stack import SampleStack

env = core.Environment(
    account=os.environ.get("CDK_DEFAULT_ACCOUNT", ""),
    region=os.environ.get("CDK_DEFAULT_REGION", "")
)

app = core.App()
SampleStack(app, "ecs-sample", env=env)

app.synth()

Sample Stack

This is a class that holds the other nested stacks. I have used props to pass around important resource references such as the VPC and security groups. You can also notice the add_dependency method, which enforces dependencies between the nested stacks. Here we create the VPC and ALB stacks first, then create RDS and ElastiCache, because we need the VPC ID, subnets, and security groups in order to create those resources.

class SampleStack(core.Stack):

    def __init__(self, scope: core.Construct, id: str, env, **kwargs) -> None:
        super().__init__(scope, id, **kwargs)
        props = {'namespace': 'sample'}
        vpc_stack = VPCStack(self, f"{id}-vpc", env=env, props=props)
        props.update(vpc_stack.output_props)

        alb_stack = ALBStack(self, f"{id}-alb", env=env, props=props)
        alb_stack.add_dependency(vpc_stack)
        props.update(alb_stack.output_props)

        rds_stack = RDSStack(self, f"{id}-rds", env=env, props=props)
        rds_stack.add_dependency(vpc_stack)
        rds_stack.add_dependency(alb_stack)
        props.update(rds_stack.output_props)

        redis_stack = RedisStack(self, f"{id}-redis", env=env, props=props)
        redis_stack.add_dependency(vpc_stack)
        redis_stack.add_dependency(alb_stack)

VPC_STACK

This code produces a VPC with 2 public and 2 private subnets spread across 2 AZs, plus a single NAT gateway. Using an L2 construct, fewer than 10 lines of code produce the route tables, subnets, NAT gateway, IGW, and VPC. It is that simple to produce a valid VPC for app development.

Notice that I have added a CfnOutput for the VPC ID as a CloudFormation output, and output_props to pass around the VPC resource created in this stack.

from aws_cdk import core
from aws_cdk import aws_ec2 as ec2

class VPCStack(core.NestedStack):

    def __init__(self, scope: core.Construct, id: str, env, props, **kwargs) -> None:
        super().__init__(scope, id, **kwargs)

        subnets = []

        public_subnet = ec2.SubnetConfiguration(
                           cidr_mask=24,
                           name=f"{id}-public",
                           subnet_type=ec2.SubnetType.PUBLIC
                       )

        private_subnet = ec2.SubnetConfiguration(
                           cidr_mask=24,
                           name=f"{id}-private",
                           subnet_type=ec2.SubnetType.PRIVATE
                       )
        subnets.append(public_subnet)
        subnets.append(private_subnet)

        # The code that defines your stack goes here
        vpc = ec2.Vpc(self, f"{id}",
                           cidr="172.0.0.0/16",
                           enable_dns_hostnames=True,
                           enable_dns_support=True,
                           nat_gateways=1,
                           nat_gateway_provider=ec2.NatProvider.gateway(),
                           max_azs=2,
                           subnet_configuration=subnets
                           )
        #Be aware that environment-agnostic stacks will be created with access to only 2 AZs, so to use more than 2 AZs, be sure to specify the account and region on your stack

        core.CfnOutput(self, "vpcid",
                       value=vpc.vpc_id)

        # Prepares output attributes to be passed into other stacks
        # In this case, it is our VPC and subnets.
        self.output_props = props.copy()
        self.output_props['vpc'] = vpc
        self.output_props['public_subnets'] = vpc.public_subnets
        self.output_props['private_subnets'] = vpc.private_subnets

    @property
    def outputs(self):
        return self.output_props

ALB_stack

The AutoScalingGroup requires a KEY_PAIR_NAME, for which you need to create a key pair manually in the console.

Amazon EC2 key pairs and Linux instances

The stack creates the following resources:

  • EC2 instance role
  • API AutoScalingGroup
  • Worker AutoScalingGroup
  • Security groups
    • api EC2 (public subnet)
      • allows ssh from public
    • worker EC2 (private subnet)
      • allows ssh from API EC2
    • redis
      • allows 6379 from EC2 security group
    • rds
      • allows 3306 from EC2 security group

We need to add the security groups to props so we can use them in the RDS and Redis stacks.

from aws_cdk import core
from aws_cdk import aws_ec2 as ec2
from aws_cdk import aws_iam as iam
from aws_cdk import aws_autoscaling as autoscaling
from aws_cdk import aws_elasticloadbalancingv2 as elbv2

# constants you define yourself: the key pair must already exist in the
# console, and SSH_IP should be restricted to your own address
KEY_PAIR_NAME = "ecs-sample-key"
SSH_IP = "0.0.0.0/0"

class ALBStack(core.NestedStack):

    def __init__(self, scope: core.Construct, id: str, env, props, **kwargs) -> None:
        super().__init__(scope, id, **kwargs)

        #create ec2role
        #get from env or create
        role = iam.Role(self, "ecs-sample-ec2-role",
                        assumed_by=iam.ServicePrincipal('ec2.amazonaws.com'),
                        )
        role.add_managed_policy(
            iam.ManagedPolicy.from_aws_managed_policy_name("AmazonEC2ContainerRegistryPowerUser")
        )

        asg_api = autoscaling.AutoScalingGroup(
            self,
            "ecs-sample-api-asg",
            vpc=props['vpc'],
            instance_type=ec2.InstanceType.of(
                ec2.InstanceClass.BURSTABLE3, ec2.InstanceSize.MICRO
            ),
            machine_image=ec2.AmazonLinuxImage(),
            key_name=KEY_PAIR_NAME,
            vpc_subnets=ec2.SubnetSelection(subnet_type=ec2.SubnetType.PUBLIC),
            desired_capacity=1,
            max_capacity=1,
            min_capacity=1,
            role=role
            # userdata=userdata

        )

        asg_worker = autoscaling.AutoScalingGroup(
            self,
            "ecs-sample-worker-asg",
            vpc=props['vpc'],
            instance_type=ec2.InstanceType.of(
                ec2.InstanceClass.BURSTABLE3, ec2.InstanceSize.MICRO
            ),
            machine_image=ec2.AmazonLinuxImage(),
            key_name=KEY_PAIR_NAME,
            vpc_subnets=ec2.SubnetSelection(subnet_type=ec2.SubnetType.PRIVATE),
            desired_capacity=1,
            max_capacity=1,
            min_capacity=1,
            role=role
            # userdata=userdata

        )

        # Creates a security group for our application
        sg_api = ec2.SecurityGroup(
                self,
                id="ecs-sample-ec2-api",
                vpc=props['vpc'],
                security_group_name="ecs-sample-ec2-api"
        )
        sg_worker = ec2.SecurityGroup(
                self,
                id="ecs-sample-ec2-worker",
                vpc=props['vpc'],
                security_group_name="ecs-sample-ec2-worker"
        )

        # to access this security group for SSH
        sg_api.add_ingress_rule(
            peer=ec2.Peer.ipv4(SSH_IP),
            connection=ec2.Port.tcp(22)
        )

        # use ec2 api as bastion
        sg_worker.connections.allow_from(
                sg_api, ec2.Port.tcp(22), "Allow from ec2 api")

        asg_api.add_security_group(sg_api)
        asg_worker.add_security_group(sg_worker)

        # Creates a security group for the application load balancer
        sg_alb = ec2.SecurityGroup(
                self,
                id="ecs-sample-loadbalancer",
                vpc=props['vpc'],
                security_group_name="ecs-sample-loadbalancer"
        )

        sg_api.connections.allow_from(
                sg_alb, ec2.Port.tcp(80), "Ingress")

        # Creates an application load balance
        lb = elbv2.ApplicationLoadBalancer(
                self,
                f"{id}-ALB",
                vpc=props['vpc'],
                security_group=sg_alb,
                internet_facing=True)

        listener = lb.add_listener("Listener", port=80)
        # Adds the autoscaling group's (asg_api) instance to be registered
        # as targets on port 8080
        listener.add_targets("Target", port=80, targets=[asg_api])
        # This creates a "0.0.0.0/0" rule to allow every one to access the
        # application
        listener.connections.allow_default_port_from_any_ipv4(
                "Open to the world"
                )

        # create RDS sg
        sg_rds = ec2.SecurityGroup(
                self,
                id="ecs-sample-mysql",
                vpc=props['vpc'],
                security_group_name="ecs-sample-mysql"
        )
        sg_api.connections.allow_from(
                sg_rds, ec2.Port.tcp(3306), "allow from rds to ec2 api")
        sg_rds.connections.allow_from(
                sg_api, ec2.Port.tcp(3306), "allow from ec2 api to rds")
        sg_worker.connections.allow_from(
                sg_rds, ec2.Port.tcp(3306), "allow from rds to ec2 worker")
        sg_rds.connections.allow_from(
                sg_worker, ec2.Port.tcp(3306), "allow from ec2 worker to rds")

        # create Redis SG
        sg_redis = ec2.SecurityGroup(
                self,
                id="ecs-sample-redis",
                vpc=props['vpc'],
                security_group_name="ecs-sample-redis"
        )
        sg_api.connections.allow_from(
                sg_redis, ec2.Port.tcp(6379), "allow from redis to ec2 api")
        sg_redis.connections.allow_from(
                sg_api, ec2.Port.tcp(6379), "allow from ec2 api to redis")
        sg_worker.connections.allow_from(
                sg_redis, ec2.Port.tcp(6379), "allow from redis to ec2 worker")
        sg_redis.connections.allow_from(
                sg_worker, ec2.Port.tcp(6379), "allow from ec2 worker to redis")

        # pass the created security groups on to the RDS and Redis stacks
        # (SampleStack reads output_props from this stack)
        self.output_props = props.copy()
        self.output_props['sg_rds'] = sg_rds
        self.output_props['sg_redis'] = sg_redis

RDS stack

  • You can choose either cluster mode or instance mode by passing the cluster parameter as a stack argument
  • You cannot pass around plaintext credentials, so CDK enforces the use of Credentials. This code actually generates the password in Secrets Manager (these are L2 constructs through which CDK enforces best practices)

    class Credentials · AWS CDK
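Once deployed, the generated credentials live in Secrets Manager as a JSON document. Here is a sketch of what consuming it looks like, using a hard-coded sample document instead of a live boto3 get_secret_value call; the key names below match what RDS-managed secrets typically contain, but verify against your own secret.

```python
import json

# Sample of the JSON document an RDS-generated secret typically holds.
# In a real app you would fetch this string with
# boto3.client("secretsmanager").get_secret_value(SecretId=...)["SecretString"]
secret_string = json.dumps({
    "username": "admin",
    "password": "generated-by-secrets-manager",
    "host": "ecs-sample-db.xxxxxxxx.us-east-1.rds.amazonaws.com",
    "port": 3306,
    "engine": "mysql",
})

secret = json.loads(secret_string)
# build a connection string without ever hard-coding the password
dsn = f"mysql://{secret['username']}@{secret['host']}:{secret['port']}"
print(dsn)
```

The point of the Credentials construct is exactly this: your application reads the secret at runtime, and no plaintext password ever appears in your stack code or synthesized template.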

from aws_cdk import core
from aws_cdk import aws_ec2 as ec2
from aws_cdk import aws_rds as rds

class RDSStack(core.NestedStack):

    def __init__(self, scope: core.Construct, id: str, env, props, cluster=False, **kwargs) -> None:
        super().__init__(scope, id, **kwargs)

        #TEMP without ASG
        # security_groups = [ec2.SecurityGroup(
        #         self,
        #         id="ecs-sample-mysql",
        #         vpc=props['vpc'],
        #         security_group_name="ecs-sample-mysql"
        # )]

        vpc = props['vpc']
        security_groups=[props['sg_rds']]
        credential = rds.Credentials.from_username(username="admin")
        private_subnet_selections = ec2.SubnetSelection(subnet_type=ec2.SubnetType.PRIVATE)
        subnet_group = rds.SubnetGroup(self, "sample-rds-subnet-group",
                                       vpc=vpc,
                                       subnet_group_name="sample-rds-subnet-group",
                                       vpc_subnets=private_subnet_selections,
                                       description="sample-rds-subnet-group")
        self.output_props = props.copy()

        if not cluster:
            rds_instance = rds.DatabaseInstance(
                self, "RDS-instance",
                database_name="sample",
                engine=rds.DatabaseInstanceEngine.mysql(
                    version=rds.MysqlEngineVersion.VER_8_0_16
                ),
                credentials=credential,
                instance_identifier="ecs-sample-db",

                vpc=vpc,
                port=3306,
                instance_type=ec2.InstanceType.of(
                    ec2.InstanceClass.BURSTABLE3,
                    ec2.InstanceSize.MICRO,
                ),
                subnet_group=subnet_group,
                vpc_subnets=private_subnet_selections,
                removal_policy=core.RemovalPolicy.DESTROY,
                deletion_protection=False,
                security_groups=security_groups

            )
            core.CfnOutput(self, "RDS_instance_endpoint", value=rds_instance.db_instance_endpoint_address)
            self.output_props['rds'] = rds_instance
        else:
            instance_props = rds.InstanceProps(
                vpc=vpc,
                security_groups=security_groups,
                vpc_subnets=private_subnet_selections
            )
            rds_cluster = rds.DatabaseCluster(
                self, "RDS-cluster",
                cluster_identifier="ecs-sample-db-cluster",
                instance_props=instance_props,
                engine=rds.DatabaseClusterEngine.aurora_mysql(
                    version=rds.AuroraMysqlEngineVersion.VER_2_07_1
                ),
                credentials=credential,
                default_database_name="sample",
                instances=1,
                subnet_group=subnet_group,
                removal_policy=core.RemovalPolicy.DESTROY,
                deletion_protection=False
            )
            core.CfnOutput(self, "RDS_cluster_endpoint", value=rds_cluster.cluster_endpoint.hostname)
            self.output_props['rds'] = rds_cluster

Redis Stack

  • It seems like the Redis ReplicationGroup currently only has an L1 construct. Therefore I have used CfnReplicationGroup to create ElastiCache for Redis.
        # "cache" is aws_cdk.aws_elasticache; sg_redis, cache_parameter_group_name
        # and subnets_group (a cache.CfnSubnetGroup) are created earlier in the stack
        redis = cache.CfnReplicationGroup(
            self,
            f"{id}-replication-group",
            replication_group_description=f"{id}-replication-group",
            cache_node_type="cache.t3.micro",
            cache_parameter_group_name=cache_parameter_group_name,
            security_group_ids=[sg_redis.security_group_id],
            cache_subnet_group_name=subnets_group.cache_subnet_group_name,
            engine="redis",
            engine_version="5.0.4",
            # node_group_configuration
            num_node_groups=1,  # one shard
            replicas_per_node_group=1  # one replica
        )
        redis.add_depends_on(subnets_group)

Deploy

Let's deploy by typing two commands.

Bootstrap

This command creates an S3 bucket that CDK uses to stage the CloudFormation files.

cdk bootstrap
#check cloudformation for CDKtoolkit


Deploy

cdk deploy

You will be able to monitor the nested stacks being deployed in the shell. You can also monitor detailed progress in the CloudFormation console.


If you get any errors, go to the CloudFormation stack in the console and review the Events tab to find the reason for the failure.

Destroy

Once you have finished working with the sample, you can easily destroy it with a single command.

cdk destroy

Or just delete the stack in the CloudFormation console.

Takeaway

I have reduced 900 lines of JSON to fewer than 500 lines of Python. Even better, the code is modular and easy to revise whenever necessary. The AWS CDK lets you boost your productivity on IaC. If you have been using CloudFormation, you should definitely try the AWS CDK.

Take advantage of your IDE!

Thank you for reading!

References

AWS Cloud Development Kit (CDK) - TypeScript and Python are Now Generally Available | Amazon Web Services

Examples

aws-samples/aws-cdk-examples

aws-samples/startup-kit-templates

cdkworkshop.com

AWS CDK · AWS CDK Reference Documentation

AWS CDK for Beginners

Source

This article was originally posted here.
