Setting up local AWS development environment with Localstack

Oksana Horlock

Posted on January 20, 2023

When an application uses Cloud services, it can be tricky to mock them during local development. Some approaches include: 1) doing nothing and letting your application fail when it makes a call to a Cloud service; 2) creating sets of fake data to return from calls to AWS S3, for example; 3) using a Cloud account dedicated to development. A nice in-between solution is Localstack, a Cloud service emulator. While the number of services available and their functionality can be a bit limited compared to the real AWS environment, it still works very well for local development.

This article will describe how to set Localstack up for local development in Docker.

Docker Compose setup

In the services section of our docker-compose.yml we have the Localstack container definition:

localstack:
    image: localstack/localstack:latest
    hostname: localstack
    environment:
      - SERVICES=s3,sqs
      - HOSTNAME_EXTERNAL=localstack
      - DATA_DIR=/tmp/localstack/data
      - DEBUG=1
      - AWS_ACCESS_KEY_ID=test
      - AWS_SECRET_ACCESS_KEY=test
      - AWS_DEFAULT_REGION=eu-central-1
    ports:
      - "4566:4566"
    volumes:
      - localstack-data:/tmp/localstack:rw
      - ./create_localstack_resources.sh:/docker-entrypoint-initaws.d/create_localstack_resources.sh
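
Since localstack-data is referenced as a named volume rather than a host path, docker-compose.yml also needs a matching top-level volumes entry (a minimal sketch – the name just has to match the mapping above):

volumes:
  localstack-data: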

Although we don’t need to connect to any real AWS account, we do need dummy AWS credential variables (any value will do). We also specify which services we want Localstack to run – in this case S3 and SQS.

We also need to set HOSTNAME_EXTERNAL because the SQS API needs the container to be aware of the hostname it can be accessed on.
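
With that setting in place, asking Localstack for the queue URL (for the my-queue queue created by the init script below) should return a URL based on the external hostname rather than localhost, roughly:

aws --endpoint-url=http://localhost:4566 sqs get-queue-url --queue-name my-queue
# approximate output:
# {
#     "QueueUrl": "http://localstack:4566/000000000000/my-queue"
# }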

Another point is that we cannot use the entrypoint definition, because Localstack has a directory, docker-entrypoint-initaws.d, from which shell scripts are run when the container starts up. That’s why we map our script into that folder as a volume. In our case create_localstack_resources.sh will create all the necessary S3 buckets and the SQS queue:

#!/bin/bash

EXPECTED_BUCKETS=("bucket1" "bucket2" "bucket3")
EXISTING_BUCKETS=$(aws --endpoint-url=http://localhost:4566 s3 ls --output text)

echo "creating buckets"
for BUCKET in "${EXPECTED_BUCKETS[@]}"
do
  echo "$BUCKET"
  if [[ $EXISTING_BUCKETS != *"$BUCKET"* ]]; then
    aws --endpoint-url=http://localhost:4566 s3 mb "s3://$BUCKET"
  fi
done

echo "creating queue"
EXPECTED_QUEUE="my-queue"
EXISTING_QUEUES=$(aws --endpoint-url=http://localhost:4566 sqs list-queues --output text)
if [[ $EXISTING_QUEUES != *"$EXPECTED_QUEUE"* ]]; then
  # note: the RedrivePolicy references my-dlq-queue, which is assumed to be created elsewhere
  aws --endpoint-url=http://localhost:4566 sqs create-queue --queue-name "$EXPECTED_QUEUE" \
    --attributes '{
      "RedrivePolicy": "{\"deadLetterTargetArn\":\"arn:aws:sqs:eu-central-1:000000000000:my-dlq-queue\",\"maxReceiveCount\":\"3\"}",
      "VisibilityTimeout": "120"
    }'
fi

Note that these AWS CLI commands differ from calls against the real AWS environment (otherwise you’d create resources in the account for which you have credentials set up!) in that they include the Localstack endpoint flag: --endpoint-url=http://localhost:4566.
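
If you want to double-check what the init script created, you can run the same style of commands against the running container:

aws --endpoint-url=http://localhost:4566 s3 ls
aws --endpoint-url=http://localhost:4566 sqs list-queues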

Configuration files

In this example I'm using Scala with the Play framework, and therefore have .conf files. In the local.conf file we have the following:

aws {
   localstack.endpoint="http://localstack:4566"
   region = "eu-central-1"
   s3.bucket1 = "bucket1"
   s3.bucket2 = "bucket2"
   sqs.my_queue = "my-queue"
   sqs.queue_enabled = true
}

The real application.conf file has resource names injected at instance startup. They live in an autoscaling group launch template, where they are created by Terraform (out of scope of this post).
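
How the application is pointed at local.conf rather than application.conf is up to you; one common Play/Typesafe Config mechanism (an assumption here, not necessarily what this project uses) is the config.resource system property:

# hypothetical local invocation – any mechanism that sets config.resource
# (or config.file) for Typesafe Config works just as well
sbt "run -Dconfig.resource=local.conf"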

Initializing SQS client based on the environment

The example here is for creating an SQS client. Below are snippets most relevant to the topic.

In order to initialize the SQS Service so that it can be injected into other services we can do this:

lazy val awsSqsService: QueueService = createsSqsServiceFromConfig()

In createsSqsServiceFromConfig we check if the configuration has a Localstack endpoint and, if so, we build the LocalStack client:

protected def createsSqsServiceFromConfig(): QueueService = {
  readSqsClientConfig().map { config =>
    val sqsClient: SqsClient = config.localstackEndpoint match {
      case Some(endpoint) => new LocalStackSqsClient(endpoint, config.region)
      case None           => new AwsSqsClient(config.region)
    }
    new SqsQueueService(config.queueName, sqsClient)
  }.getOrElse(fakeAwsSqsService)
}
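
For context, these snippets assume a small abstraction over the SDK client, plus a fakeAwsSqsService fallback used when no configuration is found. The exact definitions aren't shown in this post; a minimal sketch of the trait, based on how it's used here, would be:

import com.amazonaws.services.sqs.AmazonSQS

// assumed shape of the abstraction used above and below
trait SqsClient {
  def BuildClient(): AmazonSQS
}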

readSqsClientConfig is used to read configuration values from the .conf files:

// returns None (so the caller can fall back to fakeAwsSqsService) when required keys are missing
private def readSqsClientConfig(): Option[SqsClientConfig] = Try {
  val sqsName            = config.get[String]("aws.sqs.my_queue")
  val sqsRegion          = config.get[String]("aws.region")
  val localStackEndpoint = config.getOptional[String]("aws.localstack.endpoint")
  SqsClientConfig(sqsName, sqsRegion, localStackEndpoint)
}.toOption // requires an import of scala.util.Try
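
SqsClientConfig itself is just a small holder for these values; based on how its fields are used above, presumably something like:

// assumed shape, inferred from the usages above
case class SqsClientConfig(
  queueName: String,
  region: String,
  localstackEndpoint: Option[String]
)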

Finally, LocalStackSqsClient initialization looks like this:

import com.amazonaws.auth.{AWSStaticCredentialsProvider, BasicAWSCredentials}
import com.amazonaws.client.builder.AwsClientBuilder.EndpointConfiguration
import com.amazonaws.services.sqs.{AmazonSQS, AmazonSQSClientBuilder}

class LocalStackSqsClient(endpoint: String, region: String) extends SqsClient with Logging {
  private val sqsEndpoint = new EndpointConfiguration(endpoint, region)
  // dummy credentials – Localstack accepts any values
  private val awsCreds = new BasicAWSCredentials("test", "test")
  private lazy val sqsClientBuilder = AmazonSQSClientBuilder.standard()
    .withEndpointConfiguration(sqsEndpoint)
    .withCredentials(new AWSStaticCredentialsProvider(awsCreds))
  private lazy val client = sqsClientBuilder.build()

  override def BuildClient(): AmazonSQS = {
    log.debug("Initializing LocalStack SQS service")
    client
  }
}

Real AWS Client for the test/live environment (a snippet):

AmazonSQSClientBuilder.standard()
      .withCredentials(new DefaultAWSCredentialsProviderChain)
      .withRegion(region)

Notice that we need fake BasicAWSCredentials, which let us pass in a dummy AWS access key and secret key, and we then wrap them in AWSStaticCredentialsProvider, an implementation of AWSCredentialsProvider that simply wraps static AWSCredentials. When the real AWS environment is used, we use DefaultAWSCredentialsProviderChain instead of AWSStaticCredentialsProvider; it picks up the EC2 instance role if it cannot find credentials by any other method.
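
To close the loop, a SqsQueueService built on top of either client might use it roughly like this (a sketch under the assumptions above, not the project's actual implementation):

import com.amazonaws.services.sqs.AmazonSQS

class SqsQueueService(queueName: String, sqsClient: SqsClient) {
  private lazy val client: AmazonSQS = sqsClient.BuildClient()
  // resolve the queue URL once; against Localstack this reflects HOSTNAME_EXTERNAL
  private lazy val queueUrl: String = client.getQueueUrl(queueName).getQueueUrl

  def send(messageBody: String): Unit =
    client.sendMessage(queueUrl, messageBody)
}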

And that’s it. Happy coding!
