Setting Up Local AWS with LocalStack

Developing applications that rely on AWS can be challenging without a reliable testing setup. Constantly deploying to a live AWS environment is not only time-consuming but also costly. Enter LocalStack, a tool that emulates AWS services locally. Combined with Docker, it’s a game-changer for developers seeking an efficient and cost-effective testing workflow. In this blog, we’ll walk through how to get started and make the most of this setup.

What is LocalStack?

LocalStack is a powerful tool that enables developers to run a replica of AWS services on their local machines. It emulates a wide range of AWS functionalities, allowing you to perform tests on services like Lambda, S3, and DynamoDB without ever needing to access the cloud.

Why Use LocalStack?

• AWS Service Simulation: LocalStack provides an environment where you can test AWS services without interacting with the actual AWS cloud.

• Faster Development: Testing locally means fewer delays and faster iterations during development.

• No Cloud Dependency: You no longer have to worry about cloud costs or internet connectivity to run your tests.

Docker for Easy Setup

Docker simplifies setting up LocalStack by providing a containerized environment that encapsulates all necessary configurations. This makes it easy to spin up a LocalStack instance and start testing AWS services locally in minutes.

Step-by-Step Guide

Let’s create a docker-compose.yml file to define the services. This file will help us start LocalStack and its required services easily.

services:
  localstack:
    image: localstack/localstack:latest
    container_name: localstack
    environment:
      - SERVICES=sqs,s3,sts,iam
      - AWS_ACCESS_KEY_ID=dev123
      - AWS_SECRET_ACCESS_KEY=dev123
    ports:
      - '4566-4597:4566-4597'
    volumes:
      - "./localstack-setup/sqs-s3-consumer.sh:/etc/localstack/init/ready.d/init-aws.sh"
      - "${LOCALSTACK_VOLUME_DIR:-./volume}:/var/lib/localstack"
      - "/var/run/docker.sock:/var/run/docker.sock"

The SERVICES environment variable defines which AWS services LocalStack should start. The next two variables, AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY, are optional, but setting them saves you from supplying credentials on every request: whenever you access an AWS service via the aws-sdk, the SDK looks for these credentials in the container environment, so defining them here keeps things simple. Don’t forget to also add these credentials to your project container’s environment variables, along with AWS_REGION and AWS_DEFAULT_REGION set to us-west-1.
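For the application container, the matching environment block in the same docker-compose.yml might look like the following sketch (the app service name and endpoint value are placeholders; adjust to your setup):

```yaml
  app:
    # ... your application service (build, ports, etc.) ...
    environment:
      # Same dummy credentials the LocalStack container uses
      - AWS_ACCESS_KEY_ID=dev123
      - AWS_SECRET_ACCESS_KEY=dev123
      - AWS_REGION=us-west-1
      - AWS_DEFAULT_REGION=us-west-1
      # Where the app reaches the LocalStack container
      - AWS_LOCALSTACK_ENDPOINT=http://host.docker.internal:4566
```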

LocalStack ships with an AWS CLI wrapper, awslocal, inside the container; you can use it to set up the S3 bucket and SQS queue.

awslocal sqs create-queue --queue-name <queue-name>

awslocal s3api create-bucket --bucket <bucket-name>

Get the queue ARN using the following command (the queue URL is returned when you create the queue).

awslocal sqs get-queue-attributes --queue-url <queue-url> --attribute-names QueueArn

Set up the SQS queue to receive a message whenever a file is uploaded to the S3 bucket.

awslocal s3api put-bucket-notification-configuration --bucket <bucket-name> --notification-configuration '{ "QueueConfigurations": [ { "Id": "s3-to-sqs", "QueueArn": "<queue-arn>", "Events": ["s3:ObjectCreated:*"] } ] }'

If you want these commands to run automatically when the container starts, put them in the shell script mounted into /etc/localstack/init/ready.d/ in the volumes section above; replace sqs-s3-consumer.sh with the path to your own script.
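Such an init script could look like the sketch below. The queue and bucket names are examples, the queue URL format may vary between LocalStack versions, and the stub branch exists only so the script can be dry-run outside the container where awslocal is unavailable:

```shell
#!/bin/bash
# Sketch of a ready.d init script; my-queue / my-bucket are example names.
# Outside the LocalStack container, fall back to a stub so a dry run works.
if ! command -v awslocal >/dev/null 2>&1; then
  awslocal() { echo "arn:aws:sqs:us-west-1:000000000000:my-queue"; }
fi

awslocal sqs create-queue --queue-name my-queue
awslocal s3api create-bucket --bucket my-bucket

# Look up the queue ARN that the notification configuration needs
QUEUE_ARN=$(awslocal sqs get-queue-attributes \
  --queue-url http://localhost:4566/000000000000/my-queue \
  --attribute-names QueueArn \
  --query 'Attributes.QueueArn' --output text)

# Send an SQS message for every object created in the bucket
awslocal s3api put-bucket-notification-configuration \
  --bucket my-bucket \
  --notification-configuration '{
    "QueueConfigurations": [
      { "Id": "s3-to-sqs", "QueueArn": "'"$QUEUE_ARN"'", "Events": ["s3:ObjectCreated:*"] }
    ]
  }'
echo "Configured notifications for queue: $QUEUE_ARN"
```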

When you are setting up the S3 and SQS clients using the aws-sdk (the AWS SDK documentation covers client setup in detail), make sure to add the following to the client config for the local environment:

SQS Client

{ endpoint: configService.get('AWS_LOCALSTACK_ENDPOINT'), }

When you create the queue, you get back a local queue URL; use that URL whenever you access the queue.

S3 Client

{ endpoint: configService.get('AWS_LOCALSTACK_ENDPOINT'), forcePathStyle: true, }

The AWS_LOCALSTACK_ENDPOINT should be the Docker-internal URL of the LocalStack container:

AWS_LOCALSTACK_ENDPOINT: http://host.docker.internal:4566
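Putting the two client configs together, here is a minimal sketch. The helper names are illustrative, the endpoint is hard-coded where a real app would use configService.get('AWS_LOCALSTACK_ENDPOINT'), and the returned objects are what you would pass to the SQS/S3 client constructors of the AWS SDK:

```typescript
// Endpoint of the LocalStack container as seen from another container;
// in a real app this comes from configService.get('AWS_LOCALSTACK_ENDPOINT').
const LOCALSTACK_ENDPOINT = "http://host.docker.internal:4566";

// Config object for `new SQSClient(...)` (@aws-sdk/client-sqs)
function sqsClientConfig(endpoint: string = LOCALSTACK_ENDPOINT) {
  return { endpoint, region: "us-west-1" };
}

// Config object for `new S3Client(...)` (@aws-sdk/client-s3).
// forcePathStyle makes the SDK address objects as
// http://host:4566/<bucket>/<key>, which LocalStack expects,
// instead of virtual-hosted-style bucket URLs.
function s3ClientConfig(endpoint: string = LOCALSTACK_ENDPOINT) {
  return { endpoint, forcePathStyle: true, region: "us-west-1" };
}

console.log(sqsClientConfig());
console.log(s3ClientConfig());
```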

Execute the following command inside the LocalStack container to upload a file to the S3 bucket. You need to create the file (here, sample.txt) inside the LocalStack container first.

awslocal s3api put-object --bucket <bucket-name> --key sample.txt --body sample.txt

Now you are all set! If everything is configured properly, you should receive events from the SQS queue and be able to fetch the file from the S3 bucket using the key carried in the SQS event.
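The body of each SQS message is the S3 event notification JSON; a small sketch of pulling the bucket and key out of it (the event shape follows the S3 notification format, and the function name is mine):

```typescript
// An S3 event notification delivered through SQS carries a Records array;
// each record names the bucket and the (URL-encoded) object key.
interface S3Ref { bucket: string; key: string; }

function extractS3Refs(messageBody: string): S3Ref[] {
  const event = JSON.parse(messageBody);
  return (event.Records ?? []).map((r: any) => ({
    bucket: r.s3.bucket.name,
    // Keys arrive URL-encoded, with spaces as '+'
    key: decodeURIComponent(r.s3.object.key.replace(/\+/g, " ")),
  }));
}

// Example: a trimmed-down notification like the one LocalStack emits
const sample = JSON.stringify({
  Records: [{ s3: { bucket: { name: "my-bucket" }, object: { key: "sample.txt" } } }],
});
console.log(extractS3Refs(sample)); // → bucket "my-bucket", key "sample.txt"
```

You would then pass the extracted bucket and key to your S3 client's GetObject call to fetch the uploaded file.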