Testing terraform with moto, part 1

March 11, 2022   

For my first 10% day at my new employer, I decided to experiment with testing the Terraform infrastructure-as-code behind the Kubernetes clusters I maintain.

The current setup utilises a home-grown tool that works with AWS CloudFormation and Terraform. The tool is primarily built around CloudFormation, and expresses infrastructure as projects: these encompass many similar “stacks”, in CloudFormation terminology, but in builder they represent different environments of the same services. It adds some conventions, defaults for all projects, and further inherited configuration for different environments, creating consistency across applications with a relatively small amount of configuration. The tool also has Terraform capabilities, used to extend projects that rely on non-AWS services (which are not supported by CloudFormation).

Although eLife uses AWS’s EKS service for Kubernetes, the bootstrapping is built entirely upon the Terraform generation part of builder. There are tests, but they amount to checking that the expected Terraform configuration is generated from changes to the project yaml config file. I was hoping to extend testing to include a mock AWS service of some description, allowing us to test the full effect of changes (as best we can without running directly on AWS): from generating changes to existing infrastructure during plan, to applying and verifying it ends up in the correct state.

I considered two projects that fit the bill for an “emulated” AWS service: Localstack and Moto. Moto grew out of the boto3 project’s need to test its AWS API client, and Localstack is built on top of Moto. It can be considered Moto’s big brother, extending the APIs to not just mock services but actually run them. For my purposes, I decided to start with just Moto, primarily to start small, and because I assume I can move to Localstack if and when I extend this to validate that an actual cluster is created.

So, with our scope set, I set off on my adventure.

Getting started

First thing was to get Moto up and running. I used the motoserver/moto Docker image available here, running it locally with Docker:

> docker run --rm --name kubernetes-cluster-provisioning-test -p 5000:5000 motoserver/moto:latest
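If you prefer to manage the container declaratively (for example, to run it in CI alongside other services), the same server can be described in a small docker-compose file. A sketch, assuming the same image and port mapping as the command above:

```yaml
# Hypothetical docker-compose.yml for the same Moto server
version: "3.8"
services:
  moto:
    image: motoserver/moto:latest
    container_name: kubernetes-cluster-provisioning-test
    ports:
      - "5000:5000"
```

Then `docker compose up -d` brings the mock AWS up in the background.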

To test this mocked AWS server, I created an AWS CLI named profile using the awscli-plugin-endpoint plugin. Following the instructions at that git repo, on my Mac I ended up with this in my ~/.aws/config:

[plugin]
endpoint = awscli_plugin_endpoint
cli_legacy_plugin_path = "/opt/homebrew/lib/python3.9/site-packages"

[default]
region = us-east-1
output = json

[profile local]
eks =
    endpoint_url = http://localhost:5000/
apigateway =
    endpoint_url = http://localhost:5000/
kinesis =
    endpoint_url = http://localhost:5000/
dynamodb =
    endpoint_url = http://localhost:5000/
s3 =
    endpoint_url = http://localhost:5000/
firehose =
    endpoint_url = http://localhost:5000/
lambda =
    endpoint_url = http://localhost:5000/
sns =
    endpoint_url = http://localhost:5000/
sqs =
    endpoint_url = http://localhost:5000/
redshift =
    endpoint_url = http://localhost:5000/
elasticsearch =
    endpoint_url = http://localhost:5000/
ses =
    endpoint_url = http://localhost:5000/
route53 =
    endpoint_url = http://localhost:5000/
cloudformation =
    endpoint_url = http://localhost:5000/
cloudwatch =
    endpoint_url = http://localhost:5000/
ssm =
    endpoint_url = http://localhost:5000/
secretsmanager =
    endpoint_url = http://localhost:5000/
stepfunctions =
    endpoint_url = http://localhost:5000/
eventbridge =
    endpoint_url = http://localhost:5000/
sts =
    endpoint_url = http://localhost:5000/
iam =
    endpoint_url = http://localhost:5000/
ec2 =
    endpoint_url = http://localhost:5000/

I also added the test authentication to ~/.aws/credentials:

[local]
aws_access_key_id = test
aws_secret_access_key = test
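As an aside, since Moto does not validate credentials, the same dummy values can be supplied via environment variables instead of ~/.aws/credentials, which can be handy in CI. A sketch:

```shell
# Dummy credentials for the mocked AWS; Moto accepts any values
export AWS_ACCESS_KEY_ID=test
export AWS_SECRET_ACCESS_KEY=test
export AWS_DEFAULT_REGION=us-east-1

# confirm the variables are set for subsequent aws-cli calls
echo "$AWS_ACCESS_KEY_ID $AWS_DEFAULT_REGION"
```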

With both those in place, I was able to run a few different aws-cli commands against the mocked AWS:

> aws --region us-east-1 ec2 describe-images --filters Name=name,Values=amazon-eks-node-*
{
    "Images": [
        {
            "Architecture": "x86_64",
            "CreationDate": "2022-03-13T20:01:33.000Z",
            "ImageId": "ami-ekslinux",
            "ImageLocation": "amazon/amazon-eks",
            "ImageType": "machine",
            "Public": true,
            "KernelId": "None",
            "OwnerId": "801119661308",
            "Platform": "Linux/UNIX",
            "RamdiskId": "ari-1a2b3c4d",
            "State": "available",
            "BlockDeviceMappings": [
                {
                    "DeviceName": "/dev/sda1",
                    "Ebs": {
                        "DeleteOnTermination": false,
                        "SnapshotId": "snap-87e311c4",
                        "VolumeSize": 15,
                        "VolumeType": "standard"
                    }
                }
            ],
            "Description": "EKS Kubernetes Worker AMI with AmazonLinux2 image",
            "Hypervisor": "xen",
            "ImageOwnerAlias": "amazon",
            "Name": "amazon-eks-node-linux",
            "RootDeviceName": "/dev/sda1",
            "RootDeviceType": "ebs",
            "Tags": [],
            "VirtualizationType": "hvm"
        }
    ]
}

> echo test > test.txt

> aws s3 mb s3://testbucket
make_bucket: testbucket

> aws s3 cp ./test.txt s3://testbucket/
upload: ./test.txt to s3://testbucket/test

> aws s3 ls testbucket
2022-03-11 16:24:53          5 test.txt

Configure Terraform

To connect Terraform to the Moto instance, I used this provider config in a file called test_moto.tf in a new directory:

// setup provider for moto
provider "aws" {
  region                      = "us-east-1"
  access_key                  = "test"
  secret_key                  = "test"
  s3_use_path_style           = true
  skip_credentials_validation = true
  skip_metadata_api_check     = true
  skip_requesting_account_id  = true

  endpoints {
    ec2 = "http://localhost:5000"
    eks = "http://localhost:5000"
    iam = "http://localhost:5000"
    s3  = "http://localhost:5000"
  }
}

I’ve just added endpoints for the services I expect to use, but there is a whole list of other endpoints that can be overridden for Terraform here.
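Since every service points at the same Moto URL, repeating it per service gets tedious as the list grows. One option is to pull the URL out into a Terraform variable; a sketch, where var.moto_endpoint is my own made-up name:

```hcl
# Hypothetical refactor: a single variable for the Moto endpoint
variable "moto_endpoint" {
  default = "http://localhost:5000"
}

provider "aws" {
  # ... same region, credentials and skip_* settings as above ...
  endpoints {
    ec2 = var.moto_endpoint
    eks = var.moto_endpoint
    iam = var.moto_endpoint
    s3  = var.moto_endpoint
  }
}
```

This also makes it easy to point the same config at a different mock (say, Localstack on another port) later.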

At this point you can run terraform init to get the aws provider:

> terraform init

Initializing the backend...

Initializing provider plugins...
- Finding latest version of hashicorp/aws...
- Installing hashicorp/aws v4.4.0...
- Installed hashicorp/aws v4.4.0 (signed by HashiCorp)

Terraform has created a lock file .terraform.lock.hcl to record the provider
selections it made above. Include this file in your version control repository
so that Terraform can guarantee to make the same selections by default when
you run "terraform init" in the future.

Terraform has been successfully initialized!

You may now begin working with Terraform. Try running "terraform plan" to see
any changes that are required for your infrastructure. All Terraform commands
should now work.

If you ever set or change modules or backend configuration for Terraform,
rerun this command to reinitialize your working directory. If you forget, other
commands will detect it and remind you to do so if necessary.
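Because init picked up the latest provider (v4.4.0 at the time of writing), it may be worth pinning the version alongside the lock file, since the s3_use_path_style argument used above requires the 4.x provider. A sketch of the constraint block:

```hcl
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 4.4"
    }
  }
}
```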

Finally, to test that it was working correctly, I added some data sources to read the same values as above with the aws-cli tool:

# get test.txt
data "aws_s3_object" "test_file" {
  bucket = "testbucket"
  key    = "test.txt"
}

# get the AMIs filtered by amazon-eks-node*
data "aws_ami" "test_ami" {
  filter {
    values = ["amazon-eks-node*"]
    name   = "name"
  }

  most_recent = true
  owners      = ["amazon"]
}

# output during plan/apply
output "bucket_file_body" {
  value = data.aws_s3_object.test_file.body
}

output "aws_ami_architecture" {
  value = data.aws_ami.test_ami.architecture
}

If all is well, when you run terraform plan, your output should look something like this:

> terraform plan

Changes to Outputs:
  + aws_ami_architecture = "x86_64"
  + bucket_file_body     = <<-EOT
        test
    EOT

You can apply this plan to save these new output values to the Terraform state, without changing any real infrastructure.

Note: You didn't use the -out option to save this plan, so Terraform can't guarantee to take exactly these actions if you run "terraform apply" now.

That’s it for this post; next time I’ll attempt to connect up real Terraform config, and see how far we can get with provisioning.

- Scott