Terraform Pipeline Part 1 - A New Frontier
I know you’ve all been waiting forever and a half, but here we go, another blog post…finally. One of these days I’ll get this consistent posting thing down. I haven’t used up every excuse yet for why I can’t get these out more often, so you’ll be glad to know my time has not been wasted. I’ve continued my learning and finally came to this fun experiment as part of what I have been playing with lately: Terraform and AWS Developer Tools.
It’s been a while since I first mentioned HashiCorp’s Terraform (two blog posts ago), but I have spent a lot of time working with it recently and have really grown to enjoy it. I have not used Terraform Cloud much (yet), and HashiCorp did just announce their HashiCorp Cloud Platform private beta on AWS, but here we are going to build an easy-to-use Terraform pipeline to configure our AWS infrastructure.
Quick follow-up: I rushed this post and did not explain much of the Terraform template, so be sure to read the next part to get a much more complete picture of the pipeline in action.
The Setup
In order to be able to use this pipeline, a few pieces will need to be created first.
As always, look over what some idiot on the internet is telling you to deploy on your account. Better to have an idea of what they want you to do than to run something that may be malicious.
Required:
- AWS account
- GitHub Account
- GitHub personal access token
- Give the token the following permissions: repo and admin:repo_hook
- Make sure you take note of the token in a temporary place.
Most of this is pretty straightforward, but if you are like me and forget constantly, follow the links for refreshers on how to set each up.
Setup GitHub personal access token in AWS
- Once all of these are set up, log into your AWS account and navigate to Secrets Manager, under Security, Identity, & Compliance. In the “Get Started” section, click Store a new secret (Currently [6/22/20] costs: $0.40 per secret per month; $0.05 per 10,000 API calls)
- Select: Other type of secrets. Under Secret key/value, use: GitHub Auth for the key and your GitHub personal access token as the value. Click Next.
- For the Secret name use GitHubAuth. Click Next twice, then click Store.
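If you would rather codify this step instead of clicking through the console, the same secret could be sketched in CloudFormation. This is a minimal sketch, assuming a `GitHubToken` parameter of my own invention for illustration; it is not part of the stack deployed in this post:

```yaml
# Hypothetical sketch: store the GitHub token via CloudFormation instead of the console.
# GitHubToken is an illustrative parameter, not something this post's stack defines.
Parameters:
  GitHubToken:
    Type: String
    NoEcho: true                # keep the token out of console and API output
Resources:
  GitHubAuthSecret:
    Type: AWS::SecretsManager::Secret
    Properties:
      Name: GitHubAuth
      SecretString: !Sub '{"GitHubAuth":"${GitHubToken}"}'
```

Either way, the end result is the same key/value pair that the pipeline template resolves later on.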
Setup GitHub Repo
- Fork this GitHub repo into your account.
The Deployment
Deploy CloudFormation Template
- In AWS, go to CloudFormation, under Management & Governance. Now we can pull the CF template from my GitHub repo and… where is my pull from GitHub, AWS?!? Oh, only S3?? Or I can upload it or copy and paste it all into the Designer…
- Fine, here it is nice and easy in an S3 Bucket: https://vmadbro-cf.s3.amazonaws.com/main-rev1.yaml. You can copy the link and use the Template source Amazon S3 URL or download the template and Upload a template file into CloudFormation.
- Once you have picked your method of deployment, click Next. Pick a stack name (stick to alphanumeric lowercase starting with a letter) and enter the name of your GitHub account (e.g., mine is https://github.com/gmadro, so I would put in: gmadro). Click Next.
- Click Next for Configure stack options. On the last page scroll to the bottom and check the box since we will be building IAM roles with this stack. Click Create stack.
The Sweet Taste of Victory
So…. everything works right?? What do you mean you don’t know? Ok, I guess it’s time for some validation. If you go to your EC2 console you should see a nice little t2.micro instance named: vMadBro-Build1. It’s there? Great! You follow directions like the best of them and there is nothing left to talk about.
You want details don’t you?
They always want details… Ok, let’s break down what we did above.
What’s under the hood?
AWS CodePipeline is an easy to use continuous delivery service. In our CloudFormation template, used above, we created a pipeline with two stages: Source and Build.
Source
This is what defines the location of the source code for our “build”.
Name: Source
Actions:
  - Name: GitHubSource
    ActionTypeId:
      Category: Source
      Owner: ThirdParty
      Provider: GitHub
      Version: 1
    OutputArtifacts:
      - Name: GitHubArtifact
    Configuration:
      Owner:
        Ref: GitHubAccount
      Repo: terraRepo
      Branch: master
      OAuthToken: '{{resolve:secretsmanager:GitHubAuth:SecretString:GitHubAuth}}'
Notes:
- The ActionTypeId specifies GitHub as our source code repository. Because we established a GitHub webhook to the pipeline, any changes to the code in that repository will trigger a source pull and build.
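The webhook itself is a separate resource in the template. A minimal sketch looks something like this; the `TerraformPipeline` logical name is my assumption, though the secret lookup and action name match the snippet above:

```yaml
# Hypothetical sketch of the GitHub webhook resource; logical names are assumed.
PipelineWebhook:
  Type: AWS::CodePipeline::Webhook
  Properties:
    Authentication: GITHUB_HMAC
    AuthenticationConfiguration:
      SecretToken: '{{resolve:secretsmanager:GitHubAuth:SecretString:GitHubAuth}}'
    Filters:
      - JsonPath: $.ref                    # only fire on pushes to the watched branch
        MatchEquals: refs/heads/{Branch}
    TargetPipeline: !Ref TerraformPipeline
    TargetAction: GitHubSource
    TargetPipelineVersion: !GetAtt TerraformPipeline.Version
    RegisterWithThirdParty: true           # let CloudFormation create the hook in GitHub
```

With RegisterWithThirdParty set, CloudFormation registers the hook in the GitHub repo for you, which is why the personal access token needs the admin:repo_hook permission.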
- Secrets Manager has to be utilized for storing the GitHub personal access token, because the OAuthToken parameter does not accept an AWS Systems Manager Parameter Store reference.
Build
This is the compute stage of the pipeline. By default, it will look for a buildspec.yml file in the repo root that defines the commands to execute whenever the build stage is called.
Name: Build
Actions:
  - Name: TerraformRun
    ActionTypeId:
      Category: Build
      Owner: AWS
      Provider: CodeBuild
      Version: 1
    Configuration:
      ProjectName: !Sub '${AWS::StackName}-Run'
      EnvironmentVariables: !Sub >-
        [{"name":"STATE_BUCKET","value":"${AWS::StackName}-statestore"},{"name":"LOCKDB_TABLE","value":"${AWS::StackName}-lockdb"},{"name":"REV","value":"${TemplateRevision}"}]
    InputArtifacts:
      - Name: GitHubArtifact
Notes:
- AWS CodeBuild is what provides the build capabilities for this stage. It is instantiated in the template with some basic environment settings to set the compute power required for this build.
- The environment variables are used to pass in the Terraform state S3 Bucket, DynamoDB table used for locking, and a template revision into the build.
- The InputArtifact, GitHubArtifact, is the OutputArtifact provided by the Source stage.
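For reference, the CodeBuild project that this stage calls is declared elsewhere in the template. Here is a minimal sketch; the image and compute size are assumptions on my part, not necessarily what the template uses:

```yaml
# Hypothetical sketch of the CodeBuild project behind the Build stage.
TerraformBuild:
  Type: AWS::CodeBuild::Project
  Properties:
    Name: !Sub '${AWS::StackName}-Run'
    ServiceRole: !GetAtt CodeBuildRole.Arn   # IAM role defined elsewhere in the stack
    Source:
      Type: CODEPIPELINE                     # source arrives as the pipeline artifact
    Artifacts:
      Type: CODEPIPELINE
    Environment:
      Type: LINUX_CONTAINER
      ComputeType: BUILD_GENERAL1_SMALL      # plenty of compute for a Terraform run
      Image: aws/codebuild/amazonlinux2-x86_64-standard:3.0
```

The CODEPIPELINE source type is what lets the Build stage receive the zipped GitHubArtifact instead of pulling from GitHub a second time.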
BuildSpec
version: 0.2
env:
  variables:
    STATE_BUCKET: "pipeline"
    LOCKDB_TABLE: "table"
phases:
  install:
    commands:
      - echo Install Terraform
      - wget https://releases.hashicorp.com/terraform/0.12.26/terraform_0.12.26_linux_amd64.zip
      - unzip terraform_0.12.26_linux_amd64.zip
      - mv terraform /usr/local/bin/
  build:
    commands:
      - echo Run terraform init and apply
      - cd $REV
      - terraform init -backend-config="bucket=$STATE_BUCKET" -backend-config="dynamodb_table=$LOCKDB_TABLE"
      - terraform apply -auto-approve
Notes:
- Each time the build stage is called, the above set of commands are executed. The container will pull down Terraform, initialize the backend with the provided S3 Bucket and DynamoDB table, and then apply the configuration.
The last required piece of the pipeline is the Artifact Store. It is an S3 Bucket that is used to store the source code in a zip file, or ‘artifact’.
ArtifactStore:
  Type: S3
  Location:
    Ref: PipelineArtifactStore
The remaining elements of the CloudFormation template define the IAM policies and roles, the S3 buckets used for the CodePipeline artifact store and the Terraform state file, and the DynamoDB table that Terraform uses to maintain a lock on the state file during a Terraform run.
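That lock table is simpler than it sounds: the only thing Terraform’s S3 backend requires is a string partition key named exactly LockID. A minimal sketch (the logical name and on-demand billing mode are my assumptions):

```yaml
# Hypothetical sketch of the Terraform state-lock table.
LockTable:
  Type: AWS::DynamoDB::Table
  Properties:
    TableName: !Sub '${AWS::StackName}-lockdb'
    BillingMode: PAY_PER_REQUEST     # assumption; on-demand keeps an idle table cheap
    AttributeDefinitions:
      - AttributeName: LockID        # Terraform requires exactly this attribute name
        AttributeType: S
    KeySchema:
      - AttributeName: LockID
        KeyType: HASH
```

The table name here matches the LOCKDB_TABLE environment variable passed into the build stage, which is how the buildspec’s terraform init finds it.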
Clean up your mess and go home
You made it through it all and you are not impressed? Fine. At least clean up after yourself.
Terraform will create the one micro instance, which you can delete (or, if you are feeling bold, you can alter the buildspec file to run a terraform destroy, HA!). The CloudFormation stack will clean up most of its mess, but you will need to empty the two S3 buckets before it can do the job properly. After that, delete the stack and the secret in Secrets Manager.
No more traces of my insanity left on your AWS account.
Final Words
As always, I hope you enjoyed reading through and (hopefully) trying out my CloudFormation template to make this continuous pipeline for Terraform. I know we didn’t get too deep into Terraform in this post, but remember, it is just Part 1. Yes my friends, there is plenty of insanity to come! In future posts, I’ll dive deeper into the pipeline and Terraform itself. Maybe we get really nutty and have a Terraform configuration that creates its own pipeline? Who knows, we’ve already gone off the deep end. I’ll also look to try it out and compare it against HashiCorp’s own Terraform Cloud.
I need to get some rest now. Thank you for your time and feel free to reach out to me with any questions or feedback on LinkedIn or Twitter @GregMadro.