Terraform Pipeline Part 1 - A New Frontier

I know you’ve all been waiting forever and a half, but here we go, another blog post…finally. One of these days I’ll get this consistent posting thing down. I haven’t used up every excuse for why I can’t get these out more often, but I’m sure you’ll be glad to know my time has not been wasted. I’ve continued my learning, and it finally led to this fun experiment with what I’ve been playing with lately: Terraform and AWS Developer Tools.

It’s been a while since I first mentioned HashiCorp’s Terraform (two blog posts ago), but I have spent a lot of time working with it recently and have really grown to enjoy it. I have not used Terraform Cloud much (yet), and HashiCorp did just announce the HashiCorp Cloud Platform private beta on AWS, but here we are going to build an easy-to-use Terraform pipeline to configure our AWS infrastructure.

Quick follow-up: I rushed this post and did not explain much of the Terraform template, so be sure to read the next part for a much more complete picture of the pipeline in action.

The Setup

In order to be able to use this pipeline, a few pieces will need to be created first.

As always, look over what some idiot on the internet is telling you to deploy on your account. It’s better to have an idea of what they want you to do than to run something that may be malicious.

Required:

Most of this is pretty straightforward, but if you are like me and forget constantly, follow the links for refreshers on how to set each piece up.

Setup GitHub personal access token in AWS

Setup GitHub Repo
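
The CloudFormation template later pulls the GitHub token from Secrets Manager using a dynamic reference to a secret named GitHubAuth with a JSON key also named GitHubAuth. A minimal sketch of creating that secret with the AWS CLI (the placeholder token value is yours to fill in):

```shell
# Store the GitHub personal access token so the pipeline can resolve it.
# The secret name and the JSON key must both be "GitHubAuth" to match the
# {{resolve:secretsmanager:GitHubAuth:SecretString:GitHubAuth}} reference.
aws secretsmanager create-secret \
  --name GitHubAuth \
  --secret-string '{"GitHubAuth":"<your-personal-access-token>"}'
```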

The Deployment

Deploy CloudFormation Template

The Sweet Taste of Victory

So… everything works, right?? What do you mean you don’t know? OK, I guess it’s time for some validation. If you go to your EC2 console, you should see a nice little t2.micro instance named vMadBro-Build1. It’s there? Great! You follow directions like the best of them, and there is nothing left to talk about.
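
If you’d rather validate from the command line, a quick sketch with the AWS CLI (assuming the instance got the Name tag vMadBro-Build1 from the Terraform config):

```shell
# Look up the instance the pipeline's Terraform run should have created
aws ec2 describe-instances \
  --filters "Name=tag:Name,Values=vMadBro-Build1" "Name=instance-state-name,Values=running" \
  --query 'Reservations[].Instances[].InstanceId'
```

If the query returns an instance ID, the pipeline ran Terraform successfully end to end.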

You want details don’t you?

They always want details… Ok, let’s break down what we did above.

What’s under the hood?

AWS CodePipeline is an easy-to-use continuous delivery service. In the CloudFormation template used above, we created a pipeline with two stages: Source and Build.

Source

This is what defines the location of the source code for our “build”.

Name: Source
Actions:
  - Name: GitHubSource
    ActionTypeId:
      Category: Source
      Owner: ThirdParty
      Provider: GitHub
      Version: 1
    OutputArtifacts:
      - Name: GitHubArtifact
    Configuration:
      Owner:
        Ref: GitHubAccount
      Repo: terraRepo
      Branch: master
      OAuthToken: '{{resolve:secretsmanager:GitHubAuth:SecretString:GitHubAuth}}'

Notes:

Build

This is the compute stage of the pipeline. By default, CodeBuild looks for a buildspec file in the root of the repository that defines the commands to execute whenever the build stage runs.

Name: Build
Actions:
  - Name: TerraformRun
    ActionTypeId:
      Category: Build
      Owner: AWS
      Provider: CodeBuild
      Version: 1
    Configuration:
      ProjectName: !Sub '${AWS::StackName}-Run'
      EnvironmentVariables: !Sub >-
        [{"name":"STATE_BUCKET","value":"${AWS::StackName}-statestore"},{"name":"LOCKDB_TABLE","value":"${AWS::StackName}-lockdb"},{"name":"REV","value":"${TemplateRevision}"}]
    InputArtifacts:
      - Name: GitHubArtifact

Notes:

BuildSpec

buildspec.yaml

version: 0.2

env:
  variables:
    STATE_BUCKET: "pipeline"
    LOCKDB_TABLE: "table"

phases:
  install:
    commands:
      - echo Install Terraform
      - wget https://releases.hashicorp.com/terraform/0.12.26/terraform_0.12.26_linux_amd64.zip
      - unzip terraform_0.12.26_linux_amd64.zip
      - mv terraform /usr/local/bin/
  build:
    commands:
      - echo Run terraform init and apply
      - cd $REV
      - terraform init -backend-config="bucket=$STATE_BUCKET" -backend-config="dynamodb_table=$LOCKDB_TABLE"
      - terraform apply -auto-approve

Notes:
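
The terraform init flags above supply only part of the backend configuration; the rest has to live in the Terraform code itself as a partial backend block. A minimal sketch of what that looks like (the key and region values here are assumptions, not from the template):

```hcl
terraform {
  backend "s3" {
    # bucket and dynamodb_table are intentionally omitted here; they are
    # passed at init time via -backend-config in the buildspec
    key    = "pipeline/terraform.tfstate"
    region = "us-east-1"
  }
}
```

Splitting the backend config this way lets the same Terraform code target different state buckets and lock tables per pipeline, since those names are derived from the stack name.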

The last required piece of the pipeline is the Artifact Store. It is an S3 bucket used to store the source code as a zip file, or ‘artifact’.

ArtifactStore:
  Type: S3
  Location:
    Ref: PipelineArtifactStore

The remaining elements of the CloudFormation template define the IAM policies and roles, the S3 buckets used for the CodePipeline artifact store and the Terraform state file, and the DynamoDB table that Terraform uses to hold a lock on the state file during a run.
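
For reference, Terraform’s S3 backend expects the lock table to have a string partition key named LockID. A sketch of that resource in CloudFormation (the logical name and billing mode are assumptions; the table name matches the LOCKDB_TABLE value used earlier):

```yaml
LockTable:
  Type: AWS::DynamoDB::Table
  Properties:
    TableName: !Sub '${AWS::StackName}-lockdb'
    BillingMode: PAY_PER_REQUEST
    AttributeDefinitions:
      # Terraform's S3 backend requires this exact key name and type
      - AttributeName: LockID
        AttributeType: S
    KeySchema:
      - AttributeName: LockID
        KeyType: HASH
```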

Clean up your mess and go home

You made it through it all and you are not impressed? Fine. At least clean up after yourself.

Terraform will create the one micro instance, which you can delete (or, if you are feeling bold, you can alter the buildspec file to run a terraform destroy, HA!). The CloudFormation stack will clean up most of its own mess, but you will need to empty the two S3 buckets before it can do the job properly. After that, delete the stack and the secret in Secrets Manager.
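
A sketch of that cleanup with the AWS CLI (the stack and bucket names here are placeholders for whatever you used):

```shell
# Empty both buckets first, or CloudFormation cannot delete them
aws s3 rm s3://my-stack-statestore --recursive
aws s3 rm s3://my-stack-artifacts --recursive

# Then remove the stack itself
aws cloudformation delete-stack --stack-name my-stack

# And the GitHub token secret (this schedules deletion with a recovery window)
aws secretsmanager delete-secret --secret-id GitHubAuth
```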

No more traces of my insanity left on your AWS account.

Final Words

As always, I hope you enjoyed reading through and (hopefully) trying out my CloudFormation template to build this continuous pipeline for Terraform. I know we didn’t get too deep into Terraform in this post, but remember, it is just Part 1. Yes my friends, there is plenty of insanity to come! In future posts, I’ll dive deeper into the pipeline and Terraform itself. Maybe we get really nutty and have a Terraform configuration that creates its own pipeline? Who knows, we’ve already gone off the deep end. I’ll also look to try out HashiCorp’s own Terraform Cloud and compare it against this approach.

I need to get some rest now. Thank you for your time and feel free to reach out to me with any questions or feedback on LinkedIn or Twitter @GregMadro.