Terraform Pipeline Part 2 - Now with more er

Wait, it’s barely been three weeks and I’m already posting again? What can I say, this has been a very fun project to work on. This time I will be expanding on (and correcting) the Terraform pipeline that I started in part one. The goal here is to add a lot more polish to the pipeline and actually make it more useful. I will also actually show and explain the Terraform template this time, since in the first post I apparently omitted that entire part… Really professional there, buddy.

I am sure you have noticed the new look to my blog (or not, if you are new to the site. Welcome!). I found a friendlier piece of software to build it with: Hugo. I think the readability is a lot better, and it is less of a hassle to test and update what I am working on.

So, what’s the er? That, my friends, would be HashiCorp’s Packer and Docker Inc’s Docker. Packer, like every other HashiCorp product, is built with IaC at its core; it builds machine images using a template design similar to Terraform’s. Docker is… containers. What are containers, you ask? Read here. The quick and dirty version: a Docker container will serve as the app being deployed with our Terraform pipeline. It gives us something more substantial than some random EC2 instance.

Refined Setup

In order to work with this revision of the pipeline, the original requirements still apply, plus the following:

If you’ve been following along from part one, your GitHub access token should still be configured in AWS Secrets Manager, and you should have my GitHub repo forked with our infrastructure (and now application) code. If you purged those, head back over to that post for what is required.

Unleash the Kraken!

Deploy CloudFormation Template - rev2

Much success!

Once again we have a successful deployment! Doesn’t work…? What did you break? Ok, well, it should work. If it doesn’t, let me know, as I would love to see what I missed in my configuration and fix it. For now, I am going to continue on the assumption that it works. So what is different with our pipeline this time around? For starters, there is a very tangible way to see whether the deployment really works. This time the pipeline is not deploying just any ole EC2 instance; it is a custom-built AMI with a Docker-based website inside it. Let’s dive in.

Check out this engine

The CloudFormation template did not change much from rev1 other than some additional parameters, so I will not dive into those changes here. Instead, most of the changes are in the CodeBuild buildspec and the Terraform configuration template. I’ll also go over the new code from the Packer template and the Docker app.

Starting chronologically:

Buildspec

buildspec.yaml

env:
  variables:
    STATE_BUCKET: "pipeline"
    LOCKDB_TABLE: "table"
    REV: "revision"
    IMAGE_NAME: "name"
    SSH_KEY: "key_name"

Notes:

- These values are just placeholders. The pipeline’s CodeBuild project overrides them with the real S3 state bucket, DynamoDB lock table, source revision path, AMI name, and EC2 key pair name, so nothing environment-specific gets hardcoded into the buildspec itself.
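For reference, the swap happens in the CodeBuild project inside the CloudFormation template. Here is a minimal sketch of what that mapping looks like; the property names are standard CloudFormation, but the parameter names are illustrative, so check the actual template in the repo:

# Excerpt of an AWS::CodeBuild::Project resource. The EnvironmentVariables
# list is what replaces the placeholder values in buildspec.yaml.
# Parameter names (StateBucket, LockTable, ...) are illustrative only.
Environment:
  ComputeType: BUILD_GENERAL1_SMALL
  Image: aws/codebuild/amazonlinux2-x86_64-standard:3.0
  Type: LINUX_CONTAINER
  EnvironmentVariables:
    - Name: STATE_BUCKET
      Value: !Ref StateBucket
    - Name: LOCKDB_TABLE
      Value: !Ref LockTable
    - Name: IMAGE_NAME
      Value: !Ref ImageName
    - Name: SSH_KEY
      Value: !Ref SshKeyName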

phases:
  install:
    commands:
      - echo Install Terraform
      - wget https://releases.hashicorp.com/terraform/0.12.28/terraform_0.12.28_linux_amd64.zip
      - unzip terraform_0.12.28_linux_amd64.zip
      - mv terraform /usr/local/bin/
      - echo Install Packer
      - wget https://releases.hashicorp.com/packer/1.6.0/packer_1.6.0_linux_amd64.zip
      - unzip packer_1.6.0_linux_amd64.zip
      - mv packer /usr/local/bin/

Notes:

- The CodeBuild image does not ship with Terraform or Packer, so the install phase pulls pinned versions (Terraform 0.12.28 and Packer 1.6.0) straight from HashiCorp’s release site and drops them into /usr/local/bin.
- Pinning exact versions keeps builds reproducible; bump them on purpose, not by accident.
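If the install phase ever gives you grief, a couple of version checks at the end of it make the CodeBuild logs much easier to read. This is just a habit of mine, not something the pipeline needs:

  install:
    commands:
      # ...existing download/unzip/mv commands...
      - terraform version
      - packer version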

  build:
    commands:
      - echo Change directory and run Packer to build AMI image
      - cd $REV/infrastructure/packer
      - packer build -var "image_name=$IMAGE_NAME" build.pkr.hcl
      - echo Change directory and Run terraform init and apply to build Infrastructure
      - cd ../terraform
      - terraform init -backend-config="bucket=$STATE_BUCKET" -backend-config="dynamodb_table=$LOCKDB_TABLE"
      - terraform apply -var="image_name=$IMAGE_NAME" -var="ssh_key=$SSH_KEY" -auto-approve
 

Notes:

- Packer runs first so the custom AMI exists before Terraform goes looking for it.
- terraform init gets its state bucket and lock table from the environment variables via -backend-config, which is why the backend block further down can hold dummy values.
- -auto-approve is there because nobody is sitting in CodeBuild to type “yes”.

Packer

Next up is the Packer build template. Packer starts from a base source image and uses provisioners to turn it into a customized machine image.

source "amazon-ebs" "build1" {
  ami_name = "${var.image_name}"
  region = "us-east-1"
  instance_type = "t2.micro"
  source_ami = "ami-08f3d892de259504d"
  force_deregister = true
  force_delete_snapshot = true
 
  ssh_username = "ec2-user"
}

Notes:

- The source block defines the starting point: a t2.micro in us-east-1, built from the source_ami and reachable over SSH as ec2-user.
- force_deregister and force_delete_snapshot let repeat builds replace the previous AMI of the same name instead of failing on a name collision.

build {
  sources = [
    "source.amazon-ebs.build1"
  ]

  provisioner "file" {
    source      = "../../app/"
    destination = "/tmp"
  }

  provisioner "shell" {
    inline = [
      "sudo yum install -y docker",
      "sudo systemctl start docker",
      "sudo docker build /tmp -t vmadbro/apache:1.0",
      "sudo systemctl enable docker",
      "sudo docker run -d --name Apache --restart unless-stopped -p 80:80 vmadbro/apache:1.0"
    ]
  }
}

Notes:

- The file provisioner copies the app/ directory (the Dockerfile and index.html) to /tmp on the build instance.
- The shell provisioner installs Docker, builds the image from /tmp, enables the Docker service so it starts on boot, and runs the container with --restart unless-stopped so the website survives reboots of any instance launched from this AMI.
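One habit worth picking up: Packer can sanity-check a template locally before you commit it and burn a pipeline run finding a typo. Something like this, where the image name is a throwaway value:

# Run from infrastructure/packer/ — checks syntax and variable wiring
# without building anything. "my-test-image" is just an example value.
packer validate -var "image_name=my-test-image" build.pkr.hcl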

Docker

The Docker “app” is ridiculously simple. It is just an Apache web server image that our index.html file gets injected into. I am still really early in my Docker learning, so I wasn’t going to get too fancy yet. It mainly serves the purpose of being something that is deployed with our pipeline.

FROM httpd:2.4
COPY index.html /usr/local/apache2/htdocs/
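If you want to poke at the container without waiting on a full pipeline run, you can build and run it locally, assuming you have Docker installed. The tag matches what the Packer provisioner uses; the local port is arbitrary:

# From the repo root: build the image from app/ and run it on port 8080
docker build -t vmadbro/apache:1.0 ./app
docker run -d --name apache-test -p 8080:80 vmadbro/apache:1.0
curl http://localhost:8080   # should return index.html

# Clean up when you are done
docker rm -f apache-test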

Terraform

Now I get to talk in great detail about the one template that I totally forgot to show in the first part…

data "aws_ami" "packer" {
  owners = ["self"]
 
  filter {
    name = "name"
    values = [var.image_name]
  }
}
 
data "aws_vpc" "default" {
  default = true
}

Notes:

- The aws_ami data source finds the AMI Packer just built: owned by this account (“self”) and matching the image name.
- The aws_vpc data source grabs the default VPC so the security group below has somewhere to live.

terraform {
  backend "s3" {
    bucket         = "handled-by-pipeline"
    key            = "terraform.tfstate"
    region         = "us-east-1"
    dynamodb_table = "handled-by-pipeline"
  }
}

Notes:

- The bucket and dynamodb_table values here are dummies on purpose; the real ones are injected at terraform init time via -backend-config in the buildspec.
- The S3 backend stores the state file, and the DynamoDB table provides state locking so two pipeline runs can’t stomp on each other’s state.

resource "aws_security_group" "allow_http_ssh" {
  name = "${var.image_name}-allow_http_ssh"
  description = " Allow HTTP and SSH traffic to this instance from ANYWHERE"
  vpc_id = data.aws_vpc.default.id
 
  ingress {
    description = "Custom HTTP from WORLD"
    from_port = 80
    to_port = 80
    protocol = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }
 
  ingress {
    description = "Custom SSH from WORLD"
    from_port = 22
    to_port = 22
    protocol = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }
 
  egress {
    from_port = 0
    to_port = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }
 
  tags = {
    Name = "Allow HTTP/SHH"
  }
}

Notes:

- This opens HTTP (80) and SSH (22) to the entire internet and allows all outbound traffic. Fine for a throwaway demo; very much not fine for anything real.
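If the “from ANYWHERE” part makes you itch, the easy tweak is to limit SSH to your own address. A quick example (203.0.113.25 is a documentation IP, swap in your real one):

ingress {
  description = "SSH from my IP only"
  from_port   = 22
  to_port     = 22
  protocol    = "tcp"
  # Replace with your own public IP
  cidr_blocks = ["203.0.113.25/32"]
}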

resource "aws_instance" "remote"{
    ami = data.aws_ami.packer.id
    instance_type = "t2.micro"
    key_name = var.ssh_key
 
    tags = {
        Name = var.image_name
    }
    vpc_security_group_ids = [aws_security_group.allow_http_ssh.id]
}

Notes:

- The instance is a t2.micro launched from the Packer-built AMI, with the key pair from the pipeline parameters and the security group above attached.
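One small addition I would suggest (it is not in the repo, just an idea): an output block, so Terraform prints the instance’s public DNS at the end of the apply instead of making you fish it out of the EC2 console.

# Prints the address to visit once "terraform apply" finishes
output "public_dns" {
  value = aws_instance.remote.public_dns
}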

This thing will do 0-60 in 2.5 seconds flat!

I think I covered everything this time around. Who knows… eventually all of these words become a blur and coherent sentences are a challenge. All of this awesome code will yield a nice little Docker website that one can visit as verification of success. The address will be http://<EC2 Public DNS> (e.g. http://ec2-01-234-567-89.compute-1.amazonaws.com). It is by no means production ready, but it is an example of what one can do with a bit of “code”. IaC does take time to learn and perfect, but the beauty is in its consistency and eventual ease of change. Instead of rebuilding something through some crazy orchestrator or GUI, you have a few lines of text that can change an entire infrastructure.

Tinker around and put away your toys

To really see the continuous deployment side of this pipeline, make changes to the code in your GitHub repo. When changes are committed, a webhook triggers CodePipeline and rebuilds your entire app. Experiment a bit to see how easy it is to change the environment. It is also easy to break it, but that is the fun of learning.
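Something as small as tweaking the landing page will do it. Assuming you cloned your fork locally, it looks like this:

# Edit the page, commit, and push — the push fires the webhook and
# CodePipeline rebuilds the AMI and re-applies Terraform.
# Use "main" if that is what your fork calls its default branch.
vi app/index.html
git add app/index.html
git commit -m "Update the landing page"
git push origin master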

As before, make sure you clean up anything you don’t want to pay for. Most of this is relatively cheap (or free, depending on what you use), but I don’t want anyone getting an unexpected bill. You will have to empty the two S3 buckets this creates before CloudFormation can delete the stack. After that, you still have the EC2 instance, the Packer AMI and its associated snapshot, and the GitHub access token secret to take care of.
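If you prefer the CLI for cleanup, something like this covers it. Every ID and name below is a placeholder; yours will differ:

# Empty the two S3 buckets so CloudFormation can delete the stack
aws s3 rm s3://YOUR-ARTIFACT-BUCKET --recursive
aws s3 rm s3://YOUR-STATE-BUCKET --recursive

# Terminate the instance, then deregister the AMI and delete its snapshot
aws ec2 terminate-instances --instance-ids i-0123456789abcdef0
aws ec2 deregister-image --image-id ami-0123456789abcdef0
aws ec2 delete-snapshot --snapshot-id snap-0123456789abcdef0

# Remove the GitHub access token secret (skips the recovery window)
aws secretsmanager delete-secret --secret-id YOUR-SECRET-NAME --force-delete-without-recovery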

The end… is a ways off

I hope this post expands well on what I started in the first one. I swear I think it is complete this time, and I hope you found it informative and fun to mess around with. The quick turnaround on this post was due to the fun I had experimenting and building this thing out. I am far from done with it and will keep adding functionality. My next steps will be to add some CI to our CD, and maybe some testing and validation. I make no promises, as my attention span is… oh, a piece of candy… where was I? If you have made it this far, you are starting to get used to my chaotic nature.

As always, I appreciate you taking the time to read, and thank you for any feedback you can give. Feel free to reach out to me on LinkedIn or Twitter @GregMadro.