Serverless Primer Part 2: Terraform and AWS Lambda
Back for more, are we? It seems you didn’t learn from the previous postings, so either you have trouble falling asleep (see a doctor), or reading these gives you some sort of entertainment (masochist). Regardless of your reasons, welcome back to learn more about serverless computing. In this post I will focus on deploying serverless infrastructure using HashiCorp’s Terraform. If you missed part 1, scroll down or just follow this link.
Terraform Overview
Terraform, by HashiCorp, is a very lightweight infrastructure deployment tool built entirely on infrastructure-as-code (IaC) principles (like the other great offerings from HashiCorp). It now comes in three flavors: Terraform CLI, Terraform Cloud, and Terraform Enterprise. In this post I will focus on Terraform CLI, which is a single executable that runs on a host computer. Terraform uses configuration files to define the infrastructure to deploy against many different providers (AWS, Azure, VMware, as well as 118 others). The configuration files can be written in HashiCorp Configuration Language (HCL) or as JSON documents. Due to the number of providers available, Terraform has quickly become a go-to deployment tool for hybrid or multi-cloud environments.
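To make that concrete, here is the same minimal provider definition written both ways. This is just a sketch, and the region value is only an example:
main.tf (HCL syntax)
provider "aws" {
  region = "us-east-1"
}
main.tf.json (JSON syntax)
{
  "provider": {
    "aws": {
      "region": "us-east-1"
    }
  }
}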
How to start?
Terraform CLI comes in as a whopping 49 MB executable for the 64-bit Linux edition… and that is it. The easiest way to get started is to go to the Terraform download page, pick your poison, and wait the three seconds it takes to download. After that, unzip this monster to your location of choice and voilà, Terraform is installed. OK, you may also need to add it to your PATH, but there is not much more to getting started with Terraform.
Run the terraform executable and you will be presented with a list of the commands available for deploying infrastructure. The commands we will focus on are apply, destroy, and init.
In the order in which we would use them:
init is used to initialize a working directory for Terraform to use. By default, init uses the current working directory and looks for Terraform configuration files (.tf). The initialization process downloads plugins for any providers specified in the configuration. If a directory is specified after the init command, that directory is used as the working directory instead.
# /home/terraform init
Initializing the backend...
Initializing provider plugins...
- Checking for available provider plugins...
- Downloading plugin for provider "aws" (hashicorp/aws) 2.41.0...
The following providers do not have any version constraints in configuration,
so the latest version was installed.
To prevent automatic upgrades to new major versions that may contain breaking
changes, it is recommended to add version = "..." constraints to the
corresponding provider blocks in configuration, with the constraint strings
suggested below.
* provider.aws: version = "~> 2.41"
Terraform has been successfully initialized!
You may now begin working with Terraform. Try running "terraform plan" to see
any changes that are required for your infrastructure. All Terraform commands
should now work.
If you ever set or change modules or backend configuration for Terraform,
rerun this command to reinitialize your working directory. If you forget, other
commands will detect it and remind you to do so if necessary.
#
apply shows the changes proposed by the configuration and, once you approve them, performs those actions.
# /home/terraform apply
An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
+ create
Terraform will perform the following actions:
  # aws_lambda_function.terraformFunction will be created
  + resource "aws_lambda_function" "terraformFunction" {
      + arn                            = (known after apply)
      + function_name                  = "TFpyFunction"
      + handler                        = "index.main"
      + id                             = (known after apply)
      + invoke_arn                     = (known after apply)
      + last_modified                  = (known after apply)
      + memory_size                    = 128
      + publish                        = false
      + qualified_arn                  = (known after apply)
      + reserved_concurrent_executions = -1
      + role                           = "arn:aws:iam::501511055678:role/basic-lambda-role"
      + runtime                        = "python3.7"
      + s3_bucket                      = "vmadbro-lambda-code"
      + s3_key                         = "app_code_change.zip"
      + source_code_hash               = (known after apply)
      + source_code_size               = (known after apply)
      + timeout                        = 3
      + version                        = (known after apply)

      + tracing_config {
          + mode = (known after apply)
        }
    }
Plan: 1 to add, 0 to change, 0 to destroy.
Do you want to perform these actions?
Terraform will perform the actions described above.
Only 'yes' will be accepted to approve.
Enter a value: yes
aws_lambda_function.terraformFunction: Creating...
aws_lambda_function.terraformFunction: Creation complete after 6s [id=TFpyFunction]
Apply complete! Resources: 1 added, 0 changed, 0 destroyed.
#
destroy will tear down infrastructure deployed by the configuration.
# /home/terraform destroy
aws_lambda_function.terraformFunction: Refreshing state... [id=TFpyFunction]
An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
- destroy
Terraform will perform the following actions:
  # aws_lambda_function.terraformFunction will be destroyed
  - resource "aws_lambda_function" "terraformFunction" {
      - arn                            = "arn:aws:lambda:us-east-1:501511055678:function:TFpyFunction" -> null
      - function_name                  = "TFpyFunction" -> null
      - handler                        = "index.main" -> null
      - id                             = "TFpyFunction" -> null
      - invoke_arn                     = "arn:aws:apigateway:us-east-1:lambda:path/2015-03-31/functions/arn:aws:lambda:us-east-1:501511055678:function:TFpyFunction/invocations" -> null
      - last_modified                  = "2019-12-11T22:48:17.607+0000" -> null
      - layers                         = [] -> null
      - memory_size                    = 128 -> null
      - publish                        = false -> null
      - qualified_arn                  = "arn:aws:lambda:us-east-1:501511055678:function:TFpyFunction:$LATEST" -> null
      - reserved_concurrent_executions = -1 -> null
      - role                           = "arn:aws:iam::501511055678:role/basic-lambda-role" -> null
      - runtime                        = "python3.7" -> null
      - s3_bucket                      = "vmadbro-lambda-code" -> null
      - s3_key                         = "app_code_change.zip" -> null
      - source_code_hash               = "YAtB8gJdt6J7vhl9oWpIFrR75Az+cLXMBdK7te6alCI=" -> null
      - source_code_size               = 281 -> null
      - tags                           = {} -> null
      - timeout                        = 3 -> null
      - version                        = "$LATEST" -> null

      - tracing_config {
          - mode = "PassThrough" -> null
        }
    }
Plan: 0 to add, 0 to change, 1 to destroy.
Do you really want to destroy all resources?
Terraform will destroy all your managed infrastructure, as shown above.
There is no undo. Only 'yes' will be accepted to confirm.
Enter a value: yes
aws_lambda_function.terraformFunction: Destroying... [id=TFpyFunction]
aws_lambda_function.terraformFunction: Destruction complete after 0s
Destroy complete! Resources: 1 destroyed.
#
Under the hood
Now that you have seen the output of those commands, here is the configuration file that was used to create the AWS Lambda function:
main.tf
provider "aws" {
region = "us-east-1"
shared_credentials_file = "$HOME/.aws/credentials" #Credentials have been set on this AWS spot instance
}
resource "aws_lambda_function" "terraformFunction" {
# Required elements needed to deploy a Lambda Function and can be defined here but I used variables files.
s3_bucket = var.s3_bucket
s3_key = var.s3_key
function_name = var.function_name
role = var.role
handler = var.handler
runtime = var.runtime
}
The basic blocks of a configuration file are provider and resource. A resource block describes a piece of infrastructure to be provisioned, along with any applicable arguments, while a provider block defines the service on which those resources are created.
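As a side note, the terraform init output earlier suggested adding a version constraint to the provider block to avoid surprise upgrades. A sketch of what that could look like for this configuration, using the constraint string init suggested, is:
provider "aws" {
  version                 = "~> 2.41"
  region                  = "us-east-1"
  shared_credentials_file = "$HOME/.aws/credentials"
}
With that constraint in place, init will keep installing 2.x releases of the AWS provider but will not jump to a new major version that may contain breaking changes.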
variables.tf (variable definition file)
variable "s3_bucket" {
type = string
}
variable "s3_key" {
type = string
}
variable "function_name" {
type = string
}
variable "role" {
type = string
}
variable "handler" {
type = string
}
variable "runtime" {
type = string
}
The variable definition file lets you declare each variable’s type, set default values, and use type constructors for more complex types.
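For example, a variable with a default value and a constructed type could be declared like this (a hypothetical tags variable, not one used in the deployment above):
variable "tags" {
  type    = map(string)
  default = {
    environment = "dev"
  }
}
A variable with a default does not need an entry in the assignment file unless you want to override it.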
terraform.tfvars (variable assignment file)
s3_bucket = "vmadbro-lambda-code"
s3_key = "app_code_change.zip"
function_name = "TFpyFunction"
role = "arn:aws:iam::501511055678:role/basic-lambda-role"
handler = "index.main"
runtime = "python3.7"
The variable assignment file is used to set values for the defined variables.
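Terraform automatically loads terraform.tfvars (and any file ending in .auto.tfvars) from the working directory, so environment-specific values can live in their own file. A hypothetical override file, not part of the deployment above, might look like:
prod.auto.tfvars
function_name = "TFpyFunctionProd"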
Running terraform init followed by terraform apply in a working folder containing those three files (with terraform.tfvars configured for your specifics) should yield an AWS Lambda function.
And that is everything, right?
Congratulations on becoming a Terraform expert! Go forth and do great things… but before you do, let’s bring you back to Earth, as this barely scratches the surface of what Terraform can do. The primary goal of this post was to show how Terraform uses simple code to deploy a Lambda function, in a way akin to AWS CloudFormation. In future posts I will dive deeper into Terraform to show how it can be used in a multi-cloud solution, as well as dig into Terraform Cloud. For some actually well-written documentation on Terraform CLI, head over to the docs page here. In my next post I will cover more serverless technologies on AWS and how they can all work together.
Thank you for getting through this lengthy post, and I invite you to join me again as I start to build a cadence for getting these out (how did you get so lucky?). As always, feel free to reach out to me with any questions or feedback on LinkedIn or Twitter @GregMadro.