Diving into Infrastructure as Code - Part 1 (Terraform)
Over the past two months I have been playing around with a lot of self-hosted tools, and I noticed that I kept repeating the same steps. Regardless of whether I was running on a Raspberry Pi or a DigitalOcean Droplet, I was always doing some combination of the following:
- Updating and installing packages.
- Creating and setting up my user.
- Configuring SSH access for that user (from my Desktop, Phone and iPad).
- Updating the SSH config to disable root login, and password authentication.
- Updating the hostname of the machine and configuring other small things.
- Installing and setting up Tailscale.
Once all of the above is done, I can finally begin doing what I actually wanted to do. That might involve running some containers, or maybe installing some tools on the host itself.
The goal I set for myself at the beginning of all this was simple: I wanted to write code that would provision, set up and deploy my chatbot automatically. It would have to be modular, with the initial setup separate from the chatbot setup, so that I could reuse the code for other projects as well.
This post is the first in a series that documents my journey through Terraform, Ansible and a bit of Podman towards the end.
- Part 1 : Terraform (Provisioning)
- Part 2 : Ansible (Configuration)
- Part 3 : Podman (Deployment and Running)
- Part 4 : Putting it all together (Thoughts)
Terraform - The Provisioning
Terraform is a tool developed by HashiCorp that allows us to provision and manage infrastructure based on a set of configuration files.
Now, I might be doing it a disservice (or a service) by saying this, but Terraform is quite simple to understand; there is not much required to get off the ground and start provisioning infrastructure with it.
Resources
Initially, when I began looking for resources on Terraform, I found the DigitalOcean blog posts to be quite helpful, but I would recommend keeping those for reference and going directly to one of their recent livestreams, which explains the basic concepts quite well:
Getting Started
To install Terraform, packages are available for Windows, macOS and Linux; the installation steps can be found on the HashiCorp website.
Generally speaking, I do not like to over-engineer things from the start, especially when learning something new. I find it easier to understand why certain things became over-engineered once I have experienced the challenges they were trying to solve.
Applying that same principle to Terraform, all we need to get started is a set of four files (Terraform loads every .tf file in the working directory, so splitting things across files is purely for organisation):
- provider.tf
- variables.tf
- main.tf
- output.tf
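Taken together, the working directory (called keeper in the shell prompts later in this post) ends up looking like this:

keeper/
├── provider.tf
├── variables.tf
├── main.tf
└── output.tf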
A note on the syntax
Terraform uses constructs from the HashiCorp Configuration Language (HCL); however, the two key ones that we need to know to work with Terraform are arguments and blocks.
Arguments
image_id = "abc123"
Argument values can also be generated dynamically from other parts of our Terraform-managed infrastructure.
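For example, an argument can take its value from an input variable or from another object's attribute (the names below are hypothetical, just to show the shape):

# Value taken from an input variable
token = var.do_token

# Value taken from a data source's attribute
image_id = data.digitalocean_image.base.id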
Blocks
block_type "label 1" "label 2" {
  name = "Resource 1"

  sub_block {
  }
}
A block in Terraform has a type, a set of labels, and a set of arguments and sub-blocks that are defined by that block type's schema. As we will see soon, a lot of Terraform features are implemented as blocks.
Provider.tf - Defining our Terraform config and providers
In this file we will create two blocks:
- The top-level Terraform block that will define any high-level configuration we might require.
- The provider(s) that we will be using for our infrastructure; in our case this will be DigitalOcean.
terraform {
  required_providers {
    digitalocean = {
      source  = "digitalocean/digitalocean"
      version = "~> 2.0"
    }
  }
}

provider "digitalocean" {
  token = var.do_token
}
In Terraform, think of providers as the parties who will be giving us the resources that we will be provisioning and managing. They provide two things: resource types and data sources. Resources can be created and read from, while data sources can only be read.
Going through the code above, we begin by defining the top-level terraform block, and in it we define a sub-block called required_providers, where we declare which providers we require for our infrastructure.
If we visit the Terraform Registry, we can get a better sense of the large ecosystem that we can tap into with Terraform. In our case we will only be using the DigitalOcean provider: we add it to our required_providers, and then define a provider block with the digitalocean label to use it.
The DigitalOcean provider only requires an access token, which can be created here (it needs both read and write access). Other providers may require different arguments to be configured before they can be used with Terraform.
The variable for that access token is one that we will be defining in our next file.
Variables.tf - Defining our variables
This file is quite simple for my use case at the moment; we define only one variable, called do_token, which we will use to set our DigitalOcean access token.
As we can see below, even variables are defined as blocks in Terraform. The Terraform docs go into detail on the additional arguments that we can use in this block.
variable "do_token" {
type= string
description= "DigitalOcean Access Token"
}
To set this token we can either pass it as an argument when invoking terraform:
terraform plan -var "do_token=XXXX"
or we can set an environment variable with the following name:
export TF_VAR_do_token=1234
Terraform will take any environment variable that starts with TF_VAR_ and match what follows to a variable we have defined. In our case that would be do_token.
So if we had another variable called resource_name, we would set an environment variable named TF_VAR_resource_name to set its value.
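As an aside (not something we rely on in this series), Terraform will also automatically load values from a file called terraform.tfvars in the working directory, which is convenient as long as the file is kept out of version control:

# terraform.tfvars -- hypothetical example; never commit real tokens
do_token = "XXXX"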
Main.tf - Defining Resources and Data Sources
This is the file where we define which resources and data sources we want to use. Here we are defining one data source and two resources. For the sake of this post we can mostly ignore the second resource block with the local_file label; it will be covered properly in a future post, though a sketch of the template it renders follows the code below.
# Get Roshar Key ID
data "digitalocean_ssh_key" "roshar_key" {
  name = "Roshar"
}

resource "digitalocean_droplet" "braize" {
  image    = "centos-stream-9-x64"
  name     = "braize"
  region   = "fra1"
  size     = "s-2vcpu-4gb-amd"
  ssh_keys = [data.digitalocean_ssh_key.roshar_key.id]
}

resource "local_file" "ansible_inventory" {
  content = templatefile("./ansible/inventory.tmpl",
    {
      group_name    = "bot_servers",
      hostnames     = digitalocean_droplet.braize.*.name,
      ansible_hosts = digitalocean_droplet.braize.*.ipv4_address
    }
  )
  filename             = "./ansible/bot_inventory"
  file_permission      = "0755"
  directory_permission = "0755"
}
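The Ansible post will explain this inventory file properly, but to make the templatefile call above less mysterious, here is a minimal sketch of what ./ansible/inventory.tmpl could look like (hypothetical content; it uses Terraform's template directives to render an INI-style Ansible inventory):

[${group_name}]
%{ for index, hostname in hostnames ~}
${hostname} ansible_host=${ansible_hosts[index]}
%{ endfor ~}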
Let us break it down, starting with the data source. We are defining a data block whose first label is digitalocean_ssh_key, one of the data sources that the DigitalOcean provider defines. The reference for this data source can be found here.
The second label for that data source is roshar_key. This can be any user-defined value, and we can think of it as a variable name; we will use it later to reference the data that we get back from DigitalOcean.
Moving on to the resource block: similarly, the first label, digitalocean_droplet, is one of the resources defined in the DigitalOcean provider, and the second label is again a user-defined value that we can use to reference our resource later. The image, region and size arguments must be valid slugs as defined here.
For the ssh_keys argument, we can see that we are referencing the value of our data source. In Terraform we can refer to resources and data sources using a kind of object dot notation, and this is where our second label comes in. So to get our data we use the following notation:
data.digitalocean_ssh_key.roshar_key.id
Going from left to right, we are looking up a data source, which is a digitalocean_ssh_key, and we want the id of the key that is called roshar_key.
Output.tf - Defining our outputs
Terraform has the concept of output variables. These can be used to extract certain pieces of information and make them readily available through the terraform output command.
For example if we want to provision a resource and get the IP afterwards to be used in a script, we can leverage output variables to do so. They can also be printed in JSON format to be parsed and used later on.
output "droplet_ip" {
value = digitalocean_droplet.braize.ipv4_address
description = "Droplet IPV4"
}
output "ansible_init" {
value = "ansible-playbook -u root -i ansible/bot_inventory ../../playbooks/setup_server.yml"
description = "Ansible command"
}
Going from top to bottom, we first define an output block with the label droplet_ip, which will hold the public IP of the provisioned droplet. Again, this value is accessed using object dot notation; notice, however, that resources are referenced directly, without any prefix, unlike data sources, where we have to prepend data.
The second output block will be covered in a future post; at a high level, it prints out the command that can be used to perform the initial server setup with Ansible.
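Once the infrastructure has been applied, these outputs can be read back from the command line. For example (the -raw flag assumes a reasonably recent Terraform, 0.15 or later):

# Print just the droplet's IP, convenient for shell scripts
terraform output -raw droplet_ip

# Print all outputs as JSON for parsing with other tools
terraform output -json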
Provisioning our first droplet
With these files in place, we can initialise Terraform in that directory with terraform init. This command reads the top-level terraform block and installs the required providers that were defined; for us, that is only the DigitalOcean provider.
Terraform stores the state of our infrastructure in a local file called terraform.tfstate by default. It keeps track of our existing resources and is updated after each applied change. In our case, since we have not run Terraform before, this file will not exist, and Terraform will assume nothing has been created.
Once Terraform is initialised, we can run terraform plan to view what changes Terraform will make based on the existing state.
If we run this command we should see something like this:
worldhopper@roshar ~/a/t/keeper> terraform plan

Terraform used the selected providers to generate the following execution plan. Resource actions are indicated with the following symbols:
  + create

Terraform will perform the following actions:

  # digitalocean_droplet.braize will be created
  + resource "digitalocean_droplet" "braize" {
      + backups              = false
      + created_at           = (known after apply)
      + disk                 = (known after apply)
      + graceful_shutdown    = false
      + id                   = (known after apply)
      + image                = "centos-stream-9-x64"
      + ipv4_address         = (known after apply)
      + ipv4_address_private = (known after apply)
      + ipv6                 = false
      + ipv6_address         = (known after apply)
      + locked               = (known after apply)
      + memory               = (known after apply)
      + monitoring           = false
      + name                 = "braize"
      + price_hourly         = (known after apply)
      + price_monthly        = (known after apply)
      + private_networking   = (known after apply)
      + region               = "fra1"
      + resize_disk          = true
      + size                 = "s-2vcpu-4gb-amd"
      + ssh_keys             = [
          + "123456",
        ]
      + status               = (known after apply)
      + urn                  = (known after apply)
      + vcpus                = (known after apply)
      + volume_ids           = (known after apply)
      + vpc_uuid             = (known after apply)
    }

  # local_file.ansible_inventory will be created
  + resource "local_file" "ansible_inventory" {
      + content              = (known after apply)
      + directory_permission = "0755"
      + file_permission      = "0755"
      + filename             = "./ansible/bot_inventory"
      + id                   = (known after apply)
    }

Plan: 2 to add, 0 to change, 0 to destroy.

Changes to Outputs:
  + ansible_init = "ansible-playbook -u root -i ansible/bot_inventory ../../playbooks/setup_server.yml"
  + droplet_ip   = (known after apply)

────────────────────────────────────────────────────────────────────────────────

Note: You didn't use the -out option to save this plan, so Terraform can't guarantee to take exactly these actions if you run "terraform apply" now.

worldhopper@roshar ~/a/t/keeper>
As we can see, the plan is to create two new resources as well as two new outputs. One resource is the droplet, and the other one is the local file that I want created for Ansible later.
If you notice, some values will only be made available after the droplet is provisioned; this includes the droplet's IP, which will be stored in an output later.
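As the note at the bottom of the plan output hints, the plan can also be saved to a file and applied exactly as reviewed:

# Save the execution plan to a file...
terraform plan -out=tfplan

# ...then apply exactly that plan, with no re-planning in between
terraform apply tfplan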
If the plan looks good, we can run terraform apply and watch our DigitalOcean console as the droplet is created based on our Terraform configuration. This will also create the terraform.tfstate file that will keep track of our infrastructure/resources.
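Two small asides here. apply prints the plan again and asks for confirmation before touching anything; in non-interactive use (scripts, CI) that prompt can be skipped. And once the state file exists, it can be inspected from the command line instead of being read by hand:

# Skip the interactive confirmation prompt
terraform apply -auto-approve

# List every resource tracked in the state
terraform state list

# Show the recorded attributes of a single resource
terraform state show digitalocean_droplet.braize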
Destroying our droplet
Resources that are tracked in the Terraform state can be destroyed easily with the terraform destroy command. It prints output similar to the plan command, showing us what will be destroyed, and asks us to confirm.
What is next?
Now that we have the ability to provision resources with code, we can move on to the next step: how do we configure what we have just provisioned?
For that we will be using Ansible, and we will look at how we can create playbooks and use community roles and collections to quickly configure our systems.