Diving into Infrastructure as Code - Part 3 (Recap)

Posted on Aug 13, 2022
tl;dr: Putting it all together

My starting point for any new concept or tool is always my bot project. I feel this gives me a better understanding of how to use the technology, and I often come out of it with a new tool or skill. The majority of the technology I am familiar with today is the result of this kind of activity. It has been a while since I started this dive, and I now feel I understand how all these concepts fit together and what I will be taking away from them.

Overall, I touched on four different tools throughout this dive:

  • Terraform
  • Ansible
  • Podman
  • Gitpod / GitHub Codespaces

Terraform - Useful, but overkill

This is the tool that I don’t see myself using at the moment. At most, I have two servers running: one for the bot and another for whatever else I happen to be running at the time. Whenever I provision a server, whether on Linode or another provider, my public keys are automatically added for SSH access. Going from there to running Ansible for configuration is a trivial step.

After my first post, one of the use cases that I explored further with Terraform was provisioning and managing Cloudflare DNS entries and certificates. Combined with Ansible, the end result was:

  1. Terraform provisions the server and configures Cloudflare DNS to point to its IP
  2. An origin certificate is generated and downloaded
  3. Ansible sets up the server and the service that I want hosted
  4. Caddy is deployed as a reverse proxy using the origin certificate provisioned in step 2
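The Ansible side of step 4 can be sketched as a handful of tasks. This is a minimal sketch, not my actual role: the file paths, variable names, and handler are illustrative assumptions.

```yaml
# Sketch: install the Cloudflare origin certificate and point Caddy at it.
# Paths, variable names, and the handler name are hypothetical.
- name: Copy the origin certificate to the server
  ansible.builtin.copy:
    src: "certs/{{ domain }}.pem"
    dest: /etc/caddy/origin.pem
    mode: "0600"

- name: Copy the origin private key
  ansible.builtin.copy:
    src: "certs/{{ domain }}.key"
    dest: /etc/caddy/origin.key
    mode: "0600"

- name: Template the Caddyfile to use the origin certificate
  ansible.builtin.template:
    src: Caddyfile.j2
    dest: /etc/caddy/Caddyfile
  notify: restart caddy
```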

As a test, I deployed HedgeDoc under https://docs.aalsuwaidi.com, but again Terraform felt like an extra step that would only make sense if I had more infrastructure to manage.

Ansible - Configuration Management Made Easy

Coming out of this whole exercise, Ansible is the tool I am taking forward with me. It is now a core part of my workflow for the bot, and even so, I feel like I have only scratched the surface of what it is capable of. If you compared my Ansible scripts today to the ones in the previous post, you would notice that a lot has changed.

For example, for setting up and running the bot containers, I refactored the majority of that code into an Ansible role called podman_compose. I have also added a deploy_caddy option that sets up a reverse proxy along with any configuration, such as the subdomain or headers.

Here is my playbook for deploying the bot on a server:

- hosts: all
  roles:
    - keeper_code
  tasks:
    - name: Deploy Keeper
      tags: deploy
      include_role:
        name: podman_compose
      vars:
        network: keeper_network
        deploy_caddy: false
        containers:
          - name: keeper_novnc
            tag: latest
            image: docker.io/theasp/novnc
            auto_update: registry
            network: keeper_network
            env:
              DISPLAY_WIDTH: "1920"
              DISPLAY_HEIGHT: "1080"
              RUN_XTERM: "no"
            time: 20
          - name: keeper_wabot
            path: "{{ansible_env.HOME}}/keeper/wabot"
            auto_update: local
            tag: latest
            local: true
            network: keeper_network
            cmd_args: ["--requires", "keeper_novnc"]
            volumes:
              - chrome_profile:/app/profile
          - name: keeper_ftbl
            path: "{{ansible_env.HOME}}/keeper/ftbl"
            auto_update: local
            tag: latest
            local: true
            network: keeper_network
            volumes:
              - ftbl:/app/mount
          - name: keeper_ftbl_dld
            path: "{{ansible_env.HOME}}/keeper/ftbl_dld"
            auto_update: local
            tag: latest
            local: true
            network: keeper_network
            volumes:
              - ftbl_dld:/app/goals
          - name: keeper_srv
            path: "{{ansible_env.HOME}}/keeper/srv"
            auto_update: local
            tag: latest
            local: true
            network: keeper_network

The above method, compared to what I had initially set up, is cleaner and more reusable, and I am sure I will keep revisiting and improving it as well.

Challenges with Idempotency

Ansible takes your defined YAML and applies it to the server, but before doing that it checks whether the current state already matches that configuration. So if I ask for Podman to be installed, it checks first and only installs it if it is missing. Running the playbook a second time stops after the check, since Ansible sees that Podman is already installed.
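The Podman install is the canonical example; a task like this changes nothing on a second run:

```yaml
# Idempotent by design: "state: present" is a no-op if podman is installed
- name: Ensure Podman is installed
  ansible.builtin.package:
    name: podman
    state: present
```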

The module runs commands on the target system to query and build up the current state. Here I faced two issues, both in the Podman collection. The first was with the IPC flag: by default, the container was created with its IPC mode set to “private”, but when the task ran again the check found the IPC mode was “shareable”. On Fedora, manually setting the IPC mode to private resolved the issue, while on CentOS it had to be set to shareable for the module to behave correctly.
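The workaround boils down to pinning the IPC mode explicitly instead of relying on the default. Roughly (the container name and image here are just examples from the playbook above):

```yaml
# Pin the IPC mode so the module's state check matches on re-runs.
- name: Run the container with an explicit IPC mode
  containers.podman.podman_container:
    name: keeper_novnc
    image: docker.io/theasp/novnc
    state: started
    ipc: private  # use "shareable" on CentOS, per the behaviour described above
```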

I faced a similar issue with Podman network creation: Podman 4.0 introduced a new network stack that broke the idempotency of the Podman network module. Whenever the playbook ran, it would try to recreate the network even if it already existed.
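One way to work around it is to guard the creation step yourself rather than trusting the module's own state check. This is a hypothetical sketch, not my exact fix; the failed_when/when guards are assumptions about how the info module reports a missing network:

```yaml
# Hypothetical guard: only create the network when it does not exist yet.
- name: Check whether the network exists
  containers.podman.podman_network_info:
    name: keeper_network
  register: net_info
  failed_when: false

- name: Create the network if it is missing
  containers.podman.podman_network:
    name: keeper_network
  when: net_info.networks is not defined or net_info.networks | length == 0
```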

Both might be edge cases, but they required slight workarounds to fix. Using the --check and --diff flags when running the playbooks was essential in debugging and resolving them. This is one thing to keep in mind when working with Ansible: if idempotency breaks, it will affect how your playbooks execute and how they handle different situations.

When and Where to use Ansible

Two questions that are always on my mind are when and where to use Ansible; it is the same thought process I applied when looking at Terraform. Going back to the bot example, right now I am doing four distinct steps:

  1. Pulling the latest version of the code
  2. Building the latest container images
  3. Recreating the containers (if needed)
  4. Running podman auto-update

If I think of Ansible as a configuration management tool, then it is clear that steps 1, 2, and 4 should be taken out. I should have a separate process (e.g. GitHub Actions) that takes my code changes, builds the latest container images, pushes them to a registry, and runs podman auto-update. My Ansible script should just create, and recreate if needed, my container definitions on the server.
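That separate process could look something like the GitHub Actions workflow below. This is a sketch of the idea, not something I have set up: the repository layout, image names, and choice of ghcr.io as the registry are all assumptions.

```yaml
# Sketch: build and push a bot image on every push to main.
# Paths, image names, and the registry are illustrative.
name: Build and push bot images

on:
  push:
    branches: [main]

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3

      - name: Log in to the registry
        uses: docker/login-action@v2
        with:
          registry: ghcr.io
          username: ${{ github.actor }}
          password: ${{ secrets.GITHUB_TOKEN }}

      - name: Build and push the wabot image
        uses: docker/build-push-action@v3
        with:
          context: ./wabot
          push: true
          tags: ghcr.io/${{ github.repository }}/keeper_wabot:latest
```

With the images in a registry and the containers set to auto_update: registry, podman auto-update on the server would pick up new builds without Ansible being involved at all.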

The challenge here is similar to Terraform and my initial foray into Ansible. Can I do it in Ansible? Often the answer is yes, but what I should be asking is: should I do it in Ansible? Is it the right tool for the job?

Podman - Containers and systemd

For the bot, Podman behaved pretty much as a drop-in replacement for Docker. What drew me to it was its overall design around rootless containers and its integration with systemd. At the time I was using Fedora on my desktop and reading about Podman, so it felt only natural to try it on the bot.

Initially there were some challenges with volumes and networking for rootless containers, but these were fixed in later Podman releases.

The biggest change that I had to make was accommodating the lack of “official” docker-compose support. I put it in quotes because you can use the community project podman-compose, or even use docker-compose itself against the Podman socket.

While it is not a 1:1 replacement, my podman_compose role in Ansible fills this gap in my current workflow. I can define my containers, along with their volumes, ports, and networks, all in one place and set them up with Ansible.

GitPod and GitHub Codespaces - Cloud Workspaces

Like Ansible, this is something that I am taking forward as a tool integrated into my workflow. Currently, I am using Gitpod for both the bot and this blog. I have been using it for just under a month, and overall it has been extremely easy to set up and use.

This is the .gitpod.yml file that I use for the blog:

# List the start up tasks. Learn more https://www.gitpod.io/docs/config-start-tasks/
tasks:
  - command: brew install hugo && hugo server -D -F --baseUrl $(gp url 1313) --liveReloadPort=443 --appendPort=false --bind=

# List the ports to expose. Learn more https://www.gitpod.io/docs/config-ports/
ports:
  - port: 1313
    onOpen: open-browser

For the bot it is not that much different:

# List the start up tasks. Learn more https://www.gitpod.io/docs/config-start-tasks/
tasks:
  - before: eval $(gp env -e)
    init: cd /workspace/keeper/wabot && echo module.exports = {'"admin_jid"':\ '"'$ADMIN_JID'"', '"bot_jid"':\ '"'$BOT_JID'"', '"humio_token"':\ '"'$humio_token'"'} > config.js
    command: cd /workspace/keeper/wabot && docker compose pull && docker compose build

# List the ports to expose. Learn more https://www.gitpod.io/docs/config-ports/
ports:
  - port: 8080
    onOpen: open-browser

The only challenge that I have faced so far with Gitpod is the prebuild feature. Ideally, my docker build step would be part of the prebuild, so that whenever I start a workspace for the bot the docker images are already built and I can immediately start working. That wasn’t working correctly, but judging by the team’s issue tracker on GitHub, it seems to be a known issue that is being addressed.
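For reference, prebuilds are enabled per branch and pull request in .gitpod.yml, with the init task being what runs during the prebuild. Something like this (a sketch of the config I would expect to pair with the init task above once the issue is fixed):

```yaml
# Run prebuilds for the default branch and for pull requests.
github:
  prebuilds:
    master: true
    pullRequests: true
```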

Configuring the development container seemed easier to do on Gitpod than with Codespaces’ devcontainer, and that is the reason I am using it at the moment. However, I do plan to properly test both before settling on one tool to use.

Overall though, the concept of development workspaces clicked with me and I definitely see myself using them in future projects.

Recap - Right tool for the job

If I had to conclude this whole activity with a summary it would be with this:

You can do pretty much anything with anything, but finding the right tool for the job, one that is a good fit for your workflow, is the best thing that you can do.

Can I use Ansible to provision cloud infrastructure? Yes, but should I? Not really.

I asked myself the same type of question multiple times throughout this, and I am sure that I will ask it over and over again.

It doesn’t stop me from trying, though; after all, that’s what makes these hobby projects fun.