Came across this blog post by Corey Quinn over on lastweekinaws.com discussing vendor lock-in, specifically with cloud vendors. Corey makes some really excellent points about how you are probably already locked in without realizing it. The post reminded me that when I started using AWS after a job change, I too was in the camp of avoiding vendor lock-in. Over time I realized, however, that there are some things you must embrace when it comes to a given cloud provider, but that doesn’t mean you can’t smartly pick the services you use so that you can still leverage tools that are cloud provider agnostic.

Let’s first talk about some additional ways that vendor lock-in is inevitable. For starters, if you are not leveraging some of your cloud provider’s most integral features (speaking purely in AWS terms) like IAM policies and security groups, you are almost certainly doing it wrong. Not using IAM policies when configuring an EC2 instance, or when allowing a CloudFront distribution to access an S3 bucket, is usually the wrong way to go about things. You’re much better off embracing these AWS-only techniques in order to build a cleaner, more robust solution. These are the kinds of vendor specific things you should embrace.

However, there are times when you should stop and evaluate other options before moving forward. Take AWS Systems Manager, a tool for managing fleets of servers. Unlike IAM roles, policies and security groups, there are other tools out there that provide similar functionality and may be better suited to your needs. Or maybe you have configuration management that can build and help maintain a database cluster on any provider.

Or maybe you’ve developed your own backup solution that works on any setup. In that case you might want to avoid RDS unless you really need, or want, the ease of use it provides. Maybe the value of having the tools you already maintain work across any cloud provider outweighs the benefits of RDS.

Services like RDS are much easier to cut ties with because your data is actually portable, within reasonable limits. Given a normal MySQL RDS instance you can copy the data out and import it into some other MySQL system. In these cases I don’t really see RDS as true vendor lock-in, in the sense that you would need to rethink how your software works if you were to move. Rather, it’s when the tooling you’ve built around it is AWS specific that you can get into trouble.
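
To illustrate just how portable that is, here is a minimal sketch of moving a database off RDS with nothing but the standard MySQL client tools (the hostnames, user and database names are placeholders):

# dump the database from the RDS endpoint (host, user and database are examples)
mysqldump -h mydb.abc123.us-east-1.rds.amazonaws.com -u admin -p --single-transaction mydb > mydb.sql

# load it into any other MySQL-compatible server
mysql -h mysql.example.com -u admin -p mydb < mydb.sql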

Other services are certainly not that simple, and this is where you must carefully consider the services you use, your sensitivity to being “locked in” and the value a specific service offers. True vendor lock-in, in my mind, is all about the actual data. Let’s say you are considering a video transcoding service where, once the videos are transcoded, they cannot be transferred out or played without a specific player. This is a great example of a service I would avoid if at all possible, in favor of one that simply accepts an input and provides an output to do with as you please.

At the end of the day, avoiding vendor lock-in is a game of determining if what you are looking at is true lock-in or an opportunity to use a platform well and correctly. Avoiding every cloud provider specific tool is almost always a mistake.

Arm processors, used in Raspberry Pis and maybe even in a future Mac, are gaining in popularity due to their reduced cost and improved power efficiency over more traditional x86 offerings. As Arm adoption accelerates, Docker images that support both x86 and Arm will become more and more of a necessity. Luckily, recent releases of Docker are capable of building images for multiple architectures. In this post I will cover one way to achieve this by combining a recent release of Gitlab (12+), k3s and the buildx plugin for Docker.

I am taking inspiration for this post from two places. First, this excellent writeup was a great help in getting things started: https://dev.to/jdrouet/multi-arch-images-with-docker-152f. This post was also instrumental in getting things going: https://medium.com/@artur.klauser/building-multi-architecture-docker-images-with-buildx-27d80f7e2408.

I assume you already have a working installation of Gitlab with the container registry configured. Optionally, you can use Docker Hub, but I won’t cover that in detail; it amounts to changing the repository URL and logging into Docker Hub. You will also need a system capable of running k3s with a Linux 4.15+ kernel. For this you can use either Ubuntu 18.04+ or CentOS 8; there may be other options, but I know these two work. The kernel version is a hard requirement and is something that caused me some headaches. If I had just RTFM I could have saved myself some time. For my setup I installed k3s onto a CentOS 8 VM and then connected it to Gitlab. For information on how to set up k3s and connect it to Gitlab, please see this post.
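
For reference, a minimal k3s install is little more than the following (this uses the upstream convenience script; review it before piping to sh):

# confirm the kernel meets the 4.15+ hard requirement
uname -r

# install k3s via the upstream convenience script
curl -sfL https://get.k3s.io | sh -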

Once you are running k3s on a system with a supported kernel you can start building multi-arch images using buildx. I have created an example project, available at https://github.com/dustinrue/buildx-example, that you can import into Gitlab to get started. The example project targets a runner tagged as kubernetes to perform the build. Here is a breakdown of what the .gitlab-ci.yml file is doing (a sketch of the equivalent commands follows the list):

  • Installs buildx from GitHub (https://github.com/docker/buildx) as a Docker CLI plugin
  • Registers qemu binaries to emulate whatever platform you request
  • Builds the images for the requested platforms
  • Pushes resulting images up to the Gitlab Docker Registry
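
Outside of CI, that flow looks roughly like the following sketch (the buildx release version, platform list and registry variables are assumptions; adjust them for your setup):

# install the buildx CLI plugin (release version is an example)
mkdir -p ~/.docker/cli-plugins
wget -O ~/.docker/cli-plugins/docker-buildx https://github.com/docker/buildx/releases/download/v0.4.1/buildx-v0.4.1.linux-amd64
chmod +x ~/.docker/cli-plugins/docker-buildx

# register qemu binaries so foreign architectures can be emulated
docker run --rm --privileged multiarch/qemu-user-static --reset -p yes

# create a builder, make it the active context and bootstrap it
docker buildx create --use --name multiarch
docker buildx inspect --bootstrap

# log in and build/push for the requested platforms
docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" "$CI_REGISTRY"
docker buildx build --push --platform linux/amd64,linux/arm64 -t "${CI_REGISTRY_URL}:${CI_COMMIT_SHORT_SHA}" .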

Unlike the posts linked above, I also had to add a docker buildx inspect --bootstrap step to make things work properly. Without it the new builder context was never active and the builds would fail.

The example .gitlab-ci.yml builds for multiple architectures; you request which ones with the --platform flag. The following command causes images to be built for each of the listed architectures:

docker buildx build --push --platform linux/amd64,linux/arm64,linux/arm/v7,linux/arm/v6 -t ${CI_REGISTRY_URL}:${CI_COMMIT_SHORT_SHA} .

If you need a list of architectures you can target, add docker buildx ls right before the build command.

Once the build has completed you can validate everything using docker manifest inspect. Most likely you will need to enable experimental features for your client:

DOCKER_CLI_EXPERIMENTAL=enabled docker manifest inspect <REGISTRY_URL>/drue/buildx-example:9ae6e4fb

Be sure to replace the path to the image with your own. If everything worked properly your output will look similar to this:

{
   "schemaVersion": 2,
   "mediaType": "application/vnd.docker.distribution.manifest.list.v2+json",
   "manifests": [
      {
         "mediaType": "application/vnd.docker.distribution.manifest.v2+json",
         "size": 527,
         "digest": "sha256:611e6c65d9b4da5ce9f2b1cd0922f7cf8b5ef78b8f7d6d7c02f793c97251ce6b",
         "platform": {
            "architecture": "amd64",
            "os": "linux"
         }
      },
      {
         "mediaType": "application/vnd.docker.distribution.manifest.v2+json",
         "size": 527,
         "digest": "sha256:6a85417fda08d90b7e3e58630e5281a6737703651270fa59e99fdc8c50a0d2e5",
         "platform": {
            "architecture": "arm64",
            "os": "linux"
         }
      },
      {
         "mediaType": "application/vnd.docker.distribution.manifest.v2+json",
         "size": 527,
         "digest": "sha256:30c58a067e691c51e91b801348905a724c59fecead96e645693b561456c0a1a8",
         "platform": {
            "architecture": "arm",
            "os": "linux",
            "variant": "v7"
         }
      },
      {
         "mediaType": "application/vnd.docker.distribution.manifest.v2+json",
         "size": 527,
         "digest": "sha256:3243e1f1e55934547d74803804fe3d595f121dd7f09b7c87053384d516c1816a",
         "platform": {
            "architecture": "arm",
            "os": "linux",
            "variant": "v6"
         }
      }
   ]
}

You should see multiple architectures listed.

I hope this is enough to get you up and running building multi-arch Docker images. If you have any questions please open an issue on GitHub and I’ll try to get it answered.

Not too long ago I wrote about using Packer to build VM templates for Proxmox and created a GitHub project with the files. In the end I provided basic information on how to set up cloud-init within the Proxmox GUI. This time we’re going to dive a bit deeper into using cloud-init within Proxmox and customize it as needed.

First, let’s quickly cover what cloud-init is. Cloud-init is a system for configuring an operating system on first boot. It is used on virtually all cloud platforms, such as AWS, Azure and OpenStack, and it can also be used on non-cloud systems like Proxmox, VirtualBox or anywhere you can present the data as a CD-ROM. Using cloud-init you can pass in instance meta-data, network configuration and user information. As part of the user information you can also provide commands to be run, and it is this ability to run commands on initial boot that we’re going to tap into.

Out of the box, Proxmox provides a basic cloud-init setup that you can enable through the web interface, which works well if all you need is to create a user with an SSH key and configure the network. If you want to customize it beyond that, you will need to ensure snippets are enabled on your storage and visit the CLI of your Proxmox system.
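
As a rough sketch of what that CLI work involves (the storage name, VM ID and file name here are examples, not the values from my setup):

# allow the "local" storage to hold snippets, keeping the content types already enabled
pvesm set local --content iso,vztmpl,backup,snippets

# place a custom cloud-init user-data file into that storage's snippets directory
cp user-data.yaml /var/lib/vz/snippets/

# point the VM's cloud-init drive at the custom snippet (100 is an example VM ID)
qm set 100 --cicustom "user=local:snippets/user-data.yaml"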

Continue reading

Have you ever wanted to write out a large, templated config file using only shell script code? Maybe you are working with a small, low-powered IoT device, or some other system where you want to avoid pulling in additional dependencies for a single task. In these situations a full config management tool can be too heavy or just not practical. In this post I’ll explore the envsubst utility as a way to write out a config file from a template. In the end you’ll see that envsubst is a great, lightweight utility for creating config files.
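
To give a taste of where this is headed, a minimal envsubst run looks like this (the template and variable names are made up for illustration):

# app.conf.tmpl contains placeholders such as ${SERVER_NAME} and ${LISTEN_PORT}
export SERVER_NAME=example.com
export LISTEN_PORT=8080

# substitute only the listed variables, leaving any other $ strings untouched
envsubst '${SERVER_NAME} ${LISTEN_PORT}' < app.conf.tmpl > app.conf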

Continue reading

If you work with AWS using CLI tools I highly recommend aws-vault to help keep your AWS keys secure. Be sure to visit the usage guide for full details on setup. I configured my copy to remain unlocked while I am actively using my computer. It’s also a good idea to ensure your storage is encrypted.
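
Day to day usage boils down to two commands (the profile name is an example):

# store credentials for a profile in your OS keychain
aws-vault add myprofile

# run a command with short-lived credentials injected into its environment
aws-vault exec myprofile -- aws s3 ls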

A while back I took the time to learn a bit of OpenStack’s Disk Image Builder. Recently I decided to give Packer a try for building templates for Proxmox, and I have released the results as a GitHub repo, available at https://github.com/dustinrue/proxmox-packer. The project allows you to build a mostly empty CentOS 7 or CentOS 8 template for Proxmox, and you can further customize the image by expanding the provisioner section of the packer.json files.
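
Invocation is standard Packer, roughly like this (the file and variable names here are illustrative; see the repo for the actual ones):

# validate the template, then build it against your Proxmox API
packer validate packer.json
packer build -var 'proxmox_host=proxmox.example.com' packer.json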

[Diagram: how this site is hosted]

A co-worker recently discovered a fun project called diagrams that allows you to create diagrams from code. Documentation and installation instructions are available at https://diagrams.mingrammer.com. The image you see above was generated with some fairly simple code, which looks like this:

from diagrams import Diagram, Cluster
from diagrams.oci.edge import Cdn
from diagrams.onprem.network import Nginx
from diagrams.onprem.compute import Server
from diagrams.onprem.database import Mariadb
from diagrams.onprem.inmemory import Memcached
from diagrams.onprem.client import Users

with Diagram("dustinrue.com", show=False):
  cloudflare = Cdn("CloudFlare")
  users = Users("users")

  with Cluster("web server"):
    nginx = Nginx("nginx")
    php = Server("php")

  with Cluster("database server"):
    mariadb = Mariadb("mariadb")
    memcached = Memcached("memcached")
    
  users - cloudflare
  cloudflare - nginx
  nginx - php
  php - mariadb
  php - memcached

Using diagrams is an easy way to quickly create and track changes to diagrams.

RancherOS, available at https://rancher.com/rancher-os/, is a lightweight container operating system. It is easy to install and easy to configure, but a bit light on documentation for some specific use cases. Here, I will describe how I set up RancherOS (1.5.5 as of this writing) for use with my locally installed Rancher 2.x based bare metal cluster. I will also touch on using cloud-config to configure RancherOS at boot to include the iSCSI subsystem and automatically join my cluster.

I run my nodes on a Proxmox based hypervisor and have FreeNAS based storage providing NFS and iSCSI. I’m not going to cover the installation of Rancher, Proxmox or FreeNAS but just focus on basic configuration of RancherOS.

RancherOS itself is able to accept configuration information using a cloud-config file. Using a cloud-config file allows you to configure a number of things during the first boot. I take advantage of this to configure some persistent volumes, add my SSH key, enable the iSCSI subsystem and even automatically join my cluster. Here is what the file looks like, with some values removed or shortened:

#cloud-config

# create an rc.local which will cause this system to join the cluster. Replace required values for your server URL and your token
write_files:
  - path: /etc/rc.local
    permissions: "0755"
    owner: root
    content: |
      #!/bin/bash
      wait-for-docker
      if [ ! -f /opt/init-done ]; then
        docker run -d --privileged --restart=unless-stopped --net=host -v /etc/kubernetes:/etc/kubernetes -v /var/run:/var/run rancher/rancher-agent:v2.3.5 --server <your rancher server url> --token <your rancher token> --worker --node-name $(ip ro | grep default | awk '{print $7}')
        touch /opt/init-done
      fi

rancher:
  # in my setup I use iSCSI to provide block storage to pods, for this to work on RancherOS the iSCSI subsystem must be enabled
  services_include:
    open-iscsi: true
  # setup some local persistent storage for a few important volumes
  # this ensures Kubernetes works properly across reboots
  services:
    user-volumes:
      volumes:
        - /home:/home
        - /opt:/opt
        - /var/lib/kubelet:/var/lib/kubelet
        - /etc/kubernetes:/etc/kubernetes
ssh_authorized_keys:
  - <paste your ssh public key here>

For my setup I saved this file onto a web host accessible within my network. Below you will see how we tell RancherOS about the file during the setup process. You can find more configuration options at https://rancher.com/docs/os/v1.x/en/installation/configuration/.

Please note that the most important settings are the persistent user volumes. You should at least use those if you plan to connect the RancherOS instance to a Rancher based Kubernetes cluster.

With the cloud-config file created we can now install RancherOS. There are a few options for installing RancherOS, but for my setup I simply used the basic ISO file. For my target machine, a unibody 2008 MacBook, I had to burn the image to a CD-R. I booted the ISO and waited for it to finish the boot process. Once it was ready, I entered my install command:

sudo ros install -d /dev/sda -c http://<hostname>/rancheros.yaml

This command instructs the installer to download the specified config file, save it locally (into /var/lib/rancher/conf) and then get everything ready on /dev/sda. Answer y to the reboot question and the system reboots into RancherOS. After a while the system will join your cluster and be ready for use.

That’s it. Your RancherOS node should now be ready for use and will support iSCSI based block storage. In future posts I will try to discuss setting up other aspects of a bare metal Kubernetes cluster (where bare metal basically refers to running it anywhere but a cloud provider). If you have questions please reach out to me via Twitter.

References:
Using iSCSI on RancherOS: https://docs.openebs.io/docs/next/prerequisites.html

Once in a while I like to read about the software and utilities other people use on their systems to make their lives easier. It’s always interesting to see what mix of tools people are using, and often I learn about a new tool I hadn’t heard of before. Today I thought I’d do the same, as I’ve started using a number of new tools on a regular basis in just the past six months.

As a systems engineer who is also familiar with programming, I have what may be a unique mix of software and tools on my computer. Let’s take a look.

Operating System(s)

I have been using macOS full time since about 2008. I use macOS because it is a mix of Unix and a GUI (NeXT if you’re keeping score), which gives me a familiar, robust command line environment paired with an excellent desktop environment.

I also use Linux heavily, but almost never as a desktop or workstation. I have a laptop that I can dual boot between Linux and macOS for testing, and I run Proxmox on multiple Linux systems for virtualization. Proxmox is a great way to get use out of otherwise retired computers. In fact, my Proxmox cluster is an older HP desktop with a quad core processor mixed with a pair of old MacBooks. I have written about Proxmox before and you can find that post here.

I have one Windows PC that exists mostly because of games but also some business software.

Software Tools

When it comes to software these are the tools I use most frequently.

  • Code Editing and Runtimes/Languages
  • DevOps Type Stuff
  • Kubernetes
    • kubectx/kubens for easy cluster and namespace switching (see the example after this list)
    • k9s for a text based UI to Kubernetes
  • Utilities
    • Brew
    • Patterns, a tool for working with regular expressions. I’ve been using it for years, though several similar tools now exist
    • iTerm 2, superior to the default terminal in macOS
  • Other
    • Spotify for music
    • VirtualBox for testing Ansible roles
    • Twitter client
    • Mail.app
    • RamBox for chat
    • Bear for notes
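
As promised above, here is the kind of friction kubectx and kubens remove (the cluster and namespace names are examples):

# switch the active kubectl context
kubectx my-homelab

# switch the default namespace for the current context
kubens kube-system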

A quick list of software tools that I find make using Kubernetes even better. I consider these tools must-haves.