A while back I took the time to learn a bit of OpenStack’s Disk Image Builder. Recently I gave Packer a try for building templates for Proxmox and decided to release the results as a GitHub repo, which you can find at https://github.com/dustinrue/proxmox-packer. The project allows you to build a mostly empty CentOS 7 or CentOS 8 template for Proxmox, and you can further customize the image by expanding the provisioners section of the packer.json files.
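As an example of that kind of customization, a shell provisioner can be appended to the provisioners array in packer.json to install extra packages or tweak the image during the build. This is only a sketch, and the package names here are placeholders rather than anything the project ships with:

"provisioners": [
  {
    "type": "shell",
    "inline": [
      "yum install -y epel-release",
      "yum install -y htop vim-enhanced"
    ]
  }
]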

[diagram showing how this site is hosted]

A co-worker recently discovered a fun project called diagrams that allows you to create diagrams from code. Documentation and installation instructions are available at https://diagrams.mingrammer.com. The image you see above was generated with some fairly simple code, which looks like this:

from diagrams import Diagram, Cluster
from diagrams.oci.edge import Cdn
from diagrams.onprem.network import Nginx
from diagrams.onprem.compute import Server
from diagrams.onprem.database import Mariadb
from diagrams.onprem.inmemory import Memcached
from diagrams.onprem.client import Users

with Diagram("dustinrue.com", show=False):
  # nodes that sit outside of any cluster
  cloudflare = Cdn("CloudFlare")
  users = Users("users")

  # group the web tier into its own cluster
  with Cluster("web server"):
    nginx = Nginx("nginx")
    php = Server("php")

  # group the data tier into its own cluster
  with Cluster("database server"):
    mariadb = Mariadb("mariadb")
    memcached = Memcached("memcached")

  # connect the nodes; "-" draws an undirected edge
  users - cloudflare
  cloudflare - nginx
  nginx - php
  php - mariadb
  php - memcached
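To actually generate the image, the script needs the diagrams package and Graphviz installed. Assuming the code above is saved as site.py (the filename is arbitrary), something like this should drop a PNG into the same directory:

pip install diagrams
python site.py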

Using diagrams is an easy way to quickly create and track changes to diagrams.

RancherOS, available at https://rancher.com/rancher-os/, is a lightweight container operating system. It is easy to install and easy to configure but a bit light on documentation for some specific use cases. Here I will describe how I set up RancherOS (1.5.5 as of this writing) for use with my locally installed, Rancher 2.x based bare metal cluster. I will also touch on using cloud-config to configure RancherOS at boot to include the iSCSI subsystem and automatically join my cluster.

I run my nodes on a Proxmox based hypervisor and have FreeNAS based storage providing NFS and iSCSI. I’m not going to cover the installation of Rancher, Proxmox or FreeNAS but just focus on basic configuration of RancherOS.

RancherOS itself is able to accept configuration information using a cloud-config file. Using a cloud-config file allows you to configure a number of things during the first boot. I take advantage of this to configure some persistent volumes, add my SSH key, enable the iSCSI subsystem and even automatically join my cluster. Here is what the file looks like, with some values removed or shortened:

# cloud-config

# create an rc.local which will cause this system to join the cluster. Replace required values for your server URL and your token
write_files:
  - path: /etc/rc.local
    permissions: "0755"
    owner: root
    content: |
      #!/bin/bash
      wait-for-docker
      if [ ! -f /opt/init-done ]; then
        docker run -d --privileged --restart=unless-stopped --net=host -v /etc/kubernetes:/etc/kubernetes -v /var/run:/var/run rancher/rancher-agent:v2.3.5 --server <your rancher server url> --token <your rancher token> --worker --node-name $(ip ro | grep default | awk '{print $7}')
        touch /opt/init-done
      fi

rancher:
  # in my setup I use iSCSI to provide block storage to pods, for this to work on RancherOS the iSCSI subsystem must be enabled
  services_include:
    open-iscsi: true
  # setup some local persistent storage for a few important volumes
  # this ensures Kubernetes works properly across reboots
  services:
    user-volumes:
      volumes:
        - /home:/home
        - /opt:/opt
        - /var/lib/kubelet:/var/lib/kubelet
        - /etc/kubernetes:/etc/kubernetes
ssh_authorized_keys:
  - <paste your ssh public key here>

For my setup I saved this file onto a web host accessible within my network. Below you will see how we tell RancherOS about the file during the setup process. You can find more configuration options at https://rancher.com/docs/os/v1.x/en/installation/configuration/.
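Any web server that the new node can reach will work. If you don’t have one handy, a quick option is to temporarily serve the file’s directory with Python (assuming Python 3 is installed on some machine in your network):

cd /path/to/configs
python3 -m http.server 8080

The file would then be reachable at http://<that host>:8080/rancheros.yaml, which is the style of URL used in the install command below.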

Please note that the most important settings are the persistent mount options. You should at least use those if you plan to connect the RancherOS instance to a Rancher based Kubernetes cluster.

With the cloud-config file created we can now install RancherOS. There are a few options for installing RancherOS but for my setup I am simply using the basic ISO file. For my target machine, a unibody 2008 MacBook, I had to burn the image to a CD-R. I booted the ISO and waited for it to finish the boot process. Once it was ready, I entered my install command:

sudo ros install -d /dev/sda -c http://<hostname>/rancheros.yaml

This command will instruct the installer to download the config file specified, save it locally (into /var/lib/rancher/conf) and then get everything ready on /dev/sda. I answer y to the reboot question and the system reboots into RancherOS. After a while the system will join your cluster and be ready for use.
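If you want to double check that the cloud-config was actually applied, the ros utility can show the running configuration and the state of the optional services. Roughly, with output varying by version:

sudo ros config export
sudo ros service list

The first command dumps the active configuration and the second should list open-iscsi as enabled.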

That’s it. Your RancherOS node should now be ready for use and will support iSCSI based block storage. In future posts I will try to discuss setting up other aspects of a bare metal Kubernetes cluster (where bare metal basically refers to running it anywhere but a cloud provider). If you have questions please reach out to me via Twitter.

References:
Using iSCSI on RancherOS: https://docs.openebs.io/docs/next/prerequisites.html

Once in a while I like to read about what software and utilities other people are using on their systems to make their lives easier. It’s always interesting to see what mix of tools people are using, and often I learn about a new tool I hadn’t heard of before. Today I thought I’d do the same, as I’ve started using a number of new tools on a regular basis just in the past six months.

As a systems engineer who is also familiar with programming, I have what may be a unique mix of software and tools on my computer. Let’s take a look.

Operating System(s)

I have been using macOS full time since about 2008. I use macOS because it is a mix of Unix and a GUI (NeXT if you’re keeping score), which gives me a familiar and robust command line environment alongside an excellent desktop environment.

I also use Linux heavily but almost never as a desktop or workstation. I have a laptop that I can dual boot between Linux and macOS for testing. I also run multiple Linux systems to run Proxmox for virtualization. Proxmox is a great way to get use out of otherwise retired computers. In fact, my Proxmox cluster is an older HP desktop with a quad core processor mixed with a pair of old MacBooks. I have written about Proxmox before and you can find it here.

I have one Windows PC that exists mostly because of games but also some business software.

Software Tools

When it comes to software these are the tools I use most frequently.

  • Code Editing and Runtimes/Languages
  • DevOps Type Stuff
  • Kubernetes
    • kubectx/kubens for easy cluster and namespace switching
    • k9s for a text based UI to Kubernetes
  • Utilities
    • Brew
    • Patterns, a tool for working with regular expressions. I’ve been using it for years, though several similar tools now exist
    • iTerm 2, superior to the default terminal available in macOS
  • Other
    • Spotify for music
    • VirtualBox for testing Ansible roles
    • Twitter client
    • Mail.app
    • RamBox for chat
    • Bear for notes

This is a quick list of software tools that I find make using Kubernetes even better; I consider them must-haves.
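To give a feel for the workflow, here is roughly how the Kubernetes tools fit together on my machine; the cluster and namespace names are made up for the example:

brew install kubectx k9s   # the kubectx formula also installs kubens

kubectx                    # list the available contexts
kubectx homelab            # switch to the "homelab" cluster
kubens kube-system         # change the default namespace
k9s                        # browse pods, logs and more in a text UI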

In 2004 I took delivery of a new car that was equipped with a CD changer. Until then I had only ever had cars with a single disc player, so stepping up to a deck with a six disc changer was incredible. No longer did I need to keep a sleeve of CDs in the car that would get scratched or lost; I could just keep what I was listening to at the time right in the deck. It was still a time when creating and burning playlists to a CD was totally acceptable and all was well in the world.

I kept that car for about ten years and during that time we saw the iPod and other music players gain tremendous popularity. And why wouldn’t they? You could put as much music onto the device as it would hold and carry it around with you anywhere you went! The original iPod even had this slick wheel based interface for getting around quickly and easily. Amazing! Unfortunately, when my car was designed, these types of players weren’t common yet and there was no way to interface an iPod or any type of music player with my deck because I didn’t have an AUX input. Bluetooth connectivity was even less of a thing at the time so that option was out as well.

So I kept on making CDs of the music I wanted to listen to and feeding them into the changer, knowing that some day I would sell the car and pick one up that had a Bluetooth interface. I thought, one day I’ll finally be able to listen to all of my music at any time and it’ll be great.

Well, as it turns out it wasn’t all rainbows and unicorns.

A few years ago I picked up a newer Mazda, one with Bluetooth and iPod connectivity, an AUX port and even Pandora! The possibilities before me seemed perfect and I got to work figuring out which method would suit me best. After much tinkering I settled on using Bluetooth because it offered wireless connectivity and worked with whatever music app I wanted to use. I loaded up Spotify with downloaded music and that was that.

After a while, though, the flaws in this new system started to appear. I discovered that Mazda’s Bluetooth implementation was less than ideal. It takes a lot of time to connect to my phone and start playing music, sometimes over a minute. I can no longer just hop in the car and have it resume where it left off moments after starting the car. Other times it connects but can’t tell me what is playing, or just refuses to play anything at all until I open Spotify and select something from there.

And herein lies the primary issue and why I miss the venerable CD changer. It isn’t because Mazda’s Bluetooth implementation is bad (and it is really bad), it’s that the process of selecting music is so much more involved. To select music, I have to get my phone, unlock it, open the Spotify app and go digging for the playlist or album I want…while driving. It turns out that having a large selection of music requires changes to how you interface and interact with that music. It requires that you look at a screen to scroll and make selections. All of these interactions are fine when you can spend the time doing them, but the car, speeding down the highway, is not the right place.

So why is the CD changer the better option here? Because interacting with a CD changer is fundamentally different from using a music app on your phone. Even if your vehicle has a stellar deck and you can control the music app with the steering wheel or the touch screen, you still need to look at a screen to know where you are. Not so with a CD changer. You put six discs into a changer and you know which slot each one is in. You know, using your ears, which track you are listening to and which disc it is on, and from that you know which slot it is in. If you want to listen to a different disc you know how many times to press the disc change button. Listening to Taylor Swift on disc 1 and now you want the third song on your new Metallica album in slot 3? Press the disc change button twice and then press the next track button a couple of times. Done, and you didn’t even have to take your hands off the steering wheel. Such an interaction isn’t an option anymore. In a music app the interface is 100% on the device; with a CD changer, half the interface is in your head.

In the end, it isn’t really the CD changer I miss. It’s actually the “interface” that CD changers provided. There is no equivalent, that I’m aware of, in today’s music apps that emulates it. I believe the ideal solution would be to let a user assign a set of playlists to “slots” in a fixed order, just like a CD changer, with controls on the screen and the steering wheel for switching between them.

I like the progress that has been made with technology. I appreciate being able to put more music than I could possibly listen to in a year in my pocket. I just wish this progress didn’t come at the expense of usability. Burning a CD was a hassle, but once it was done it was done; interacting with your music player happens every single time.

When Docker first came out it was a real mind bender of an experience for me. I simply couldn’t wrap my head around what a Docker image was, how it was different from a virtual machine and so on. “Why not just install the software from rpm?” I said.

I also struggled with how the app in the container was running inside of something and didn’t have access to anything. At the time I saw this as just a silly hurdle that made it harder than it should be to get something running, rather than as a core benefit of using containers.

Over time I got to know Docker and containers better. I gained an understanding of how images are created, how they could be given restricted resources, and how they could be easily shared. I started creating my own containers to further understand the process, got to know multi-stage builds and so on.

Although I had gained a better understanding of the container itself, I still couldn’t find a good use case for containers in my line of work. I was too used to creating VMs that ran a static set of services that rarely changed. Docker containers still seemed like just another packaging format with few additional advantages. It wasn’t until I started playing with container orchestration that things really started to click.

With container orchestration, and in particular Kubernetes, the power and convenience of containers becomes much harder to ignore. Orchestration was definitely the missing piece of the puzzle for me that sealed the deal, because orchestration solves a number of common issues with running larger software infrastructure. One of the biggest issues that Kubernetes solves is how to swap out a running application with little fuss. By simply declaring that a running workload should use a new Docker image, Kubernetes will go through the process of starting the new container, waiting for it to be ready, adding it to the load balancer and then draining connections from the old container. While it’s true you can achieve all of that with a traditional setup, it requires a lot more effort. This feature alone is what sold me on Kubernetes, and from there grew my current acceptance of containers.
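As a rough illustration of what that looks like in practice, a Deployment only needs a rolling update strategy and a readiness probe to get that behavior; the names and image below are hypothetical:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 3
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 0
      maxSurge: 1
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: my-app
          image: registry.example.com/my-app:1.0.0
          readinessProbe:
            httpGet:
              path: /healthz
              port: 8080

Bumping the image tag, for example with kubectl set image deployment/my-app my-app=registry.example.com/my-app:1.0.1, kicks off exactly the sequence described above: new pods start, pass their readiness checks, begin receiving traffic and only then are the old pods drained and removed.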

With Kubernetes revealing the huge potential of containers I’ve since come back to exploring them for other uses outside of orchestration. Now, core features of containers that once bothered me are seen as advantages. I still see containers as a packaging format, but one that works as well on macOS and Windows as it does on Linux or in Kubernetes. As an “expert” I can provide a container to a user that has everything installed for some tool. Previously this may have required me to write extensive documentation detailing the requirements, installation process and finally the configuration of whatever software it took to meet the user’s needs, a process that might fail or not work at all because the end user is on a different operating system or because of some other environment specific reason. With containers, if it works for me there is a much greater chance it will work for someone else as well.

Today I find myself building more and more containers for use in CI/CD pipelines. I see them as little utilities that I can chain together to create a larger solution. Similar to the Unix philosophy, I am creating containers that do one thing and do it well. These small containers are easy to maintain, easy to document and easy to use. And this, I believe, is one of the core strengths of containers: they encapsulate a solution into something that is easier to understand. Even though a container is technically more bloated, because it contains not only the application itself but also all of its requirements, the end result is something that is ultimately easier to understand. Like writing code, you can write the most incredible for loop ever devised, but if the next person can’t understand it, is it still a good solution?
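As a sketch of what I mean by a small, single-purpose container, something like this hypothetical jq wrapper is about as minimal as it gets (not one of my actual pipeline images):

# a tiny utility image that does one thing: run jq
FROM alpine:3.18
RUN apk add --no-cache jq
ENTRYPOINT ["jq"]

Build it once with docker build -t my-jq . and anyone can run docker run --rm -i my-jq '.version' < package.json without installing jq locally.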

Throughout my career I’ve always enjoyed trying out new things to see how I can apply them to everyday problems or how they can be used to create great new opportunities. Docker was one of the first things that I really struggled to understand, and initially I thought “this is it, this is the tech my kids will understand that I won’t.” Today, however, I can see what a game changer containers are. When properly constructed, containers are easier to understand, easier to share with others and easier to document. These are powerful reasons to use containers. There are new hurdles to overcome, like how to keep them secure and up to date, but all things have tradeoffs and it’s up to us to decide which ones are worth it.