I keep doing more multi-architecture builds using buildx and continue to find good information out there that helps refine the process. Here is a post I found worth sharing that discusses how to build multi-architecture images on AWS Graviton2 based instances, which are ARM based. https://www.smartling.com/resources/product/building-multi-architecture-docker-images-on-arm-64-bit-aws-graviton2/. I haven't tried this myself yet, but the same process should also work on a Pi 4 running the 64-bit version of Raspberry Pi OS.
Fixing DIND Builds That Stall When Using Gitlab and Kubernetes
Under some conditions you may find that your Docker in Docker (DIND) builds will hang or stall out, especially when you combine DIND based builds with Kubernetes. The fix for this isn't obvious because the problem doesn't exactly announce itself. After a bit of searching I came across a post that describes the issue in great detail: https://medium.com/@liejuntao001/fix-docker-in-docker-network-issue-in-kubernetes-cc18c229d9e5.
As described, the issue is due to the MTU the DIND service uses when it starts. By default it uses 1500. Unfortunately, many Kubernetes overlay networks set a smaller MTU, typically around 1450. Since DIND is a service running on an overlay network, it needs to use an MTU equal to or smaller than the overlay network's in order to work properly. If your build process downloads a file whose packets are larger than the overlay network's MTU, those packets are dropped and the build waits indefinitely for data that will never arrive. This happens because DIND, and the app using it, believes the MTU is 1500 when it is actually 1450.
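If you want to confirm the mismatch for yourself, you can compare the MTU the pod sees with the MTU of the docker0 bridge inside the DIND service container. A quick check might look like this (the pod and container names here are placeholders for your own):

# MTU of the overlay network as seen by the build pod
kubectl exec my-build-pod -- cat /sys/class/net/eth0/mtu

# MTU dockerd is actually using inside the DIND service container
kubectl exec my-build-pod -c svc-0 -- ip link show docker0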
Anyway, this post isn't about what MTU is or how it works; it's about how to configure a Gitlab based job that uses the DIND service with a smaller MTU. Thankfully, it's easy to do.
In your .gitlab-ci.yml file, where you enable the dind service, add a command parameter for the service, like this:
Build Image:
  image: docker
  services:
    - name: docker:dind
      command: ["--mtu=1000"]
  variables:
    DOCKER_DRIVER: overlay2
    DOCKER_TLS_CERTDIR: ""
    DOCKER_HOST: tcp://localhost:2375
The example shown works when you are using a Kubernetes based Gitlab Runner. With this added, you should find that your build stalls go away and everything works as expected.
Rundeck on Kubernetes
Updated Feb 2023 to remove the use of the incubator chart, which is no longer maintained, and replace it with an alternative. It also updates information about the persistent storage CSI I am using.
In this post I'm going to review how I installed Rundeck on Kubernetes and then configured a node source. I'll cover the installation of Rundeck using an available helm chart, then the configuration of persistent storage, ingress, node definitions and key storage. In a later post I'll discuss how I set up a backup job to perform a backup of the server hosting this site.
For this to work you must have a Kubernetes cluster that allows for ingress and persistent storage. In my cluster I am using nginx-ingress-controller for ingress and democratic-csi for storage. The democratic-csi iSCSI driver is connected to my TrueNAS Core server and creates iSCSI based storage volumes; it is set as my default storage class. You will also need helm 3 installed.
With the prerequisites out of the way we can get started. First, add the helm chart repository by following the directions located at https://github.com/EugenMayer/helm-charts/tree/main/charts/rundeck. Once added, grab the values file so we can edit it:
helm show values eugenmayer/rundeck > rundeck.yaml
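For reference, the surrounding commands might look like the following; the repository URL is an assumption based on the project's use of GitHub Pages, so defer to the linked README for the authoritative instructions:

helm repo add eugenmayer https://eugenmayer.github.io/helm-charts
helm repo update

# after editing rundeck.yaml, install into its own namespace
helm install rundeck eugenmayer/rundeck -f rundeck.yaml --namespace rundeck --create-namespace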
Cloud Vendor Lock-In
Came across this blog post by Corey Quinn over on lastweekinaws.com discussing the topic of vendor lock-in, specifically with cloud vendors. Corey makes some really excellent points about how you are probably already locked in without realizing it. The post reminded me that when I started using AWS after a job change, I was also in the camp of avoiding vendor lock-in. Over time I realized, however, that there are some things you must embrace when it comes to a given cloud provider, but that doesn't mean you can't smartly pick the services you use so that you can still leverage tools that are cloud provider agnostic.
Let's first talk about some additional ways that vendor lock-in is inevitable. For starters, if you are not leveraging some of your cloud provider's most integral features (speaking purely in AWS terms), like IAM policies and security groups, you are almost certainly doing it wrong. Skipping IAM policies when configuring an EC2 instance, or when allowing a CloudFront distribution to access an S3 bucket, is usually the wrong way to go about things. You're much better off embracing these AWS only techniques in order to build a cleaner, more robust solution. These are the kinds of vendor specific things you should embrace.
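As a concrete illustration of the CloudFront example (the identifiers below are placeholders, not from a real account), a bucket policy granting a CloudFront origin access identity read access to an S3 bucket might look like this:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::cloudfront:user/CloudFront Origin Access Identity E1EXAMPLE"
      },
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::example-bucket/*"
    }
  ]
}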
However, there are times when you might want to stop and evaluate other options before moving forward. For example, AWS Systems Manager is a tool for managing your systems. Unlike IAM roles, policies and security groups, there are other tools out there that provide similar functionality and may be better suited to your needs. Or maybe you have configuration management that can build and help maintain a database cluster on any provider.
Or maybe you've developed your own backup solution that works on any setup. In that case you might want to avoid using RDS unless you really need or want the ease of use that RDS provides. Maybe the value of having the tools you maintain work across any cloud provider outweighs the benefits of RDS.
Services like RDS are much easier to cut ties with because your data is actually portable, within reasonable limits. Given a normal MySQL RDS instance, you can copy the data out and import it into some other MySQL system. In these cases I don't see RDS as true vendor lock-in in the sense that you would need to rethink how your software works if you were to move it; rather, if the tooling you've built around it is AWS specific, that's where you can get into trouble.
Other services are certainly not that simple, and this is where you must carefully consider the services you use, your sensitivity to being "locked-in" and the value that the specific service offers. True vendor lock-in, in my mind, is all about the actual data. Let's say you are considering a video transcoding service whose transcoded videos cannot be transferred out or played without a specific player. This is a great example of a service I would avoid if at all possible, in favor of a service that simply accepts an input and provides you with output to do with as you please.
At the end of the day, avoiding vendor lock-in is a game of determining if what you are looking at is true lock-in or an opportunity to use a platform well and correctly. Avoiding every cloud provider specific tool is almost always a mistake.
Multiple Architecture Docker Image Builds Using Gitlab and K3s
Arm processors, used in Raspberry Pis and maybe even in a future Mac, are gaining in popularity due to their reduced cost and improved power efficiency over more traditional x86 offerings. As Arm adoption accelerates, Docker images that support both x86 and Arm will become more and more of a necessity. Luckily, recent releases of Docker are capable of building images for multiple architectures. In this post I will cover one way to achieve this by combining a recent release of Gitlab (12+), k3s and the buildx plugin for Docker.
I am taking inspiration for this post from two places. First, this excellent writeup was a great help in getting things started – https://dev.to/jdrouet/multi-arch-images-with-docker-152f. This post was also instrumental in getting this going – https://medium.com/@artur.klauser/building-multi-architecture-docker-images-with-buildx-27d80f7e2408.
I assume you already have a working installation of Gitlab with the container registry configured. Optionally, you can use Docker Hub, but I won't cover that in detail; doing so involves changing the repository URL and logging into Docker Hub. You will also need a system capable of running k3s with at least Linux kernel 4.15. For this you can use either Ubuntu 18.04+ or CentOS 8. There may be other options, but I know these two work. The kernel version is a hard requirement and is something that caused me some headache; if I had just RTFM I could have saved myself some time. For my setup I installed k3s onto a CentOS 8 VM and then connected it to Gitlab. For information on how to set up k3s and connect it to Gitlab, please see this post.
Once you are running k3s on a system with a supported kernel you can start building multi-arch images using buildx. I have created an example project, available at https://github.com/dustinrue/buildx-example, that you can import into Gitlab to get you started. The example project targets a runner tagged as kubernetes to perform the build. Here is a breakdown of what the .gitlab-ci.yml file does (a condensed sketch follows the list):
- Installs buildx from GitHub (https://github.com/docker/buildx) as a Docker CLI plugin
- Registers qemu binaries to emulate whatever platform you request
- Builds the images for the requested platforms
- Pushes resulting images up to the Gitlab Docker Registry
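The following is a condensed sketch of such a job, not a copy of the example project's file; the buildx version, qemu registration image and variable names are assumptions, so defer to the repository for the working version:

build:
  image: docker
  tags:
    - kubernetes
  services:
    - name: docker:dind
  variables:
    DOCKER_TLS_CERTDIR: ""
    DOCKER_HOST: tcp://localhost:2375
  script:
    # Install buildx as a Docker CLI plugin (version is an assumption)
    - mkdir -p ~/.docker/cli-plugins
    - wget -qO ~/.docker/cli-plugins/docker-buildx https://github.com/docker/buildx/releases/download/v0.4.2/buildx-v0.4.2.linux-amd64
    - chmod +x ~/.docker/cli-plugins/docker-buildx
    # Register qemu binaries so other platforms can be emulated
    - docker run --rm --privileged multiarch/qemu-user-static --reset -p yes
    # Create a new builder, make it active and bootstrap it
    - docker buildx create --use
    - docker buildx inspect --bootstrap
    # Log in and build/push for all requested platforms
    - docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" "$CI_REGISTRY"
    - docker buildx build --push --platform linux/amd64,linux/arm64,linux/arm/v7,linux/arm/v6 -t ${CI_REGISTRY_IMAGE}:${CI_COMMIT_SHORT_SHA} .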
Unlike the linked posts, I also had to add a docker buildx inspect --bootstrap step to make things work properly. Without it, the new context was never active and the builds would fail.
The example .gitlab-ci.yml builds multiple architectures. You request the architectures to build using the --platform flag. The command docker buildx build --push --platform linux/amd64,linux/arm64,linux/arm/v7,linux/arm/v6 -t ${CI_REGISTRY_URL}:${CI_COMMIT_SHORT_SHA} . will cause images to be built for the listed architectures. If you need a list of the architectures you can target, add docker buildx ls right before the build command.
Once the build has completed you can validate everything using docker manifest inspect. Most likely you will need to enable experimental features for your client, so your command will look similar to this: DOCKER_CLI_EXPERIMENTAL=enabled docker manifest inspect <REGISTRY_URL>/drue/buildx-example:9ae6e4fb. Be sure to replace the path to the image with your own. Your output will look similar to this if everything worked properly:
{
  "schemaVersion": 2,
  "mediaType": "application/vnd.docker.distribution.manifest.list.v2+json",
  "manifests": [
    {
      "mediaType": "application/vnd.docker.distribution.manifest.v2+json",
      "size": 527,
      "digest": "sha256:611e6c65d9b4da5ce9f2b1cd0922f7cf8b5ef78b8f7d6d7c02f793c97251ce6b",
      "platform": {
        "architecture": "amd64",
        "os": "linux"
      }
    },
    {
      "mediaType": "application/vnd.docker.distribution.manifest.v2+json",
      "size": 527,
      "digest": "sha256:6a85417fda08d90b7e3e58630e5281a6737703651270fa59e99fdc8c50a0d2e5",
      "platform": {
        "architecture": "arm64",
        "os": "linux"
      }
    },
    {
      "mediaType": "application/vnd.docker.distribution.manifest.v2+json",
      "size": 527,
      "digest": "sha256:30c58a067e691c51e91b801348905a724c59fecead96e645693b561456c0a1a8",
      "platform": {
        "architecture": "arm",
        "os": "linux",
        "variant": "v7"
      }
    },
    {
      "mediaType": "application/vnd.docker.distribution.manifest.v2+json",
      "size": 527,
      "digest": "sha256:3243e1f1e55934547d74803804fe3d595f121dd7f09b7c87053384d516c1816a",
      "platform": {
        "architecture": "arm",
        "os": "linux",
        "variant": "v6"
      }
    }
  ]
}
You should see multiple architectures listed.
I hope this is enough to get you up and running building multi-arch Docker images. If you have any questions please open an issue on Github and I’ll try to get it answered.
Going Deeper With Proxmox Cloud-Init
Not too long ago I wrote about using Packer to build VM templates for Proxmox and created a Github project with the files. In the end I provided basic information on how to set up cloud-init within the Proxmox GUI. This time we're going to dive a bit deeper into using cloud-init within Proxmox and customize it as needed.
First, let's quickly cover what cloud-init is. Cloud-init is a system for configuring an operating system on first boot. It is standard on cloud platforms like AWS, Azure and OpenStack, and can also be used on non-cloud systems like Proxmox or VirtualBox, or on any system where you can present the configuration as a CD-ROM. Using cloud-init you can pass in instance meta-data, network configuration and user information. As part of the user information you can also provide commands to be run. It is this ability to run commands on first boot that we're going to tap into.
Out of the box, Proxmox provides a basic cloud-init integration that you can enable through the web interface, and it works well if all you need is to create a user with an SSH key and configure the network. If you want to customize it further, you will need to ensure you have snippets enabled and visit the CLI of your Proxmox system.
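To give a feel for where this is headed, attaching a custom user-data snippet from the CLI might look like the following; the VM ID, storage name and file name are placeholders:

# Place user-data.yaml in the snippets directory of the "local" storage
# (typically /var/lib/vz/snippets), then point the VM's cloud-init drive at it
qm set 9000 --cicustom "user=local:snippets/user-data.yaml"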
Simplistic templating with envsubst
Have you ever wanted to write out a large, templated config file using only shell script code? Maybe you are working with a small, low-power IoT device, or some other device where you want to avoid additional dependencies for a single task. In these situations a larger config management tool can be too heavy or just not practical. In this post I'll explore the envsubst utility as a way to write out a config file from a template. In the end you'll see that envsubst is a great, lightweight utility for creating config files.
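As a small taste of what's covered, here is a minimal sketch; the template file name and variables are placeholders:

# nginx.conf.template contains lines like: listen ${LISTEN_PORT};
export SERVER_NAME=example.com LISTEN_PORT=8080

# Restrict substitution to the named variables so nginx's own $vars survive
envsubst '${SERVER_NAME} ${LISTEN_PORT}' < nginx.conf.template > nginx.conf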
aws-vault
If you work with AWS using CLI tools I highly recommend aws-vault to help keep your AWS keys secure. Be sure to visit the usage guide for full details on setup. I configured my copy to stay unlocked while I am actively using my computer. It's also a good idea to ensure your storage is encrypted.
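For the unfamiliar, basic usage looks roughly like this; the profile name is a placeholder:

# Store credentials for a profile in the OS keychain
aws-vault add my-profile

# Run a command with short-lived credentials injected into the environment
aws-vault exec my-profile -- aws s3 ls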
Building CentOS images for Proxmox using Packer
A while back I took the time to learn a bit of OpenStack's Disk Image Builder. Recently I gave Packer a try for building templates for Proxmox and released the results as a Github repo. You can find the repo at https://github.com/dustinrue/proxmox-packer. The project allows you to build a mostly empty CentOS 7 or CentOS 8 template for Proxmox, and you can further customize the image by expanding the provisioner section of the packer.json files.
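As an illustration of that kind of customization (this snippet is hypothetical, not taken from the repo), a shell provisioner in packer.json might look like:

"provisioners": [
  {
    "type": "shell",
    "inline": [
      "yum -y install epel-release",
      "yum -y install htop vim"
    ]
  }
]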
Creating diagrams with code
A co-worker recently discovered a fun project called diagrams that allows you to create diagrams from code. Documentation, including how to install diagrams, is available at https://diagrams.mingrammer.com. The image you see above was generated with the following simple code:
from diagrams import Diagram, Cluster
from diagrams.oci.edge import Cdn
from diagrams.onprem.network import Nginx
from diagrams.onprem.compute import Server
from diagrams.onprem.database import Mariadb
from diagrams.onprem.inmemory import Memcached
from diagrams.onprem.client import Users

with Diagram("dustinrue.com", show=False):
    cloudflare = Cdn("CloudFlare")
    users = Users("users")

    with Cluster("web server"):
        nginx = Nginx("nginx")
        php = Server("php")

    with Cluster("database server"):
        mariadb = Mariadb("mariadb")
        memcached = Memcached("memcached")

    users - cloudflare
    cloudflare - nginx
    nginx - php
    php - mariadb
    php - memcached
Using diagrams is an easy way to quickly create and track changes to diagrams.