As of this writing I have two Heltec V3 Meshtastic nodes that I use to gain access to the greater Meshtastic network in my area. One is installed at a static location, the attic of my house, while the other stays in my office or travels with me. I interact with the nodes using the Meshtastic software on my phone or on my desktop computer. One of the features of Meshtastic is advertising node metadata, including location. My more mobile node gets my current coordinates from the Meshtastic client connected to it over Bluetooth, but the static node has no way to know its location, so I must tell it. In this post, I will walk through how I advertise a static location for the node installed in my attic, as it wasn’t as straightforward as I initially thought it would be.

While it isn’t necessary to advertise the location of your node, it is useful because it helps provide some indication of how your messages are traveling. You also don’t need to advertise a perfectly precise location, though you shouldn’t advertise it as being somewhere on the other side of the planet. I am making some heavy assumptions, primarily that you already know how to install the Meshtastic command line client. If not, take a look at the directions available at https://meshtastic.org/docs/software/python/cli/installation/. It is possible to configure the node without using the command line client, but I don’t cover that here. I use the command line client because my static node is connected to a Raspberry Pi via USB, which powers it and allows me to manage it that way.

Getting and setting your coordinates

Before you can set the coordinates for a Meshtastic node, you need to know them as a latitude/longitude pair. I found that the simplest way is to use google.com/maps to look up the location you want to advertise and grab the latitude and longitude values from there. To do so, search for the location, right click where you want to advertise and click the top pair of values, which copies them to your clipboard. Next, modify the following script to suit your needs and use it to set the values on your Meshtastic node:

#!/bin/bash

meshtastic --port /dev/ttyUSB1 --set position.fixed_position True
meshtastic --port /dev/ttyUSB1 --set position.gps_mode 2
meshtastic --port /dev/ttyUSB1 --set position.gps_enabled False
meshtastic --port /dev/ttyUSB1 --set position.gps_update_interval 300
meshtastic --port /dev/ttyUSB1 --setlat 45.126 --setlon -93.488 --setalt 1

Google will provide very precise coordinates but keeping 3 to 4 decimal places of precision is plenty. One key thing to remember is that, despite what the command line help will tell you, you should set the altitude to something other than nothing or 0. You can provide a proper altitude in meters above sea level, or simply setting it to 1 will suffice. Depending on your setup, you may not need to specify the port. I show my port for reference because the Pi the node is connected to has other USB devices attached.

After configuration, your node will begin to advertise its position as often as you configured it to using the gps_update_interval value. You can modify this to suit your network.
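
To confirm the settings took effect, the CLI can read values back. Something like this quick check should work (I'm using the --get and --info options of the Meshtastic Python CLI; adjust the port for your setup):

#!/bin/bash

# Read back the position settings we just set
meshtastic --port /dev/ttyUSB1 --get position.fixed_position
meshtastic --port /dev/ttyUSB1 --get position.gps_mode
# --info dumps the node's full configuration, including the fixed position it will advertise
meshtastic --port /dev/ttyUSB1 --info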

Bonus content

Keeping with our scripted methods of managing a Meshtastic node, here is a bit of bonus content. I also set a name for my static node so it is easier to identify consistently on the network. If you ever need to factory reset your node and want to set the name again (if you don’t have the config saved, for example), this simple script will help:

#!/bin/bash

meshtastic --port /dev/ttyUSB1 --set-owner "MOONBASE" --set-owner-short "R2D2"

This sets the name of your node. You can combine these steps into a larger script to make maintaining your node easier.
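
For example, everything from this post could be collapsed into a single maintenance script, a sketch using my port and values that you would swap for your own:

#!/bin/bash

# One-shot node maintenance: set the owner names, then the fixed position
PORT=/dev/ttyUSB1

meshtastic --port "$PORT" --set-owner "MOONBASE" --set-owner-short "R2D2"
meshtastic --port "$PORT" --set position.fixed_position True
meshtastic --port "$PORT" --set position.gps_mode 2
meshtastic --port "$PORT" --set position.gps_enabled False
meshtastic --port "$PORT" --set position.gps_update_interval 300
meshtastic --port "$PORT" --setlat 45.126 --setlon -93.488 --setalt 1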

Flux, or FluxCD, is a “set of continuous and progressive delivery solutions for Kubernetes that are open and extensible” and is my preferred way to do “GitOps” in my Kubernetes clusters. I like Flux because it handles Helm extremely well (I’m a big fan of Helm) and allows me to have a simple fallback if something goes wrong. In this post, I will go through the process of installing Flux and k3s, which can be used for testing, and then creating a Flux project that adopts Flux’s documented monorepo design. I have also published a copy of the work detailed here to GitHub so you can use it as a working starter for your own project.


Computer professionals all know that secrets like API keys and passwords for services must be kept safe, but it isn’t always clear how to do so in a way that isn’t overly cumbersome. In this post, I am going to go through how I achieve this on macOS using GnuPG. I’m using GnuPG because I use both macOS and Linux on a regular basis. I also share my dot files across systems and GnuPG is the most cross platform option available that I am aware of. Although the solution I am using is cross platform, I am describing how to set this up using a Mac.

For this solution, I am leveraging a number of tools, which I’ve listed below. I assume that if you are the sort of person who needs API tokens in your shell, you are likely also using brew. Here is what you want:

  • GPG Suite – This is optional but highly recommended. It provides a nice GUI for interacting with GPG and, more importantly, it provides a way to tie your GPG passphrase to your Apple Keychain to unlock it. This makes everything much smoother.
  • GPG – brew install gpg provides the command line tools you will need to manage your passwords or API tokens.
  • pass – brew install pass provides the command line password tool.

Initial Setup

If you have not used GPG before then there are a few steps you need to take to get things set up.

IMPORTANT: If you are going to use GPG Suite then you will want to start the initial setup using GPG Keychain. Doing so will ensure the gpg command line tool can also see the key(s) you create. If you start with GPG on the command line, or were already using it, you will want to delete all of the .conf files in ~/.gnupg so that the command line client and GPG Keychain work together.

Using GPG Keychain is an added bonus because it allows you to store your GPG credentials in the Apple Keychain. The Apple Keychain is unlocked whenever you log into your Mac. It isn’t necessary, but it is convenient.

Once you have created your GPG key you can initialize your pass database. First, get the ID of your GPG key. You can use gpg --list-secret-keys to get a list. The value you are looking for is the long string of hexadecimal characters. Then, initialize your pass database with the command below, replacing the ID with your key’s ID:

pass init 21D62AA0B018951161C3CC46E94469CDDCA62DF0

You will get a message that your password store has been initialized. There are other ways to initialize your pass database, like storing it in git. You can read more at https://www.passwordstore.org

Adding a secret value to pass

Adding a value to your pass database is simple. Run:

pass insert key

key is the name of the secret you want to store. Press enter and then put in the value. You can organize your keys however you want by separating them with /, like a directory separator.
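
For example, I might keep everything related to Proxmox under one folder; the names here are purely illustrative:

# pass treats / like a directory separator, so related secrets can be grouped
pass insert proxmox/password
# Listing the store shows the resulting tree
pass ls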

Using the secret value

Using the secret value is equally simple. Since your Keychain is unlocked when you sign into your computer, you should have no issues retrieving the value in a similar way to how you added it. Use the following command to get the value, adjusting it for your use case:

pass show key

Again, key is the secret you want to retrieve.

Putting it all together

This is great, but we can make it more convenient by adding it to our shell startup scripts. On my system, .bash_profile and .bashrc are both processed. In my .bash_profile I have added the following to get secret values from pass and assign them to environment variables that various programs I use need in order to connect to services. Continuing my example of working against Proxmox, I have entered this into my .bash_profile:

export PROXMOX_PASSWORD=$(pass show proxmox_password)

Since I am using GPG Suite, my GPG passphrase is loaded from my Mac’s Keychain once I have saved the passphrase to it (you are asked the first time you decrypt a value). This way, my startup scripts do not contain any sensitive information and my environment is built from securely stored values.
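
If you load several secrets this way, it may be worth guarding against pass not being installed yet, for example on a freshly set up machine. A sketch of what that block could look like, with hypothetical key names beyond the Proxmox one:

# Only attempt to load secrets if pass is actually available
if command -v pass >/dev/null 2>&1; then
  export PROXMOX_PASSWORD="$(pass show proxmox_password)"
  # hypothetical additional entries following the same pattern
  export GITHUB_TOKEN="$(pass show github/token)"
  export DIGITALOCEAN_TOKEN="$(pass show digitalocean/token)"
fi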

This post is really a small collection of thoughts about Proxmox when used in a home lab situation and home labs in general. I was originally going to post this to Mastodon only but it didn’t fit in a single post.

A lot of people (at least from what I see on Reddit) build Proxmox systems with shared file systems like Ceph, even for home lab use. While I understand what shared file systems can do, they also have steeper hardware requirements and honestly the benefits are rather limited. If your purpose for using shared file systems is to learn about them, then go for it, as it is a great thing to understand. Outside of learning, I prefer and have had great luck with keeping things as simple as I can while still getting something that is useful and versatile. I also avoid the use of ZFS because running it properly requires more memory, memory I’d prefer to give to my VMs.

For that reason, my Proxmox cluster consists of two systems with a Raspberry Pi acting as a qdevice for quorum. Each system has a single spinning drive for the Proxmox OS and then a single SSD for all of the VMs that live on that node. I then have another physical system providing network based shared storage for content like ISO files and backups, things that truly need to be shared between the Proxmox cluster nodes. This setup gives me a blend of excellent VM performance, because the disks are local and speedy, and shared file space where it matters for ISOs and backups, while maintaining one of the best features of Proxmox: live migration. Yes, live migration of VMs is still possible even when the underlying file system is not shared between systems, it’s just a lot slower because data must be transferred over the network during a migration.

One of the benefits of using a file system like Ceph is that you can distribute files across your systems in the event a system or disk fails. You have some redundancy there, but regardless of redundancy you still need to have actual backups. To cover for this, I have regular backups taken of important VMs and separate backup tasks specifically for databases. For anything that has files, like Plex or Nextcloud, that data comes from my TrueNAS system using a network file system like NFS or Samba. Again, local storage for speed and shared storage where it really matters.

This setup gives me a lot of the benefits without a ton of overhead, which helps me keep costs down. Each Proxmox node, while clustered, still works more or less independently of the other. I can recover from issues by restoring a backup to either node or recover databases quickly and easily. I don’t ever have to debug shared file system issues, and any file system issues I do face are localized to the affected node. In the event of a severe issue, recovery is simplified because I can replace the bad drive and simply restore any affected VMs on that node. The HA features of Proxmox are very good and I encourage their use when it makes sense, but you can avoid their complexity and maintenance and still have a reliable home lab that is easy and straightforward to live with.

For some time I’ve wanted a radio scanner so I could listen in on Police/Fire/EMS radio in my area, but I’m not serious enough to pay for the dedicated-to-the-task scanner required to listen to today’s radio protocols. Today’s protocols are digitally based, with trunking and patching systems that can both isolate calls to a local area while also allowing nearby stations to be patched in. This is done with a constant control signal and a number of nearby frequencies that radios can hop to when they make a call. Decoding all of this requires specialized equipment or software that understands the P25 Phase 1 or 2 protocol. Radios that can do this start at around $250 and go up from there, and that’s before you get an antenna or anything associated with it. Additionally, I really like toying with Software Defined Radio (SDR) equipment and the idea of turning a computer into a scanner capable of tracking this radio system seemed fun.

In this post I am going to go through some of what I did to get set up to listen in on what I was interested in. While I knew that an SDR could be used for this task, I didn’t know how to put it together, what software was required and so on.

Get to know the radio systems used near you

The first thing I had to do was confirm what type of radio system was used near me. For that I turned to https://www.radioreference.com. Here, I learned that in the state of Minnesota, all Public Safety Agencies use ARMER, which is a P25 Phase 1 system. Based on this information I had a better idea of what to expect when it came to the hardware needed as well as what software I needed to research. Later, as I was setting up the software, I registered for a paid account with Radio Reference so that I could automatically pull down information about frequencies used and more.

Using the site, try to locate a tower site that is as close to you as possible and make note of the frequencies used. The important part to know is which frequencies the site uses, particularly the control channels, because that determines what hardware you will need.

Get the right hardware

For the SDR itself, I went with this RTL-SDR Blog V3 brand unit (Amazon Affiliate Link) based on information I found suggesting it was better supported. I also selected this unit because it has an SMA style connector for a screwed together, stronger connection to the antenna. Additionally, I grabbed this antenna set (Amazon Affiliate Link) because it offered a range of antennas that would be a good fit for what I was doing.

Note that, depending on the frequencies used in your area, you may need more than one SDR in order to tune them all in. If your local system is P25 based it will use a main control channel and then a set of additional frequencies for callers to use. This is commonly referred to as a trunked system. The frequencies in use near you need to fit in the range your SDR can tune and listen to at the same time. The dongle you select should advertise the bandwidth it can tune to. For example, the SDR I selected advertises “up to 3.2 MHz of instantaneous bandwidth (2.4 MHz stable)” which means it can reliably listen to anything within a 2.4 MHz range of frequencies. On a P25 system, the control frequency must always be tracked and all additional frequencies must be within 2.4 MHz of the control frequency. If the frequencies used fall outside of this range then you may need multiple SDR adapters to hear everything.

The system near me uses two different control frequencies:

  • 857.2625
  • 860.2625

Callers then are on:

  • 856.2625
  • 857.0125
  • 858.2625
  • 859.2625

As long as I do not select the 860.2625 control frequency, the SDR can tune in and hear any of the other frequencies at the same time, as they are all within 2.4 MHz of the control frequency.
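
To make the arithmetic explicit, here is a small sketch of my own (not part of any SDR tooling) that checks each voice frequency against the 2.4 MHz of usable bandwidth around the 857.2625 control channel:

#!/bin/bash

# Check that each voice frequency sits within the SDR's usable bandwidth
# of the chosen control channel
control=857.2625
usable=2.4   # MHz, the stable bandwidth of my RTL-SDR Blog V3

for freq in 856.2625 857.0125 858.2625 859.2625; do
  offset=$(echo "$freq - $control" | bc -l)
  offset=${offset#-}   # absolute value
  if (( $(echo "$offset <= $usable" | bc -l) )); then
    echo "$freq MHz is $offset MHz from the control channel - within range"
  else
    echo "$freq MHz is $offset MHz from the control channel - needs another SDR"
  fi
done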

You may elect to get more than one SDR if you wish to listen to additional trunks or other frequencies at the same time. Later you will see that you can set priorities on what frequencies or trunks you want to listen to first in the event two frequencies become active.

Software

After a short bit of research I found there are a handful of software options available, but I quickly settled on SDRTrunk. SDRTrunk is freely available, Java based software that will run on Windows, Mac and Linux alike. It seemed to be among the most recommended pieces of software for this task and had readily available information on how to set it up. I used this YouTube video to get things set up – https://www.youtube.com/watch?v=b9Gk865-sVU. The author of the video does a great job explaining how trunking works, talk groups, how to pull in data from Radio Reference and how to configure the software for your needs.

Putting it all together

For my setup I used an older iMac running Ubuntu 22.04. I installed the SDRTrunk software per their directions and used the above-mentioned video to learn how to configure it. The system was almost ready to go out of the box, but I had to install rtl-sdr from apt for the device to be recognized by the software. I used the adjustable, silver dipole antenna from the antenna kit with it fully collapsed. This was the closest I could get to the appropriately sized antenna for the frequencies used. I used this site to determine antenna length for a given frequency – https://www.omnicalculator.com/physics/dipole. I am located quite close to the broadcast tower so even a poorly sized antenna still worked. Sizing the antenna properly will assist greatly in improving your ability to tune something in. Fully collapsing the antenna versus fully extending it resulted in nearly a 10 dB improvement in signal strength.
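
For reference, the extra package install on Ubuntu was roughly this (the package is named rtl-sdr in the Ubuntu 22.04 repositories):

# Installs the RTL-SDR tools and udev rules so the dongle is recognized
sudo apt install rtl-sdr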

The last thing I did was set up an Icecast 2.4 based broadcast so I can tune in away from home. SDRTrunk has support for a few different pieces of streaming software and Icecast seemed to be the easiest to set up.

Finishing up

While not a full how-to, I hope this post gives you just enough information to get started. I am amazed at how well this solution, the SDR, the software and everything else, works together. Better than I expected. I also like that I can repurpose the SDR for other tasks if I want, like pulling in data from remote weather stations and more. If there is something you have a question about, leave a comment or find me on Mastodon.

Disclaimer: This post contains Amazon Affiliate links. If you purchase something from Amazon using a link I provided I will likely receive a commission for that sale. This helps support the site!

Some time ago I removed Google Analytics to avoid the tracking that came along with it and it all being tied to Google. I also wasn’t overly concerned about how much traffic my site got. I write here and if it helps someone then great but I’m not out here to play SEO games. Recently, however, I heard of a new self hosted option called Umami that claims to respect user privacy and is GDPR compliant. In this post I will go through how I set it up on the site.

Umami supports both PostgreSQL and MySQL. The installation resource I used, discussed below, defaults to PostgreSQL as the datastore and I opted to stick with that. PostgreSQL is definitely not a strong skill of mine and I struggled to get things running initially. Although I have PostgreSQL installed on a VM already for my Mastodon instance, I had to take some additional steps to get PostgreSQL ready for Umami. After some trial and error I was able to get Umami running.

My installation of PostgreSQL was done using the official postgresql.org resources, which you can read about at https://www.postgresql.org. In addition to having PostgreSQL itself installed as a service, I also needed to install postgresql15-contrib in order to add pgcrypto support. pgcrypto support wasn’t something I found documented in the Umami setup guide, but the software failed to start successfully without it and an additional step detailed below. Below is how I set up my user for Umami, with all commands run as the postgres user or in psql. Some info was changed to be very generic; you should change it to suit your environment:

  • cli: createdb umami
  • psql: CREATE ROLE umami WITH LOGIN PASSWORD 'password';
  • psql: GRANT ALL PRIVILEGES ON DATABASE umami TO umami;
  • psql: \c umami to select the umami database
  • psql: CREATE EXTENSION IF NOT EXISTS pgcrypto;
  • psql: GRANT ALL PRIVILEGES ON SCHEMA public TO umami;

With the above steps taken care of you can continue on.
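
If you prefer to script it, the same steps could be collected into something like this, run as the postgres user with the placeholder password swapped out:

#!/bin/bash

# Prepare a PostgreSQL database and role for Umami - the same steps as the list above
createdb umami
psql <<'SQL'
CREATE ROLE umami WITH LOGIN PASSWORD 'password';
GRANT ALL PRIVILEGES ON DATABASE umami TO umami;
\c umami
CREATE EXTENSION IF NOT EXISTS pgcrypto;
GRANT ALL PRIVILEGES ON SCHEMA public TO umami;
SQL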

Since I am a big fan of using Kubernetes whenever I can, my Umami instance is installed into my k3s based Kubernetes cluster. For the installation of Umami I elected to use a Helm chart by Christian Huth, which is available at https://github.com/christianhuth/helm-charts and worked quite well for my purposes. Follow Christian’s directions for adding the Helm chart repository and read up on the available options. Below are the Helm values I used for installation:

ingress:
  # -- Enable ingress record generation
  enabled: true
  # -- IngressClass that will be be used to implement the Ingress
  className: "nginx"
  # -- Additional annotations for the Ingress resource
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt-production
  hosts:
    - host: umami.dustinrue.com
      paths:
        - path: /
          pathType: ImplementationSpecific
  # -- An array with the tls configuration
  tls:
    - secretName: umami-tls
      hosts:
        - umami.dustinrue.com

umami:
  # -- Disables users, teams, and websites settings page.
  cloudMode: ""
  # -- Disables the login page for the application
  disableLogin: ""
  # -- hostname under which Umami will be reached
  hostname: "0.0.0.0"

postgresql:
  # -- enable PostgreSQL™ subchart from Bitnami
  enabled: false

externalDatabase:
  type: postgresql

database:
  # -- Key in the existing secret containing the database url
  databaseUrlKey: "database-url"
  # -- use an existing secret containing the database url. If none given, we will generate the database url by using the other values. The password for the database has to be set using `.Values.postgresql.auth.password`, `.Values.mysql.auth.password` or `.Values.externalDatabase.auth.password`.
  existingSecret: "umami-database-url"

The notable changes I made from the default values are that I enabled ingress and set my hostname for it as required. I also set cloudMode and disableLogin to empty so that these items were not disabled. Of particular note, leaving hostname at the default value is the correct option, as setting it to my hostname broke the startup process. Next, I disabled the postgresql option. This disables the installation of PostgreSQL as a dependent chart since I already had PostgreSQL running.

The last section is how I defined my database connection information. To do this, I created a secret using kubectl create secret generic umami-database-url -n umami and then edited the secret with kubectl edit secret umami-database-url -n umami. In the secret, I added a data section with the base64 encoded string for “postgresql://umami:password@10.0.0.1:5432/umami”. The secret looks like this:

apiVersion: v1
data:
  database-url: cG9zdGdyZXNxbDovL3VtYW1pOnBhc3N3b3JkQDEwLjAuMC4xOjU0MzIvdW1hbWk=
kind: Secret
metadata:
  name: umami-database-url
  namespace: umami
type: Opaque
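
As an alternative to creating an empty secret and editing it, the same secret can be created in one step and kubectl will handle the base64 encoding; a sketch with the same placeholder credentials:

# Create the secret with the connection string as a literal value
kubectl create secret generic umami-database-url -n umami \
  --from-literal=database-url='postgresql://umami:password@10.0.0.1:5432/umami'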

Umami was then installed into my cluster using helm install -f umami-values.yaml -n umami umami christianhuth/umami which brought it up. After a bit of effort on the part of Umami to initialize the database I was ready to log in using the default username/password of admin/umami.

I set up a new site in Umami per the official directions and grabbed some information that is required for site setup from the tracking code page.

Configuring WordPress

Configuring WordPress to send data to Umami was very simple. I added the integrate-umami plugin to my installation, activated the plugin and then went to the settings page to input the information I grabbed earlier. My settings page looks like this:

Screenshot of Umami settings showing the correct values for Script Url and Website ID. These values come from the Umami settings screen for a website.

With this information saved, the tracking code is now inserted into all pages of the site and data is sent to Umami.

Setting up Umami was a bit cumbersome for me initially, but that was mostly because I am unfamiliar with PostgreSQL in general and the inline documentation for the Helm chart is not very clear. After some trial and error I was able to get my installation working and I am now able to track at least some metrics for this site. In fact, Umami allows me to share a public URL for others to use. The stats for this site are available at https://umami.dustinrue.com/share/GadqqMiFCU8cSC7U/Blog.

One of the challenges, or points of friction, for me using Proxmox in my home lab has been integrating Ansible with it more cleanly. The issue is I have traditionally maintained my inventory file manually, which is a bit of a hassle. Part of the issue is that Proxmox doesn’t really expose a lot of metadata about the VMs you have running, so things like tagging don’t actually exist. Despite that, I set out to get a basic, dynamically generated inventory system that will work against my Proxmox installation to make the process at least a bit smoother.

For some time, Ansible has supported the idea of dynamic inventory. This type of inventory will query a backend to build out an inventory that is compliant with Ansible. Proxmox, having an API, has a dynamic inventory plugin available from the community. In this post I will showcase how I got started with a basic Proxmox dynamic inventory.

When I set out I had a few requirements. First, I really don’t have a naming convention for my VMs that makes any sense in DNS. Some systems have a fully qualified domain name but most do not. The ones that do have a fully qualified domain name wouldn’t actually be available over ssh on the IP resolved for that domain. To get around this, I wanted to be able to map the host name in Proxmox to its internal IP address. By default, the dynamic inventory plugin will set ansible_host to the name of the VM. For this I had to provide a compose entry to set the ansible_host, which you’ll see below. This feature is made possible because I always install the qemu guest agent.

The second requirement is that ssh connection info was dynamic as well because I use a number of different operating systems. Since all of my systems use cloud-init I am able to set the ssh username to the ciuser value thus ensuring I always know what the ssh user is regardless of the operating system used.

Here is my dynamic inventory file:

plugin: community.general.proxmox
validate_certs: false
want_facts: true
compose:
  ansible_host: proxmox_agent_interfaces[1]["ip-addresses"][0].split('/')[0]
  ansible_user: proxmox_ciuser

I placed this information into inventory/inventory.proxmox.yaml. Most of the entries are self-explanatory but I will go through what the compose section is doing.

The first item in the compose section is setting the ansible_host. When the inventory plugin gathers information from Proxmox it will gather the assigned IP addresses as determined using the qemu guest agent. In all cases that I could see, the first IP address will be localhost and the second one will always be the primary interface in the system. With this information known, I was able to create the Jinja2 expression to grab the correct IP address and strip the netmask off of it.

The next line is setting the ansible_user by just copying the proxmox_ciuser value. With these two variables set, Ansible will use that username when connecting to the host at its internal IP address. Since the systems were brought up using cloud-init, my ssh key is already present on all of the machines and the connection works without much fuss.
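
To sanity check the result without running a playbook, the inventory can be rendered directly. Something along these lines should show the composed ansible_host and ansible_user values and the generated groups:

# Render the dynamic inventory and inspect the composed variables
ansible-inventory -i inventory/inventory.proxmox.yaml --list

# Or view the groups (including proxmox_all_running) as a tree
ansible-inventory -i inventory/inventory.proxmox.yaml --graph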

To support this configuration, here is my ansible.cfg:

[defaults]
inventory = ./inventory
fact_caching_connection = .cache
retry_files_enabled = False
host_key_checking = False
forks = 5
fact_caching = jsonfile

[inventory]
cache = True
cache_plugin = jsonfile

[ssh_connection]
pipelining = True
ssh_args = -F ssh_config

This configuration is setting a few options for me related to how to find the inventory, where to cache inventory information and where to cache facts about remote machines. Caching this info greatly speeds up your Ansible runs and I recommend it. The ssh_args value allows me to specify some additional ssh connection info.

In addition to the above configuration files, there are environment variables that are set on my system. These variables define where to find the Proxmox API, what user to connect with and the password. The environment variables are defined on the dynamic inventory plugin page but here is what my variables look like:

PROXMOX_PASSWORD=[redacted]
PROXMOX_URL=https://[redacted]:8006/
PROXMOX_INVALID_CERT=True
PROXMOX_USERNAME=root@pam
PROXMOX_USER=root@pam

The user/username value is duplicated because some other tools rely on PROXMOX_USERNAME instead of PROXMOX_USER.

And that’s it! With this configured I am able to target all of my running hosts by targeting “proxmox_all_running”. For example, ansible proxmox_all_running -m ping will ping all running machines across my Proxmox cluster.

TLDR; The fix for this is to ensure you are forcing your CDN to properly handle “application/activity+json” in the Accept header vs anything else. In other words, you need to Vary on Accept, but it’s best to limit it to “application/activity+json” if you can.

With the release of the ActivityPub 1.0.0 plugin for WordPress, I hope we’ll see a surge in the number of WordPress sites that can be followed using your favorite ActivityPub based systems like Mastodon and others. However, if you are hosting your WordPress site on Cloudflare (and likely other CDNs) and you have activated full page caching, you are going to have a difficult time integrating your blog with the greater Fediverse. This is because when an ActivityPub user on a service like Mastodon performs a search for your profile, that search will land on your WordPress author page looking for additional information in JSON format. If someone has visited your author page recently in a browser then there is the chance Mastodon will get HTML back instead, resulting in a broken search. The reverse of this situation can happen too. If a Mastodon user has recently performed a search and later someone lands on your author page, they will see JSON instead of the expected page.

The cause of this is that Cloudflare doesn’t differentiate between a request looking for HTML and one looking for JSON; this information is not factored into how Cloudflare caches the page. Instead, it only sees the author page URL, determines that it is the same request and returns whatever it has. The good news is, with some effort, we can trick Cloudflare into considering what type of content the client is looking for while still allowing for full page caching. Luckily the ActivityPub plugin has a nice undocumented feature to help work around this situation.

To fix this while keeping page caching you will need to use a Cloudflare worker to adjust the request if the Accept header contains “application/activity+json”. I assume you already have page caching in place and you do not have some other plugin on your site that would interfere with page caching, like batcache, WP SuperCache and more. For my site I use Cloudflare’s APO for WordPress and nothing else.

First, you will want to ensure that your “Caching Level” configuration is set to standard. Next, you will need to get set up for working with Cloudflare Workers; you can follow the official guide at https://developers.cloudflare.com/workers/. Then create a new project, again using their documentation, and replace the index.js file contents with:

export default {
  async fetch(req) {
    const acceptHeader = req.headers.get('accept');
    const url = new URL(req.url);

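    // Requests asking for ActivityPub JSON get an extra query parameter appended.
    // This triggers the plugin's JSON output and gives Cloudflare a distinct URL,
    // and therefore a separate cache entry, for the JSON variant.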
    if (acceptHeader?.indexOf("application/activity+json") > -1) {
      url.searchParams.append("activitypub", "true");
    }

    return fetch(url.toString(), {
      cf: {
        // Always cache this fetch regardless of content type
        // for a max of 5 minutes before revalidating the resource
        cacheTtl: 300,
        cacheEverything: true,
      },
    });
  }
}

You can now publish this using wrangler publish. You can adjust the cacheTtl to something longer or shorter to suit your needs.

The last step is to associate the worker with the /author route of your WordPress site. For my setup I created a worker route of “*dustinrue/author*” and that was it. My site will now cache and return the correct content based on whether or not the Accept header contains “application/activity+json”.

Remember that Cloudflare Workers do cost money though I suspect a lot of small sites will easily fit into the free tier.

When you create a k3s cluster using colima it will default to using Docker for the runtime. This means that any Docker image you build or pull will be available to k3s. This greatly simplifies testing locally built images being referenced by Helm charts or Kustomize (or whatever you are using).

This is not a feature unique to colima, but rather a feature of k3s if it is told to use the Docker runtime. You can read more at https://docs.k3s.io/advanced#using-docker-as-the-container-runtime.
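
As a quick sketch of how this might look in practice (assuming colima and kubectl are already installed; the image name is just an example):

# Start colima with k3s enabled; the container runtime defaults to Docker
colima start --kubernetes

# Build an image locally - no registry push needed
docker build -t myapp:dev .

# Reference the locally built image directly; k3s sees Docker's image store
kubectl run myapp --image=myapp:dev --image-pull-policy=Never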

If you have an older Kubernetes cluster with older Helm based software installs and you aren’t paying attention, it can be easy to leave some resources in a state where they are impossible to update or remove. This is because of APIs that have been deprecated and then removed. While facing this issue today, I found that this Helm plugin exists, which can help resolve the problem – https://github.com/helm/helm-mapkubeapis
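
Based on the plugin's README, usage looks roughly like this, with the release and namespace names as placeholders:

# Install the plugin, then rewrite any deprecated or removed API versions
# recorded in a release's stored manifest
helm plugin install https://github.com/helm/helm-mapkubeapis
helm mapkubeapis my-release --namespace my-namespace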