This post is less a full “how-to” on using Minio as a build cache for Gitlab and more a discussion of the general process and the tools I used. How I set up Minio as a build cache for my Gitlab runner is more complex than I care to get fully into in a blog post, but I am posting this in the hopes that it at least inspires others.

A Lot of Context

In my setup, my Gitlab runner lives in Kubernetes. As jobs are queued, the runner spawns new pods to perform the required work. Since the pods are temporary and do not have any kind of persistent storage by default, any work they do that is not pushed back to the Gitlab instance is lost the next time a similar job is run. Having a shared build cache allows you to, in some cases, reduce the total amount of time it takes to perform tasks by keeping local copies of Node modules or other heavy assets. Additionally, if you configure your cache key properly, you can pass temporary artifacts between jobs in a pipeline. In my setup, passing temporary data is what I need.

I use Gitlab for a number of my own personal projects, including maintaining this site’s codebase. While my “production” site is a Digital Ocean virtual machine, my staging site runs on Kubernetes. This means, along with other containers I build, that I need to containerize the code of my site, which also means I need to push the container images into a registry. This is done by authenticating against the container registry as part of the build pipeline.

Additionally, I am using the Gitlab CI Catalog, or CI/CD components. The Gitlab CI Catalog is similar to GitHub Actions in that you create reusable components that you can tie together to build a solution. I am using components that can sign into various container registries as well as build container images for different architectures. In an effort to create reusable and modular components, I split up the process of authenticating with different registries from the build process. For this to work, I must pass the cached credentials between the jobs in the pipeline. Using the shared build cache to pass the information along ensures that I can keep the credentials available for downstream jobs while keeping them out of artifacts that a regular user can access.

My Solution

For my solution, I am leveraging a number of components I already have in place. This includes k3s as my Kubernetes solution, TrueNAS Scale as my storage solution, and various other pieces to tie it together, like democratic-csi to provide persistent storage for k3s.

The new components for my solution are the Minio operator, located at https://github.com/minio/operator/tree/master/helm/operator, as well as a tenant definition based on their documentation. The tenant I created is as minimal as possible, using a single server without any encryption. Large-scale production environments will at least want on-the-wire encryption.

The configuration for my runner looks like this:

config.template.toml: |
  [[runners]]
    request_concurrency = 2
    [runners.cache]
      Type = "s3"
      [runners.cache.s3]
        ServerAddress = "minio:80"
        AccessKey = "[redacted]"
        SecretKey = "[redacted]"
        BucketName = "gitlab-cache"
        Insecure = true
      Shared = true
    [runners.kubernetes]
      image = "alpine:latest"
      privileged = true
      pull_policy = "always"
      service_account = "gitlab-runner"
    [runners.kubernetes.node_selector]
      "kubernetes.io/arch" = "amd64"
    [[runners.kubernetes.volumes.empty_dir]]
      name = "docker-certs"
      mount_path = "/certs/client"
      medium = "Memory"

From this, you can see that my Minio tenant was installed with a service named minio running on port 80. I used a port forward to access the tenant, created my access credentials and a bucket, and plugged those into the runner configuration, which I deployed using a Helm chart for Gitlab Runner. If you are using Amazon S3 in AWS, you can leverage AWS IAM Roles for Service Accounts and assign the correct service account to the runners to achieve the same behavior more securely.

With this configuration in place, I am able to cache Docker authentication between jobs in a pipeline. In a future post, I will more fully detail how I am doing this in the CI Catalog, but for now, here is the YAML to define the cache key:

cache:
  - key: docker-cache-$CI_PIPELINE_ID
    paths:
      - $CI_PROJECT_DIR/.docker

By setting the cache key in this way, I ensure that Docker credentials are passed between jobs for an entire pipeline. Care needs to be taken to ensure the cached information is not included in artifacts, particularly if it is sensitive in nature.
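To illustrate how the cached credentials can flow between jobs, here is a hedged sketch of a pipeline using this cache. The job names, stages, credential variables, and the `DOCKER_CONFIG` redirection are assumptions for this example, not the actual components from my CI catalog:

```yaml
variables:
  # Point Docker's config (and thus its credential file) at a cached path.
  DOCKER_CONFIG: $CI_PROJECT_DIR/.docker

cache:
  - key: docker-cache-$CI_PIPELINE_ID
    paths:
      - $CI_PROJECT_DIR/.docker

stages: [login, build]

registry-login:
  stage: login
  script:
    # Writes credentials to $DOCKER_CONFIG/config.json; the cache
    # then carries that file to later jobs in the same pipeline.
    - docker login -u "$REGISTRY_USER" -p "$REGISTRY_PASSWORD" "$CI_REGISTRY"

build-image:
  stage: build
  script:
    # The cached config.json is restored before this job runs,
    # so no re-authentication is needed here.
    - docker build -t "$CI_REGISTRY_IMAGE:latest" .
    - docker push "$CI_REGISTRY_IMAGE:latest"
```

`CI_REGISTRY` and `CI_REGISTRY_IMAGE` are GitLab's predefined variables; `REGISTRY_USER` and `REGISTRY_PASSWORD` here are placeholders you would define yourself.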

Between Safari 26.x and changes Cloudflare has made to their Automatic Platform Optimization (APO) system, something went wrong with Google-hosted fonts on my site when viewed in Safari. Attempting to access the fonts under Safari resulted in 400 (invalid request) errors. Resolving this was not immediately obvious, but a snippet in the APO FAQ helped me find a path forward that restored the intended look of my site.

This FAQ answer held the secret – https://developers.cloudflare.com/automatic-platform-optimization/troubleshooting/faq/#why-are-my-font-urls-not-being-transformed. While the “question” pondered why the fonts were not being transformed, it helped me understand how I could override the default behavior of APO when it sees Google fonts being referenced. By modifying my Content Security Policy header so that fonts were not allowed from my domain, I was able to restore the intended look. When APO sees the CSP won’t allow fonts to be served from the same domain, it avoids rewriting the URL for the font. Since I don’t serve any fonts off my domain, this is a quick and easy fix to avoid the issue.
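To make that concrete, here is an illustrative Content Security Policy header. The directive list is a made-up example, not my actual policy; the key point is that font-src lists only Google’s font host and omits 'self':

```
Content-Security-Policy: font-src https://fonts.gstatic.com; style-src 'self' https://fonts.googleapis.com
```

With a policy shaped like this, APO sees that same-origin fonts are disallowed and leaves the Google font URLs untouched.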

If you are in the market for a solar-powered mesh radio device, maybe for Meshtastic, and, like me, you just want to get up and running with something that is assured to work, then you may want to consider a base station kit from https://mesh-lab.com/products/solar-base-station-kit. Made by the same person behind Yeti Wurks, this base station kit has almost everything you need to get a solar-powered mesh radio node up and running quickly. I say “almost everything” because, while the solar charger half of the kit includes six 18650 cells, ideally you would also insert at least one 18650 battery into the radio case. The kit was recommended to me by a local mesh group (mspmesh.org). I paid full price for the kit and this is not a sponsored post in any way. What follows is an overview of what you receive in the kit and how to use it.

What is included

The box I received contained everything described on the product listing. This includes:

  • RAK4631 radio in a weather resistant case.
  • Solar charger with 6 18650 cells preinstalled.
  • Antenna.
  • Mounting bracket that you selected at order time.
  • Mounting hardware for the solar panel and more.
  • Directions.
  • Stickers!

Below is a look inside the weather resistant case.

In the case is, of course, the RAK4631 radio and an 18650 cell holder capable of holding up to four additional cells. Pay special attention to the note about battery orientation, as the holder is designed for parallel connections rather than serial! Additionally, when inserting the cells, make sure the positive side of each cell makes a connection with the contact. In my sample, the cells fit very tightly and may not make a connection if you don’t slide them against the positive terminal.

Along the edge of the case, starting from the top and working clockwise, is an N-type antenna connector, a switch to connect/disconnect the four-cell battery pack from the RAK4631 charge controller, a power connection, and a USB-C port. Difficult to see in the photo, on the left side of the case, is a weather resistant vent that allows for pressure equalization.

Also included in the kit is this solar charger, which I’m linking to directly as it provides better photos and a description – https://mesh-lab.com/products/off-grid-solar-charger-5-5-watt-5vdc-2-5a. This unit is available separately and includes six 18650 cells. The kit also contains all of the mounting hardware, which works perfectly with the 3D-printed mount that you selected at order time. The solar panel seems surprisingly efficient and will charge the cells, albeit slowly, even in overcast conditions. Combined with a power-efficient RAK4631, this solar panel and battery pack will keep it going for many days. While the panel and battery pack are great, the mounting post included with the panel seems a bit flimsy and I don’t know how it will hold up to the elements over time. For the price, however, I can’t expect much more.

At order time I selected the larger 2″ PVC pipe mounting option. This integrated mount, also available in the store separately, is a 3D-printed piece designed to be attached to a 2″ PVC pipe using the included worm gear clamps. This all worked very well for me and I had no issues. Again, all screws, clamps, and such were included with the kit. The solar panel attaches to the mount, as does the weather resistant case. Then the whole unit is attached to whatever pole you have.

Mini Review

I have had the kit for all of a few days as of this writing but I am able to say that everything that is included with the kit works very well together. The RAK4631 radio included in the kit is known for being reliable and power efficient, perfect for solar applications.

If you are looking for an easy-to-assemble, ready-to-go device for joining a mesh network, I highly recommend you consider this kit by mesh-lab.com. If you don’t need a solar-powered kit, there are other options available as well. My thanks to the folks on the mspmesh.org Discord for helping me find a ready-made kit. Now that I have something working and in the wild, I will likely build additional nodes on my own and I’ll post about my experience here.

Earlier I mentioned that the kit recommends you insert up to four additional 18650 cells into the weather resistant case. This helps ensure proper operation of the RAK4631. I didn’t have any additional cells so, on the recommendation of others, I simply stole one cell from the solar panel. Doing so is as easy as opening the back of the solar panel case by removing the rubber covers over the screws, undoing the screws, and removing one cell. Be sure to understand how to orient 18650 cells when inserting them into the weather resistant case!

This is a quick guide to performing an over-the-air update of a RAK4631 device using iOS and the DFU application by Nordic Semiconductor. Based on known issues and personal experience, performing an over-the-air update of this device carries some amount of risk, as the device does not have a fallback in the event of an error. If the update fails, your device will be left in a state where you need physical, wired access in order to recover. If you have any other method available to you, I recommend using it rather than an over-the-air update.

This guide specifically covers using the DFU app on iOS to flash the latest Meshtastic firmware to a RAK4631. Many of these steps may be the same on Android and may work for other, similar devices, but I don’t have these to test with. You will need three things to perform an over-the-air update of a RAK4631 or similar device:

  • An iOS device with the DFU app installed.
  • The DFU app properly configured prior to attempting the update.
  • A copy of the correct firmware file you wish to apply to the device. You can find the latest release on the Meshtastic website at https://meshtastic.org/downloads/. I recommend using a “stable” release.

Optionally, if you use iCloud Drive, you can download and work with the files on a Mac and place the firmware file on your iCloud Drive for easier access from the iOS side.

Configuring DFU

It is very important that you confirm DFU is configured properly prior to applying any update. Failure to do so on iOS will almost certainly result in a “bricked” device that you will need to take down and connect to using USB in order to recover. Configuration is simple. Open the app and tap on settings. Ensure you enable the first option and set the number of packets to 10. Your settings page should look similar to this screenshot:

Note that all other options are left at their default values, or disabled.

Getting the proper firmware

It is important that you download the correct firmware from the download page. If you do not, you will be unable to perform the update as the DFU app will complain there is no manifest.json file available. To find the correct firmware, use the following steps:

  • Go to the downloads page at https://meshtastic.org/downloads/.
  • Scroll down to the assets section.
  • Find the firmware with “nrf52840” in the name such as firmware-nrf52840-2.6.4.b89355f.zip and download it.
  • Extract the zip file and find the firmware specific to your device with “ota” in the name; for a RAK4631 it might be named firmware-rak4631-2.6.4.b89355f-ota.zip. If you are using a Mac + iCloud Drive, copy this file to a location you can find on the iOS device; if you are already on iOS, select this file in the DFU app when told to.

Applying the update

Now to apply the update to the device. Start by opening the DFU application, confirming the configuration is set properly and then tap on select file. Here you should browse to the location of your iCloud drive or the downloads section of iOS and select the firmware file with “ota” in the name.

Next in the Device section, tap on select and find your device. You can also tap on “nearby” to filter on items that are close.

Once you have selected your device, the last step is to tap upload. Be sure your screen or device does not turn off during the process! Remember, if for any reason the update fails, you will need to connect the device directly to a computer in order to recover. After some time the update will complete and the device will restart using the new firmware. Adjust any settings that you need to adjust and enjoy your updated device.

As of this writing I have two Heltec V3 Meshtastic nodes that I use to gain access to the greater Meshtastic network in my area. One is installed at a static location, the attic of my house, while the other one is either in my office or I take it with me. I interact with it using the Meshtastic software on my phone or on my desktop computer. One of the features of Meshtastic is to advertise some node metadata including location. My more mobile node gets my current coordinates from the Meshtastic client connected to the node over Bluetooth, but the static node has no way to know its location so I must tell it. In this post, I will walk through how I advertise a static location for the node installed in my attic as it wasn’t as straightforward as I initially thought it would be.

While it isn’t necessary to advertise the location of your node, it is useful because it helps provide some indication as to how your messages are traveling. You also don’t need to advertise a perfectly precise location, though you shouldn’t advertise it as being somewhere on the other side of the planet. I am making some heavy assumptions, primarily that you already know how to install the Meshtastic command line client. If not, take a look at the directions available at https://meshtastic.org/docs/software/python/cli/installation/. It is possible to configure the node without using the command line client, but I don’t cover that here. I use the command line client because my static node is connected via USB to a Raspberry Pi, which powers it and allows me to manage it that way.

Getting and setting your coordinates

Before you can set the coordinates for a Meshtastic node, you need to know them expressed as a latitude/longitude pair. I found that the simplest way is to use google.com/maps: search for the location you want to advertise, right click where you want the position to be, and click the top pair of values, which copies them to your clipboard. Next, modify the following script to suit your needs and set the values on your Meshtastic node:

#!/bin/bash

# Tell the node to use a fixed, manually supplied position.
meshtastic --port /dev/ttyUSB1 --set position.fixed_position True
meshtastic --port /dev/ttyUSB1 --set position.gps_mode 2
meshtastic --port /dev/ttyUSB1 --set position.gps_enabled False
# Advertise the position every 300 seconds.
meshtastic --port /dev/ttyUSB1 --set position.gps_update_interval 300
# Latitude/longitude in decimal degrees; altitude in meters (non-zero).
meshtastic --port /dev/ttyUSB1 --setlat 45.126 --setlon -93.488 --setalt 1

Google will provide very precise coordinates, but keeping 3 to 4 decimal places of precision is plenty. One key thing to remember is that, despite what the command line help will tell you, you should set the altitude to something other than nothing or 0. You can provide a proper altitude in meters above sea level, or simply set it to 1. Depending on your setup, you may not need to specify the port. I show my port for reference, as the Pi the node is connected to has other USB devices attached.

After configuration, your node will begin to advertise its position as often as you configured it to using the gps_update_interval value. You can modify this to suit your network.
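To double-check that the values stuck, you can read settings back with the CLI’s --get flag. This is a quick sketch; I’m assuming a recent Meshtastic CLI, and you should adjust the port to match your setup:

```
meshtastic --port /dev/ttyUSB1 --get position.fixed_position
meshtastic --port /dev/ttyUSB1 --get position.gps_update_interval
```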

Bonus content

Keeping with our scripted methods of managing a Meshtastic node, here is a bit of bonus content. I also set a name for my static node so it is easier to identify consistently on the network. If you ever need to factory reset your node and want to set the name again (if you don’t have the config saved or whatever) then this simple script will help:

#!/bin/bash

meshtastic --port /dev/ttyUSB1 --set-owner "MOONBASE" --set-owner-short "R2D2"

This will set the name of your node easily. You can combine these steps into a larger script to help maintain your node more easily.

Flux, or FluxCD, is a “set of continuous and progressive delivery solutions for Kubernetes that are open and extensible” and is my preferred way to do “GitOps” in my Kubernetes clusters. I like Flux because it handles Helm extremely well (I’m a big fan of Helm) and allows me to have a simple fallback if something goes wrong. In this post, I will go through the process of installing Flux and a k3s cluster that can be used for testing, and then creating a Flux project that adopts Flux’s documented monorepo design. I have also published a copy of the work detailed here to GitHub so you can use it as a working starter for your own project.


Computer professionals all know that secrets like API keys and passwords for services must be kept safe, but it isn’t always clear how to do so in a way that isn’t overly cumbersome. In this post, I am going to go through how I achieve this on macOS using GnuPG. I’m using GnuPG because I use both macOS and Linux on a regular basis. I also share my dotfiles across systems, and GnuPG is the most cross-platform option I am aware of. Although the solution is cross-platform, I am describing how to set it up using a Mac.

For this solution, I am leveraging a number of tools, which I’ve listed below. I assume that if you are the sort of person that has need for API tokens in your shell, you are likely also using brew. Here is what you want:

  • GPG Suite – This is optional but highly recommended. It provides a nice GUI for interacting with GPG and, more importantly, it provides a way to tie your GPG passphrase to your Apple Keychain to unlock it. This makes everything much smoother.
  • GPG – brew install gpg provides the command line tools you will need to manage your passwords or API tokens.
  • pass – brew install pass provides the command line password tool.

Initial Setup

If you have not used GPG before, then there are a few steps you need to take to get things set up.

IMPORTANT: If you are going to use GPG Suite then you will want to start the initial setup using GPG Keychain. Doing so will ensure the gpg command line tool can also see the key(s) you create. If you start with GPG on the command line, or were already using it, you will want to delete all of the .conf files in ~/.gnupg so that the command line client and GPG Keychain are working together.

Using GPG Keychain is an added bonus because it allows you to store your GPG credentials in the Apple Keychain. The Apple Keychain is unlocked whenever you log into your Mac. It isn’t necessary, but it is convenient.

Once you have created your GPG key, you can initialize your pass database. First, get the ID of your GPG key. You can use gpg --list-private-keys to get a list; the value you are looking for is a long string of hexadecimal characters. Then, initialize your pass database with the command below, replacing the ID with your key’s ID:

pass init 21D62AA0B018951161C3CC46E94469CDDCA62DF0

You will get a message that your password store has been initialized. There are other ways to initialize your pass database like storing it in git. You can read more at https://www.passwordstore.org

Adding a secret value to pass

Adding a value to your pass database is simple. Run:

pass insert key

Here, key is the name of the secret you want to store. Press enter and then put in the value. You can organize your keys however you want by separating them using / like a directory separator.

Using the secret value

Using the secret value is equally simple. Since your Keychain is unlocked when you sign into your computer, you should have no issues retrieving the value in a similar way as adding it. Simply use the following command to get the value, adjusting it for your use case:

pass show key

Again, key is the secret you want to retrieve.
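Putting the two commands together, here is a hypothetical session using the directory-style key names mentioned earlier (the key names are made up for illustration):

```
# Store and retrieve a token using a hierarchical key name.
pass insert services/proxmox/api_token
pass show services/proxmox/api_token
# List everything stored under services/
pass ls services
```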

Putting it all together

This is great, but we can make it more convenient by adding this to our shell startup scripts. On my system, .bash_profile and .bashrc are both processed. In my .bash_profile I have added the following to get secret values from pass and assign them to environment variables that various programs use in order to connect to services. Continuing my example of working against Proxmox, I have entered this into my .bash_profile:

export PROXMOX_PASSWORD=$(pass show proxmox_password)

Since I am using GPG Suite, my GPG passphrase is loaded from my Mac’s Keychain once I have saved the passphrase to it (you are asked the first time you decrypt a value). This way, my startup scripts do not contain any sensitive information and my environment is built from securely stored values.

This post is really a small collection of thoughts about Proxmox when used in a home lab situation and home labs in general. I was originally going to post this to Mastodon only but it didn’t fit in a single post.

A lot of people (at least from what I see on Reddit) build Proxmox systems with shared file systems like Ceph, even for home lab use. While I understand what shared file systems can do, they also have steeper hardware requirements, and honestly the benefits are rather limited. If your purpose for using shared file systems is to learn about them, then go for it, as it is a great thing to understand. Outside of learning, I prefer, and have had great luck with, keeping things as simple as I can while still getting something that is useful and versatile. Additionally, I avoid the use of ZFS because running it properly requires more memory, memory I’d prefer to give to my VMs.

For that reason, my Proxmox cluster consists of two systems with a Raspberry Pi acting as a qdevice for quorum. Each system has a single spinning drive for the Proxmox OS and a single SSD for all of the VMs that live on that node. I then have another physical system providing network-based shared storage for content like ISO files and backups, things that truly need to be shared between the Proxmox cluster nodes. This setup gives me a blend of excellent VM performance, because the disks are local and speedy, and shared file space where it matters for ISOs and backups, while maintaining one of the best features of Proxmox: live migration. Yes, live migration of VMs is still possible even when the underlying file system is not shared between systems; it’s just a much slower process because data must be transferred over the network during a migration.

One of the benefits of using a file system like Ceph is that files are distributed across your systems in case a system or disk fails. That gives you some redundancy, but regardless of redundancy you still need actual backups. To cover for this, I have regular backups taken of important VMs and separate backup tasks specifically for databases. For anything that has files, like Plex or Nextcloud, that data comes from my TrueNAS system using a network file system like NFS or Samba. Again, local storage for speed and shared storage where it really matters.

This setup gives me a lot of the benefits without a ton of overhead, which helps me keep costs down. Each Proxmox node, while clustered, still works more or less independently of the other. I can recover from issues by restoring a backup to either node, or recover databases quickly and easily. I don’t ever have to debug shared file system issues, and any file system issues I do face are localized to the affected node. In the event of a severe issue, recovery is simplified because I can replace the bad drive and simply restore any affected VMs on that node. The HA features of Proxmox are very good and I encourage their use when it makes sense, but you can avoid their complexity and maintenance and still have a reliable home lab that is easy and straightforward to live with.

For some time I’ve wanted a radio scanner so I could listen in on Police/Fire/EMS radio in my area, but I’m not serious enough to pay for the dedicated-to-the-task scanner required to listen to today’s radio protocols. Today’s protocols are digitally based, with trunking and patching systems that can both isolate calls to a local area and allow nearby stations to be patched in. This is done with a constant control signal and a number of nearby frequencies that radios can hop to when they make a call. Decoding all of this requires specialized equipment or software that understands the P25 Phase 1 or 2 protocol. Radios that can do this start at around $250 and go up from there, and that’s before you get an antenna or anything associated with it. Additionally, I really like toying with Software Defined Radio (SDR) equipment, and the idea of turning a computer into a scanner capable of tracking this radio system seemed fun.

In this post I am going to go through some of what I did to get setup to listen in on what I was interested in. While I knew that an SDR could be used for this task, I didn’t know how to put it together, what software was required and so on.

Get to know the radio systems used near you

The first thing I had to do was confirm what type of radio system was used near me. For that I turned to https://www.radioreference.com. Here, I learned that in the state of Minnesota, all Public Safety Agencies use ARMER which is a P25 Phase 1 system. Based on this information I knew better what to expect when it came to the hardware needed as well as what software I needed to research. Later, as I was setting up the software, I registered for a paid account with Radio Reference so that I could automatically pull down information about frequencies used and more.

Using the site, try to locate a tower site that is as close to you as possible and make note of the frequencies used. The important part to know is which frequency serves as the control channel and which frequencies callers use.

Get the right hardware

For the SDR itself, I went with this RTL-SDR Blog V3 brand unit (Amazon Affiliate Link) based on information I found suggesting it was better supported. I also selected this unit because it has an SMA style connector for a screwed-together, stronger connection to the antenna. Additionally, I grabbed this antenna set (Amazon Affiliate Link) because it offered a range of antennas that would be a good fit for what I was doing.

Note that, depending on the frequencies used in your area, you may need more than one SDR in order to tune them all in. If your local system is P25 based, it will use a main control channel and then a set of additional frequencies for callers to use. This is commonly referred to as a trunked system. The frequencies in use near you need to fit in the range your SDR can tune in and listen to at the same time. The dongle you select should advertise the bandwidth it can tune. For example, the SDR I selected advertises “up to 3.2 MHz of instantaneous bandwidth (2.4 MHz stable)”, which means it can reliably listen to anything within a 2.4 MHz range of frequencies. On a P25 system, the control frequency must always be tracked and all additional frequencies must be within 2.4 MHz of the control frequency. If the frequencies used fall outside of this range, then you may need multiple SDR adapters to hear everything.

The system near me uses two different control frequencies:

  • 857.2625
  • 860.2625

Callers then are on:

  • 856.2625
  • 857.0125
  • 858.2625
  • 859.2625

As long as I do not select the 860.2625 control frequency, the SDR can tune in and hear any of the other frequencies at the same time, as they are all within 2.4 MHz of the control frequency.
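As a quick sanity check, that arithmetic can be scripted. This is just an illustrative sketch using my local frequencies and the 2.4 MHz figure from the SDR’s specs:

```shell
#!/bin/sh
# Check which frequencies fall within the SDR's stable bandwidth
# (2.4 MHz) of a chosen control channel. Frequencies are in MHz.
control=857.2625
for freq in 856.2625 857.0125 858.2625 859.2625 860.2625; do
  awk -v c="$control" -v f="$freq" 'BEGIN {
    d = (f > c) ? f - c : c - f
    printf "%.4f MHz: %s (%.4f MHz from control)\n", f, (d <= 2.4) ? "in range" : "OUT of range", d
  }'
done
```

With the control channel at 857.2625, every caller frequency lands in range, while the second control channel at 860.2625 sits 3.0 MHz away and falls outside the SDR’s stable bandwidth.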

You may elect to get more than one SDR if you wish to listen to additional trunks or other frequencies at the same time. Later you will see that you can set priorities on what frequencies or trunks you want to listen to first in the event two frequencies become active.

Software

After a short bit of research, I found there is a handful of software options available, but I quickly settled on SDRTrunk. SDRTrunk is freely available, Java-based software that runs on Windows, Mac, and Linux alike. It seemed to be among the most recommended pieces of software for this task, with readily available information on how to set it up. I used this YouTube video to get things set up – https://www.youtube.com/watch?v=b9Gk865-sVU. The author of the video does a great job explaining how trunking works, talk groups, how to pull in data from Radio Reference, and how to configure the software for your needs.

Putting it all together

For my setup I used an older iMac running Ubuntu 22.04. I installed the SDRTrunk software per their directions and used the above-mentioned video to learn how to configure it. The system was almost ready to go out of the box, but I had to install rtl-sdr from apt for the device to be recognized by the software. I used the adjustable, silver dipole antenna from the antenna kit with it fully collapsed. This was the closest I could get to an appropriately sized antenna for the frequencies used. I used this site to determine antenna length for a given frequency – https://www.omnicalculator.com/physics/dipole. I am located quite close to the broadcast tower, so even a poorly sized antenna still worked, but sizing the antenna properly will greatly improve your ability to tune something in. Fully collapsing the antenna versus fully extending it resulted in nearly a 10 dB improvement in signal strength.

The last thing I did was set up an Icecast 2.4 based broadcast so I can tune in away from home. SDRTrunk has support for a few different pieces of streaming software and Icecast seemed to be the easiest to set up.

Finishing up

While not a full how-to, I hope my post gives you just enough information to get started. I am amazed at how well the SDR, the software, and everything else work together. Better than I expected. I also like that I can repurpose the SDR for other tasks if I want, like pulling in data from remote weather stations. If there is something you have a question about, leave a comment or find me on Mastodon.

Disclaimer: This post contains Amazon Affiliate links. If you purchase something from Amazon using a link I provided I will likely receive a commission for that sale. This helps support the site!

Some time ago I removed Google Analytics to avoid the tracking that came along with it, and to avoid having it all tied to Google. I also wasn't overly concerned about how much traffic my site got; I write here, and if it helps someone then great, but I'm not out to play SEO games. Recently, however, I heard of a new self-hosted option called Umami that claims to respect user privacy and is GDPR compliant. In this post I will go through how I set it up on this site.

Umami supports both PostgreSQL and MySQL. The installation resource I used, discussed below, defaults to PostgreSQL as the datastore, and I opted to stick with that. PostgreSQL is definitely not a strong skill of mine, and I struggled to get things running initially. Although I already have PostgreSQL installed on a VM for my Mastodon instance, I had to take some additional steps to get PostgreSQL ready for Umami. After some trial and error I was able to get Umami running.

My installation of PostgreSQL uses the official packages, which you can read about at https://www.postgresql.org. In addition to having PostgreSQL itself installed as a service, I also needed to install postgresql15-contrib in order to add pgcrypto support. pgcrypto wasn't something I found documented in the Umami setup guide, but the software failed to start successfully without it and the additional step detailed below. Below is how I set up my user for Umami, with all commands run as the postgres user or in psql. Some info has been made generic; change it to suit your environment:

  • cli: createdb umami
  • psql: CREATE ROLE umami WITH LOGIN PASSWORD 'password';
  • psql: GRANT ALL PRIVILEGES ON DATABASE umami TO umami;
  • psql: \c umami to select the umami database
  • psql: CREATE EXTENSION IF NOT EXISTS pgcrypto;
  • psql: GRANT ALL PRIVILEGES ON SCHEMA public TO umami;

With the above steps taken care of you can continue on.
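The steps above can be collected into a single sketch, run as the postgres user. Note this is my consolidation of the bullet list, not an official Umami script, and 'password' is a placeholder; connecting psql directly to the umami database takes the place of the \c step:

```shell
# Create the database, then run the remaining statements inside it.
createdb umami
psql -d umami <<'SQL'
-- Application role; replace 'password' with your own.
CREATE ROLE umami WITH LOGIN PASSWORD 'password';
GRANT ALL PRIVILEGES ON DATABASE umami TO umami;
-- Requires postgresql15-contrib; Umami failed to start without it.
CREATE EXTENSION IF NOT EXISTS pgcrypto;
GRANT ALL PRIVILEGES ON SCHEMA public TO umami;
SQL
```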

Since I am a big fan of using Kubernetes whenever I can, my Umami instance is installed into my k3s-based Kubernetes cluster. For the installation of Umami I elected to use a Helm chart by Christian Huth, which is available at https://github.com/christianhuth/helm-charts and worked quite well for my purposes. Follow Christian's directions for adding the Helm chart repository and read up on the available options. Below are the Helm values I used for installation:

ingress:
  # -- Enable ingress record generation
  enabled: true
  # -- IngressClass that will be used to implement the Ingress
  className: "nginx"
  # -- Additional annotations for the Ingress resource
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt-production
  hosts:
    - host: umami.dustinrue.com
      paths:
        - path: /
          pathType: ImplementationSpecific
  # -- An array with the tls configuration
  tls:
    - secretName: umami-tls
      hosts:
        - umami.dustinrue.com

umami:
  # -- Disables users, teams, and websites settings page.
  cloudMode: ""
  # -- Disables the login page for the application
  disableLogin: ""
  # -- hostname under which Umami will be reached
  hostname: "0.0.0.0"

postgresql:
  # -- enable PostgreSQL™ subchart from Bitnami
  enabled: false

externalDatabase:
  type: postgresql

database:
  # -- Key in the existing secret containing the database url
  databaseUrlKey: "database-url"
  # -- use an existing secret containing the database url. If none given, we will generate the database url by using the other values. The password for the database has to be set using `.Values.postgresql.auth.password`, `.Values.mysql.auth.password` or `.Values.externalDatabase.auth.password`.
  existingSecret: "umami-database-url"

The notable changes I made from the provided defaults are that I enabled ingress and set my hostname for it as required. I also set cloudMode and disableLogin to empty strings so that neither behavior was triggered, keeping the login page and settings pages available. Of particular note, leaving hostname at its default value is the correct option, as setting it to my hostname broke the startup process. Next, I disabled the postgresql option, which skips installing PostgreSQL as a dependent chart since I already had PostgreSQL running.

The last section is how I defined my database connection information. To do this, I created a secret using kubectl create secret generic umami-database-url -n umami and then edited it with kubectl edit secret umami-database-url -n umami. In the secret, I added a data section with the base64 encoded string for “postgresql://umami:password@10.0.0.1:5432/umami” (the host and password here are generic; use your own). The secret looks like this:

apiVersion: v1
data:
  database-url: cG9zdGdyZXNxbDovL3VtYW1pOnBhc3N3b3JkQDEwLjAuMC4xOjU0MzIvdW1hbWk=
kind: Secret
metadata:
  name: umami-database-url
  namespace: umami
type: Opaque
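The base64 value in the data section can be generated on the command line. The connection string below is the same generic placeholder used above:

```shell
# Encode the connection string; printf avoids the trailing newline
# that echo would otherwise include in the encoded value.
printf '%s' 'postgresql://umami:password@10.0.0.1:5432/umami' | base64
# → cG9zdGdyZXNxbDovL3VtYW1pOnBhc3N3b3JkQDEwLjAuMC4xOjU0MzIvdW1hbWk=
```

Alternatively, kubectl can handle the encoding for you with kubectl create secret generic umami-database-url -n umami --from-literal=database-url='postgresql://...', which avoids editing the secret by hand.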

Umami was then installed into my cluster using helm install -f umami-values.yaml -n umami umami christianhuth/umami, which brought it up. After Umami spent a bit of effort initializing the database, I was ready to log in using the default username/password of admin/umami.
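End to end, the Helm side looks roughly like the following. The repository URL here is a hypothetical placeholder; use the one from Christian's README:

```shell
# Add the chart repository (URL is a placeholder; check the chart's
# README at github.com/christianhuth/helm-charts for the real one).
helm repo add christianhuth https://example.com/helm-charts
helm repo update

# Install Umami with the values file shown above.
helm install -f umami-values.yaml -n umami umami christianhuth/umami
```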

I set up a new site in Umami per the official directions and grabbed the information required for site setup from the tracking code page.

Configuring WordPress

Configuring WordPress to send data to Umami was very simple. I added the integrate-umami plugin to my installation, activated it, and then went to the settings page to input the information I grabbed earlier. My settings page looks like this:

Screenshot of Umami settings showing the correct values for Script Url and Website ID. These values come from the Umami settings screen for a website.

With this information saved, the tracking code is now inserted into all pages of the site and data is sent to Umami.
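Under the hood, the plugin injects Umami's standard tracking snippet into each page. With hypothetical placeholder values it looks something like this; the website ID comes from Umami's tracking code page, and the exact script filename can vary by Umami version:

```html
<!-- data-website-id below is a made-up placeholder; copy the real
     snippet from your Umami instance's tracking code page. -->
<script async defer
        src="https://umami.example.com/script.js"
        data-website-id="00000000-0000-0000-0000-000000000000"></script>
```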

Setting up Umami was a bit cumbersome for me initially, but that was mostly because I am unfamiliar with PostgreSQL in general and the inline documentation for the Helm chart is not very clear. After some trial and error I got my installation working, and I can now track at least some metrics for this site. In fact, Umami allows me to share a public URL for others to use. The stats for this site are available at https://umami.dustinrue.com/share/GadqqMiFCU8cSC7U/Blog.