If you find yourself in the business of creating and testing Helm charts, or you simply want to try one out, then Colima with its built-in Kubernetes functionality may be for you. In this post I am going to walk through how to quickly get going with Colima’s Kubernetes integration and an ingress controller for basic Helm chart testing.

I assume you already have Colima and Helm installed and are familiar with the tools and Kubernetes itself. If this is you then continue reading!

For this post I am using Colima 0.4.4, k3s v1.23.6+k3s1, helm 3.9.3 and ingress-nginx 4.2.0. I often find myself creating Helm charts and I want to test my modifications locally before committing my changes. Once in a while I also want to quickly test an available helm chart without messing up an existing Kubernetes installation. In these cases I will create a Colima instance with Kubernetes enabled and install my preferred ingress controller, ingress-nginx.

To get started, ensure that no other colima instances are running using colima list followed by a colima stop <name of profile> for any running instances. You should also ensure that there are no other services running on your system that are listening on ports, especially 80, 443 and 3306. This helps ensure your test instance doesn’t interfere with any existing colima instances or other services. Then, issue colima start helm-test --kubernetes -m4 to start a colima instance with 4GB of memory and Kubernetes enabled. Once colima has finished creating the instance you can add the ingress-nginx helm repository, if you don’t already have it, with helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx followed by helm repo update. You can now install ingress-nginx using helm install -n ingress-nginx --create-namespace --set controller.ingressClassResource.default=true ingress-nginx ingress-nginx/ingress-nginx. This command will install ingress-nginx and set it as the default ingress class for the cluster. At this point you have a basic installation of Kubernetes with an ingress controller, which will allow you to test most Helm charts.
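
Putting the commands from that paragraph together, the full sequence looks like this (helm-test is just the profile name used in this post, use whatever you like):

# make sure no other Colima instances are running
colima list
colima stop <name of profile>

# create a new profile with Kubernetes enabled and 4GB of memory
colima start helm-test --kubernetes -m4

# add the ingress-nginx repo and install it as the cluster's default ingress class
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm repo update
helm install -n ingress-nginx --create-namespace \
  --set controller.ingressClassResource.default=true \
  ingress-nginx ingress-nginx/ingress-nginx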

As a test, you could now create a brand new helm chart with helm create nginx. Edit the resulting values.yaml file and enable ingress, then install the chart into your new test cluster. You should see that it is able to download and install the default nginx image and create the proper ingress rule automatically. For my test I used helm install -n default nginx . from within the chart directory. Before long you should see this as an ingress record:

kubectl get ingress nginx
NAME    CLASS   HOSTS                 ADDRESS        PORTS   AGE
nginx   nginx   chart-example.local   192.168.5.15   80      54s

Despite what the Address column says, the chart is now available at 127.0.0.1. Create a hosts entry and you will be able to get the default nginx page.
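
For reference, the whole nginx chart test looks roughly like this. The hosts entry assumes you are comfortable appending to /etc/hosts with sudo, and chart-example.local is the default host the generated chart uses for its ingress:

# create the chart, enable ingress in nginx/values.yaml, then install it
helm create nginx
cd nginx
# edit values.yaml and set ingress.enabled to true before installing
helm install -n default nginx .

# point the default ingress host at the loopback address and test it
echo "127.0.0.1 chart-example.local" | sudo tee -a /etc/hosts
curl http://chart-example.local/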

Of course, you can use or test other charts too. Here I will install Bitnami’s MySQL chart with the following settings in a YAML file:

## MySQL Authentication parameters
##
auth:
  ## MySQL root password
  ## ref: https://github.com/bitnami/bitnami-docker-mysql#setting-the-root-password-on-first-run
  ##
  rootPassword: "password"
  ## MySQL custom user and database
  ## ref: https://github.com/bitnami/bitnami-docker-mysql/blob/master/README.md#creating-a-database-on-first-run
  ## ref: https://github.com/bitnami/bitnami-docker-mysql/blob/master/README.md#creating-a-database-user-on-first-run
  ##
  database: "blog"
  username: "wordpress"
  password: "password"
##
primary:
  persistence:
    ## If true, use a Persistent Volume Claim, If false, use emptyDir
    ##
    enabled: false
  service:
    ## @param primary.service.type MySQL Primary K8s service type
    ##
    type: LoadBalancer
##
secondary:
  ## Number of MySQL Secondary replicas to deploy
  ##
  replicaCount: 0

I install the Bitnami repo using helm repo add bitnami https://charts.bitnami.com/bitnami followed by helm repo update to ensure I have the latest info. To install a copy of MySQL with my settings file I use helm install -f mysql.yaml mysql bitnami/mysql. After a short while MySQL will be installed and also available on localhost through k3s’ built-in LoadBalancer system. Notice that in the mysql.yaml file I asked the chart to install the primary instance of MySQL with a LoadBalancer based service instead of the default ClusterIP.
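
Because the primary service is exposed through the load balancer, you can connect to it directly from your Mac once the pod is ready. A quick check, assuming you have a MySQL client installed locally (the credentials come from the values file above):

# the LoadBalancer service publishes MySQL on localhost port 3306
mysql -h 127.0.0.1 -P 3306 -u wordpress -ppassword blog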

When you are finished testing a simple colima delete helm-test will remove your testing environment and free up resources.

Hopefully you see now how quickly and easily you can get going with Colima and its Kubernetes integration to get a local Kubernetes cluster up and running for testing. The Kubernetes integration Colima uses is very capable and well suited to learning and testing. Enjoy!

Semi-related to my previous post, this post quickly touches on the fact that having swap on your system is not always a bad thing. I have seen “disable swap” become a common “performance hack” suggested by a lot of people and it appears to be growing in popularity. I believe a lot of people are simply parroting something they heard once but don’t actually know when it makes sense to disable swap on a system. I have found that outright disabling swap has a detrimental effect on system performance.

The basic idea behind not using swap is sound, on the surface. The argument is that swap is both much much slower than system memory and that if you are hitting swap then you need more memory. To add to this, a lot of people don’t understand how memory works on Linux (and indeed all major operating systems). Linux wants to use as much memory as possible. If you give it 1TB of memory (or more) then it will do everything it can to eventually use all of it. However, how it uses this memory can be confusing. Looking at this output from free -m, it may not be obvious what is happening:

[root@web2 system]# free -m
              total        used        free      shared  buff/cache   available
Mem:            809         407         137          37         263         251
Swap:          1023         282         741

In the above example output from free -m you will see the columns total, used, free, shared, buff/cache and available. The values for each, respectively, are 809, 407, 137, 37, 263 and 251.

In a lot of cases, the value most people will look at is “free.” Unfortunately, on a system that has been running for some time, this value will almost always give the impression that the system is low on memory. Like so many things, there is a lot more to it than what the free value shows. In reality, the value you want to pay attention to is available. This value represents free memory plus memory that can be reclaimed at any time for other purposes. The “cache” portion of the buff/cache value is what can be reclaimed, and it represents the amount of data from disks that is cached in memory. It is this cache that operating systems try to keep full in order to avoid expensive disk reads, and it is why a system with a lot of memory can have very little free memory.

A system that is low on available memory will also not be able to cache a lot of disk reads (because, remember, available is free plus cache), which will lead to lower overall performance. Of course, loading an entire disk into memory won’t necessarily have a positive effect on overall performance either. If a file is read once and never used again, does it really need to be cached? Having a lot of memory can lead to things being needlessly cached. A system with 16GB of memory can perform just as well as a system with 32GB of memory if most of the 32GB is filled with files that are very rarely read again.

Getting to why having swap is not evil: some apps, and portions of apps, aren’t always being used even if they are running. For this reason, having swap available on a system is beneficial because the operating system can page application memory out to disk and free up memory to use as a disk cache for more active applications. In some instances, such as the web server hosting this site, having swap available is a necessity because it allows me to have a system with less memory while still maintaining proper performance under normal conditions. Services that are necessary but rarely used are swapped out, leaving room in memory for application code to be kept there instead. WordPress is considered “hot data” whereas systemd is not. Once the system has booted, systemd, while necessary, is not actively doing anything and can be paged to disk without affecting performance in a noticeable way. However, swap is an issue if you are dipping into it continuously. This will quickly become evident if you have a lack of available memory as well as a high usage of swap. In this case, you truly do need more memory in the system.
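
If you are unsure whether a system is simply holding stale pages in swap or actively thrashing, a couple of stock commands will tell you:

# watch the si/so columns; sustained non-zero values mean the system is
# continuously reading from and writing to swap, which is the real problem
vmstat 5

# low "available" combined with heavy swap usage is the other warning sign
free -m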

I hope this post helps clear up some of the confusion around memory usage on systems. Have anything to share? Did I get something wrong? Leave a comment!

In a previous post I mentioned that this site is hosted across two different hosts: one dedicated to running MySQL and Redis while the other runs Nginx and PHP. I use this arrangement for a few reasons. First, this is the cheapest way to get two real CPU cores on Digital Ocean. During a web request, multiple processes including Nginx, PHP, MySQL and Redis must run and share CPU time with each other. By using multiple machines, the work is spread across multiple physical CPUs, which improves overall performance and throughput. Second, it allows me to configure MySQL to use most of the system memory without fear that it’ll be OOM killed. An OOM kill is what happens on a Linux system when it determines it is out of memory and the biggest user of memory needs to be removed (killed) in order to protect the system from a meltdown. In general, regular triggering of the OOM killer should be considered an error in configuration and capacity planning, but know that it is there to protect the system.

In this post, I want to discuss a scenario where you want to host a common LAMP/LEMP stack on a single machine. In this kind of setup, multiple processes will be competing with each other for resources. Without getting too into the weeds about tuning software on this kind of setup, I’m going to assume that you will likely configure MySQL in such a way that it, as a single process, will consume the most memory of any process on the system. Indeed, most distributions when installing MySQL (or MariaDB) will have a default configuration that allows MySQL to use in excess of 1GB.

Unlike MySQL, the amount of memory that many other processes may use is relatively unknown. Looking at just PHP (using php-fpm), the amount of memory used is fairly dynamic. It is unlikely that you will be able to tune your system to ensure PHP doesn’t use too much memory without sacrificing total throughput. Therefore, it is necessary to configure PHP in such a way that you overprovision available memory in an effort to get the most performance you can most of the time. However, in this scenario it is likely that you will eventually face a situation where PHP is asking for a lot more memory than usual and the system will invoke the OOM killer to deal with the sudden shortage of memory. MySQL, being the single largest user of memory on the system, will almost always be selected by the kernel to be removed. Allowing MySQL to be OOM killed is far less ideal than killing a rogue PHP process or two because it will disrupt all requests rather than just the problem requests. So, how do you avoid MySQL being selected?
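
To see how easy it is to overprovision, a rough back-of-the-envelope calculation helps. The numbers below are made up purely for illustration; real per-worker sizes depend entirely on your application:

# worst case PHP memory is roughly pm.max_children times the average worker size
php_worker_mb=60    # hypothetical average php-fpm worker resident size
max_children=50     # hypothetical pm.max_children setting
echo "worst case php-fpm usage: $(( php_worker_mb * max_children )) MB"
# on a small VPS where MySQL is already allowed 1GB or more, that worst case
# only has to happen once for the OOM killer to get involved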

Most modern systems ship with systemd. Portions of systemd are not well received but, at least in my opinion, the init system is excellent. Using systemd, we are able to customize the startup routines for MySQL (almost any service, actually) so that we can instruct the kernel’s OOM killer to select a different process when the system is low on memory. Here is how it is done:

  • Create a directory – /etc/systemd/system/mysql.service.d. The directory name must match an existing service. For MariaDB it would be mariadb.service.d. You can determine the name by running systemctl list-unit-files
  • In this directory, create a file called oomadjust.conf with the following in it:
    [Service]
    OOMScoreAdjust=-500
  • Run systemctl daemon-reload
  • Restart MySQL
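
Putting those steps together, a minimal sketch looks like this (run as root or with sudo, and substitute mariadb for mysql if that is the unit name on your system):

mkdir -p /etc/systemd/system/mysql.service.d

cat <<'EOF' > /etc/systemd/system/mysql.service.d/oomadjust.conf
[Service]
OOMScoreAdjust=-500
EOF

systemctl daemon-reload
systemctl restart mysql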

To confirm your customization was picked up, run systemctl status mysql. In the “Drop-In” section you should see your override file listed. It’ll look similar to this:

[Screenshot: systemctl status output showing the oomadjust.conf drop-in was picked up by systemd]

This setting adjusts the score the OOM killer calculates when deciding which process to kill when the system runs out of memory. By forcing this value lower for MySQL, it is much less likely to be selected. Instead, a problem PHP process will likely be selected first and removed. This will save MySQL and the overall availability of your app. Of course, your mileage may vary and you should still tune your configuration to reduce, if not eliminate, the need for the OOM killer.
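
You can also verify the adjustment took effect by asking the kernel directly. This assumes the MySQL process is named mysqld; on MariaDB it may be mariadbd instead:

# the kernel exposes the adjusted score for every process under /proc
cat /proc/$(pgrep -x mysqld | head -n1)/oom_score_adj
# prints -500 once the drop-in is in place and MySQL has been restarted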

If you would like to learn more about systemd drop-ins, take a look at the documentation by Flatcar Linux at https://www.flatcar.org/docs/latest/setup/systemd/drop-in-units/. Many things can be overridden without having to edit the files provided by packages (which you should avoid doing).

Have you used systemd’s drop-in system before? Curious how else you might use it? Leave a comment!

CentOS 7 is now in full maintenance mode until 2024. This means it won’t get any updates except security fixes and fixes for some mission critical bugs. In addition to being in full maintenance mode, the OS is simply beginning to show its age. It’s still a great OS, it’s just that a lot of packages are very far behind the state of the art. Packages like git, bash and even the kernel are missing features that I prefer to have available. With that in mind, and an abundance of time on a Saturday, I decided to upgrade the underlying operating system hosting the site.

The choice of operating system was not as simple as it was just a year ago. In the past I would have simply spun up the next release of CentOS, which is based off of Red Hat Enterprise Linux, and configured it for whatever duty it was to perform. However, Red Hat had a different idea and decided to make CentOS 8 a rolling release that RHEL is based off of, rather than CentOS being a rebadged clone of RHEL. The history of CentOS is surprisingly complex and you can read about it at https://en.wikipedia.org/wiki/CentOS.

Since the change, at least a few options are now available to give people, like me, access to a Linux distribution they know and can trust. Among those, Rocky Linux appears to be getting enough traction for me to adopt it as my next Linux distribution. My needs for Linux are pretty basic and more than anything I just want to know that I can install updates without issue and keep the system going for a number of years before I have to worry about it. Rocky Linux gives me that just like CentOS did before. As of this writing, the web server hosting this site is now running Rocky Linux 8 and I’ll upgrade the database server at a later time. So far it has proven to be identical to RHEL and very familiar to anyone who has used RHEL/CentOS in the past.

If you are in the business of creating software, no matter your role, then you owe it to yourself to consider David Farley’s Modern Software Engineering: Doing What Works to Build Better Software Faster. I’m not in any way affiliated with the author and I’m not getting any sort of kickback on that link. I just think it’s a good book.

This book, along with what I consider a sort of companion to it, Accelerate: The Science of Lean Software and DevOps: Building and Scaling High Performing Technology Organizations, will likely get you to rethink how you are approaching software development. The Accelerate book provides information backed by data showing that the processes defined in Modern Software Engineering do in fact work to improve the pace of software development, the quality of the software and developer/employee satisfaction.

The overarching message to take away from the books is that being fast is the key. The quicker you can write and release code into production so that you can then get feedback from it the better your code quality will ultimately be. Care should be taken to remove anything that prevents developers from getting their code into production quickly and with minimal roadblocks. This doesn’t mean you are careless, however! Putting a heavy emphasis on testing, the books paint a picture of the ideal system where tests are written first and then code to satisfy the tests. This process helps ensure that your code is divided up into parts that can be tested easily which will, in essence, indirectly force you to write better, more readable and more easily understood code. These tests then have the additional benefit of allowing developers to know that the changes they made either satisfy requirements or at least didn’t break existing functionality. The difference between “I think it works” and “I know it works” pays huge dividends in developer and team satisfaction. It also provides long term benefits as people are rotated out of teams because it codifies intended behavior. Well written and described tests, when they fail, will tell the developer what the intended outcome of a function is.

This may feel counter-intuitive but the Accelerate book does a great job of showing, with data, that these things are in fact true in most if not all cases. While reading the book there were a number of times when I stopped to consider how I was approaching things and realized some of the assumptions I had made were incorrect and needed to be adjusted. Much of what Farley describes sounds difficult to implement, and indeed everything he describes requires a certain amount of discipline within the team to ensure the work they do upholds the defined ideals.

If you’re looking for a good read that will make you think about how you approach software development, regardless of your role in it, I highly recommend Modern Software Engineering as well as Accelerate.

You sign into Facebook and you see some new friend notifications from people you know you are already friends with. You browse your feed and you see notes from the same people saying “don’t accept the friend request from me my account was hacked!” What’s actually happening here? Was their account hacked in the traditional sense? Why would someone do this? How can I avoid this happening to me?

To get into this we must first properly define what is happening in these cases. What a lot of people describe as “being hacked” isn’t quite right. Being hacked means someone actually broke into your account and you have now lost control of it. This usually happens because you had a weak password on your account and you weren’t using two factor authentication. I’ll discuss what this means further down. Most of the time what you’re seeing is known as “account cloning,” where an attacker has taken the publicly available information on your account, created a replica Facebook account and is now trying to get people to add it as a friend. You can read more about account cloning at https://connections.oasisnet.org/facebook-account-cloning-scam-what-to-do-when-you-get-a-friend-request-from-a-friend/.

Securing your account password

Ok, with some small clarifications out of the way, let’s talk about what you can do to help prevent both types of attacks. Let’s start with preventing people from taking over your account by guessing your password.

An important first step is to have a strong password. Passwords that contain symbols, differences in capitalization and numbers are stronger than those that don’t. You should avoid using common names and words as these are easily guessed using automated tools that just continuously try combinations of words until they find one that works. Once this happens, an attacker can easily take over an account and prevent you from ever getting it back. So, the first tip is to have a strong password that you don’t use anywhere else. You can change your password on Facebook by visiting https://www.facebook.com/settings?tab=security and clicking Edit next to your password. What I find helps a lot is using the built in password saving feature of my browser so that I have a single password to unlock my browser, which can then fill in passwords for the sites I visit.

The second tip that is equally, if not more, important is to use two factor authentication. This way, even if an attacker does guess your password they will, hopefully, not have access to your second factor of authentication which will typically be your phone. You can configure two factor authentication at https://www.facebook.com/security/2fac/settings. For simplicity I recommend having Facebook text a code to your phone number that you input into Facebook when required. For advanced users who are more comfortable with or already have an authentication app (like Google Authenticator) then using that is an even stronger choice.

Protecting yourself from account cloning

From the article (you read at least some of it right?) we know that attackers do this because they want to prey on your trust of family and friends to, usually, scam you out of money. It’s important to understand the difference between having your account taken over and your account simply being cloned.

You may not be aware of this but the default settings of Facebook allow anyone to see at least some information about you even if they are not friends with you or even signed into Facebook. Depending on how you configure your account security, people can see your profile photo, background photo, some photos and your friends list. All of this is more than enough to allow an attacker to download a copy of those items and then create an account that looks just like yours.

Below is what you can do to limit this type of attack. I used the website on my computer to set these settings. Many of these settings are probably available on the phone app as well but you’re on your own.

First, review your privacy settings, which are located at https://www.facebook.com/privacy/checkup?source=settings. Click on “Who can see what you share” and then click continue. Scroll through the list and set each one so that it is something other than “Public.” Note that the trade off to setting these values as not public is that it will be harder for people to find you (even people who you might want to find you). Continue through this page, setting options as you desire.

Limiting these values goes a long way towards preventing people from gathering enough information about you to create a convincing clone of your account.

If you want to control who can post on your timeline, who can tag you and more visit https://www.facebook.com/settings?tab=timeline.

If you want to limit what people can do with your Public Posts visit https://www.facebook.com/settings?tab=followers.

The more options you set to “friends” or “friends of friends” the better.

One last thing about privacy

There is a saying that if a product does not charge then you are the product. Facebook is a tool for gathering your info and sharing it with advertisers so they can target you. Despite this, Facebook offers a decent number of privacy controls that you can leverage and I recommend you do so. This limits both their ability to track you and helps prevent account cloning. If you are an iPhone user with a newer phone (one that runs the latest versions of iOS) and use the Facebook app (or even if you don’t), I recommend opening the Settings app on your phone, tapping Privacy and then finding “Tracking.” On this screen you will find an option called “Allow Apps to Request to Track.” Ensure this option is disabled.

These are just some of the steps you can take to help secure your account and reduce the amount of tracking of your information. There is a lot more you can do and if you’re interested then I recommend doing some searches on the web about ensuring Facebook and advertising privacy on your devices.

Jeff Geerling has been on fire the past year doing numerous Pi based projects and posting about them on his YouTube channel and blog. He was recently given the opportunity to take the next TuringPi platform, called Turing Pi 2, for a spin and post his thoughts. This new board takes the original Turing Pi and makes it a whole lot more interesting and is something I’m seriously thinking about getting to setup in my own home lab. The idea of a multi-node, low power Arm based cluster that lives on a mini ITX board is just too appealing to ignore.

The board is appealing to me because it provides just enough of everything you need to build a reasonably complete and functional Kubernetes system that is large enough to learn and demonstrate a lot of what Kubernetes has to offer. In a future post, I hope to detail a k3s based Kubernetes cluster configuration that provides enough functionality to mimic what you might build on a larger platform, like an actual cloud provider such as Digital Ocean or AWS.

Anyway, do yourself a favor and go checkout Jeff’s coverage of the Turing Pi 2 which can be found at https://www.jeffgeerling.com/blog/2021/turing-pi-2-4-raspberry-pi-nodes-on-mini-itx-board.

Sometimes you need to access Docker on a remote machine. The reasons vary: maybe you just want to manage what is running on a remote system, or maybe you want to build for a different architecture. One of the ways Docker allows for remote access is over ssh, which is a convenient and secure way to reach a remote Docker engine. If you can ssh to a remote machine using key based authentication then you can access Docker there (provided your user is set up properly). To set this up, read https://docs.docker.com/engine/security/protect-access/.
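
As a quick sanity check that ssh based access works at all, you can point a single command at the remote engine before creating anything more permanent (user and remote-host are placeholders for your own values):

DOCKER_HOST=ssh://user@remote-host docker info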

In a previous post, I went over using remote systems to build multi-architecture images using native builders. This post is similar but doesn’t use k3s. Instead, we’ll leverage Docker’s built-in context system to add multiple Docker endpoints that we can tie together into a single builder. In fact, for this I am going to use only remote Docker instances from my Mac to build an example image. I assume that you already have Docker installed on your system(s) so I won’t go through that part.

Like in the previous post, I will use the project located at https://github.com/dustinrue/buildx-example as the example project. As a quick note, I have both a Raspberry Pi 4 running the 64-bit version of PiOS and an Intel based system available to me on my local network. I will use both of them to build a very basic multi-architecture Docker image. Multi-architecture Docker images are very useful if you need to target both x86 and Arm based systems, like the Raspberry Pi or AWS’s Graviton2 platform.

To get started, I create my first context to add the Intel based system. The command to create a new Docker context that connects to my Intel system looks like this:

docker context create amd64 --docker host=ssh://<user>@<intel-host>

This creates a context called amd64. I can then use this context by issuing docker context use amd64. After that, all Docker commands I run will run in that context, on that remote machine. Next, I add my Pi 4 with a similar command:

docker context create arm64 --docker host=ssh://<user>@<pi4-host>

We now have our two contexts. Next we can create a buildx builder that ties the two together so that we can target it for our multi-arch build. I use these commands to create the builder (note the optional --platform value which will mark that builder for the listed platforms):

docker buildx create --name multiarch-builder amd64 [--platform linux/amd64]
docker buildx create --name multiarch-builder --append arm64 [--platform linux/arm64]

We now have a single builder named multiarch-builder that we can use to build our image. When we ask buildx to build a multi-arch image, it will use the platform that most closely matches the target architecture to do the build. This ensures you get the quickest build times possible.

With the example project cloned, we now build an image that will work for 64-bit Arm, 32-bit Arm and 64-bit x86 systems with this command:

docker buildx build --builder multiarch-builder -t dustinrue/buildx-example --platform linux/amd64,linux/arm64,linux/arm/v6 .

This command will build our Docker image. If you wish to push the image to a Docker registry, remember to tag the image correctly and add --push to your command. You cannot use --load to load a multi-architecture image into your local image store as that is not supported.
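
For example, a push based variant of the same build might look like the following. This assumes you are logged into a registry you can push to; substitute an image name you own:

docker buildx build --builder multiarch-builder \
  -t your-user/buildx-example:latest \
  --platform linux/amd64,linux/arm64,linux/arm/v6 \
  --push .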

Using another Mac as a Docker context

It is possible to use another Mac as a Docker engine but when I did this I ran into an issue: the docker binary is not in a path that is available to Docker when it makes the remote ssh connection. To overcome this, the workaround described at https://github.com/docker/for-mac/issues/4382#issuecomment-603031242 will help.

I have been running Linux as a server operating system for over twenty years now. For a brief period of time, around 2000-2001, I also ran it as my desktop solution. Try as I might, however, I could never really fully embrace it. I have always found Linux as a desktop operating system annoying to deal with and too limiting (for my use cases, your mileage may vary). A recent series by Linus Tech Tips does a great job of highlighting some of the reasons why Linux as a desktop operating system has never really gone mainstream (Chromebooks being a notable exception).

Check out the videos over on the Linus Tech Tips YouTube channel.