About Dustin Rue
My name is Dustin and I am a lot of things to a number of different people. I am a husband, father, and a systems engineer who also knows how to write some code. Here I write about technology.
For a couple of years I've used Rogue Amoeba's SoundSource app to control audio routing on my Mac. It allows me to do tricks like sending system notifications to the built-in speakers, Spotify to my Schiit Mani DAC, and Zoom to my headphones. It also allows me to apply compressors to Zoom calls to normalize the volume of all participants or knock down some of the brightness of certain mic setups. One thing it lacks, however, is the ability to loop audio coming in on an audio input back out to some destination. For that, I would need to pick up some different software, and since it isn't a feature I need all of the time I couldn't really justify the price.
Sometimes I want to play Xbox but need to integrate its audio with Discord, which is running on some other system. The problem, of course, is how do I get the audio integrated or mixed properly? I do have an external mixer that can partially get the job done, but due to technical limitations of that mixer I can't really mix what I hear without also mixing it back into what others hear.
Enter LadioCast.
LadioCast is an app meant to let a user listen to web streams that use Icecast, RTMP, or SHOUTcast. It has a bit of a bonus feature that allows the user to mix up to four inputs and send them to any output. If you happen to have some kind of external audio device with an AUX in, like the Behringer UCA202, then you can easily send any audio into your Mac using the Behringer as an input and then use LadioCast to redirect it to an output. Between LadioCast's volume controls and SoundSource I can mix game audio with other audio like music from Spotify and Discord.
If you have been looking for an app that allows you to monitor input audio then give LadioCast a try.
In this post, I thought it would be fun to revisit my home audio journey and walk through how things have changed over the years to where they are now. For as long as I can remember I've had an interest in A/V gear, but audio gear and I have a history that goes back a bit further. When I was younger, I would read up on the latest devices and formats and just soak up as much information as I could. Whenever I could, I would spend hours recording to tapes, dubbing tapes, and listening to whatever we had, using whatever gear was around or that I had access to. Over the years I've upgraded my equipment but never really gotten far beyond entry level. This hasn't lessened my enjoyment of it all in the least though.
As I grew up, got a job, and eventually had cash to spend, I would save it for various pieces of gear to add to my own setup. Being a teen in a small rural area, miles from most electronics stores and before online shopping was a thing, my choices were rather limited. I didn't let this stop me from putting together a fun system. At the time most big box stores had a much more robust electronics section; K-mart sold audio components, as did Radio Shack.
I don’t actually remember what my very first system consisted of but it was probably something we picked up at a garage sale to get me going. I know it was an inexpensive, all-in-one system with a radio and tape deck. I also remember that it had an AUX input. This AUX input is what I hooked my first component up to. A Sharp DX-200 CD player.
This CD player served me well for many many years before ultimately succumbing to a small plumbing mishap while sitting in storage (it also needed a new belt). However, it wasn’t long before I knew that the next piece I needed in my setup was a new receiver to replace the original…unit. For this, I remember spending a lot of time looking at various flyers for big electronics stores and researching what was available at the time. After a while, I settled on a Kenwood KR-A4060.
This thing was, honestly, amazing at the time. AM-FM receiver with phono, CD, and tape input with monitoring (this will be important later). This also served me well for many years until I gave it to someone to use in their new place. Procuring it was a bit of a chore because it required a two-hour drive to pick up and GPS wasn’t a thing yet. It’s amazing we ever found anything back then.
At this point, this is where things get a bit fuzzy. I don’t know when I got new speakers but it had to have been quickly after picking up the receiver. Optimus STS-1000 speakers from Radio Shack were not the best speakers ever made but perfectly adequate (and more importantly, large).
I held onto these speakers until I got married and was living in an apartment that wouldn't have appreciated them the same way I did.
After the speakers came the graphic EQ, an Optimus 31-2025, also from Radio Shack (do you see a trend here? It's because Radio Shack was close). This rebranded Realistic 31-2020 (also sold under the RCA brand) graphic EQ was pretty neat, and it only worked because the receiver had tape monitoring. Using tape monitoring I was able to send any audio to the EQ, modify it, and then "monitor" it. This meant the changes made by the EQ were applied to all of the inputs of the receiver.
This is a piece that was just a lot of fun because of all the bouncing lights. Its demise came after buying my first full A/V receiver which didn’t have the tape monitoring trick.
The Sony CDP-CE215 5 disc changer was most likely next in the chain. To be honest I don’t remember when I picked this up or even where. This was a pretty basic 5 disc changer, even for the time, but served me well for many years until I offered it up to someone so they could use it in their new house. If I remember correctly, it was one of the first Sony models that came with the jog shuttle wheel making it easy to quickly jump to the desired track.
Next in the audio stack, and definitely a late addition that probably should never have happened, was a Sherwood DD-4030C.
This was another Radio Shack pick-up that, because tapes were already on their way out, was a very cheap clearance item I couldn't say no to. Of all the things I've sold, this is the one I regret the most, mostly because it was so feature complete and such a smooth performer. Everywhere I see one for sale it either looks to be in bad shape or costs twice as much as I paid for it new. That said, I did finally manage to find one on eBay and was recently able to fix it up and get it into working order! In a future post I may go through some of what I did to get it working again.
Somewhere between all of those, I picked up an Optimus PRO SW-12 subwoofer. To this day I can't remember what I used to power it since it was passive, but it created a lot of boom, more than a kid my age had any right to have. It too was sold after getting married and moving into an apartment.
This setup treated me well for a number of years. It was definitely all low end but was a great introduction to the world of hifi gear and gave me a place to grow from. Anyway, I think that does it for a “post 1” and in my next post I’ll look back on the home theater stuff that later replaced most of this equipment.
I have been running Linux as a server operating system for over twenty years now. For a brief period of time, around 2000-2001, I also ran it as my desktop solution. Try as I might, however, I could never fully embrace it. I have always found Linux as a desktop operating system annoying to deal with and too limiting (for my use cases, your mileage may vary). A recent series by Linus Tech Tips does a great job of highlighting some of the reasons why Linux as a desktop operating system has never really gone mainstream (Chromebooks being a notable exception).
Today I'd like to discuss what I often see as one of the largest contributors to poor backend WordPress performance. Oftentimes I see this particular issue contributing to 50% or more of the total time the user waits for a page to load. The problem? Remote web or API calls.
In my previous post, I talk about using Akismet to handle comment spam on this site. In order for Akismet to work at all it needs to be allowed to access an outside service using API calls. While API calls are necessary for some plugins to work properly I often see plugins or themes that make unnecessary remote calls. Some plugins and themes like to phone home for analytics reasons or to check for updates. Even WordPress core will make remote calls in order to determine the latest version of WordPress or to check for available theme and plugin updates even if you have otherwise disabled this functionality. These remote calls add a lot of extra time to requests or can even cause your site to become unavailable if API endpoints take too long to respond.
This site is very basic and runs a minimal set of plugins. Because of this, I am able to get away with a rather ham-fisted method of dealing with remote API calls so that I can ensure my site remains as responsive as possible given the low budget hosting arrangement I use. This one weird trick is to simply disallow all remote calls except the ones absolutely necessary for my plugins to operate properly.
In my wp-config.php file I have defined the following:
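Roughly, the two defines look like the sketch below; the exact hostnames are assumptions based on the Cloudflare and Akismet plugins this site runs.

// Block every request made through the WordPress HTTP API by default
define( 'WP_HTTP_BLOCK_EXTERNAL', true );

// Allow only the hosts my plugins need (comma separated, wildcards allowed)
define( 'WP_ACCESSIBLE_HOSTS', 'api.cloudflare.com,*.akismet.com' );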
The first define tells WordPress to block all HTTP requests made with wp_remote_get or similar functions. This has the immediate effect of blocking the majority of remote web calls. While this works for any plugin that uses the WordPress functions for accessing remote data, any plugin that makes direct web requests using libraries like curl or Guzzle will not be affected.
The second define tells WordPress what remote domains are allowed to be accessed. As you can see, the two plugins that are allowed to make remote calls are Cloudflare and Akismet. Allowing these domains allows these two plugins to function normally.
By blocking most remote calls I get the benefit of preventing my theme and core from phoning home on some page loads and while I'm in the admin. This trick alone, without making any other optimizations, makes WordPress feel much snappier to use and allows uncached pages to be built much more quickly. Blocking remote calls has the side effect of preventing core's ability to check core and plugin versions, but I am in the WordPress world enough that I check on these things manually anyway, so the automated checks just aren't necessary. I'd rather trade the automated checks for a consistently better WordPress experience.
What can you do as a developer?
This trick is decidedly heavy-handed and is only available to someone operating a WordPress site. Developers may want to reconsider their use of remote web calls and may be wondering, what can I do to ensure WordPress remains as responsive as possible? The primary question a developer should always ask when creating a remote request is "is the data from the remote request necessary right now?" Meaning, is the data the remote request is fetching necessary for the current page load, or could it be fetched by some background process like WordPress's cron system and then cached? The issue with remote requests isn't that they are being performed, it is that they are often performed on what I call "the main thread," where the main thread is the request a user has made and must then wait on. Remote requests made in the background will not be felt by end users, and background requests can be performed once rather than for every page load. Beyond improving page load times for end users, you may also find you can get away with less hardware.
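As a rough sketch of that pattern (the hook name, option name, and API URL below are made up for illustration), a plugin can refresh its data on a WordPress cron schedule and only ever read the cached copy during a page load:

// Background job: fetch the remote data and cache it in an option.
add_action( 'myplugin_refresh_data', function () {
    $response = wp_remote_get( 'https://api.example.com/data' );
    if ( ! is_wp_error( $response ) ) {
        update_option( 'myplugin_cached_data', wp_remote_retrieve_body( $response ) );
    }
} );

// Schedule the job once; WordPress cron runs it hourly in the background.
if ( ! wp_next_scheduled( 'myplugin_refresh_data' ) ) {
    wp_schedule_event( time(), 'hourly', 'myplugin_refresh_data' );
}

// On the main thread, read only the cached copy so visitors never wait on the API.
$data = get_option( 'myplugin_cached_data', '' );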
If you need help determining how many remote calls are being performed there are some options. You can certainly write an mu-plugin that simply logs any remote requests being made, but what I used was a free New Relic subscription. I used New Relic on this site to determine what remote calls were being performed and then configured which domains were allowed based on that information. By blocking unnecessary remote requests I was able to cut my time to first byte in half.
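If you go the mu-plugin route, a sketch of that logger might look like the following (the filename is arbitrary); it hooks the http_api_debug action that fires after every request made through the WordPress HTTP API and writes the URL to the PHP error log:

<?php
// wp-content/mu-plugins/log-remote-requests.php
// Log the URL of every request that goes through the WordPress HTTP API.
add_action( 'http_api_debug', function ( $response, $context, $class, $args, $url ) {
    error_log( 'WordPress remote request: ' . $url );
}, 10, 5 );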
Do you have any simple tricks you use to improve WordPress performance? Leave the info in the comments below!
At the beginning of November I decided to give comments on posts a try again. In the past, allowing comments on the site has been an issue, as the overhead of managing spam comments was more than I wanted to deal with. It required almost daily attention, as spam was a constant stream of junk even with tools like Akismet installed, enabled and configured.
I am happy to report that Akismet is greatly improved and it is doing an excellent job of blocking and removing spam comments so that I don’t even need to see them. If you, like me, have had a negative experience trying to manage comments in the past then maybe it is time to try them again with an updated Akismet setup.
For nearly as long as I've been using Linux I have had some system on my home network acting as a server or test bed for various pieces of software or services. In the beginning this system might have been my DHCP and NAT gateway, later it might have been a file server, but over the years I have almost always had some sort of system running that acted as a server of some kind. These systems would often be configured using the same operating system I was using in the workplace and running similar services where it made sense. This has always given me a way to practice upgrades and service configuration changes and just be as familiar with things as I possibly could.
As I've moved along in my career, the services I deal with have gotten more complex and what I want running at home has grown more complex to match. Although my home lab pales in comparison to what others have done, I thought it would still be fun to go through what I have running.
Hardware
Like a lot of people, the majority of the hardware I’m running is older hardware that isn’t well suited for daily use. Some of the hardware is stuff I got free, some of it is hardware previously used to run Windows and so on. Unlike what seems to be most home lab enthusiasts, I like to keep things as basic as possible. If a consumer grade device is capable of delivering what I need at home then I will happily stick to that.
On the network side, my home is serviced with cable-based Internet. This goes into an ISP-provided Arris cable modem and immediately behind that is a Google WiFi access point. Nothing elaborate here, just a "basic" WiFi router that handles all DHCP and NAT for my entire network and does a fine job of it. After the WiFi router is a Cisco 3560G 10/100/1000 switch. This sixteen-year-old managed switch does support a lot of useful features, but most of my network just sits on VLAN 1 as I don't have much need for segmenting it. Attached to the switch are two additional Google WiFi access points, numerous IoT devices, phones, laptops and the like.
Also attached to the switch are, of course, items that I consider part of the home lab. This includes a 2011 HP Compaq 8200 Elite Small Form Factor PC, an Intel i5-3470 based system built around 2012, and a Raspberry Pi 4. The HP system has a number of HDDs and SSDs, 24GB of memory, a single gigabit ethernet port, and hosts a number of virtual machines. The Intel i5-3470 system has 16GB of memory, a set of three 2TB HDDs, and a single SSD for hosting the OS. The Pi 4 is a 4GB model with an external SSD attached.
Operating Systems
The base operating system on the HP is Proxmox 7. This excellent operating system is best described as being similar to VMware ESXi. It allows you to host as many virtual machines as your hardware will support, can be clustered, and can even migrate VMs between cluster nodes. Proxmox is definitely a happy medium between having a single system and running a full-on cloud like OpenStack; I can effectively achieve a lot of what a cloud stack would provide but with greater simplicity. Although I can create VMs and manually install operating systems, I have created a number of templates to make creating VMs quicker and easier. The code for building the templates is at https://github.com/dustinrue/proxmox-packer.
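Once a template exists, creating a new VM from it is a single clone operation. As a rough example from the Proxmox shell (the VM IDs and name here are hypothetical):

# Full clone of template 9000 into a new VM with ID 120
qm clone 9000 120 --name test-vm --full

# Point cloud-init at DHCP and boot the new VM
qm set 120 --ipconfig0 ip=dhcp
qm start 120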
The Intel i5-3470 based system runs TrueNAS Core. This system acts as a Samba-based file store for the entire home network, including remote Apple Time Machine support, NFS for Proxmox, and iSCSI for Kubernetes. TrueNAS Core is an excellent choice for creating a NAS. Although it is capable of more, I stick to just the basic file serving functionality and don't get into any of the extra plugins or services it can provide.
The Raspberry Pi 4 is running the 64-bit version of Pi OS. Although it is a beta release it has proven to work well enough.
Software and Services
The Proxmox system hosts a number of virtual machines. These virtual machines provide:
Utility VMs for things like creating Pi OS images, running Docker and more
Kubernetes
On top of Proxmox I also run k3s to provide Kubernetes. Kubernetes allows me to run software and test Helm charts that I’m working on. My Kubernetes cluster consists of a single amd64 based VM running on Proxmox and the Pi4 to give me a true arm64 node. In Kubernetes I have installed:
cert-manager for SSL certifications. This is configured against my DNS provider to validate certificates.
ingress-nginx for ingress. I do not deploy Traefik on k3s, preferring ingress-nginx because I'm more familiar with its configuration and have had good luck with it (a sample install is sketched after this list).
democratic-csi for storage. This package provides on-demand storage for pods that ask for it, backed by iSCSI on the TrueNAS system. It is able to automatically create new storage pools and share them over iSCSI.
gitlab-runner for GitLab runners. This gives my GitLab server the ability to do CI/CD work.
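For reference, swapping Traefik for ingress-nginx amounts to installing k3s with its bundled Traefik disabled and then installing the official ingress-nginx chart with Helm. Something along these lines (chart values omitted):

# Install k3s without the bundled Traefik ingress controller
curl -sfL https://get.k3s.io | INSTALL_K3S_EXEC="--disable traefik" sh -

# Install ingress-nginx from its official Helm chart
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm repo update
helm install ingress-nginx ingress-nginx/ingress-nginx \
  --namespace ingress-nginx --create-namespace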
I don’t currently use Kubernetes at home for anything other than short term testing of software and Helm charts. Of everything in my home lab Kubernetes is the most “lab” part of it where I do most of my development of Helm charts and do basic testing of software. Having a Pi4 in the cluster really helps with ensuring charts are targeting operating systems and architectures properly. It also helps me validate that Docker images I am building do work properly across different architectures.
As you can see, I have a fairly modest home lab setup but it provides me with exactly what I need to provide the services I actually use on a daily basis as well as provide a place to test software and try things out. Although there is a limited set of items I run continuously I can easily use this for testing more advanced setups if I need to.
Chris Wiegman asks, what are you building? I thought this would be a fun question to answer today. Like a lot of people I have a number of things in flight, but I'll try to limit myself to just a few of them.
PiPlex
I have run Plex in my house for a few years to serve up my music collection. In 2021 I also started paying for Plex Pass, which gives me additional features. One of my favorite features, or add-ons, is PlexAmp, which gives me a Spotify-like experience but for music I own.
Although I’m very happy with the Plex server I have I wondered if it would be feasible to run Plex on a Raspberry Pi. I also wanted to learn how Pi OS images were generated using pi-gen. With that in mind I set out to create a Pi OS image that preinstalls Plex along with some additional tools like Samba to make it easy to get up and running with a Plex server. I named the project PiPlex. I don’t necessarily plan on replacing my existing Plex server with a Pi based solution but the project did serve its intended goal. I learned a bit about how Pi OS images are created and I discovered that it is quite possible to create a Pi based Plex server.
ProxySQL Helm Chart
One of the most exciting things I've learned in the past two years or so is Kubernetes. While it is complex, it is also a good answer to some equally complex challenges in hosting and scaling some apps. My preferred way of managing apps on Kubernetes is Helm.
One app I want to install and manage is ProxySQL. I couldn't find a good Helm chart to get this done so I wrote one; it is available at https://github.com/dustinrue/proxysql-kubernetes. To make this Helm chart I first had to take the existing ProxySQL Docker image and rebuild it so it was built for x86_64 as well as arm64. Next I created the Helm chart so that it installs ProxySQL as a cluster and does the initial configuration.
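The multi-architecture rebuild leans on Docker's buildx. Roughly, and with a placeholder image tag, it looks like this:

# Create a builder capable of targeting multiple platforms and make it active
docker buildx create --name multiarch --use

# Build for both architectures and push a combined manifest
docker buildx build --platform linux/amd64,linux/arm64 \
  -t example/proxysql:latest --push .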
Site Hosting
I’ve run my blog on WordPress since 2008 and the site has been hosted on Digital Ocean since 2013. During most of that time I have also used Cloudflare as the CDN. Through the years I have swapped the droplets (VMs) that host the site, changed the operating system and expanded the number of servers from one to two in order to support some additional software. The last OS change was done about three years ago and was done to swap from Ubuntu to CentOS 7.
CentOS 7 has served me well but it is time to upgrade it to a more recent release. With the CentOS 8 controversy last year I’ve decided to give one of the new forks a try. Digital Ocean offers Rocky Linux 8 and my plan is to replace the two instances I am currently running with a single instance running Rocky Linux. I no longer have a need for two separate servers and if I can get away with hosting the site on a single instance I will. Back in 2000 it was easy to run a full LAMP setup (and more) on 1GB of memory but it’s much more of a challenge today. That said, I plan to use a single $5 instance with 1 vCPU and 1GB memory to run a LEMP stack.
Cloudflare
Speaking of Cloudflare, did you know that Cloudflare does not cache anything it deems "dynamic"? PHP-based apps are considered dynamic content, so HTML output by software like WordPress is not cached. To counter this, I created some page rules a few years ago that force Cloudflare to cache pages, but not the admin area. Combined with the Cloudflare plugin this solution has worked well enough.
In the past year, however, Cloudflare introduced their automatic platform optimization option that targets WordPress. This feature enables the perfect mix of default rules (without using your limited set of page rules) for caching a WordPress site properly while breaking the cache when you are signed in. This is also by far the cheapest and most worry-free way to get the perfect caching setup for WordPress and I highly recommend using the feature. It works so well I went ahead and enabled it for this site.
Multi-Architecture Docker Images
Ever since getting a Raspberry Pi 4, and since rumors of an Arm-powered Mac began swirling, I've been interested in creating multi-architecture Docker images. I started with a number of images I use at work so they are available for both x86_64 and arm64. In the coming weeks I'd like to expand a bit on how to build multi-architecture images and how to replace Docker Desktop with a free alternative.
Finishing Up
This is just a few of the things I’m working on. Hopefully in a future post I can discuss some of the other stuff I’m up to. What are you building?
In late August of 2021 the company behind Docker Desktop announced their plans to change the licensing model of their popular Docker solution for Mac and Windows. This announcement means many companies who have been using Docker Desktop would now need to pay for the privilege. Thankfully, the open source community is working to create a replacement.
For those not aware, Docker doesn't run natively on the Mac. Docker Desktop is actually a small Linux VM running real Docker inside of it, with Docker Desktop doing a bunch of magic to make it look and feel like it is running natively on your system. It is for this reason that Docker Desktop users get to enjoy abysmal volume mount performance: the process of shuffling files (especially small ones) requires too much metadata passing to be efficient. Any solution for running Docker on a Mac will need to behave the same way and will inherit the same limitations.
Colima is a command line tool that builds on top of lima to provide a more convenient and complete-feeling Docker Desktop replacement, and it already shows a lot of promise. Getting started with colima is very simple as long as you already have brew and the Xcode command line tools installed. Simply run brew install colima docker kubectl and wait for the process to finish. You don't need Docker Desktop installed; in fact, you should not have it running. Once the install is complete you can start colima with:
colima start
This will launch a default VM with the docker runtime enabled and configure docker for you. Once it completes you will have a working installation. That's literally it! Commands like docker run --rm -ti hello-world will work without issue. You can also build and push images. It can do anything you used Docker Desktop for in the past.
Mounting Volumes
Out of the box colima will mount your entire home directory as a read-only volume within the colima VM, which makes it easily accessible to Docker. Colima is not immune to the performance issues that Docker Desktop struggled with, but the read-only option does seem to provide reasonable performance.
If, for any reason, you need to have the volumes you mount as read/write you can do that when you start colima. Add --mount <path on the host>:<path visible to Docker>[:w]. For example:
colima start --mount $HOME/project:/project:w
This will mount $HOME/project as /project within the Docker container and it will be writeable. As of this writing the ability to mount a directory read/write is considered alpha quality, so you are discouraged from mounting important directories like your home directory.
In my testing I found that mounting volumes read/write was in fact very slow. This is definitely an area that I hope some magic solution can be found to bring it closer to what Docker Desktop was able to achieve which still wasn’t great for large projects.
Running Kubernetes
Colima also supports k3s based Kubernetes. To get it started issue colima stop and then colima start --with-kubernetes. This will launch colima’s virtual machine, start k3s and then configure kubectl to work against your new, local k3s cluster (this may fail if you have an advanced kubeconfig arrangement).
With Kubernetes running locally you are now free to install apps however you like.
Customizing the VM
You may find the default VM to be a bit on the small side, especially if you decide to run Kubernetes as well. To give your VM more resources stop colima and then start it again with colima start --cpu 6 --memory 6. This will dedicate 6 CPU cores to your colima VM as well as 6GB of memory. You can get a full list of options by simply running colima and pressing enter.
What to expect
This is a very young project that already shows great potential. A lot is changing, and already in the code base is the ability to create additional colima VMs that run under different architectures. For example, you can run arm64 Docker images on your amd64 based Mac or vice versa.
Conclusion
Colima is a young but promising project that can be used to easily replace Docker Desktop and if you are a Docker user I highly recommend giving it a try and providing feedback if you are so inclined. It has the ability to run Docker containers, docker-compose based apps, Kubernetes and build images. With some effort you can also do multi-arch builds (which I’ll cover in a later post). You will find the project at https://github.com/abiosoft/colima.
This one is, primarily, for all the people responsible for ensuring a WordPress site remains available and running well. "Systems" people, if we must name them. If you're a WordPress developer you might want to ride along on this one as well so you and the systems or DevOps team can speak a common language when things go bad. Oftentimes, systems people will immediately blame developers for writing bad code, but the two disciplines must cooperate to keep things running smoothly, especially at scale. It's important for systems people AND developers to understand how code works and scales on servers.
What I'm about to cover are some common performance issues that I see come up and then get misdiagnosed or "fixed" incorrectly. They're the kind of thing that causes a WordPress site to become completely unresponsive or very slow. What I cover may seem obvious to some, and it is certainly very generalized, but I've seen enough bad calls to know there are a number of people out there who get tripped up by these situations. None of the issues are necessarily code related, nor are they strictly WordPress related; they apply to many PHP-based apps. It's all about how sites behave at scale. I am going to explore WordPress site performance issues since that's where my talents are currently focused.
In all scenarios I am expecting that the server(s) are getting a decent amount of traffic. I assume you are running a LEMP stack consisting of Linux, Nginx, PHP-FPM and MySQL. Maybe you even have a caching layer like Memcached or Redis (and you really should). I'm also assuming you have basic visibility into the app using something like New Relic.
In addition to the method described by Chris, there exists another method that feels a touch more natural than using ENV vars to pass parameters. It looks like this:
# If the first argument is good pass the rest of the line to the target
ifeq (good,$(firstword $(MAKECMDGOALS)))
# use the rest as arguments for "good"
RUN_ARGS := $(wordlist 2,$(words $(MAKECMDGOALS)),$(MAKECMDGOALS))
# ...and turn them into do-nothing targets
$(eval $(RUN_ARGS):;@:)
endif
.PHONY: bad good
bad:
	@echo "$(MAKECMDGOALS)"
good:
	@echo "$(RUN_ARGS)"
In this example, if we run make bad hello world you will get an error that looks like this:
make bad hello world
bad hello world
make: *** No rule to make target 'hello'. Stop.
However, if we run make good hello world then the extra parameters are simply passed to the command in the target. The output looks like this:
make good hello world
hello world
The magic is, of course, coming from the ifeq section of the Makefile. It checks to see if the first word of the make command goals is the target keyword. If it is, it captures the remaining words into RUN_ARGS and turns them into do-nothing targets so they can be used elsewhere in the file without make trying to build them.
For reference, I found this trick at https://stackoverflow.com/a/14061796 a while back and have put it to use in some internal tools where I want to wrap some command with additional steps.
Makefiles are a great tool partly because they allow you to create documentation that remains similar across projects yet allows you to customize what is actually happening behind the scenes. This makes your project much more approachable for new people since you can simply document “run make” and take care of the difficult stuff for them. Advanced users can still get into the Makefile to see what is going on or even customize it.