I have been running this blog on this domain for over ten years now, but the “hardware” has changed a bit. I have always used a VPS, though where it lives has changed over time. I started with Rackspace and later moved to Digital Ocean back when they were the new kid on the block, offering SSD-based VPS instances with unlimited bandwidth. I started on a $5 droplet and then upgraded to a pair of $5 droplets so that I could get better separation of concerns and increase the total amount of compute at my disposal. This setup has served me very well for the past five years or so. If you are interested in checking out Digital Ocean, I have a referral code you can use: https://m.do.co/c/5016d3cc9b25

As of this writing, the site is hosted on two of the lowest tier droplets Digital Ocean offers, which cost $5 a month each. I use a pair of instances primarily because it is the cheapest way to get two vCPUs worth of compute. I made the change to two instances back when I was running xboxrecord.us (XRU) as well as a NodeJS app. Xboxrecord.us and the associated NodeJS app (which also powered guardian.theater at the time), combined with MySQL, used more CPU than a single instance could provide. By adding a new instance and moving MySQL to it I was able to spread the load across the two instances quite well. I have since shut down XRU and the NodeJS app but have kept the split server arrangement, mostly because I haven’t wanted to spend the time moving everything back to a single instance. Also, how I run WordPress is slightly different now: in addition to MySQL I am also running Redis. Four services (Nginx, PHP, Redis and MySQL) all competing for CPU time during requests is just a bit too much for a single core.

Making the dual server arrangement work is simple on Digital Ocean. The instance that runs MySQL also runs Redis for object and page caching for WordPress. This means Nginx and PHP get a CPU to themselves while MySQL and Redis get their own CPU for doing work. I am now effectively running a dual core system, with the added overhead, however small, of doing some work across the private network. Digital Ocean has offered private networking with no transfer fees between instances for a while now, so I use that to move data between the two instances. Digital Ocean also has firewall functionality that I tap into to ensure the database server can only be reached by my web server. There is no public access to the database server at all.
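In practice the split mostly comes down to pointing WordPress at the other droplet’s private address in wp-config.php. A minimal sketch, using a made-up private IP and placeholder credentials, looks something like this:

// Excerpt from wp-config.php (sketch only). 10.132.0.5 is a hypothetical
// private network address for the droplet running MySQL and Redis.
define( 'DB_NAME', 'wordpress' );
define( 'DB_USER', 'wordpress' );
define( 'DB_PASSWORD', 'change-me' );
define( 'DB_HOST', '10.132.0.5' ); // MySQL reached over Digital Ocean's private network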

The web server is, of course, publicly available. In front of this server is a floating IP, also provided by Digital Ocean. I use a floating IP so that I can create a new web server and then simply switch where the floating IP points to make it live. I don’t need to change any DNS and my cutovers are fairly clean. Floating IPs are free and I highly recommend always putting one in front of an instance.

Although the server is publicly available, I don’t allow direct access to it. To help provide some level of protection I use Cloudflare in front of the site. I have used Cloudflare for almost as long as I’ve been on Digital Ocean, and while I started out on their free plan I have since transitioned to their Automatic Platform Optimization system for WordPress. This feature does cost $5 a month to enable, but what it gives you, when combined with their plugin, is basically the perfect CDN solution for WordPress. I highly recommend this as well.

In all, hosting this site is about $15 a month. This is a bit steeper than some people may be willing to pay and I could certainly do it for less. That said, I have found this setup to be reliable and worry free. Digital Ocean is an excellent choice for hosting software and keeps getting better.

Running WordPress

WordPress, if you’re careful, is quite lightweight by today’s standards. Out of the box it runs extremely quickly, so I have always done what I could to ensure it stays that way and keep the site as responsive as possible. While I do utilize caching to keep things speedy, you can never ignore uncached speeds. Uncached responsiveness will always be felt in the admin area and I don’t want a sluggish admin experience.

Keeping WordPress running smoothly is simple in theory and sometimes difficult in practice. In most cases, doing less is the better option. For this reason I install and use as few plugins as possible and use a pretty basic theme. My only requirement for the theme is that it looks reasonable while also being responsive (mobile friendly). Below is a listing of the plugins I use on this site.

Akismet

This plugin comes with WordPress. Many people know what this plugin is so I won’t get into it too much. It detects and marks comment spam as best it can and does a pretty good job of it these days.

Autoptimize

Autoptimize combines JS and CSS files into as few files as possible. This reduces the total number of requests required to load content. This fulfills my “less is more” requirement.

Autoshare for Twitter

Autoshare for Twitter is a plugin my current employer puts out. It does one thing and it does it extremely well. It shares new posts, when told to do so, directly to Twitter with the title of the post as well as a link to it. When I started I would do this manually. Autoshare for Twitter greatly simplifies this task. Twitter happens to be the only place I share new content to.

Batcache

Batcache is a simple page caching solution for WordPress that caches whole pages at the server. Pages served to anonymous users are stored in Redis (memcached is also supported), and additional hits to the server are served out of the cache until the page expires. This may seem redundant since I have Cloudflare providing full page caching, but caching at the server itself ensures that Cloudflare’s many points of presence get a consistent copy from the server.
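As a rough sketch of how the pieces fit together: Batcache ships as the advanced-cache.php drop-in, relies on a persistent object cache drop-in such as Redis, and only loads when WP_CACHE is enabled. The settings below are illustrative values rather than my exact configuration; check the drop-in itself for the authoritative list.

// Sketch of how Batcache slots into this setup - not the plugin code itself.
// wp-content/object-cache.php   -> persistent object cache drop-in (Redis in my case)
// wp-content/advanced-cache.php -> Batcache page cache drop-in
//
// In wp-config.php:
define( 'WP_CACHE', true ); // tells WordPress to load advanced-cache.php early

// Batcache reads settings from a global $batcache array if one exists before
// the drop-in loads; these values are examples only.
$batcache = array(
	'max_age' => 300, // serve a cached copy of a page for up to five minutes
	'times'   => 2,   // start caching after this many requests for a URL...
	'seconds' => 120, // ...within this many seconds
);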

Cloudflare

The Cloudflare plugin is good by itself but required if you are using their APO option for WordPress. With this plugin, API calls are made to Cloudflare to clear the CDN cache when certain events happen in WordPress, like saving a new post.

Cookie Notice and Compliance

Cookie Notice and Compliance for that sweet GDPR compliance. Presents that annoying “we got cookies” notification.

Redis Object Cache

Redis Object Cache is my preferred object caching solution. I find Redis, combined with this plugin, to be the best object caching solution available for WordPress.
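The plugin picks up its connection settings from constants in wp-config.php. In my split setup, something along these lines (again with a hypothetical private IP) points it at the droplet running Redis:

// Excerpt from wp-config.php - connection settings read by Redis Object Cache.
define( 'WP_REDIS_HOST', '10.132.0.5' ); // hypothetical private IP of the Redis/MySQL droplet
define( 'WP_REDIS_PORT', 6379 );
define( 'WP_REDIS_DATABASE', 0 );        // optional, defaults to 0
define( 'WP_REDIS_TIMEOUT', 1 );         // seconds to wait before giving up on a connection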

Site Kit by Google

Site Kit by Google, another plugin by my employer, is the best way to integrate some useful Google services, like Google Analytics and Google Adsense, into your WordPress site.

That is the complete set of plugins that are deployed and activated on my site. In addition to this smallish set of plugins I also employ another method to keep my site running as quickly as I can, which I described in Speed up WordPress with this one weird trick. These plugins, combined with the mentioned trick, ensure the backend remains as responsive as possible. New Relic reports that the average response time of the site is under 200ms, though traffic to the site is admittedly pretty low. This seems pretty good to me while using the most basic droplets Digital Ocean has to offer.

Do you host your own site? Leave a comment describing what your methods are for hosting your own site!

I have been working remote for about eight years now and I thought I’d write a bit about my experience with it. What is it like to work from home most of the time, how have I made it work and what problems have I had?

As I said, about eight years ago I made the transition from working in an office full time to working remotely full time. The transition came about after my wife and I agreed that the timing was right for us to move near her family so that she could fulfill a lifelong dream of opening her own business.

Luckily for me, the company I was with at the time was receptive to the idea of me changing roles from strictly systems administration to more of a developer role. Where I was, the systems I had to manage were in-house and remote work wouldn’t have been feasible (and cloud just wasn’t an option for the company yet). By changing roles from systems to development I was able to remove the requirement to be physically close to the systems that ran the software. Instead, I was able to hone my skills as a developer and dig into DevOps a lot.

Getting Settled In

I knew right away that if I was going to work remotely, having a great Internet connection would be a must. I said early on that we could live anywhere as long as I could get milk from a grocery store late at night and reliable Internet with good speeds was available. Luckily we were able to agree on a town that was close enough to my wife’s parents and also met my requirements. At the time, 40 megabit Internet seemed like a solid idea (coming from the 5 megabit I had before) and lucky for me it has since been upgraded multiple times. 200 megabit down is now the normal, base package speed and has been perfect for me.

Upon moving into the new house, the first thing I did was pick a room that could be used as a space dedicated to me. While working remotely means that I can basically work anywhere, I knew right away that having a dedicated space in the house would be important. The room I’m in isn’t necessarily dedicated to doing work, it is just a space that is fully mine and isn’t shared in any way. There is no family computer, TV or game console in the room. I know that I can be in this room at any time of day or night, that it’ll be as I left it, and that I won’t disturb or be disturbed by anybody else in the house. When I am in this space I know right away that I can concentrate on whatever it is I’m doing. In addition to work, this is also my space to enjoy gaming, music or tinkering on projects.

Working remotely, especially from home, means that you are usually responsible for your office furniture. You may not realize it, but one of the perks for a company with remote workers is that they don’t have to buy very expensive office furniture. You most certainly should for yourself, though. I picked up a corner style desk that is a bit better built than you’ll usually find and an office chair to match. A good chair is very important as it is something you will be sitting in for many hours a day. Get the best chair you can afford.

Building a Reliable Network

Having good Internet delivered to a home means nothing if you can’t distribute it reliably within the home. During the first few days in the house, after we had the furniture in place, I got to work wiring up as much of the house with ethernet as I could. I know from experience that hardwiring as much as possible frees up valuable “air time” for WiFi devices. Not only does it ensure you get maximum throughput, it is also simply more reliable than WiFi. This is extra important on video calls, where a laggy connection is much more noticeable than when you are browsing the web. My office has six total ethernet jacks that lead to a central area with a network switch. These six jacks are used for my main computer as well as a few other items like my Xbox and, depending on the time of year, another PC or anything else I want to wire up.

WiFi is still important, of course, so the next thing I did was pick up additional WiFi access points to spread throughout the house. I connected these access points to my wired network (avoid wireless backhauls if you can!). These access points would later be replaced by three Google WiFi access points. While I could service my house with a single WiFi access point, the signal was too weak in some rooms to provide full throughput. I think a lot of people fall into a trap where they assume that if they can get a WiFi connection at all then it is fine. This is not true. Any device that has a weak WiFi signal, ironically, uses more of the available WiFi resources. There are technical reasons for this that I won’t get into, but trust me when I say that the most important thing you can do for WiFi performance is ensure that everywhere you use WiFi you are getting as close to a “perfect” signal as you can. This will ensure that your access points are able to use the most efficient methods available to transfer data between them and your devices.

Actually Working Remotely

In addition to having a dedicated space and a solid home network, working remotely takes some discipline. Without it, the work/life balance becomes very murky and difficult to maintain. I have found that keeping “normal” working hours is more effective than not. This means that I go to my office at 8(ish) in the morning and I consider my work day done at 5pm. I take an hour to myself around the lunch hour. It surprises some co-workers when I say that I follow this routine every day, but I find that it helps set clear boundaries on when I am available and when I am not. Since I am always at home, that boundary is very easy to violate, even for myself. It’s too easy to sit down in my office “after hours” and work on something. This will eventually lead to burnout and you need to actively avoid it.

In the beginning this was a bit more difficult. Smart phones were still fairly new and didn’t have ways to stop or filter notifications. This level of immediate connectivity meant I could be contacted at all times, which made it easy to feel like I was never truly done and away from work. As remote work has caught on and evolved, so too have the tools used to facilitate it. Software like Slack now allows you to silence notifications during certain periods of the day. macOS and iOS now have a shared “focus” mode that you can use to prevent any apps you choose from issuing notifications during times you specify. This allows you to get notifications for things you care about while hiding work related ones that really don’t need your attention (but are hard to ignore).

Working remotely doesn’t have to mean you work from home. One of the freedoms of remote work is being able to literally work from anywhere when you feel like it. In a coffee shop? No problem. Want to try a coworking space? Do it! Working remotely means you aren’t limited in where you work. You will always have the tools you need to properly communicate with co-workers.

Sounds Great But…

Working remote is great but it is not for everyone. There are definitely some aspects of it that a person needs to be aware of before switching to remote work.

People. Most of the time I don’t mind working in my office, alone, because it allows me to concentrate without distractions. I can listen to music at whatever volume I choose and even sing along if I want. But there are days where I wish I could actually interact with a person, in person. There really is something to being with people in a shared space and collaborating on something together, something that just can’t quite be replicated over a Zoom meeting, though it does depend a bit on what type of work you are doing. Brainstorming on the design of something is, for me, a bit blah over a Zoom session, and I find that things just flow better when you’re in person.

One other thing that isn’t bad but can be challenging is timezones. While working locally at some place you can at least assume you’re all working within the same timezone. Maybe you have a team somewhere else but there is at least a core group of people who you work with daily that come and go on the same schedule as you.

And Yet

Working remote is not something I think I could trade. I really like having my own space and the overall flexibility that it affords. It is less of an issue if I need to run a midday errand like shuttling my kids around or even take off a bit early to watch them in after school activities. I can always easily make that time up later if I need to. I also like knowing that where I work is not tied to where I live and that, if the opportunity presented itself, I could switch what I do without uprooting the family.

What to Avoid

If you are considering remote work there is at least one thing, in my mind, that you should evaluate very carefully: hybrid workplaces, where some people are in the office and some are not, especially if they only implemented remote work as an option during the pandemic. This arrangement can be made to work, in fact I did this for the first few years of my remote work life, but it comes at an additional cost. You as a remote worker will often be left out of discussions and decisions. If you are in a position or your duties are such that you mostly just “take orders” then this isn’t much of an issue. If you are part of a team designing products, or anywhere heavy collaboration is necessary, then a remote-first company is more desirable.

Finishing Up

That about wraps up the thoughts I’m able to share about my remote work experience but I’m curious about you, dear reader, what are your thoughts? What has made remote work a success for you or what prevents you from working remotely?

Jeff Geerling has been on fire the past year doing numerous Pi based projects and posting about them on his YouTube channel and blog. He was recently given the opportunity to take the next Turing Pi platform, called Turing Pi 2, for a spin and post his thoughts. This new board takes the original Turing Pi and makes it a whole lot more interesting, and it is something I’m seriously thinking about getting to set up in my own home lab. The idea of a multi-node, low power Arm based cluster that lives on a mini ITX board is just too appealing to ignore.

The board is appealing to me because it provides just enough of everything you need to build a reasonably complete and functional Kubernetes system, one that is large enough to learn and demonstrate a lot of what Kubernetes has to offer. In a future post, I hope to detail a k3s based Kubernetes cluster configuration that provides enough functionality to mimic what you might build on larger platforms, like an actual cloud provider such as Digital Ocean or AWS.

Anyway, do yourself a favor and go check out Jeff’s coverage of the Turing Pi 2, which can be found at https://www.jeffgeerling.com/blog/2021/turing-pi-2-4-raspberry-pi-nodes-on-mini-itx-board.

Sometimes you need to access Docker on a remote machine. The reasons vary: maybe you just want to manage what is running on a remote system, or maybe you want to build for a different architecture. One of the ways Docker allows for remote access is over ssh, which is a convenient and secure option. If you can ssh to a remote machine using key based authentication then you can access Docker there (provided your user is set up properly). To set this up, read about it at https://docs.docker.com/engine/security/protect-access/.

In a previous post, I went over using remote systems to build multi-architecture images using native builders. This post is similar but doesn’t use k3s. Instead, we’ll leverage Docker’s built-in context system to add multiple Docker endpoints that we can tie together to create a solution. In fact, for this I am going to use only remote Docker instances from my Mac to build an example image. I assume that you already have Docker installed on your system(s) so I won’t go through that part.

Like in the previous post, I will use the project located at https://github.com/dustinrue/buildx-example as the example project. As a quick note, I have both a Raspberry Pi 4 running the 64-bit version of Pi OS as well as an Intel based system available to me on my local network. I will use both of them to build a very basic multi-architecture Docker image. Multi-architecture Docker images are very useful if you need to target both x86 and Arm based systems, like the Raspberry Pi or AWS’s Graviton2 platform.

To get started, I create my first context to add the Intel based system. The command to create a new Docker context that connects to my Intel system looks like this:

docker context create amd64 --docker host=ssh://[email protected]

This creates a context called amd64. I can then use this context by issuing docker context use amd64. After that, all Docker commands I run will be run in that context, on that remote machine. Next, I add my Pi 4 with a similar command:

docker context create arm64 --docker host=ssh://[email protected]

We now have our two contexts. Next we can create a buildx builder that ties the two together so that we can target it for our multi-arch build. I use these commands to create the builder (note the optional --platform value, which will mark that builder for the listed platforms):

docker buildx create --name multiarch-builder amd64 [--platform linux/amd64]
docker buildx create --name multiarch-builder --append arm64 [--platform linux/arm64]

We now have a single builder named multiarch-builder that we can use to build our image. When we ask buildx to build a multi-arch image, it will use the platform that most closely matches the target architecture to do the build. This ensures you get the quickest build times possible.

With the example project cloned, we can now build an image that will work for 64-bit Arm, 32-bit Arm and 64-bit x86 systems with this command:

docker buildx build --builder multiarch-builder -t dustinrue/buildx-example --platform linux/amd64,linux/arm64,linux/arm/v6 .

This command will build our Docker image. If you wish to push the image to a Docker registry, remember to tag the image correctly and add --push to your command. You cannot use --load to load a multi-architecture image into your local Docker image store, as that is not supported.

Using another Mac as a Docker context

It is possible to use another Mac as a Docker engine, but when I did this I ran into an issue: the docker command is not in a path that is available when Docker makes the remote connection over ssh. To overcome this, this comment will help: https://github.com/docker/for-mac/issues/4382#issuecomment-603031242.

For a couple of years I’ve used Rogue Amoeba’s SoundSource app to control audio routing on my Mac. It allows me to do tricks like sending system notifications to the built in speakers, Spotify to my Schiit Mani DAC and Zoom to my headphones. It also allows me to apply compressors to Zoom calls so that it normalizes the volume of all participants or knocks down some of the brightness on some mic setups. One thing it lacks, however, is the ability to loop audio coming in on an audio input back to some destination. For that, I would need to pick up some different software. It isn’t a feature I need all of the time so I couldn’t really justify the price.

Sometimes I want to play Xbox but need to integrate the audio with Discord, which is running on some other system. The problem, of course, is how do I get the audio integrated or mixed properly? I do have an external mixer that can partially get the job done, but due to technical limitations of my mixer I can’t really mix what I hear without also mixing it back into what others hear.

Enter LadioCast.

LadioCast is an app that is meant to allow a user to listen to web streams that use Icecast, RTMP or SHOUTcast. It has a bit of a bonus feature that allows the user to mix up to four inputs and send them to any output. If you happen to have some kind of external audio device that allows for AUX in, like the Behringer UCA202, then you can easily send any audio into your Mac using the Behringer as an input and then use LadioCast to redirect it to an output. Between LadioCast’s volume controls and SoundSource I can mix game audio with other audio like music from Spotify and Discord.

If you have been looking for an app that allows you to monitor input audio then give LadioCast a try.

In this post, I thought it would be fun to revisit my home audio journey and walk through how things have changed over the years to where they are now. For as long as I can remember I’ve had an interest in A/V gear, but audio gear and I have a history that goes back a bit further. When I was younger, I would read up on the latest devices and formats and just soak up as much information as I could. Whenever I could I would, using whatever we had around or I had access to, spend hours recording to tapes, dubbing tapes, and listening to whatever we had. Over the years I’ve upgraded my stuff but never really got too far beyond entry-level equipment. This hasn’t lessened my enjoyment of it all in the least though.

As I was growing up, got a job, and eventually had cash to spend, I would save it for various pieces of gear to add to my own setup. Being a teen living in a small rural area, miles from most electronics stores and before online shopping was a thing, my choices were rather limited. I didn’t let this stop me from putting together a fun system. At the time most big box stores had a much more robust electronics section; K-mart sold audio components, as did Radio Shack.

I don’t actually remember what my very first system consisted of, but it was probably something we picked up at a garage sale to get me going. I know it was an inexpensive, all-in-one system with a radio and tape deck. I also remember that it had an AUX input. This AUX input is what I hooked my first component up to: a Sharp DX-200 CD player.

Sharp DX-200 CD Player

This CD player served me well for many many years before ultimately succumbing to a small plumbing mishap while sitting in storage (it also needed a new belt). However, it wasn’t long before I knew that the next piece I needed in my setup was a new receiver to replace the original…unit. For this, I remember spending a lot of time looking at various flyers for big electronics stores and researching what was available at the time. After a while, I settled on a Kenwood KR-A4060.

Kenwood KR-A4060

This thing was, honestly, amazing at the time. AM-FM receiver with phono, CD, and tape input with monitoring (this will be important later). This also served me well for many years until I gave it to someone to use in their new place. Procuring it was a bit of a chore because it required a two-hour drive to pick up and GPS wasn’t a thing yet. It’s amazing we ever found anything back then.

At this point, this is where things get a bit fuzzy. I don’t know when I got new speakers but it had to have been soon after picking up the receiver. The Optimus STS-1000 speakers from Radio Shack were not the best speakers ever made but were perfectly adequate (and more importantly, large).

STS-1000 Speakers

I held onto these speakers until I got married and was living in an apartment that wouldn’t have appreciated them the same way I did.

After the speakers came the graphic EQ, an Optimus 31-2025, also from Radio Shack (do you see a trend here? It’s because Radio Shack was close). This rebranded Realistic 31-2020 (also sold under the RCA brand) graphic EQ was pretty neat, and it only worked because the receiver had tape monitoring. Using tape monitoring I was able to send any audio to the EQ, modify it and then “monitor” it. This meant the changes made by the EQ were applied to all of the inputs of the receiver.

Optimus 31-2025 Graphic EQ

This is a piece that was just a lot of fun because of all the bouncing lights. Its demise came after I bought my first full A/V receiver, which didn’t have the tape monitoring trick.

The Sony CDP-CE215 5 disc changer was most likely next in the chain. To be honest I don’t remember when I picked this up or even where. This was a pretty basic 5 disc changer, even for the time, but it served me well for many years until I offered it up to someone so they could use it in their new house. If I remember correctly, it was one of the first Sony models that came with the jog shuttle wheel, making it easy to quickly jump to the desired track.

Sony CDP-CE215 5 disc changer

Next in the audio stack, and definitely a late addition that probably should never have happened, was a Sherwood DD-4030C.

Sherwood DD-4030C

This was another Radio Shack pick-up that, because tapes were already on their way out, was a very cheap clearance item that I couldn’t say no to. Of all the things I’ve sold this is the one I regret the most. Mostly because it was so feature complete and such a smooth performer. Everywhere I see this being sold it either looks to be in bad shape or is twice as much as I paid for it new. That said, I did finally manage to find one on eBay and was recently able to fix it up and get it in working order! In a future post I may go through some of what I did to get it working again.

Somewhere between all of those, I picked up an Optimus PRO SW-12 subwoofer. To this day I can’t remember what I used to power it since it was passive, but it created a lot of boom, more than a kid my age had any right to have. It too was sold after getting married and living in an apartment.

This setup treated me well for a number of years. It was definitely all low end but was a great introduction to the world of hifi gear and gave me a place to grow from. Anyway, I think that does it for a “post 1” and in my next post I’ll look back on the home theater stuff that later replaced most of this equipment.

I have been running Linux as a server operating system for over twenty years now. For a brief period of time, around 2000-2001, I also ran it as my desktop solution. Try as I might, however, I could never fully embrace it. I have always found Linux as a desktop operating system annoying to deal with and too limiting (for my use cases, your mileage may vary). A recent series by Linus Tech Tips does a great job of highlighting some of the reasons why Linux as a desktop operating system has never really gone mainstream (Chromebooks being a notable exception).

Check out the videos on the Linus Tech Tips YouTube channel.

Today I’d like to discuss what I often see as one of the largest contributors to poor backend WordPress performance. Oftentimes I see this particular issue contributing to 50% or more of the total time the user waits for a page to load. The problem? Remote web or API calls.

In my previous post, I talked about using Akismet to handle comment spam on this site. In order for Akismet to work at all it needs to be allowed to access an outside service using API calls. While API calls are necessary for some plugins to work properly, I often see plugins or themes that make unnecessary remote calls. Some plugins and themes like to phone home for analytics reasons or to check for updates. Even WordPress core will make remote calls to determine the latest version of WordPress or to check for available theme and plugin updates, even if you have otherwise disabled this functionality. These remote calls add a lot of extra time to requests and can even cause your site to become unavailable if API endpoints take too long to respond.

This site is very basic and runs a minimal set of plugins. Because of this, I am able to get away with a rather ham-fisted method of dealing with remote API calls so that I can ensure my site remains as responsive as possible given the low budget hosting arrangement I use. This one weird trick is to simply disallow all remote calls except for the ones absolutely necessary for my plugins to operate properly.

In my wp-config.php file I have defined the following:

define( 'WP_HTTP_BLOCK_EXTERNAL', true );
define( 'WP_ACCESSIBLE_HOSTS', 'api.cloudflare.com,rest.akismet.com,*.rest.akismet.com' );

The first define tells WordPress to block all HTTP requests that use wp_remote_get or similar functions. This has the immediate effect of blocking the majority of remote web calls. While this works for any plugin that uses WordPress functions for accessing remote data, any plugin that makes direct web requests using libraries like curl or Guzzle will not be affected.

The second define tells WordPress what remote domains are allowed to be accessed. As you can see, the two plugins that are allowed to make remote calls are Cloudflare and Akismet. Allowing these domains allows these two plugins to function normally.

By blocking most remote calls I get the benefit of preventing my theme and core from phoning home on some page loads and while I’m in the admin. This trick alone, without making any other optimizations, makes WordPress feel much more snappy to use and allows uncached pages to be built much more quickly. Blocking remote calls has the side effect of preventing core’s ability to check core and plugin versions, but I am in the WordPress world enough that I check on these things manually anyway, so the automated checks just aren’t necessary. I’d rather trade the automated checks for a continuously better WordPress experience.

What can you do as a developer?

This trick is decidedly heavy handed and only works for someone operating a WordPress site. Developers may want to consider their use of remote web calls and may be wondering: what can I do to ensure WordPress remains as responsive as possible? The primary question a developer should always ask when creating a remote request is “is the data from the remote request necessary right now?” Meaning, is the data the remote request is getting necessary for the current page load, or could it be deferred to some background process like WordPress’s cron system and then cached? The issue with remote requests isn’t that they are being performed, it is that they are often performed on what I call “the main thread”, the request a user has made and is now waiting on for results. Remote requests that are made in the background will not be felt by end users, and background requests can be performed once rather than for every request. In addition to improving page load times for end users you may also find you can get away with less hardware.
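As a rough sketch of that pattern (the hook, option and function names here are made up for illustration), the remote call moves into a WP-Cron task and page loads only ever read the cached copy:

// Illustrative sketch only: fetch remote data in the background instead of on page loads.

// Schedule an hourly background refresh if one isn't scheduled already.
add_action( 'init', function () {
	if ( ! wp_next_scheduled( 'myplugin_refresh_remote_data' ) ) {
		wp_schedule_event( time(), 'hourly', 'myplugin_refresh_remote_data' );
	}
} );

// The only place a remote request happens: inside the cron task.
add_action( 'myplugin_refresh_remote_data', function () {
	$response = wp_remote_get( 'https://api.example.com/data' );
	if ( ! is_wp_error( $response ) && 200 === wp_remote_retrieve_response_code( $response ) ) {
		update_option( 'myplugin_remote_data', wp_remote_retrieve_body( $response ), false );
	}
} );

// Page loads read the cached copy and never wait on the network.
function myplugin_get_remote_data() {
	return get_option( 'myplugin_remote_data', '' );
}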

If you need help determining how many remote calls are being performed, there are some options. You can certainly write an mu-plugin that simply logs any remote requests being made, but what I used was a free New Relic subscription. I used New Relic on this site to determine what remote calls were being performed and then configured which domains were allowed based on that information. By blocking unnecessary remote requests I was able to cut my time to first byte in half.
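If you want to go the mu-plugin route, a minimal sketch might look something like the following: it hooks the http_api_debug action, which fires after every request made through the WordPress HTTP API, and writes the URL to the error log. Requests made directly with curl or Guzzle will not show up here.

<?php
/**
 * Plugin Name: Log Remote Requests
 * Description: Drop into wp-content/mu-plugins/ to log outgoing HTTP API requests. Sketch only.
 */

add_action( 'http_api_debug', function ( $response, $context, $class, $parsed_args, $url ) {
	$code = is_wp_error( $response ) ? $response->get_error_code() : wp_remote_retrieve_response_code( $response );
	error_log( sprintf( '[remote request] %s (%s)', $url, $code ) );
}, 10, 5 );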

Do you have any simple tricks you use to improve WordPress performance? Leave the info in the comments below!

At the beginning of November I decided to give comments on posts a try again. In the past, allowing comments on the site has been an issue as the overhead of managing spam comments was more than I wanted to deal with. It required almost daily attendance as spam was a constant stream of junk even with tools like Akismet installed, enabled and configured.

I am happy to report that Akismet is greatly improved and it is doing an excellent job of blocking and removing spam comments so that I don’t even need to see them. If you, like me, have had a negative experience trying to manage comments in the past then maybe it is time to try them again with an updated Akismet setup.

For nearly as long as I’ve been using Linux I have had some system on my home network acting as a server or test bed for various pieces of software or services. In the beginning this system might have been my DHCP and NAT gateway, later it might have been a file server, but over the years I have almost always had some sort of system running that acted as a server of some kind. These systems would often be configured using the same operating system that I was using in the workplace and running similar services where it made sense. This has always given me a way to practice upgrades and service configuration changes and just be as familiar with things as I possibly could.

As I’ve moved along in my career, the services I deal with have gotten more complex and what I want running at home has grown more complex to match. Although my home lab pales in comparison to what others have done, I thought it would still be fun to go through what I have running.

Hardware

Like a lot of people, the majority of the hardware I’m running is older hardware that isn’t well suited for daily use. Some of the hardware is stuff I got for free, some of it was previously used to run Windows, and so on. Unlike most home lab enthusiasts, it seems, I like to keep things as basic as possible. If a consumer grade device is capable of delivering what I need at home then I will happily stick with that.

On the network side, my home is serviced with cable based Internet. This goes into an ISP provided Arris cable modem and immediately behind this is a Google WiFi access point. Nothing elaborate here; a “basic” WiFi router handles all DHCP and NAT for my entire network and does a fine job of it. After the WiFi router is a Cisco 3560G 10/100/1000 switch. This sixteen-year-old managed switch supports a lot of useful features, but most of my network just sits on VLAN 1 as I don’t have much need for segmenting my network. Attached to the switch are two additional Google WiFi access points, numerous IoT devices, phones, laptops and the like.

Also attached to the switch are, of course, items that I consider part of the home lab. This includes a 2011 HP Compaq 8200 Elite Small Form Factor PC, an Intel i5-3470 based system built around 2012 and a Raspberry Pi 4. The HP system has a number of HDDs and SSDs, 24GB of memory and a single gigabit ethernet port, and it hosts a number of virtual machines. The Intel i5-3470 based system I built has 16GB of memory, a set of three 2TB HDDs and a single SSD hosting the OS. The Pi 4 is a 4GB model with an external SSD attached.

Operating Systems

The base operating system on the HP is Proxmox 7. This excellent operating system is best described as being similar to VMware ESXi. It allows you to host as many virtual machines as your hardware will support, can be clustered and can even migrate VMs between cluster nodes. Proxmox is definitely a happy medium between having a single system and running a full on cloud like OpenStack. I can effectively achieve a lot of what a cloud stack would provide but with greater simplicity. Although I can create VMs and manually install operating systems, I have created a number of templates to make creating VMs quicker and easier. The code for building the templates is at https://github.com/dustinrue/proxmox-packer.

On the Intel i5-3470 based system is TrueNAS Core. This system acts as a Samba based file store for the entire home network, including remote Apple Time Machine support, NFS for Proxmox and iSCSI for Kubernetes. TrueNAS Core is an excellent choice for creating a NAS. Although it is capable of more, I stick to just the basic file serving functionality and don’t get into any of the extra plugins or services it can provide.

The Raspberry Pi 4 is running the 64-bit version of Pi OS. Although it is a beta release, it has proven to work well enough.

Software and Services

The Proxmox system hosts a number of virtual machines that provide various services for the home network and the lab.

Kubernetes

On top of Proxmox I also run k3s to provide Kubernetes. Kubernetes allows me to run software and test Helm charts that I’m working on. My Kubernetes cluster consists of a single amd64 based VM running on Proxmox plus the Pi 4, which gives me a true arm64 node. In Kubernetes I have installed:

  • cert-manager for SSL certificates. This is configured against my DNS provider to validate certificates.
  • ingress-nginx for ingress. I do not deploy Traefik on k3s but prefer to use ingress-nginx. I’m more familiar with its configuration and have good luck with it.
  • democratic-csi for storage. This package is able to provide on demand storage for pods that ask for it using iSCSI to the TrueNAS system. It is able to automatically create new storage pools and share them using iSCSI.
  • gitlab-runner for GitLab runners. This provides my GitLab server with the ability to do CI/CD work.

I don’t currently use Kubernetes at home for anything other than short term testing of software and Helm charts. Of everything in my home lab, Kubernetes is the most “lab” part of it, as it is where I do most of my Helm chart development and basic testing of software. Having a Pi 4 in the cluster really helps with ensuring charts target operating systems and architectures properly. It also helps me validate that the Docker images I build work properly across different architectures.

Personal Workstation

My daily driver is currently an i7 Mac mini. This is, of course, running macOS and includes all of the usual tools and utilities I need. I detailed some time ago the software I use at https://dustinrue.com/2020/03/whats-on-my-computer-march-2020-edition/.

Finishing Up

As you can see, I have a fairly modest home lab setup but it provides me with exactly what I need to provide the services I actually use on a daily basis as well as provide a place to test software and try things out. Although there is a limited set of items I run continuously I can easily use this for testing more advanced setups if I need to.