For the past few years I have been using Kubernetes to host a number of services including custom code, WordPress and all manner of other publicly available projects. In that time I have come to rely on a few of what I call base services that make the experience of running software in Kubernetes just a bit nicer. In this post I’m going to go through which base services I install and a bit about why.

All of the services listed below are installed using Helm. I consider Helm the only method for managing applications running in a Kubernetes cluster; nothing else manages software as well. If a service I want to run in Kubernetes doesn’t have a Helm chart, I will create one for it.

Almost every Kubernetes setup I use needs to actually serve requests from users, and this is almost always done using the Ingress system. My preferred ingress controller is the community maintained ingress-nginx. Do not confuse this controller with nginx-ingress, which is put out by nginx.com. I prefer this fully open source controller for its straightforward feature set and configuration system. It has a large number of features and works equally well in both home lab and cloud environments. As an Nginx user anyway, I find the configuration very familiar. To install ingress-nginx, I add their repo using helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx. You will find additional information at https://kubernetes.github.io/ingress-nginx/deploy/.
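
As a minimal sketch, the installation looks something like this (the release name and namespace here are my own habit, not anything the chart requires):

helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm repo update
helm install ingress-nginx ingress-nginx/ingress-nginx \
  --namespace ingress-nginx --create-namespace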

SSL is all but a necessity these days and I have found no better way than to use cert-manager in the cluster. Nearly all of my use cases allow for the use of a cluster wide, DNS based solver, which lets me get SSL certs for resources that are not yet publicly accessible or are internal only. By leveraging DNS services from AWS or Cloudflare (or any supported DNS provider) I am able to automatically create and update certificates with very little intervention. To install cert-manager I use the official helm chart provided by the project, adding the repo with helm repo add jetstack https://charts.jetstack.io. Additional installation directions are available at https://cert-manager.io/docs/installation/helm/.
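
Here is a rough sketch of what that looks like with a Cloudflare DNS-01 solver (the issuer name, email address and secret names below are placeholders of my own, and the CRD flag has changed names between chart versions, so check the docs linked above):

helm repo add jetstack https://charts.jetstack.io
helm repo update
helm install cert-manager jetstack/cert-manager \
  --namespace cert-manager --create-namespace \
  --set installCRDs=true

apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-dns
spec:
  acme:
    email: admin@example.com
    server: https://acme-v02.api.letsencrypt.org/directory
    privateKeySecretRef:
      name: letsencrypt-dns-account-key
    solvers:
      - dns01:
          cloudflare:
            apiTokenSecretRef:
              name: cloudflare-api-token
              key: api-token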

Speaking of DNS, in clusters where I need to have DNS records pointed towards the cluster I use external-dns. This service looks for ingress entries and manages records in your DNS provider pointing the desired hostname towards your cluster or its external load balancer. I install external-dns using the helm chart by Bitnami. Learn more at https://github.com/bitnami/charts.
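
As a sketch, an install pointed at Cloudflare might look like the following (the namespace and settings are illustrative; credentials are passed through additional values that depend on your DNS provider):

helm repo add bitnami https://charts.bitnami.com/bitnami
helm repo update
helm install external-dns bitnami/external-dns \
  --namespace external-dns --create-namespace \
  --set provider=cloudflare \
  --set policy=sync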

Getting logs out of a production cluster is important and assuming you have some place to accept the logs, you won’t generally do better than using fluent-bit. Installation and configuration of fluent-bit is highly dependent on what your logging system is so I recommend reading their documentation on how to get going. Fluent-bit is quite popular and it is usually easy to find examples for whatever your logging system is.
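
If you just want a starting point, the project publishes its own chart and the outputs (Elasticsearch, Loki, Splunk and so on) are then configured through the chart’s values:

helm repo add fluent https://fluent.github.io/helm-charts
helm repo update
helm install fluent-bit fluent/fluent-bit \
  --namespace logging --create-namespace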

Used by a number of other services, metrics-server gathers basic utilization data from pods and nodes in your cluster. This service is so essential that many small Kubernetes distributions, like k3s, install it automatically. I install it, again, using Bitnami’s charts available at https://github.com/bitnami/charts.
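
A minimal sketch of that install (installing into kube-system is my habit; the Bitnami chart also has an apiService.create value that usually needs to be enabled, so check the chart’s README for your version):

helm install metrics-server bitnami/metrics-server \
  --namespace kube-system \
  --set apiService.create=true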

For managed Kubernetes instances in public clouds I find cluster-autoscaler to be an essential service. When configured correctly, and when combined with metrics-server and properly configured resource settings, cluster-autoscaler will automatically add and remove worker nodes. Information about how to add the cluster-autoscaler helm chart can be found at https://github.com/kubernetes/autoscaler/tree/master/charts/cluster-autoscaler.
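
For an AWS based cluster, a sketch looks roughly like this (the cluster name and region are placeholders; other clouds use different values, as the chart’s README describes):

helm repo add autoscaler https://kubernetes.github.io/autoscaler
helm repo update
helm install cluster-autoscaler autoscaler/cluster-autoscaler \
  --namespace kube-system \
  --set autoDiscovery.clusterName=my-cluster \
  --set awsRegion=us-east-1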

These services make running Kubernetes much easier and more automatic, and for that reason I find them to be essential in almost every cluster. What services do you find essential?

If you find yourself in the business of creating and testing Helm charts, or you simply want to try one out, then Colima with its built in Kubernetes functionality may be for you. In this post I am going to walk through how to quickly get going with Colima’s Kubernetes integration and an ingress controller for basic Helm chart testing.

I assume you already have Colima and Helm installed and are familiar with the tools and Kubernetes itself. If this is you then continue reading!

For this post I am using Colima 0.4.4, k3s v1.23.6+k3s1, helm 3.9.3 and ingress-nginx 4.2.0. I often find myself creating Helm charts and I want to test my modifications locally before committing my changes. Once in a while I also want to quickly test an available helm chart without messing up an existing Kubernetes installation. In these cases I will create a Colima instance with Kubernetes enabled and install my preferred ingress controller, ingress-nginx.

To get started, ensure that no other colima instances are running using colima list followed by colima stop <name of profile> for any running instances. You should also ensure that there are no other services running on your system that are opening ports, especially 80, 443 and 3306. This helps ensure your test instance doesn’t interfere with any existing colima instances or other services. Then, issue colima start helm-test --kubernetes -m4 to start a colima instance with 4GB of memory and Kubernetes enabled. Once colima has finished creating the instance you can add the ingress-nginx helm repository, if you don’t already have it, with helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx followed by helm repo update. You can now install ingress-nginx using helm install -n ingress-nginx --create-namespace --set controller.ingressClassResource.default=true ingress-nginx ingress-nginx/ingress-nginx. This command will install ingress-nginx and set it as the default ingress class for the cluster. At this point you have a basic installation of Kubernetes with an ingress controller, which will allow you to test most Helm charts.

As a test, you could now create a brand new Helm chart with helm create nginx. Edit the resulting values.yaml file to enable ingress, then install the chart into your new test cluster. You should see that it is able to download and install the default nginx image and create the proper ingress rule automatically. For my test I used helm install -n default nginx .. Before long you should see this as an ingress record:

kubectl get ingress nginx
NAME    CLASS   HOSTS                 ADDRESS        PORTS   AGE
nginx   nginx   chart-example.local   192.168.5.15   80      54s

Despite what the Address column says, the chart is now available at 127.0.0.1. Create a hosts entry and you will be able to get the default nginx page.
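
For reference, the ingress stanza in the generated values.yaml ends up looking roughly like this (taken from a stock helm create scaffold, so treat it as a sketch), and a hosts entry maps the example hostname to localhost:

ingress:
  enabled: true
  className: ""
  hosts:
    - host: chart-example.local
      paths:
        - path: /
          pathType: ImplementationSpecific

echo "127.0.0.1 chart-example.local" | sudo tee -a /etc/hosts
curl http://chart-example.local/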

Of course, you can use or test other charts too. Here I will install Bitnami’s MySQL chart with the following settings in a YAML file:

## MySQL Authentication parameters
##
auth:
  ## MySQL root password
  ## ref: https://github.com/bitnami/bitnami-docker-mysql#setting-the-root-password-on-first-run
  ##
  rootPassword: "password"
  ## MySQL custom user and database
  ## ref: https://github.com/bitnami/bitnami-docker-mysql/blob/master/README.md#creating-a-database-on-first-run
  ## ref: https://github.com/bitnami/bitnami-docker-mysql/blob/master/README.md#creating-a-database-user-on-first-run
  ##
  database: "blog"
  username: "wordpress"
  password: "password"
##
primary:
  persistence:
    ## If true, use a Persistent Volume Claim, If false, use emptyDir
    ##
    enabled: false
  service:
    ## @param primary.service.type MySQL Primary K8s service type
    ##
    type: LoadBalancer
##
secondary:
  ## Number of MySQL Secondary replicas to deploy
  ##
  replicaCount: 0

I install the Bitnami repo using helm repo add bitnami https://charts.bitnami.com/bitnami followed by helm repo update to ensure I have the latest info. To install a copy of MySQL with my settings file I use helm install -f mysql.yaml mysql bitnami/mysql. After a short while MySQL will be installed and also available on localhost through k3s’ built-in LoadBalancer system. Notice that in the mysql.yaml file I asked the chart to install the primary MySQL instance with a LoadBalancer based service instead of the default ClusterIP.
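
As a quick check that the LoadBalancer service is working, you should be able to connect with a local MySQL client using the credentials from the settings file above (this assumes you have the mysql client installed on your Mac, and is why port 3306 needed to be free earlier):

mysql -h 127.0.0.1 -P 3306 -u wordpress -ppassword blog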

When you are finished testing a simple colima delete helm-test will remove your testing environment and free up resources.

Hopefully you see now how quickly and easily you can get going with Colima and its Kubernetes integration to get a local Kubernetes cluster up and running for testing. The Kubernetes integration Colima uses is very capable and well suited to learning and testing. Enjoy!

Many audio formats have come and gone over the years, with some of them being better than others. Music on physical formats, particularly vinyl, has been increasing in popularity year over year. In addition to vinyl records, I personally have been adding to my CD collection. In order to actually play CDs I had to buy a CD player (or two) because I had either gotten rid of the players I had or they failed. To date, I have a total of three CD players: two Sony 5 disc changers and a Sharp DX-200 single disc player. These units all needed some amount of effort in order to get them back into shape.

Not content with just CD and vinyl, I decided it was time to get into one of the few formats I had never owned or even used before: Minidisc. As described by Wikipedia, Minidisc is an erasable magneto-optical disc format that allows users to record sound either in real time or, using specific equipment, transfer data using a USB connection. The discs can be reused like a cassette but unlike a cassette they are digital and provide near CD quality. Minidisc uses a proprietary, lossy compression system that allows it to fit 60-80 minutes of audio onto a single disc. Unlike CD+/-R, Minidiscs can be modified, on the player, after the fact with text info, track arrangement and even editing of where tracks are split. All in, the Minidisc format feels incredibly ahead of its time even today. There is no other format that is remotely close to what Minidisc can do outside of sitting at a computer and fiddling in software. While it is true that Minidisc’s features are superseded by music software like Apple’s Music or Spotify, there is a real and undeniable charm to the format that makes it fun to use even today. Later iterations of the Minidisc format provided additional capabilities which you can read about at https://www.minidisc.wiki//technology/start.

Minidisc recorders and players come in a number of form factors, ranging from portable players that are barely larger than the discs themselves (aside from thickness) to full HiFi component sized units. While the HiFi component size is my preference, there are few that support NetMD, or a USB connection for interfacing with a computer. Many portable players, even earlier models, support NetMD for quick transfer of music from a computer to the Minidisc.

The player I picked up is a Sony MDS-JE500. This model, from 1997, has an issue with its loading mechanism that causes it to continuously attempt to eject the disc even after the disc has been ejected. A known and common problem, this unit will need to have some microswitches replaced in the near future. I picked this model because it comes from an era when Sony was producing devices with my favorite design language. This device is able to record both analog and digital sources directly to Minidisc and allows for manually setting analog recording levels. Overall a really nice and functional device.

Minidisc, despite being so feature rich, ultimately failed for numerous reasons, including high initial cost and the rise of digital formats and players like the iPod. In addition to this, commercially licensed music outside of Sony properties was few and far between. Even today you will not find many Minidiscs, and those that you do find fetch a high price.

If you are interested in learning more or getting into the format I highly recommend taking a peek at https://www.minidisc.wiki/. This community run wiki provides a growing collection of articles related to the Minidisc format including where to buy devices and minidiscs, device features and even repair information.

Minidisc is not a format I expect to find a lot of prerecorded music for, but I am enjoying transferring my vinyl records to Minidisc and creating custom mixes. While it’s true there are more modern ways to do the same thing, including just skipping the process entirely and creating playlists in Spotify, it is still an interesting departure from the norm and, most of all, is fun.

Since 2021 I’ve been using a combination of tools to handle my music collection. Today I’m going to talk about the tools I’m using to manage my collection including how I catalog, import, serve and listen to it.

Although I do subscribe to a music streaming service I have taken an interest in expanding my physical collection as well. My collection consists largely of CDs with some vinyl records mixed in. While I appreciate the convenience of digital streaming I also enjoy the process and experience of playing physical media, which I’ve written about before. That said, I also like to take my collection with me in digital formats and enjoy knowing that it comes from my own personal collection. Before we get into how I copy my CDs to digital, let’s first discuss how I catalog and keep track of my collection.

Cataloging

A couple of years ago I learned about a site called discogs.com. In their words Discogs is “a platform for music discovery and collection” and this is exactly how I use it. You can search for each piece of physical music media you own or are interested in owning and add it to your collection or wishlist, respectively. The database contains user submitted and curated information about most releases available, in surprising detail. You can choose to be super detailed about how you add items to your collection by selecting the exact release, or more simply add the first item you find. How you use Discogs is ultimately up to you but it is an incredibly handy way to track what you already own, find new stuff you’d like to own and so on. Using Discogs allows me to track the state of my media (some of it is damaged and needs to be replaced, for example) as well as ensure I don’t buy the same item twice.

Importing

I import all of my CDs using a tool called XLD, available at https://tmkk.undo.jp/xld/index_e.html. Using an external DVD drive connected to my Mac, XLD is able to look up what CD is in the drive, grab metadata about it and take care of copying the music off of it and onto my NAS. The metadata ensures that the folders are named properly, as are the track titles. I stick to the FLAC format for the files as it ensures the best quality and compatibility with playback software. Whenever I sync music to my phone for offline play in the car I opt to have the songs encoded on the fly to a smaller format.

Some vinyl records also include digital files that you can download from a site. For these I will typically add them to an appropriate folder of either MP3 encoded music or FLAC encoded music.

Storage

All of my music is stored on a TrueNAS based storage system and then shared out to a virtual machine that is running Plex. TrueNAS exports the data using Samba so it is easy for my Mac and the virtual machine to access without issue. TrueNAS stores the files on a raidz set for redundancy and I periodically back the data up to another disk.

Playback

Once the music is imported and stored on TrueNAS I add it to Plex. Plex is a convenient way to manage music as it detects the music you have added and downloads additional metadata about it, like album reviews. Recent releases of Plex allow you to “sonically fingerprint” music so that it can better find similar music in your collection for building better mixes.

Although Plex is the server part of the music system, the actual software I use is called Plexamp. Plexamp is an app dedicated to music playback, offering a slick interface, the ability to download music locally from Plex and gapless playback. If you’ve ever listened to an album and wondered why there were gaps between tracks that sound like they should flow together, gapless is what you’re looking for. In addition to gapless playback, when playing a mix you can optionally have Plexamp fade between songs and I find that this works extremely well. Overall, Plex and Plexamp are my favorite tools for listening to music.

The actual hardware I listen on varies depending on where I am. While working at my desk I use the setup detailed on my audio system page. While out and about it will be through my iPhone connected to headphones or my car.

Conclusion

I’ve long listened to music but only recently have I gotten back into the general process of collecting it and paying attention to the process of listening to it. I enjoy my physical formats but I’m also not blind to the convenience of digital formats. How do you manage your music?

Semi-related to my previous post, this post quickly touches on the fact that having swap on your system is not always a bad thing. I have seen “disable swap” become a common “performance hack” suggested by a lot of people and it appears to be growing in popularity. I believe a lot of people are simply parroting something they heard once but don’t actually know when it makes sense to disable swap on a system. I have found that outright disabling swap has a detrimental effect on system performance.

The basic idea behind not using swap is sound, on the surface. The argument is that swap is both much much slower than system memory and that if you are hitting swap then you need more memory. To add to this, a lot of people don’t understand how memory works on Linux (and indeed all major operating systems). Linux wants to use as much memory as possible. If you give it 1TB of memory (or more) then it will do everything it can to eventually use all of it. However, how it uses this memory can be confusing. Looking at this output from free -m, it may not be obvious what is happening:

[root@web2 system]# free -m
              total        used        free      shared  buff/cache   available
Mem:            809         407         137          37         263         251
Swap:          1023         282         741

In the above example output from free -m you will see the columns total, used, free, shared, buff/cache and available. The values for each, respectively, are 809, 407, 137, 37, 263 and 251.

In a lot of cases, the value most people will look at is “free.” Unfortunately, on a system that has been running for some time, this value will almost always give the impression that the system is low on memory. Like so many things, there is a lot more to it than what the free value shows. In reality, the value you want to pay attention to is available. This value represents the amount of free memory with memory that can be reclaimed at any time for other purposes added in. The “cache” portion of the buff/cache value is what can be reclaimed and it represents the amount of data from disks that is cached into memory. It is this cache that operating systems try to keep full in order to avoid expensive disk reads and is why a system with a lot of memory can potentially have very little free memory.

A system that is low on available memory will also not be able to cache a lot of disk reads (because remember that available is roughly free plus reclaimable cache), which will lead to lower overall performance. Of course, loading an entire disk into memory won’t necessarily have a positive effect on overall performance either. If a file is read once and never used again, does it really need to be cached? Having a lot of memory can lead to things being needlessly cached. A system with 16GB of memory can perform just as well as a system with 32GB of memory if most of the 32GB of memory is filled with files that are very rarely read again.

Getting to why having swap is not evil: some apps, and portions of apps, aren’t always being used, even if they are running. For this reason, having swap available on a system is beneficial because the operating system can page application memory to disk and free up memory to use as a disk cache for more active applications. In some instances, such as the web server hosting this site, having swap available is a necessity because it allows me to have a system with less memory while still maintaining proper performance in normal conditions. Services that are necessary but rarely used are swapped out, leaving room in memory for application code to be kept there instead. WordPress is considered “hot data” whereas systemd is not. Once the system has booted, systemd, while necessary, is not actively doing anything and can be paged to disk without affecting performance in a noticeable way. However, swap is an issue if you are dipping into it continuously. This will quickly become evident if you have a lack of available memory as well as a high usage of swap. In this case, you truly do need more memory in the system.
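
If you are curious which processes have been paged out, one rough way to check is to read the VmSwap field from /proc for each process (a quick sketch; the exact formatting varies by distribution):

grep VmSwap /proc/[0-9]*/status 2>/dev/null | sort -k 2 -n -r | head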

I hope this post helps clear up some of the confusion around memory usage on systems. Have anything to share? Did I get something wrong? Leave a comment!

In a previous post I mentioned that this site is hosted across two different hosts. One that is dedicated to running MySQL and Redis while the other runs Nginx and PHP. I use this arrangement for a few reasons. First, this is the cheapest way to get two real CPU cores on Digital Ocean. During a web request, multiple processes including Nginx, PHP, MySQL and Redis must run and share CPU time with each other. By using multiple machines, the work is spread across multiple physical CPUs which improves overall performance and throughput. Second, it allows me to configure MySQL to use most of the system memory without fear that it’ll be OOM killed. An OOM kill is what happens on a Linux system when it determines it is out of memory and the biggest user of memory needs to be removed (killed) in order to protect the system from a meltdown. In general, regular triggering of the OOM killer should be considered an error in configuration and capacity planning but know that it is there to protect the system.

In this post, I want to discuss a scenario where you want to host a common LAMP/LEMP stack on a single machine. In this kind of setup, multiple processes will be competing with each other for resources. Without getting too into the weeds about tuning software on this kind of setup, I’m going to assume that you will likely configure MySQL in such a way that it, as a single process, will consume the most memory of any process on the system. Indeed, most distributions when installing MySQL (or MariaDB) will have a default configuration that allows MySQL to use in excess of 1GB.

Unlike MySQL, the amount of memory that many other processes may use is relatively unknown. Looking at just PHP (using php-fpm) the amount of memory is fairly dynamic. It is unlikely that you will be able to tune your system to ensure PHP doesn’t use too much memory without sacrificing total throughput. Therefore, it is necessary to configure PHP in such a way that you over provision available memory in an effort to ensure you get the most performance you can most of the time. However, in this scenario it is likely that you will eventually face a situation where PHP is asking for a lot more memory than usual and the system will invoke the OOM killer to deal with the sudden shortage of memory. MySQL, being the single largest user of memory on the system, will almost always be selected by the kernel to be removed. Allowing MySQL to be OOM killed is far less ideal than killing a rogue PHP process or two because it will disrupt all requests rather than the problem requests. So, how do you avoid MySQL being selected?

Most modern systems ship with systemd. Portions of systemd are not well received but, at least in my opinion, the init system is excellent. Using systemd, we are able to customize the startup routines for MySQL (almost any service, actually) so that we can instruct the kernel’s OOM killer to select a different process when the system is low on memory. Here is how it is done:

  • Create a directory – /etc/systemd/system/mysql.service.d. The directory name must match an existing service. For MariaDB it would be mariadb.service.d. You can determine the name by running systemctl list-unit-files
  • In this directory, create a file called oomadjust.conf with the following in it:
    [Service]
    OOMScoreAdjust=-500
  • Run systemctl daemon-reload
  • Restart MySQL
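
Taken together, the steps above look roughly like this on a MySQL system (adjust the service name for MariaDB or another service):

sudo mkdir -p /etc/systemd/system/mysql.service.d
sudo tee /etc/systemd/system/mysql.service.d/oomadjust.conf <<'EOF'
[Service]
OOMScoreAdjust=-500
EOF
sudo systemctl daemon-reload
sudo systemctl restart mysql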

To confirm your customization was picked up, run systemctl status mysql. In the “Drop-In” section you should see the new file listed. It’ll look similar to this:

Screenshot showing the oomadjust.conf file was picked up by systemd

This setting adjusts the score the OOM killer calculates when deciding which process to kill. By forcing this value lower for MySQL, it is much less likely to be selected. Instead, a problem PHP process will likely be selected first and removed. This will save MySQL and the overall availability of your app. Of course, your mileage may vary and you will need to tune your configuration to reduce, if not eliminate, the need for the OOM killer.

If you would like to learn more about systemd drop-ins take a look at the documentation by Flatcar Linux at https://www.flatcar.org/docs/latest/setup/systemd/drop-in-units/. Many things can be overridden without having to edit files provided by packages (which you should avoid).

Have you used systemd’s drop-in system before? Curious how else you might use it? Leave a comment!

CentOS 7 is now in full maintenance mode until 2024. This means it won’t get any updates except security fixes and fixes for some mission critical bugs. In addition to being in full maintenance mode, the OS is simply beginning to show its age. It’s still a great OS, it’s just that a lot of packages are very far behind the state of the art. Packages like git, bash and even the kernel are missing features that I prefer to have available. With that in mind, and an abundance of time on a Saturday, I decided to upgrade the underlying operating system hosting the site.

The choice of operating system was not as simple as it was just a year ago. In the past I would have simply spun up the next release of CentOS, which was a rebuild of Red Hat Enterprise Linux, and configured it for whatever duty it was to perform. However, Red Hat had a different idea and replaced traditional CentOS with CentOS Stream, a rolling release that RHEL is based off of, rather than CentOS being a rebadged clone of RHEL. The history of CentOS is surprisingly complex and you can read about it at https://en.wikipedia.org/wiki/CentOS.

Since the change, at least a few options are now available to give people, like me, access to a Linux distribution they know and can trust. Among those, Rocky Linux appears to be getting enough traction for me to adopt it as my next Linux distribution. My needs for Linux are pretty basic and more than anything I just want to know that I can install updates without issue and keep the system going for a number of years before I have to worry about it. Rocky Linux gives me that just like CentOS did before. As of this writing, the web server hosting this site is now running Rocky Linux 8 and I’ll upgrade the database server at a later time. So far it has proven to be identical to RHEL and very familiar to anyone who has used RHEL/CentOS in the past.

Nobody asked for this but today I’m going to discuss why I put a CD player back into my audio setup.

Before we get into that, I want to touch on one of my biggest pet peeves about macOS: the media controls. A few years ago a change was made to the keyboard media controls that allowed them to control more media, even media that is available on web pages like YouTube or the little video widgets on news sites. On the surface this seems like a welcome change but in practice it feels as if the feature was programmed to purposely do the wrong thing at all times. For example, let’s say you have Spotify open playing music in the background and you visit a site that has an autoplay video. Then you get a phone call so you press pause on the keyboard and…the music doesn’t stop? What gives? Well, macOS decided that the keyboard controls should control the video on the webpage and not Spotify. Or, maybe you’re like me and you use multiple music apps like Spotify and Plexamp. You’re listening to music with Spotify in the foreground with Plexamp paused in the background. You press pause on the keyboard and now suddenly there are two songs playing because macOS decided that what you really meant was to unpause the inactive music app, not the one you are actively using!

While I certainly appreciate having access to an effectively unlimited supply of music at the click of a button, the overall experience has degraded significantly over the years. I believe a major contributor to this is how powerful today’s computers are. We’ve added greater functionality and expectations to computers and in a sense they’ve become too capable and complex for their own good. It used to be that browsing the web while running Winamp was about as much as you could reasonably expect a computer to do. I’m not lamenting that computers are more capable but I am saying that it has come at the expense of some tasks that used to feel simple and straightforward.

Which brings me back to why I’m using a CD player. As I mentioned in my broader post about the state of my audio stack in 2022, I have put a CD player back into my audio setup partially because of the straightforward simplicity that it offers. I turn on my amplifier and CD player, turn the input knob to CD and then put in a CD. That’s it, that’s all it does. Since the device has but one function there is never a question of what pressing a button will do. If a CD is playing it will always pause it. If it is paused then it will play it again. As Antoine de Saint-Exupéry wrote in Terre des Hommes, “A designer knows he has achieved perfection not when there is nothing left to add, but when there is nothing left to take away,” and I believe using a CD player is similar in a way. It’s incredibly refreshing to put down a device that can do anything well enough in favor of a device that does just one thing really well.

Of course, using music apps will always offer greater overall flexibility what with the huge selection to choose from, ability to take and play the music anywhere and all the other reasons CDs lost out to file based formats. But like reading an actual book, taking a CD out of its case, placing it onto the tray of a CD player and pressing play provides the sort of tactile experience not possible using digital files. For these reasons, at least for now, I am back to listening to CDs (along with my vinyl records) at least some of the time.

If you are in the business of creating software, no matter your role, then you owe it to yourself to consider David Farley’s Modern Software Engineering: Doing What Works to Build Better Software Faster. I’m not in any way affiliated with the author and I’m not getting any sort of kickback on that link. I just think it’s a good book.

This book, along with what I consider a sort of companion to it, Accelerate: The Science of Lean Software and DevOps: Building and Scaling High Performing Technology Organizations, will likely get you to rethink how you are approaching software development. The Accelerate book provides information backed by data showing that the processes defined in Modern Software Engineering do in fact work to improve the pace of software development, the quality of the software and developer/employee satisfaction.

The overarching message to take away from the books is that being fast is the key. The quicker you can write and release code into production so that you can then get feedback from it the better your code quality will ultimately be. Care should be taken to remove anything that prevents developers from getting their code into production quickly and with minimal roadblocks. This doesn’t mean you are careless, however! Putting a heavy emphasis on testing, the books paint a picture of the ideal system where tests are written first and then code to satisfy the tests. This process helps ensure that your code is divided up into parts that can be tested easily which will, in essence, indirectly force you to write better, more readable and more easily understood code. These tests then have the additional benefit of allowing developers to know that the changes they made either satisfy requirements or at least didn’t break existing functionality. The difference between “I think it works” and “I know it works” pays huge dividends in developer and team satisfaction. It also provides long term benefits as people are rotated out of teams because it codifies intended behavior. Well written and described tests, when they fail, will tell the developer what the intended outcome of a function is.

This may feel counterintuitive but the Accelerate book does a great job of showing, with data, that these things are in fact true in most if not all cases. While reading the book there were a number of times where I stopped to consider how I was approaching things and realized some of the assumptions I had made were incorrect and needed to be adjusted. Much of what Farley describes sounds difficult to implement and indeed everything he describes does require a certain amount of discipline amongst the team to ensure the work they do upholds the defined ideals.

If you’re looking for a good read that will make you think about how you are approaching software development, regardless of your role in it, I highly recommend Modern Software Engineering as well as Accelerate.