Whatever your reason for placing an NGINX proxy in front of your GitLab installation, you need to ensure you’re using the right configuration to support all of GitLab’s features. I recently discovered that although my installation was mostly working, I couldn’t view pipeline/build logs properly, and my proxy configuration was to blame. After some searching around I finally found that my config wasn’t quite right. To get the most out of GitLab and ensure a smooth experience, use the configuration shown below as a template for your own. In my setup I use Let’s Encrypt for SSL, so if you’re not you can remove the SSL-specific parts. The important configuration is contained in the location block.

 

upstream gitlab {
  server <ip of your gitlab server>:<port>;
}

server {
    listen          443 ssl;
    server_name     <your gitlab server hostname>;

    ssl_certificate <path to cert>;
    ssl_certificate_key <path to key>;
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
    ssl_prefer_server_ciphers on;
    server_tokens off;


    gzip on;
    gzip_vary on;
    gzip_disable "msie6";
    gzip_types application/json;
    gzip_proxied any;
    gzip_comp_level 6;
    gzip_buffers 16 8k;
    gzip_http_version 1.1;

    location / {
       client_max_body_size   0;
       proxy_set_header    Host                $http_host;
       proxy_set_header    X-Real-IP           $remote_addr;
       proxy_set_header    X-Forwarded-Ssl     on;
       proxy_set_header    X-Forwarded-For     $proxy_add_x_forwarded_for;
       proxy_set_header    X-Forwarded-Proto   $scheme;

       proxy_pass https://gitlab;
    }
}

This configuration will pass all requests through to your GitLab server and allows CI/CD pipeline logs to come through properly.
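Once the config is in place it’s worth validating it before reloading NGINX. A minimal sketch, assuming NGINX is managed by systemd on your proxy host (run as root or with sudo):

# check the configuration for syntax errors, then reload without dropping connections
nginx -t
systemctl reload nginx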

Recently I found myself needing to perform a downgrade of Home Assistant in Hassio, which wasn’t immediately obvious. If you need to downgrade your Hassio install, enable the SSH add-on (or the web based one) and enter the following command on the console:

curl -d '{"version": "0.57.1"}' http://hassio/homeassistant/update

The command will hang for a bit while Hassio downloads the specified version and prepares to install it.

I was recently introduced to a superb piece of software called Proxmox. Proxmox is a virtualization environment not unlike VMware ESXi. Capable of running full KVM based virtual machines or lightweight LXC based guests, Proxmox has proven to be the perfect solution for a home lab setup. Installing Proxmox is no different than installing any other Linux distribution, and with minimal effort it can be clustered together to form a system capable of migrating a guest from one host to another. With the right hardware you can even perform live migrations. Although Proxmox supports and is capable of a lot more than I need, it satisfies my desire to have a more “enterprise” like way to virtualize hardware in my home.

Proxmox is free with support plans available. If I were to use it anywhere other than at home I’d definitely pay for the support subscription, as it gives you access to the proper update repositories as well as, obviously, support. Without the support subscription your Proxmox installation basically pulls from a testing repo, meaning you get faster access to updates but also updates that are less tested.

In the coming weeks I’ll detail a bit more how I’m using Proxmox, how to set up KVM or LXC based hosts, and how to provision them using Ansible.
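As a small preview, creating an LXC guest from the Proxmox command line looks roughly like the sketch below. The VM ID, template name, storage (local-lvm) and bridge (vmbr0) are assumptions you’d swap for your own:

# refresh and list the available container templates
pveam update
pveam available --section system
# create a small Debian container and start it
pct create 101 local:vztmpl/debian-9.0-standard_9.0-2_amd64.tar.gz \
  --hostname lab-ct1 --memory 1024 --cores 1 \
  --net0 name=eth0,bridge=vmbr0,ip=dhcp \
  --rootfs local-lvm:8
pct start 101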

UPDATE: This method is old and outdated. Most of the time this is probably what you actually want – https://docs.ansible.com/ansible/latest/modules/reboot_module.html.
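For reference, the reboot module rolls the reboot-and-wait steps below into a single task. A quick ad-hoc sketch, where the inventory file and the servers group are placeholders for your own:

# reboot every host in the group and wait up to 10 minutes for each to come back
ansible servers -i inventory.ini -b -m reboot -a "reboot_timeout=600"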

Sometimes when using Ansible there is the need to reboot a server and wait for it to return. This simple recipe will allow you to achieve that while also getting some nice feedback so you know what is going on. You can place these tasks into a role or just in your playbook:

- name: Store target host and user
  set_fact:
    target_host: "{{ ansible_host }}"
    target_user: "{{ ansible_user }}"
 
- name: Reboot the server
  shell: sleep 2 && shutdown -r now "Ansible package updates triggered"
  async: 1
  poll: 0
  ignore_errors: true
 
- name: Wait for server to shutdown
  local_action: shell ssh -o BatchMode=yes -o ConnectTimeout=2 -o StrictHostKeyChecking=no "{{ target_user }}@{{ target_host }}" true
  register: result
  until: result.rc != 0
  failed_when: result.rc == -1
  retries: 200
  delay: 1
 
- name: Wait for server to be ready
  local_action: shell ssh -o BatchMode=yes -o ConnectTimeout=2 -o StrictHostKeyChecking=no "{{ target_user }}@{{ target_host }}" true
  register: result
  until: result.rc == 0
  retries: 200
  delay: 3
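If you drop these tasks into a standalone playbook (reboot.yml is just an illustrative name), running it is the usual:

ansible-playbook -i inventory.ini reboot.yml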

 

In a previous post I mentioned I had recently picked up a HiLetgo ESP8266 NodeMCU module along with a DHT22 temperature and humidity sensor. In this post, I’ll describe how I combined the board, the sensor, hass.io and MQTT using the Mosquitto add-on for hass.io to create a temperature sensor for my home office.

I’m not going to go into detail about how to set up hass.io on a Raspberry Pi (their site does an excellent job of describing how to get it installed), but I do highly recommend using that installation method if you’re on the fence. Raspberry Pis are inexpensive and Home Assistant runs quite well on the platform.

Instead, I’m going to concentrate on what it takes to get this working while going over what you need to enable in hass.io to support a small, WiFi enabled board sending temperature and humidity readings.
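Before involving the board at all, it helps to confirm the MQTT side works end to end. A hedged sketch using the Mosquitto command line client, assuming the broker is the Mosquitto add-on running on your hass.io host and office/temperature is the topic the sensor will publish to:

# publish a fake temperature reading to the topic the sensor will eventually use
mosquitto_pub -h <hass.io host> -p 1883 -u <mqtt user> -P <mqtt password> \
  -t office/temperature -m "22.5"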


Wanted to quickly share some thoughts on and links to software I’ve found recently, and what I’ve been up to.

If you’re into home automation at all you have to check out Home Assistant. There is a bit of a learning curve initially but once you get an understanding of how to configure it you’ll find there is a lot of potential with it. I recently replaced my HomeBridge installation on my Raspberry Pi 3 with the prebuilt RPi3 image.

If you manage servers big or small take a look at Ansible. It isn’t new technology but it is something I’ve grown quite fond of recently. It’s easy to install on Linux, Mac and even Windows 10 if you have that oddly named Linux add-on. Even if you don’t use Ansible to manage servers you should use something.

If you enjoy Destiny or Destiny 2, check out Guardian Theater. This is a project spearheaded by a friend of mine after collaborating with me on an Xbox GameDVR clip site (https://xboxrecord.us) and deciding it’d be way cooler if you could look up clips related to yours. Guardian Theater promises to show game clips recorded by other guardians while in the same activity as you. Lots of fun!

Does your WordPress site load in under a second? This one can, despite running for years on one of the lower VPS tiers available at DigitalOcean, thanks to Cloudflare. DigitalOcean’s server offerings are excellent for the price and I’ve always found the performance perfectly acceptable given what I’m paying each month. That said, Cloudflare offers a free CDN tier with just the right mix of features to be appealing and useful. No matter how good your server is, you will always benefit from a CDN’s ability to cache content and get it physically closer to your audience. This post goes into detail about how to get the best possible performance from Cloudflare and WordPress by tweaking a few settings and installing a single plugin.

I have no association with Cloudflare and I’m not here to sell it to you, but for this post to make any sense you’ll need to have a WordPress site running through at least Cloudflare’s free CDN offering. If this sounds like you then let’s continue.


 

Although I’m more than comfortable using command line tools to manage things, there are times where a GUI is just more convenient. Pruning old containers, images and volumes in Docker are all things that are much easier to manage with a new tool I saw via Twitter the other day. Portainer promises to make the task of managing Docker a bit easier, and they’ve made good progress on delivering on that promise. Getting up and running with it is incredibly simple because, as you’d expect, it’s available as a Docker image. Simply issue the following slightly Mac-specific command:

docker run -d -p 9000:9000 -v "/var/run/docker.sock:/var/run/docker.sock" -v portainer:/data --name portainer portainer/portainer

This will get Portainer up and running on your system. If you’re on a Linux system you can skip mapping docker.sock. The other mapping just gives a persistent store for the little bit of data Portainer generates.  For full documentation visit their documentation site.

Found myself with an odd situation when running Jenkins using Docker. The time displayed was correct but claimed it was UTC, which led to some inconsistent behavior. The best way to resolve this is to force the Docker container to use the correct timezone from the host system.

To do so, add the following to your run command:

-v /etc/timezone:/etc/timezone -v /etc/localtime:/etc/localtime
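For context, here’s roughly how those mappings fit into a complete run command. The image, ports and jenkins_home volume are just the usual defaults for the official Jenkins image, so adjust to match your setup; the :ro flags simply mount the host files read-only:

docker run -d --name jenkins \
  -p 8080:8080 -p 50000:50000 \
  -v jenkins_home:/var/jenkins_home \
  -v /etc/timezone:/etc/timezone:ro \
  -v /etc/localtime:/etc/localtime:ro \
  jenkins/jenkins:lts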