There are times when it is necessary or desirable to access servers through a single host, called a bastion. This is the first host you access before using ssh to reach some other host. Access to the other hosts might be limited by firewall rules or simply because they don't have public IPs. Whatever your reason, a bastion host is a great way to increase security by decreasing the number of hosts exposed to the internet.

For the best security, all hosts should be configured to allow only key-based authentication. This immediately negates any brute-force attempts to access your server. While convenient, it isn't necessary for you to use the same keys on all servers you access. Search the web for the best way to achieve key-only authentication on your distribution of choice.
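As a minimal sketch of what that usually comes down to, key-only authentication is a couple of sshd directives; the file location and defaults vary by distribution and OpenSSH version, so treat this as a starting point:

# /etc/ssh/sshd_config (path varies by distribution)
PubkeyAuthentication yes
PasswordAuthentication no
ChallengeResponseAuthentication no

Restart the ssh service after making the change, and test key-based login from a second session before closing your current one.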

Configuring access to any server using a bastion host starts by first defining how you will connect to the bastion host itself. To get started, simply add an entry to your .ssh/config file that describes how to access it. As an example, let's say you have a bastion host at IP 192.168.0.1 and you've installed your public key for a user called 'bastionuser'. Your entry would look like this:

Host bastionhost
HostName 192.168.0.1
User bastionuser

This entry does two things. It gives you a very easy way to ssh to your bastion host, and it gives you a target you can use as a proxy to access other hosts. To use the entry, simply issue 'ssh bastionhost' and you'll access your bastion host as user bastionuser using your default private key.

With access to the bastion host itself out of the way, you're now ready to create .ssh/config entries for other servers that are only accessible through the bastion host. For this example, let's say a server with IP 192.168.1.2 is reachable from the bastion host. You'd create an entry that looks like this:

Host targetserver
HostName 192.168.1.2
User targetuser
ProxyCommand ssh bastionhost -W %h:%p

That's it! When you want to ssh to the target server, simply issue ssh targetserver and your connection will first hit the bastion host, which acts as a proxy. Note that, at all times, your local private key will be used to make the connection unless you explicitly tell ssh to use something else with IdentityFile <path to file>. Even if you use different keys, those keys must always exist on your local system; keys on remote systems will never be used. It's up to you to find a way to distribute your public keys to all of the target servers.

In addition to using a bastion host to access a single server or a set of them, you can also chain multiple bastion hosts together simply by configuring more entries with ProxyCommand. For example, let's say a server at 192.168.2.2 is only accessible from targetserver. You'd create an entry like this:

Host finaldestination
HostName 192.168.2.2
User finaluser
ProxyCommand ssh targetserver -W %h:%p

With this entry in place it is now possible to access your final destination by issuing ssh finaldestination. This configuration instructs ssh to access finaldestination through targetserver and, in order to reach targetserver, to first go through the bastion host. There is technically no limit to the number of hosts you can proxy through, but the latency added by each hop will eventually catch up with you.
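Putting it all together, the complete ~/.ssh/config for this chain contains all three entries:

Host bastionhost
HostName 192.168.0.1
User bastionuser

Host targetserver
HostName 192.168.1.2
User targetuser
ProxyCommand ssh bastionhost -W %h:%p

Host finaldestination
HostName 192.168.2.2
User finaluser
ProxyCommand ssh targetserver -W %h:%p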

Whatever your reason for placing an NGINX proxy in front of your Gitlab installation, you need to ensure you're using the right configuration to support all of Gitlab's features. I recently discovered that although my installation was mostly working, I couldn't get pipeline/build logs to load properly, and my proxy configuration was to blame. After some searching around I finally found that my config wasn't quite right. To get the most out of Gitlab and ensure a smooth experience, use the configuration shown below as a template for your own. In my setup I use LetsEncrypt for SSL, so if you're not you can remove any of the SSL specific parts. The important configuration information is contained in the location block.


upstream gitlab {
  server <ip of your gitlab server>:<port>;
}

server {
    listen          443;
    server_name     <your gitlab server hostname>;

    ssl on;
    ssl_certificate <path to cert>;
    ssl_certificate_key <path to key>;
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
    ssl_prefer_server_ciphers on;
    server_tokens off;


    gzip on;
    gzip_vary on;
    gzip_disable "msie6";
    gzip_types application/json;
    gzip_proxied any;
    gzip_comp_level 6;
    gzip_buffers 16 8k;
    gzip_http_version 1.1;

    location / {
       client_max_body_size   0;
       proxy_set_header    Host                $http_host;
       proxy_set_header    X-Real-IP           $remote_addr;
       proxy_set_header    X-Forwarded-Ssl     on;
       proxy_set_header    X-Forwarded-For     $proxy_add_x_forwarded_for;
       proxy_set_header    X-Forwarded-Proto   $scheme;

       proxy_pass https://gitlab;
    }
}

This configuration will properly pass all requests through to your Gitlab server as well as allow CI/CD pipeline logs to pass through properly.
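If NGINX is also answering on port 80, it's worth adding a companion server block that redirects everything to HTTPS; a minimal sketch:

server {
    listen 80;
    server_name <your gitlab server hostname>;
    return 301 https://$host$request_uri;
}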

I was recently introduced to a superb piece of software called Proxmox. Proxmox is a virtualization environment not unlike VMware ESXi. Capable of running full KVM-based virtual machines or lightweight LXC-based guests, Proxmox has proven to be the perfect solution for a home lab setup. Installing Proxmox is no different than installing any other Linux distribution, and with minimal effort multiple hosts can be clustered together to form a system capable of migrating a guest from one host to another. With the right hardware you can even perform live migrations. Although Proxmox supports and is capable of a lot more than I need, it satisfies my desire to have a more “enterprise” like way to virtualize hardware in my home.

Proxmox is free with support plans available. If I were to use it anywhere other than at home I’d definitely pay for the support subscription, as it gives you access to the proper update repositories as well as, obviously, support. Without the support subscription your Proxmox installation basically tracks a testing repository, meaning you get faster access to updates but also updates that are less tested.

In the coming weeks I’ll detail a bit more how I’m using Proxmox, how to set up KVM or LXC based guests and how to provision them using Ansible.

UPDATE: This method is old and outdated. Most of the time this is probably what you actually want – https://docs.ansible.com/ansible/latest/modules/reboot_module.html.
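For reference, a minimal sketch of the same behavior using the built-in reboot module (available since Ansible 2.7) looks like this:

- name: Reboot the server and wait for it to return
  reboot:
    reboot_timeout: 600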

Sometimes when using Ansible there is the need to reboot a server and wait for it to return. This simple recipe will allow you to achieve that while also getting some nice feedback so you know what is going on. You can place these tasks into a role or just in your playbook:

- name: Store target host and user
  set_fact:
    target_host: "{{ ansible_host }}"
    target_user: "{{ ansible_user }}"
 
- name: Reboot the server
  shell: sleep 2 && shutdown -r now "Ansible package updates triggered"
  async: 1
  poll: 0
  ignore_errors: true
 
- name: Wait for server to shutdown
  local_action: shell ssh -o BatchMode=yes -o ConnectTimeout=2 -o StrictHostKeyChecking=no "{{ target_user }}@{{ target_host }}" true
  register: result
  until: result.rc != 0
  failed_when: result.rc == -1
  retries: 200
  delay: 1
 
- name: Wait for server to be ready
  local_action: shell ssh -o BatchMode=yes -o ConnectTimeout=2 -o StrictHostKeyChecking=no "{{ target_user }}@{{ target_host }}" true
  register: result
  until: result.rc == 0
  retries: 200
  delay: 3
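Assuming you've put these tasks into a playbook named reboot.yml (the name here is just an example), you'd run it as usual:

ansible-playbook -i <your inventory> reboot.yml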

Wanted to quickly share some thoughts on, and links to, software I’ve found recently and what I’ve been up to.

If you’re into home automation at all you have to check out Home Assistant. There is a bit of a learning curve initially but once you get an understanding of how to configure it you’ll find there is a lot of potential with it. I recently replaced my HomeBridge installation on my Raspberry Pi 3 with the prebuilt RPi3 image.

If you manage servers big or small take a look at Ansible. It isn’t new technology but it is something I’ve grown quite fond of recently. It’s easy to install on Linux, Mac and even Windows 10 if you have that oddly named Linux add-on. Even if you don’t use Ansible to manage servers you should use something.

If you enjoy Destiny or Destiny 2, check out Guardian Theater. This is a project spearheaded by a friend of mine after collaborating with me on a Xbox GameDVR clip site (https://xboxrecord.us) and deciding it’d be way cooler if you could look up clips related to yours. Guardian Theater promises to show game clips recorded by other guardians while in the same activity as you. Lots of fun!


Although I’m more than comfortable using command line tools to manage things, there are times when a GUI is just more convenient. Pruning old containers, images and volumes in Docker are all things that are much easier to manage under a new tool I saw via twitter the other day. Portainer promises to make the task of managing Docker a bit easier, and they’ve made good progress on delivering on that promise. Getting up and running with it is incredibly simple because, as you’d expect, it’s available as a Docker image. Simply issue the following, slightly Mac-specific, command:

docker run -d -p 9000:9000 -v "/var/run/docker.sock:/var/run/docker.sock" -v portainer:/data --name portainer portainer/portainer

This will get Portainer up and running on your system. The docker.sock mapping is what gives Portainer access to your local Docker daemon, and the other mapping just gives a persistent store for the little bit of data Portainer generates. For full documentation visit their documentation site.
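Once the container is running, the UI is available on port 9000; on a Mac, for example, you can get to it with:

open http://localhost:9000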

Found myself in an odd situation when running Jenkins using Docker. The time displayed was correct but claimed it was UTC, which led to some inconsistent behavior. The best way to resolve this is to force the Docker container to use the correct timezone from the host system.

To do so, add the following to your run command:

-v /etc/timezone:/etc/timezone -v /etc/localtime:/etc/localtime
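As a complete example, a Jenkins container started with those mounts might look like this; the jenkins/jenkins:lts image and port mapping are just placeholders for whatever you're already running:

docker run -d --name jenkins -p 8080:8080 \
  -v /etc/timezone:/etc/timezone \
  -v /etc/localtime:/etc/localtime \
  jenkins/jenkins:lts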


Not too long ago a Raspberry Pi 3 found its way into my home and after looking around on the Internet for Raspberry Pi based projects I decided to turn it into a HomeKit hub using Homebridge. While installing Homebridge isn’t terribly difficult I decided I could make it even easier by utilizing puppet.

Using puppet to install Homebridge as well as any additional modules you want is beneficial because you can back up the puppet files and, if you need to rebuild your Raspberry Pi for any reason, quickly get back up and running by applying your existing puppet config. If you’re feeling adventurous you can manage all of your software and configs using puppet.

This post details how to get Homebridge installed on a Raspberry Pi running Raspbian Jessie and assumes you already have Raspbian installed and have gained access to it using ssh or the terminal application in PIXEL.

To install Homebridge, do the following (prefix all commands with sudo if you are not already logged in as root):

  1. apt-get update
  2. apt-get install puppet
  3. puppet module install puppetlabs-apt
  4. puppet module install puppet-nodejs
  5. puppet module install dustinrue-homebridgepi

With that out of the way you can create the puppet file that defines what software is installed on your system. To get started, all you need is to put the following into a file called homebridge.pp using your favorite editor:

include homebridgepi

You can now tell puppet to apply this to your system:

puppet apply homebridge.pp

This will install Homebridge and set it to start on system boot using systemd. Homebridge will be installed to run as root, but you can change this behavior by editing /etc/systemd/system/homebridge.service and changing the User value. I chose to run it as root because some modules require root privileges to run helper programs. Know that if you change the user you’ll need to manually move your .homebridge directory from /root to the home directory of whatever user you choose.

You can view the logs of Homebridge by issuing:

journalctl -u homebridge -f

You will find your config.json file located at /root/.homebridge/config.json. In it will be the default config that you can expand on. You can configure any additional modules you want to install (described below) in this file as well.
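If you haven’t looked at one before, a freshly generated config.json is roughly the following; the username and pin shown here are just the well-known defaults:

{
  "bridge": {
    "name": "Homebridge",
    "username": "CC:22:3D:E3:CE:30",
    "port": 51826,
    "pin": "031-45-154"
  },
  "accessories": [],
  "platforms": []
}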

From here you can define additional Homebridge modules you want to install by editing your homebridge.pp file. As an example, here are some of the modules I install on my system to support what I’m using Homebridge for:


include homebridgepi

package {
  # a regular system package installed through apt
  'vim':
    ensure => 'installed',
    provider => 'apt';

  # Homebridge plugins are npm packages
  'homebridge-wol':
    ensure => 'latest',
    install_options => '--unsafe-perms',
    provider => 'npm';

  # plugins can also be installed from a git source
  'homebridge-xbox-one':
    ensure => 'latest',
    source => 'https://github.com/dustinrue/homebridge-xbox-one.git',
    install_options => '--unsafe-perms',
    provider => 'npm';
}


My multicore Solr on Ubuntu 10.04 post has proven to be one of my most popular yet.  Seeing the success of that post I decided it was time to show how to get the latest version of Solr up and running on Ubuntu 10.04.  As of this writing the latest version of Solr is 3.4.0.

Before we get started you should read and follow my previous post because I borrow all of the config settings from Ubuntu’s Solr 1.4 packages.  The default config settings from the Ubuntu maintainers are still a decent starting point with Solr 3.4.  Once finished you can safely remove the old Solr 1.4 package if you want to.

With a working Solr 1.4 installation in place, we can get started on getting Solr 3.4 running.  You can change some of the following paths if you want, just remember to change them in all of the appropriate places.  Everything you’re about to see should be done as the root user.

Create some required paths

mkdir /usr/local/share/solr3
mkdir /usr/local/etc/solr3
mkdir -p /usr/local/lib/solr3/data

Next, re-own the data dir to the proper user

chown -R tomcat6.tomcat6 /usr/local/lib/solr3/data

Download the latest version of Solr

You can get the latest version of Solr from http://lucene.apache.org/solr/ and extract the files into root’s home directory.

wget http://mirrors.axint.net/apache/lucene/solr/<version>/apache-solr-<version>.tgz
tar zxvf apache-solr-<version>.tgz

Extract the Solr war file

Extract the Solr war file into the location we created earlier.  You may need to install the unzip utility with apt-get install unzip.

cd /usr/local/share/solr3 
unzip /root/apache-solr-<version>/dist/apache-solr-<version>.war

Install additional libs

There are a few other libs included with the Solr distribution.  You can install anything else you need; I specifically need the dataimporthandler add-ons.

cp /root/apache-solr-3.4.0/dist/apache-solr-dataimporthandler-* WEB-INF/lib/

Configure Multicore

If you want to have multicore enabled you’ll need to perform the following actions.  The rest of this post assumes you have copied the multicore config file (solr.xml, below) and will require you to make some changes to support multicore.  I’ve marked steps that can be skipped if you wish to skip the multicore functionality.

Copy in the multicore config file:

cp /root/apache-solr-3.4.0/example/multicore/solr.xml .

You should now edit the solr.xml file at this point, doing the following (the result should look roughly like the sketch after this list):

  • Set persistent to true
  • Remove entries for core0 and core1
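Assuming you started from the example file, the edited solr.xml should end up looking roughly like this, with core entries added inside the cores element as you create them:

<solr persistent="true">
  <cores adminPath="/admin/cores">
  </cores>
</solr>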

Next, change the ownership and permissions so that tomcat is able to modify this file when needed

chown tomcat6.tomcat6 /usr/local/share/solr3
chown tomcat6.tomcat6 /usr/local/share/solr3/solr.xml

Copy existing config files

This is where we’re going to borrow some files from Ubuntu’s Solr package maintainer.

cd /usr/local/etc/solr3
cp -av /etc/solr/* .

Because we simply copied the config files we need to modify them to fit our new environment.  Change the following in the solr-tomcat.xml file:

  • Change docBase to /usr/local/share/solr3
  • Change Environment value to /usr/local/share/solr3

Also edit tomcat.policy file changing:

  • Modify all entries referencing solr to point to appropriate /usr/local location

Change the following in conf/solrconfig.xml:

  • Change <dataDir> to /usr/local/lib/solr3/data

If you are using multicore and you followed the Solr 1.4 multicore post you’ll have a conftemplate directory and you’ll need to make changes to conftemplate/solrconfig.xml

  • Change <dataDir> to /usr/local/lib/solr3/data/CORENAME

Create symlinks

Here we’ll create some symlinks to support the way Ubuntu packages Solr.  This is necessary because we copied Ubuntu’s config files and those files reference a few locations.  Creating the symlinks also allows us to continue using the scripts created in the previous post with minimal modifications.

  • cd /usr/local/share/solr3
  • ln -s /usr/local/etc/solr3/conf
  • ln -s /usr/local/etc/solr3/ /etc/solr3
  • ln -s /usr/local/lib/solr3 /var/lib/solr3

Enable/Start the new Solr instance

We can now enable our new Solr 3.4 instance in tomcat by doing the following:

cd /etc/tomcat6/Catalina/localhost
ln -s /usr/local/etc/solr3/solr-tomcat.xml solr3.xml

Note that the name of the symlink is important as it will define where we find this instance (/solr vs /solr3).  At this point you can create a new core.  I’ve provided the updated scripts here.
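Once tomcat picks up the new context you can quickly verify the instance is responding; assuming tomcat is listening on its default port of 8080:

curl http://localhost:8080/solr3/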