This post is really a small collection of thoughts about Proxmox in a home lab setting and about home labs in general. I was originally going to post this to Mastodon, but it didn’t fit in a single post.

A lot of people (at least from what I see on Reddit) build Proxmox systems with shared file systems like Ceph, even for home lab use. While I understand what shared file systems can do, they also have steeper hardware requirements and, honestly, the benefits are rather limited for a home lab. If your purpose for using a shared file system is to learn about it, then go for it, as it is a great thing to understand. Outside of learning, I prefer, and have had great luck with, keeping things as simple as I can while still getting something that is useful and versatile. I also avoid ZFS because running it properly requires more memory, memory I’d prefer to give to my VMs.

For that reason, my Proxmox cluster consists of two systems with a Raspberry Pi acting as a qdevice for quorum. Each system has a single spinning drive for the Proxmox OS and a single SSD for all of the VMs that live on that node. I then have another physical system providing network based shared storage for content like ISO files and backups, things that truly need to be shared between the Proxmox cluster nodes. This setup gives me a blend of excellent VM performance, because the disks are local and speedy, and shared file space where it matters, for ISOs and backups, while maintaining one of the best features of Proxmox: live migration. Yes, live migration of VMs is still possible even when the underlying file system is not shared between systems, it’s just a much slower process because the disk contents must be copied over the network during the migration.
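
For reference, both of those pieces come down to a couple of commands. These are roughly what I mean rather than an exact runbook; the Pi’s IP address, the VM ID and the node name below are placeholders.

# add the Raspberry Pi as a qdevice for quorum (run from a cluster node, with corosync-qnetd already installed on the Pi)
pvecm qdevice setup 192.168.1.20

# live migrate a running VM, copying its local disks to the target node
qm migrate 101 pve2 --online --with-local-disks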

One of the benefits of using a file system like Ceph is that data is distributed across your systems, so it can survive a failed system or disk. That gives you some redundancy, but regardless of redundancy you still need actual backups. To cover this, I take regular backups of important VMs and have separate backup tasks specifically for databases. For anything that stores files, like Plex or Nextcloud, that data lives on my TrueNAS system and is accessed over a network file system like NFS or Samba. Again, local storage for speed and shared storage where it really matters.
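
The VM backups themselves are just scheduled vzdump jobs pointed at that shared storage. Something along these lines is the whole idea; the VM ID and the storage name are placeholders for whatever is defined in your own datacenter.

# snapshot-mode backup of a VM to a network backed storage called nas-backup
vzdump 101 --storage nas-backup --mode snapshot --compress zstd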

This setup gives me a lot of the benefits without a ton of overhead, which helps keep costs down. Each Proxmox node, while clustered, still works more or less independently of the other. I can recover from issues by restoring a backup to either node, or recover databases quickly and easily. I never have to debug shared file system issues, and any file system issues I do face are localized to the affected node. In the event of a severe issue, recovery is simplified because I can replace the bad drive and simply restore any affected VMs on that node. The HA features of Proxmox are very good and I encourage their use when it makes sense, but you can avoid their complexity and maintenance and still have a reliable home lab that is easy and straightforward to live with.

One of the first things I hoped ZFS could do when I heard about it (and its ability to share storage using iSCSI) was resize things at will. Resizing file systems has been possible for a while, but it has never been this easy, at least in my mind. With the ability to resize storage volumes you can put a ton of disks into a single system, share out exactly what each system needs, and then resize later if you need more. Today I got a chance to test ZFS’s ability to resize volumes as well as how Windows handles the change.

Linux has been able to resize file systems for some time, and the latest versions of Windows provide the ability right in Disk Management. I run a number of Windows systems, so resizing NTFS iSCSI volumes is what I’m primarily interested in.

This isn’t a full how-to, but more of an overview of how to make it all happen.
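
The short version: on the ZFS side a shared volume can be grown on the fly by bumping its volsize property. The pool and volume names here are only placeholders.

zfs set volsize=60G tank/iscsi-vol

Windows then sees the larger disk after a rescan, and the NTFS partition can be extended from Disk Management (or with diskpart’s extend command).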


One of the things I need to test is using iSCSI to store data on some Windows servers. Here is a quick synopsis of how to create a storage pool and then a ZFS volume that can be shared using iSCSI.

Create the pool from the available disks, if it doesn’t already exist. Be sure to read the docs on what kind of pool you want to create; I’m using raidz.

zpool create test raidz /dev/dsk/c0t1d0 /dev/dsk/c0t2d0 /dev/dsk/c0t3d0 /dev/dsk/c0t4d0
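
If you want to double check the layout before moving on, the status command shows the pool and its raidz vdev.

zpool status test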

Create a sparse 40GB volume and share it using iSCSI

zfs create -s -V 40G test/iscsi
zfs set shareiscsi=on test/iscsi

You should now have 40GB of iSCSI based storage available. Use the Microsoft iSCSI Initiator on Windows XP/Vista/Server 2003 to attach to the target, assign a drive letter and format it.
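
If you’d rather script the Windows side than click through the initiator GUI, iscsicli should be able to do the same job. The portal address below is only an example, and the IQN is whatever ListTargets reports for the new volume.

iscsicli QAddTargetPortal 192.168.1.50
iscsicli ListTargets
iscsicli QLoginTarget <target IQN from ListTargets>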

A really good friend of mine likes to call me an OS whore from time to time. It’s all in fun, but he is right, I am. I can’t make up my mind which OS I like best. Windows, Linux, Windows, Mac? Which is it? To be honest, I don’t know. I change my mind depending on my current needs and the capabilities of the operating systems of the day, and I really just like to tinker. I also like to use whatever works based on what I need to get done.

Although my current favorite OS is definitely OS X, I’m not exactly afraid to try out other operating systems. OpenSolaris is an OS I’ve played with before, simply because I wanted to get to know ZFS, an incredible file system that should not be overlooked. I have written about ZFS before but haven’t really worked with it much since then.

My interest in OpenSolaris and ZFS has been renewed as of late because I need a good amount of storage in the most cost effective manner possible. In the coming weeks I’ll be posting quite a bit as I learn how to use OpenSolaris. Many posts will simply be reference information for myself, and others might be more educational. Stay tuned.