<h2>zfs is pretty good</h2>
<p><em>2018-07-09, by -dsr-</em></p>
<p>Someone on the <code>debian-user</code> list just discovered that
<code>gpt</code> partitioning allows you to put 128 partitions on a
disk. They got really excited at the granularity that would allow. They
weren’t thinking about the exciting maintenance procedures that would
inevitably follow – not enough space here, too much space there, backups
all over the place.</p>
<p>This is the sort of thing that ZFS excels at.</p>
<p>For example, let’s say you have five 1 TB disks, and less than
1 TB of data that you really, really want to keep safe.</p>
<p>You could set up the following:</p>
<ul>
<li>mainpool: a zpool composed of two mirrored pairs</li>
<li>backuppool: a zpool composed of one disk, set up as a mirrored
pair with the second disk missing</li>
</ul>
<p>This gives you 2 TB of space on mainpool and 1 TB on backuppool.</p>
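<p>As a sketch, the two pools might be created like this (device names
here are hypothetical; substitute your own). ZFS has no literal
“missing” keyword, so the half-empty backup mirror is usually built
against a sparse placeholder file that is then taken offline:</p>

```shell
# Hypothetical device names -- substitute your own.
# mainpool: two mirrored pairs, 2 TB usable.
zpool create mainpool \
    mirror /dev/disk/by-id/disk1 /dev/disk/by-id/disk2 \
    mirror /dev/disk/by-id/disk3 /dev/disk/by-id/disk4

# backuppool: one real disk mirrored against a sparse placeholder
# file, which is immediately taken offline, leaving a working but
# degraded mirror with one "missing" half.
truncate -s 1T /var/tmp/placeholder.img
zpool create backuppool mirror /dev/disk/by-id/disk5 /var/tmp/placeholder.img
zpool offline backuppool /var/tmp/placeholder.img
rm /var/tmp/placeholder.img
```

<p>Later, attaching a real disk in place of the placeholder turns
backuppool into a genuine two-disk mirror.</p>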
<p>Now you create zfs filesystems:</p>
<ul>
<li>mainpool/home
<ul>
<li>/images</li>
<li>/videos</li>
<li>/print</li>
<li>/finance</li>
<li>/temp</li>
</ul></li>
</ul>
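<p>A minimal sketch of creating that layout, with the child
filesystems nested under mainpool/home:</p>

```shell
# Each dataset is a separate filesystem sharing the pool's space.
zfs create mainpool/home
zfs create mainpool/home/images
zfs create mainpool/home/videos
zfs create mainpool/home/print
zfs create mainpool/home/finance
zfs create mainpool/home/temp
```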
<p>You don’t need to configure a particular size for any of these; they
all share the underlying mainpool. It makes sense to turn transparent
compression on for home, finance, and temp, but not for images, videos,
and print.</p>
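<p>Compression settings are inherited by child filesystems, so one way
to sketch that policy is to turn compression on at home and switch it
off only for the datasets holding already-compressed media:</p>

```shell
zfs set compression=lz4 mainpool/home        # inherited by finance, temp, etc.
zfs set compression=off mainpool/home/images # media files are already compressed
zfs set compression=off mainpool/home/videos
zfs set compression=off mainpool/home/print
```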
<p>You set up a cron job to take zfs snapshots of everything in mainpool
once an hour, once a day, and once a week, and another to delete hourly
snaps after 48 hours, delete daily snaps after a week, and delete weekly
snaps after two months.</p>
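<p>A hand-rolled sketch of the snapshot side of those cron jobs (note
that <code>%</code> must be escaped in a crontab):</p>

```shell
# /etc/crontab fragment; snapshot names embed a timestamp.
# m h dom mon dow user  command
0  *  *   *   *  root  zfs snapshot -r mainpool@hourly-$(date +\%F-\%H)
0  0  *   *   *  root  zfs snapshot -r mainpool@daily-$(date +\%F)
0  0  *   *   0  root  zfs snapshot -r mainpool@weekly-$(date +\%F)
```

<p>Expiring old snapshots (<code>zfs destroy pool@snap</code> on a
schedule) is the fiddly part, and is where a tool like sanoid saves
the most scripting.</p>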
<p>You then set up a cron job to pipe <code>zfs send</code> into
<code>zfs receive</code> for the more important mainpool filesystems
once a day or so.</p>
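<p>An incremental replication sketch for a single filesystem; the
snapshot names are hypothetical, and the first run needs a full,
non-incremental send:</p>

```shell
# First run only: full send of an initial snapshot.
zfs snapshot mainpool/home/finance@backup-base
zfs send mainpool/home/finance@backup-base | \
    zfs receive backuppool/finance

# Daily thereafter: send only the changes since the previous snapshot.
zfs snapshot mainpool/home/finance@backup-today
zfs send -i @backup-base mainpool/home/finance@backup-today | \
    zfs receive backuppool/finance
```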
<p>(All these cron jobs are made even simpler by running the
not-yet-Debian-packaged <a
href="https://github.com/jimsalterjrs/sanoid">syncoid</a> and <a
href="https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=869845">sanoid</a>
scripts.)</p>
<p>If you run out of space on mainpool, you can add pairs of disks to
it, or upgrade all the disks one at a time, or both.</p>
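<p>Both growth paths are a command or two per disk (device names
hypothetical):</p>

```shell
# Add a third mirrored pair: the pool grows by 1 TB immediately.
zpool add mainpool mirror /dev/disk/by-id/disk6 /dev/disk/by-id/disk7

# Or upgrade in place: replace each disk with a larger one, waiting
# for each resilver to finish before starting the next.
zpool replace mainpool /dev/disk/by-id/disk1 /dev/disk/by-id/bigdisk1
zpool set autoexpand=on mainpool  # claim the extra space once a whole pair is done
```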
<p>If you run out of space on backuppool, you can switch to a larger
disk or add disks.</p>
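<p>For backuppool, which is a mirror with one side missing, a sketch of
the disk swap is to attach the bigger disk and detach the old one once
the resilver completes (device names hypothetical):</p>

```shell
zpool attach backuppool /dev/disk/by-id/disk5 /dev/disk/by-id/bigdisk5
# ...wait for the resilver to finish (watch zpool status), then:
zpool detach backuppool /dev/disk/by-id/disk5
zpool set autoexpand=on backuppool  # grow into the new disk's capacity
```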
<p>There are, of course, many other options available.</p>
<h2>zfs internal error</h2>
<p><em>2018-03-28, by -dsr-</em></p>
<p>Useful tip (for me): any time ZFS emits <em>internal error: Invalid
argument</em>, what it probably means is that the DKMS module and the
userland utilities are different versions.</p>
<p><code>sudo modinfo zfs|head</code> will tell you what version your
kernel is using; <code>apt show -a zfsutils-linux</code> will tell you
what versions of the user commands are available. Look for a
mismatch.</p>
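<p>A quick way to put the two version numbers side by side (Debian
package names; exact output formats vary by release):</p>

```shell
# Kernel module version, from the "version:" line of modinfo.
modinfo zfs | awk '/^version:/ { print "module:  ", $2 }'
# Installed userland version of the zfs/zpool commands.
dpkg-query -W -f 'userland: ${Version}\n' zfsutils-linux
```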