Someone on the debian-user list just discovered that GPT partitioning allows you to put 128 partitions on a disk. They got really excited about the granularity that would allow. They weren't thinking about the exciting maintenance procedures that would inevitably follow -- not enough space here, too much space there, backups scattered all over the place.
This is the sort of thing that ZFS excels at.
For example, let's say you have five 1 TB disks, and somewhat less than 1 TB of data that you really, really want to keep safe.
You could set up the following:
mainpool: a zpool composed of two mirrored pairs.
backuppool: a zpool composed of one disk, set up as a mirrored pair with one disk missing.
This gives you 2TB of space on mainpool and 1TB on backuppool.
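A minimal sketch of that layout, assuming hypothetical device names (substitute your own):

```shell
# Two mirrored pairs -> roughly 2 TB usable in mainpool.
zpool create mainpool \
  mirror /dev/sda /dev/sdb \
  mirror /dev/sdc /dev/sdd

# backuppool starts as a single disk; a second disk can be
# attached later with "zpool attach" to complete the mirror.
zpool create backuppool /dev/sde
```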
Now you create ZFS filesystems -- say home, finance, temp, images, video, and print. You don't need to configure a particular size on any of these; they all share the underlying mainpool. It makes sense to turn transparent compression on for home, finance and temp, but not for images, video and print, which are already compressed formats.
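The dataset names here are just examples; creation and per-dataset compression might look like this:

```shell
# Compressible data: enable transparent compression.
zfs create -o compression=on mainpool/home
zfs create -o compression=on mainpool/finance
zfs create -o compression=on mainpool/temp

# Already-compressed formats: leave compression off.
zfs create -o compression=off mainpool/images
zfs create -o compression=off mainpool/video
zfs create -o compression=off mainpool/print
```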
You set up a cron job to take zfs snapshots of everything in mainpool once an hour, once a day, and once a week, and another to delete hourly snaps after 48 hours, delete daily snaps after a week, and delete weekly snaps after two months.
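The snapshot half of that schedule could be sketched as cron entries like these (the pruning script that destroys expired snapshots is assumed, not shown; note that % must be escaped in crontab):

```shell
# /etc/cron.d/zfs-snapshots (sketch)
# hourly, daily, and weekly recursive snapshots of everything in mainpool
0 * * * *  root  zfs snapshot -r mainpool@hourly-$(date +\%Y\%m\%d\%H)
0 0 * * *  root  zfs snapshot -r mainpool@daily-$(date +\%Y\%m\%d)
0 0 * * 0  root  zfs snapshot -r mainpool@weekly-$(date +\%Y\%m\%d)
```

Old snapshots are removed with `zfs destroy mainpool@hourly-...` and so on, on whatever retention schedule you chose.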
You then set up a cron job that pipes zfs send into zfs receive for the more important mainpool filesystems, replicating them to backuppool once a day or so.
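An incremental replication of one filesystem might look like this (dataset and snapshot names are assumptions):

```shell
# Take today's snapshot, then send only the changes since
# yesterday's snapshot into the matching dataset on backuppool.
zfs snapshot mainpool/finance@backup-today
zfs send -i mainpool/finance@backup-yesterday mainpool/finance@backup-today \
  | zfs receive backuppool/finance
```

The first run has no earlier snapshot to diff against, so it would be a full send (drop the `-i` argument).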
If you run out of space on mainpool, you can add pairs of disks to it, or upgrade all the disks one at a time, or both.
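Both growth paths are one-liners; device names here are hypothetical:

```shell
# Grow mainpool by adding another mirrored pair.
zpool add mainpool mirror /dev/sdf /dev/sdg

# Or replace each disk of a pair in turn with a larger one;
# capacity grows once both members are upgraded.
zpool set autoexpand=on mainpool
zpool replace mainpool /dev/sda /dev/sdh
```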
If you run out of space on backuppool, you can switch to a larger disk or add disks.
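For backuppool, the same attach mechanism that would complete its mirror also lets you migrate to a bigger disk (devices assumed):

```shell
# Attach a larger disk, let it resilver, then detach the old one.
zpool attach backuppool /dev/sde /dev/sdi
# ...wait for resilver to finish (watch "zpool status")...
zpool detach backuppool /dev/sde
```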
There are, of course, many other options available.