We’ve been told that several factors can make the amount of available disk space differ from the nominal 128 MB. Two of those factors that I’m aware of are filesystem overhead and transparent compression. This post documents some experiments on available disk space that exercise extreme cases of these effects.
Our storage device
$ mount
/dev/rbd0 on /app type btrfs (rw,relatime,compress=zlib,ssd,space_cache,subvolid=5,subvol=/)
$ lsblk
NAME MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
rbd0 251:0    0  128M  0 disk /app
The underlying device is 128 MB. Lots of mount options, some of which look like they help save space.
I deleted everything I could.
$ find .
.
./.glitch-meta
./.glitch-meta/deletedgit
./.glitch-meta/backup.md5
$ du -sh .
24K	.
$ df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/rbd0       128M   17M   87M  16% /app
A small directory owned by root remains. It takes 17 MB to represent the filesystem in this state, and 87 MB of space is available.
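The size/used/available numbers df reports can also be read programmatically. Here is a minimal sketch using statvfs, run against the current directory rather than /app; df's exact rounding and its handling of reserved blocks may differ slightly from this arithmetic.

```python
import os

# Query filesystem statistics for the current directory
# (in the post this would be /app).
st = os.statvfs(".")

size = st.f_frsize * st.f_blocks                 # total filesystem size in bytes
avail = st.f_frsize * st.f_bavail                # space available to unprivileged users
used = st.f_frsize * (st.f_blocks - st.f_bfree)  # space currently in use

print(f"size={size} used={used} avail={avail}")
```

Note that used + avail generally does not add up to size: reserved blocks and filesystem metadata live in the gap, which is part of what this post is poking at.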
Filling with random data
Next, I create as big a file as possible filled with random bytes. Random data is effectively incompressible, so this should be the worst case for compression; if anything, the compression framing should make it take slightly more space than the file itself.
On the other hand, creating a single file with “all the data” should minimize the filesystem metadata and block fragmentation overhead. Let’s see what happens.
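To see why random bytes are the worst case, we can compare how zlib (the same algorithm named in the compress=zlib mount option) handles incompressible versus highly compressible input. This is an illustrative sketch; the 1 MiB block size is arbitrary and btrfs compresses in its own chunk sizes.

```python
import os
import zlib

# 1 MiB of incompressible random data vs. 1 MiB of zeros.
random_block = os.urandom(1 << 20)
zero_block = bytes(1 << 20)

random_compressed = len(zlib.compress(random_block))
zero_compressed = len(zlib.compress(zero_block))

# Random data does not shrink (it even grows a little, since deflate
# falls back to "stored" blocks plus framing overhead), while the
# zero block collapses to almost nothing.
print(random_compressed, zero_compressed)
```

So on a compress=zlib filesystem, a blob of /dev/urandom output gains nothing from compression, which is exactly what we want for measuring raw capacity.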
$ cat /dev/urandom > blob
cat: write error: No space left on device
$ ls -lh
total 71M
-rw-r--r-- 1 app app 71M Oct 17 00:57 blob
We can create a file of up to 71 MB.
$ df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/rbd0       128M   88M  128K 100% /app
After that, only a tiny bit of space is left. It takes 88 MB to represent the filesystem in this state.
In the worst case, you can store only about 71 MB, a lot less than 128 MB. That gap seems too large to attribute to compression framing alone, and too large to be the metadata overhead of storing a single file. There must be some other effects at work here.
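The back-of-the-envelope arithmetic behind that conclusion, using the numbers from the df outputs above:

```python
# Numbers taken from the df/ls outputs in this post (all in MB).
device_mb = 128       # size of the underlying block device
stored_mb = 71        # size of the random blob we managed to write
used_before_mb = 17   # "Used" right after deleting everything
used_after_mb = 88    # "Used" once the device was full

# The blob accounts for all of the growth in "Used"...
blob_growth_mb = used_after_mb - used_before_mb

# ...so per-file overhead here is negligible. The real gap is between
# the 128 MB device and the ~88 MB the filesystem ever reports as used.
unusable_mb = device_mb - used_after_mb

fraction_stored = stored_mb / device_mb
print(blob_growth_mb, unusable_mb, round(fraction_stored, 2))
```

The growth in "Used" (71 MB) matches the blob size exactly, while roughly 40 MB of the device never shows up as usable space at all, which is why neither compression nor single-file overhead explains the shortfall.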