We have seen how to try out LXD containers on Ubuntu on DigitalOcean. In this post, we will see how to use the new DigitalOcean block storage support (just out of beta!).
This new block storage provides additional, separate disk space that should also be faster to access, so software such as LXD benefits from it. Without block storage, the ZFS pool for LXD is stored as a loopback file on the ext4 root filesystem; with block storage, the ZFS pool lives directly on the block device.
When you start a new droplet, you get the ext4 filesystem by default and it is not easy to change. Some people have managed to work around this (https://github.com/fxlv/docs/blob/master/freebsd/freebsd-with-zfs-digitalocean.md), though there are no instructions for doing so with a Linux distribution. The new block storage lets you get ZFS on an additional block device without such hacks.
This block storage feature is so new that the DigitalOcean page still asks you to request early access.
When you create a VPS, you now have the option to specify additional block storage. The pricing is quite simple, US$0.10 per GB per month, and you can specify any size from 1 GB upwards.
It is also possible to add block storage to an existing VPS. As shown in the screenshot, block storage is currently available in the NYC1 and SFO2 datacenters.
For our testing, we created an Ubuntu 16.04 US$20/month VPS in the SFO2 datacenter. It is a dual-core VPS with 2 GB of RAM.
The standard disk is
    Disk /dev/vda: 40 GiB, 42949672960 bytes, 83886080 sectors
    Units: sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 512 bytes / 512 bytes
    Disklabel type: gpt
    Disk identifier: 4CF812E3-1423-1923-B28E-FDD6817901CA

    Device     Start      End  Sectors Size Type
    /dev/vda1   2048 83886046 83883999  40G Linux filesystem
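As a quick cross-check of the fdisk figures, the byte count is simply the sector count multiplied by the 512-byte sector size. For /dev/vda:

```shell
# fdisk reports 83886080 sectors of 512 bytes each for /dev/vda.
echo $((83886080 * 512))                       # 42949672960 bytes
echo $((83886080 * 512 / 1024 / 1024 / 1024))  # 40 GiB
```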
While the block device for the block storage is
    Disk /dev/sda: 50 GiB, 53687091200 bytes, 104857600 sectors
    Units: sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 512 bytes / 512 bytes
Here is how to configure LXD to use the new block device,
    root@ubuntu-2gb-sfo2-01:~# lxd init
    Name of the storage backend to use (dir or zfs): zfs
    Create a new ZFS pool (yes/no)? yes
    Name of the new ZFS pool: mylxd-pool
    Would you like to use an existing block device (yes/no)? yes
    Path to the existing block device: /dev/sda
    Would you like LXD to be available over the network (yes/no)? no
    Do you want to configure the LXD bridge (yes/no)? yes
    Warning: Stopping lxd.service, but it can still be activated by:
      lxd.socket
    LXD has been successfully configured.
Let’s see some benchmarks! We run bonnie++, first on the standard storage, then on the new block storage,
    # bonnie -d /tmp/ -s 4G -n 0 -m STANDARDSTORAGE -f -b -u root

With `-f` the slow per-character tests are skipped and `-n 0` disables the file-creation tests, so bonnie++ reports only the block I/O results:

| STANDARDSTORAGE (4G) | Sequential Output (Block) | Rewrite | Sequential Input (Block) | Random Seeks |
|---|---|---|---|---|
| Throughput | 749901 K/sec (92% CPU) | 611116 K/sec (80% CPU) | 1200389 K/sec (76% CPU) | +++++ /sec (+++ CPU) |
| Latency | 50105us | 105ms | 7687us | 11021us |
    # bonnie -d /media/blockstorage -s 4G -n 0 -m BLOCKSTORAGE -f -b -u root

| BLOCKSTORAGE (4G) | Sequential Output (Block) | Rewrite | Sequential Input (Block) | Random Seeks |
|---|---|---|---|---|
| Throughput | 193923 K/sec (23% CPU) | 96283 K/sec (14% CPU) | 217073 K/sec (18% CPU) | 2729 /sec (58% CPU) |
| Latency | 546ms | 165ms | 8882us | 35690us |
The immediate observation is that the CPU usage is much lower with the new block storage, while both the throughput and the latencies are worse than with the standard storage.
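One rough way to compare the two, given the lower CPU usage, is throughput per percentage point of CPU. Taking the sequential block output figures from the bonnie++ runs:

```shell
# Sequential block output per percentage point of CPU, from the bonnie++ results.
awk 'BEGIN {
  printf "standard: %.0f K/sec per %%CPU\n", 749901 / 92
  printf "block:    %.0f K/sec per %%CPU\n", 193923 / 23
}'
# standard: 8151 K/sec per %CPU
# block:    8431 K/sec per %CPU
```

By this (admittedly crude) measure the two are comparable per unit of CPU; the block storage simply drives less total I/O.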
Let’s try with dd,
    root@ubuntu-2gb-sfo2-01:~# dd if=/dev/zero of=/tmp/standardstorage.img bs=4M count=1024
    1024+0 records in
    1024+0 records out
    4294967296 bytes (4.3 GB, 4.0 GiB) copied, 4.91043 s, 875 MB/s
    root@ubuntu-2gb-sfo2-01:~# dd if=/dev/zero of=/media/blockstorage/blockstorage.img bs=4M count=1024
    1024+0 records in
    1024+0 records out
    4294967296 bytes (4.3 GB, 4.0 GiB) copied, 19.8969 s, 216 MB/s
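As a sanity check, the transfer rates that dd reports follow directly from the byte count and elapsed time (dd uses decimal megabytes):

```shell
# dd's MB/s figure is bytes copied / elapsed seconds / 1,000,000.
awk 'BEGIN {
  std = 4294967296 / 4.91043 / 1000000
  blk = 4294967296 / 19.8969 / 1000000
  printf "standard: %.0f MB/s, block: %.0f MB/s, ratio: %.1fx\n", std, blk, std / blk
}'
# standard: 875 MB/s, block: 216 MB/s, ratio: 4.1x
```

Note that dd reading from /dev/zero, without `oflag=direct` or `conv=fdatasync`, exercises the page cache as much as the disk, so these figures are best treated as rough indications.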
Here, on the other hand, the standard storage appears to be about four times faster than the new block storage.

I am not sure how these results should be interpreted, and I look forward to reading other reports about this.