Introduction
For a long time I've used VMware instances to prototype and test systems and network topologies for my work, and I also use them for my own home projects. The list includes:
- Redundant load-balancers and firewalls (iptables, Pacemaker, Corosync, ldirectord, iproute2)
- NoSQL solutions, such as Riak and Tokyo Cabinet.
- Oracle, PostgreSQL, MySQL databases
- IBM GPFS clustered filesystem with clustered NFS on top, using iSCSI nodes.
- Firewall and NSM solution to DMZ each of my kids' sub-networks at home.
Recently I upgraded my ESXi 5 home-lab server with 8 x Intel 520 240GB SSDs, both to bring more I/O speed to my testing and to get enough room to test NoSQL and distributed storage solutions with close to 2 TB of data. Side note: these 8 SSDs cost, in total, about the same as the one Seagate 2GB SCSI disk I bought in 1994.
The ESXi server also contains an Areca RAID card with 256MB cache (w/BBU) connected to an external enclosure with 8 SATA disks, so the 8 SSDs should deliver a nice speedup when testing distributed storage solutions.
For this test I created an Ubuntu 12.04 server instance with 12GB RAM and 4 cores.
The other instances running on this ESXi host were shut down during the IOZONE testing.
I'm using two ICYDOCK hot-swap cabinets for the 8 SSDs.
Partitioning: gdisk
All 8 SSDs were partitioned using gdisk; a typical drive looks like this:
$ sudo gdisk -l /dev/sdb
GPT fdisk (gdisk) version 0.8.1
Found valid GPT with protective MBR; using GPT.
Disk /dev/sdb: 468862128 sectors, 223.6 GiB
Logical sector size: 512 bytes
Partition table holds up to 128 entries
First usable sector is 34, last usable sector is 468862094
Partitions will be aligned on 2048-sector boundaries
Total free space is 2014 sectors (1007.0 KiB)
Number  Start (sector)    End (sector)  Size        Code  Name
   1              2048       468862094  223.6 GiB   8300  Linux filesystem
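The same layout can also be created non-interactively with sgdisk (from the same gdisk package). A rough sketch, using the same device names as above:

#!/bin/bash
# Sketch: create one full-size Linux data partition on each SSD with sgdisk.
for dev in sdb sdc sdd sde sdf sdg sdh sdi; do
    # -n 1:0:0  -> partition 1, default (aligned) start, default end = last usable sector
    # -t 1:8300 -> GPT type code 8300 (Linux filesystem)
    sgdisk -n 1:0:0 -t 1:8300 /dev/$dev
done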
Filesystem: XFS
I've been using XFS on Linux servers since around 2001, and have been very satisfied with it.
#!/bin/bash
# Create an XFS filesystem, labelled ssd01..ssd08, on each of the 8 SSD partitions.
DEVICES="sdb sdc sdd sde sdf sdg sdh sdi"
devno=1
for dev in $DEVICES; do
    echo /dev/${dev}1
    mkfs.xfs -L ssd0$devno /dev/${dev}1
    let devno=$devno+1
done
$ mount|grep xfs|sort
/dev/sdb1 on /ssd01 type xfs (rw,noatime,nobarrier)
/dev/sdc1 on /ssd02 type xfs (rw,noatime,nobarrier)
/dev/sdd1 on /ssd03 type xfs (rw,noatime,nobarrier)
/dev/sde1 on /ssd04 type xfs (rw,noatime,nobarrier)
/dev/sdf1 on /ssd05 type xfs (rw,noatime,nobarrier)
/dev/sdg1 on /ssd06 type xfs (rw,noatime,nobarrier)
/dev/sdh1 on /ssd07 type xfs (rw,noatime,nobarrier)
/dev/sdi1 on /ssd08 type xfs (rw,noatime,nobarrier)
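The mount commands (or /etc/fstab entries) behind the output above aren't shown; a minimal sketch, using the labels set by mkfs.xfs and the same noatime,nobarrier options:

#!/bin/bash
# Sketch: create the mount points and mount each SSD by filesystem label.
for n in 1 2 3 4 5 6 7 8; do
    mkdir -p /ssd0$n
    mount -o noatime,nobarrier LABEL=ssd0$n /ssd0$n
done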
IOZONE
For this test I used IOZONE version 3.420 (compiled from source). Setup configuration:
- Ubuntu RAM size: 12 GB
- Total file-size used: 24 GB
- Record-size used: 4 KB
Below are the results from two of the test-runs I did.
Benchmark IOPS with 1 SSD drive
Ran 1 IOZONE process with 24 threads. Each thread writes a 1048576 Kbyte file in 4 Kbyte records.
iozone -l24 -T -k 1000 -s1g -r4k -i0 -i1 -i2 -O -+u -R -b res.xls
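A rough breakdown of the flags (see iozone -h; the wording differs slightly between versions):
- -l24: lower limit on (and here, the effective number of) threads for the throughput test
- -T: use POSIX pthreads for the throughput threads
- -k 1000: POSIX async I/O (no bcopy) with 1000 async operations
- -s1g -r4k: 1 GB file per thread, 4 KB record size
- -i0 -i1 -i2: write/rewrite, read/re-read and random read/write tests
- -O: report results in operations per second (IOPS) instead of KB/s
- -+u: include CPU utilization in the report
- -R -b res.xls: generate an Excel-style report in res.xls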
Benchmark IOPS with 8 SSD drives
Ran 8 IOZONE processes in parallel (one process on each SSD), each running 12 threads. Each thread writes a 262144 Kbyte file in 4 Kbyte records.
iozone -l12 -T -k 1000 -s256m -r4k -i0 -i1 -i2 -O -+u -R -b res.xls
All 8 IOZONE processes started in the same second and ended within one second of each other.
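The launcher script itself isn't included; a minimal sketch that starts one IOZONE process per SSD and waits for all of them to finish (each process runs in, and writes its report to, its own mount point):

#!/bin/bash
# Sketch: run one IOZONE process per SSD in parallel, then wait for all of them.
for n in 1 2 3 4 5 6 7 8; do
    ( cd /ssd0$n && \
      iozone -l12 -T -k 1000 -s256m -r4k -i0 -i1 -i2 -O -+u -R -b res.xls ) &
done
wait   # returns when all eight benchmark processes have finished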
These numbers are by no means definitive; they are an indication of the performance the server hardware is capable of delivering with the configuration and setup chosen. YMMV.