
Disk Alignment and perf testing in Linux

It should be noted that I am JUST starting to educate myself on testing disk performance.  Do NOT assume that what I am doing here is the work of someone who actually knows this topic.

I decided that Bonnie (or, more accurately, Bonnie++) would be the tool I use, in addition to timing a mkfs and a dd from /dev/zero.

    wget http://www.coker.com.au/bonnie++/bonnie++-1.03e.tgz
    tar -xvzf bonnie++-1.03e.tgz
    cd bonnie++-1.03e
    ./configure
    make && make install

On my system I have 2 LUNs from the same array.  I am assuming they are on the same, or similar, RAID parity groups on the array.

My concern was that misaligned partitions would have a noticeable performance impact.

[root@dvgllprhvsrv91 sysadmin]# parted -s /dev/dm-6 mklabel msdos mkpart primary ext3 2048s 100%
[root@dvgllprhvsrv91 sysadmin]# parted -s /dev/dm-5 mklabel msdos mkpart primary ext3 0 100%
Warning: The resulting partition is not properly aligned for best performance.
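
To sanity-check the layout, parted can print the partition table in sectors so you can see the actual start offsets, and newer builds also have an align-check command (a quick sketch; older parted versions may lack align-check):

    parted /dev/dm-6 unit s print              # aligned partition should start at 2048s
    parted /dev/dm-5 unit s print              # compare the start sector here
    parted /dev/dm-6 align-check optimal 1     # newer parted: reports aligned / not aligned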

Test 1 -- dm-6 (aligned), dm-5 (not-aligned)

Once I added the partitions I ended up with the following device-mapper devices:
dm-6 -> dm-14 (aligned)
dm-5 -> dm-15 (not-aligned)
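
If you want to confirm which dm-N node belongs to which partition, the device-mapper tools will show the mappings (a sketch; the friendly names under /dev/mapper will obviously differ on your system):

    dmsetup ls            # device-mapper names with their major:minor numbers
    ls -l /dev/mapper/    # symlinks from the friendly names to the dm-N nodes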

After making filesystems on /dev/dm-14 and /dev/dm-15 (timed below), I mounted them at /mnt/dm-14 and /mnt/dm-15 (respectively).

[root@dvgllprhvsrv91 sysadmin]# time mkfs /dev/dm-14
mke2fs 1.41.12 (17-May-2010)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=0 blocks, Stripe width=0 blocks
3276800 inodes, 13106944 blocks
655347 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=4294967296
400 block groups
32768 blocks per group, 32768 fragments per group
8192 inodes per group
Superblock backups stored on blocks:
    32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
    4096000, 7962624, 11239424

Writing inode tables: done                           
Writing superblocks and filesystem accounting information: done

This filesystem will be automatically checked every 30 mounts or
180 days, whichever comes first.  Use tune2fs -c or -i to override.

real    0m3.836s
user    0m0.016s
sys    0m1.087s

[root@dvgllprhvsrv91 sysadmin]# time mkfs /dev/dm-15
mke2fs 1.41.12 (17-May-2010)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=0 blocks, Stripe width=0 blocks
3276800 inodes, 13107199 blocks
655359 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=4294967296
400 block groups
32768 blocks per group, 32768 fragments per group
8192 inodes per group
Superblock backups stored on blocks:
    32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
    4096000, 7962624, 11239424

Writing inode tables: done                           
Writing superblocks and filesystem accounting information: done

This filesystem will be automatically checked every 26 mounts or
180 days, whichever comes first.  Use tune2fs -c or -i to override.

real    0m5.537s
user    0m0.010s
sys    0m5.017s
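
The mount step itself didn't make it into the capture; assuming mount points named after the devices, it was along these lines:

    mkdir -p /mnt/dm-14 /mnt/dm-15
    mount /dev/dm-14 /mnt/dm-14
    mount /dev/dm-15 /mnt/dm-15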

[root@dvgllprhvsrv91 sysadmin]# dd if=/dev/zero of=/mnt/dm-14/test bs=512 count=3072000
3072000+0 records in
3072000+0 records out
1572864000 bytes (1.6 GB) copied, 4.40712 s, 357 MB/s

[root@dvgllprhvsrv91 sysadmin]# dd if=/dev/zero of=/mnt/dm-15/test bs=512 count=3072000
3072000+0 records in
3072000+0 records out
1572864000 bytes (1.6 GB) copied, 4.41995 s, 356 MB/s
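
One caveat on those dd numbers: nothing forces a flush, so a good chunk of that ~357 MB/s is the page cache rather than the array. A fairer variant (a sketch, not what I ran above) would make dd sync the file before reporting the timing:

    # conv=fdatasync flushes the written data to disk before dd prints its stats
    dd if=/dev/zero of=/mnt/dm-14/test bs=512 count=3072000 conv=fdatasync
    dd if=/dev/zero of=/mnt/dm-15/test bs=512 count=3072000 conv=fdatasync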

[root@dvgllprhvsrv91 bonnie++-1.03e]# bonnie++ -d /mnt/dm-14/ -s 2g -r1g -n 0 -m localhost -f -b -u root
Using uid:0, gid:0.
Writing intelligently...done
Rewriting...done
Reading intelligently...done
start 'em...done...done...done...
Version 1.03e       ------Sequential Output------ --Sequential Input- --Random-
                    -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
localhost        2G           262728  26 267633  20           2287646 100 +++++ +++
localhost,2G,,,262728,26,267633,20,,,2287646,100,+++++,+++,,,,,,,,,,,,,

[root@dvgllprhvsrv91 bonnie++-1.03e]# bonnie++ -d /mnt/dm-15/ -s 2g -r1g -n 0 -m localhost -f -b -u root
Using uid:0, gid:0.
Writing intelligently...done
Rewriting...done
Reading intelligently...done
start 'em...done...done...done...
Version 1.03e       ------Sequential Output------ --Sequential Input- --Random-
                    -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
localhost        2G           243774  25 243384  20           2359965  99 +++++ +++
localhost,2G,,,243774,25,243384,20,,,2359965,99,+++++,+++,,,,,,,,,,,,,
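
The last line of each bonnie++ run is a CSV summary, and the bonnie++ tarball ships a bon_csv2html helper that turns those lines into an HTML comparison table (a sketch, assuming you paste the CSV lines into a file):

    # collect the CSV summary lines from both runs, then render them as HTML
    echo "localhost,2G,,,262728,26,267633,20,,,2287646,100,+++++,+++,,,,,,,,,,,,," >  results.csv
    echo "localhost,2G,,,243774,25,243384,20,,,2359965,99,+++++,+++,,,,,,,,,,,,," >> results.csv
    bon_csv2html < results.csv > results.html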




