
Better is not always... better? HP Smart Array and Linux

I am currently working through an issue with my 3-node RAC clusters (RHEL 5.6 x86_64 and Oracle RAC 11g running on HP DL580-G7 servers). They seem to enjoy rebooting themselves at will. There is nothing glaring for a root cause, other than some syslog messages about tasks being blocked for more than 120 seconds. Anyhow - after quite a bit of research I have discovered something I really like about Linux: the I/O scheduler is modular (in a sense), so each block device can use any one of four schedulers. CCISS is the HP Smart Array driver, which is loaded on these boxes and should be consistent across most Linux releases.
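For context, those syslog entries are the kernel's hung-task warnings ("INFO: task ... blocked for more than 120 seconds"). A quick way to round them up, along with the threshold tunable - assuming your kernel exposes it; I have not confirmed the sysctl is present on every RHEL 5 kernel:

# grep "blocked for more than 120 seconds" /var/log/messages
# cat /proc/sys/kernel/hung_task_timeout_secs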

If you look at the "scheduler" file, you can see the four options; the one in use is surrounded by brackets. I am hoping that by switching ONLY the cciss device to noop, my reboots go away - and I leave a positive legacy behind at my customer site ;-)


root@dbslp0067:/root
# cd /sys/block/cciss\!c0d0/queue/
root@dbslp0067:/sys/block/cciss!c0d0/queue
# cat scheduler
noop anticipatory deadline [cfq]
root@dbslp0067:/sys/block/cciss!c0d0/queue
# echo noop > scheduler
root@dbslp0067:/sys/block/cciss!c0d0/queue
# cat scheduler
[noop] anticipatory deadline cfq
root@dbslp0067:/sys/block/cciss!c0d0/queue

After trying this workaround on my system, I'm disappointed to report that it did not help my cause. I will leave this post up, as I may need to tune a system this way at a later point.
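One note for future-me: the echo above does not survive a reboot. A sketch of making it persistent (untested on this cluster; the device name and kernel line below are illustrative): either re-apply it per-device from /etc/rc.local, or set it globally with the elevator= kernel parameter in /boot/grub/grub.conf.

# per-device - append to /etc/rc.local:
echo noop > /sys/block/cciss\!c0d0/queue/scheduler

# or globally - add elevator=noop to the kernel line in /boot/grub/grub.conf:
kernel /vmlinuz-2.6.18-238.el5 ro root=/dev/VolGroup00/LogVol00 elevator=noop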


It turned out to be a bad CPU on a SAN switch blade. I'm not sure why multipath didn't handle the event more gracefully, instead of the box locking up and subsequently rebooting itself. I might have to investigate the multipath tunables.
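If I do go down that road, those tunables live in /etc/multipath.conf. My understanding is that no_path_retry set to queue (or the queue_if_no_path feature) can make I/O queue indefinitely when every path drops, which would look a lot like a lockup. A rough sketch of where I would start - values illustrative, not tested against this failure:

defaults {
    polling_interval  5           # seconds between path health checks
    no_path_retry     fail        # fail I/O when all paths are gone instead of queueing forever
    failback          immediate   # return to the preferred path as soon as it recovers
}

Then have multipathd pick up the change:

# service multipathd restart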

