We use a USS7110 to provide shared iSCSI storage to our 2 ESX servers. For the most part the device is rather tremendous. The other part... well, not so much. As it turns out, the USS7110 is actually just a Sun x4240 running Solaris 11, with 16 HDDs: 2 for the OS and the rest for data. Our array was configured to use double parity (RAID-Z2) with a spare. Very good resiliency with minimal sacrifice of space, IMO.
The city of Wayzata sent out their Dream Team to do something involving machinery and digging, etc., in front of our building. Even though there are literally hundreds of those little flags marking whatever it is they mark, they somehow managed to go through the power feed, apparently. The UPS did not signal the ESX servers or the array to shut down, so everything simply crashed. This sort of outage happens quite frequently, so I figured it was not a big deal. I was horribly mistaken.
The actual damage was probably quite minimal. A file or 2 became corrupt on the OS disks of the "appliance" and it would only partially boot. Unfortunately, one of those files was the dbus.xml manifest used by SMF, which prevented HAL from loading, which in turn prevented rmvolmgr from loading. (I'll explain how/why I know all this later.) After multiple reboots, because that is what a real admin does, right... still no love from the array. I put a call into Oracle. They walked me through a few basic checks and advised me to run a few commands. Still, nothing. :-( They basically threw in the towel and said "I hope you have backups". Awesome...
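If you ever want to chase this kind of SMF dependency failure yourself, the commands below show the general idea. This is a rough sketch from a stock Solaris point of view; the service FMRIs and log paths may differ slightly on the appliance firmware, so treat them as examples rather than gospel.

    # show services that failed to come up, and why
    svcs -xv

    # walk the dependency chain: dbus -> hal -> rmvolmgr
    svcs -l svc:/system/dbus:default
    svcs -d svc:/system/hal:default
    svcs -d svc:/system/filesystem/rmvolmgr:default

    # read the log for the broken service
    tail /var/svc/log/system-dbus:default.log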
So, as I sat there sulking over this debacle... I started to formulate a plan. I happen to have an identical array in my basement, which had not been used in quite some time. So, in a nutshell... I fired up the alt-array, configured it to resemble the dead array (which was a waste of time, in hindsight), shut it down, pulled the 2 OS drives and went back to the office.
NOTE: the following assumes you have an array that "almost works" and has a completely functional ILOM.
Also - your warranty is ABSOLUTELY VOID if you follow these instructions. I imagine that if you had a maintenance contract on your device, you wouldn't be in this boat anyhow.
- 1st step -- power down the broken array and remove ALL the drives. And by "remove" I mean pull them out far enough so the connections in the back are not engaged. I don't recommend pulling them from the chassis, as they will probably get mixed up.
- pull out the corrupt OS drives. Set them aside.
- install only 1 of the alt-array OS drives in slot 0 (lower left-hand corner)
- power on the USS using the ILOM web interface
- ssh to the ILOM (see the ILOM sketch after this list)
- start the console
- connect to "the shell"
- remove all the zpool entries that point to disks that aren't there (see the zpool sketch after this list)
- cleanly/gracefully shut the box down again
- power on the array
- connect to the BUI (https://10.10.31.54:215/)
- import the zpool (again, see the zpool sketch below)
- set up the shares again... and you're golden.
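For the ILOM part, here is roughly what the session looks like. The IP address is a placeholder for your own ILOM address, and "start /SYS" is simply the CLI equivalent of powering on from the web interface. (I'm intentionally not spelling out the break-out from the appliance CLI to the raw Solaris shell here; that piece is appliance-specific and deserves its own explanation.)

    # ssh to the service processor (placeholder IP; use your ILOM's address)
    ssh root@192.168.1.100

    # from the ILOM prompt, power the host on and attach the serial console
    -> start /SYS
    -> start /SP/console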
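And the zpool portion, at the raw ZFS level. The pool name below is a guess (on these appliances the data pool is typically named something like pool-0), so check the output of zpool status / zpool import before running anything. The BUI import step does the same thing for you through the GUI.

    # while booted from the single alt-array OS disk, in the Solaris shell:
    zpool status          # see which pools the transplanted OS disk still knows about
    zpool export pool-0   # drop the stale pool entry that points at missing disks
                          # (if export refuses, the stale /etc/zfs/zpool.cache is the
                          #  other thing to look at)

    # after the clean shutdown, with the original data drives re-seated:
    zpool import             # lists pools that are available for import
    zpool import -f pool-0   # force-import the original data pool, if you skip the BUI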
I will be attempting to document this in a reasonable format (and not just blog about it). This may be the first document I publish on my blog. ;-) If you happen to come across this and would like a detailed explanation, contact me. I'm happy to explain all that went down (no pun intended) on that fateful evening.