What a Cluster... VCS 5.1 on ESX (vSphere)

I had originally planned on this being a lengthy post, detailing my experience building a 2-node Veritas Cluster Server environment on VMware vSphere 4.1, with the guests running Red Hat Enterprise Linux 5.7 and VCS 5.1 SP1GA.

Once I had the cluster up and running, it seemed somewhat flaky, so I decided to focus on more important things.  I might get back to this someday.

I guess my greatest surprise is that fencing appears to be working, even though the "hardware test" failed with the following output:

Preempt and abort key KeyA using key KeyB on node vcs02 ................ Failed

An even greater surprise is why fencing does not work in more environments...
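A quick sanity check I lean on (general habit, nothing specific to this build) is to confirm the configured fencing mode and the current fencing membership:

# cat /etc/vxfenmode
# vxfenadm -d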

Regardless, here it is...
[root@vcs01 rhel5_x86_64]# hastatus -sum

-- SYSTEM STATE
-- System               State                Frozen

A  vcs01                RUNNING              0
A  vcs02                RUNNING              0

-- GROUP STATE
-- Group           System               Probed     AutoDisabled    State

B  ClusterService  vcs01                Y          N               ONLINE
B  ClusterService  vcs02                Y          N               OFFLINE
B  cvm             vcs01                Y          N               ONLINE
B  cvm             vcs02                Y          N               ONLINE

[root@vcs01 rhel5_x86_64]# gabconfig -a
GAB Port Memberships
===============================================================
Port a gen   5dfc01 membership 01
Port b gen   5dfc15 membership 01
Port d gen   5dfc08 membership 01
Port f gen   5dfc21 membership 01
Port h gen   5dfc18 membership 01
Port u gen   5dfc1e membership 01
Port v gen   5dfc1a membership 01
Port w gen   5dfc1c membership 01
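
Membership "01" on every port means both nodes (node IDs 0 and 1) are members. If that ever looks off, the first thing I check (a general habit, not something captured from this particular build) is the state of the LLT heartbeat links:

# lltstat -nvv | more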

I believe I will be able to test the fencing by removing a disk or two from a node and letting it run its course.
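Before and after that experiment, it should be possible to compare the SCSI-3 registration keys on the coordinator disks. If memory serves (this is the documented invocation, not output captured from these nodes), the command is:

# vxfenadm -s all -f /etc/vxfentab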

And... here is the key:

First, create a directory on a datastore visible to the ESX server:
# mkdir /vmfs/volumes/datastore1/SharedDisk-VCS01-VCS02/
Then run the following script to create the virtual disks.
#!/bin/sh
# Three 512 MB disks for the quorum/coordinator group and three 2 GB
# disks for shared data.  -c sets the size, -a the virtual SCSI adapter
# type, and -d eagerzeroedthick fully pre-allocates and zeroes each disk
# (the format generally expected for shared/clustered VMDKs).
vmkfstools -c 512m -a lsilogic -d eagerzeroedthick /vmfs/volumes/datastore1/SharedDisk-VCS01-VCS02/quorum-1.vmdk
vmkfstools -c 512m -a lsilogic -d eagerzeroedthick /vmfs/volumes/datastore1/SharedDisk-VCS01-VCS02/quorum-2.vmdk
vmkfstools -c 512m -a lsilogic -d eagerzeroedthick /vmfs/volumes/datastore1/SharedDisk-VCS01-VCS02/quorum-3.vmdk
vmkfstools -c 2048m -a lsilogic -d eagerzeroedthick /vmfs/volumes/datastore1/SharedDisk-VCS01-VCS02/sharedDB-1.vmdk
vmkfstools -c 2048m -a lsilogic -d eagerzeroedthick /vmfs/volumes/datastore1/SharedDisk-VCS01-VCS02/sharedDB-2.vmdk
vmkfstools -c 2048m -a lsilogic -d eagerzeroedthick /vmfs/volumes/datastore1/SharedDisk-VCS01-VCS02/sharedDB-3.vmdk

Attach them to both of your VCS VMs.
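The way I usually present shared VMDKs like these is to hang them off a second virtual SCSI controller with bus sharing enabled ("virtual" if both VMs sit on the same host, "physical" if they do not). The .vmx entries below are a hand-written sketch of that layout, not an export from these VMs, so adjust the controller number, SCSI IDs and paths to match your environment:

scsi1.present = "TRUE"
scsi1.virtualDev = "lsilogic"
scsi1.sharedBus = "virtual"
scsi1:0.present = "TRUE"
scsi1:0.deviceType = "scsi-hardDisk"
scsi1:0.fileName = "/vmfs/volumes/datastore1/SharedDisk-VCS01-VCS02/quorum-1.vmdk"

Repeat the scsi1:N entries for the remaining five disks, and use the same controller and SCSI IDs on both VMs so each guest sees the disks in the same order.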

I'm not sure why I had never looked into this before, but this evening I became obsessed with discovering how to present different colored text in the /etc/motd. A person had suggested creating a shell script (rather than using special editing modes in vi, or something) and I agree that is the simplest way of getting this accomplished quickly. This most noteworthy portion of this script is the following: RESET="\033[0m" that puts the users shell back to the original color. I typically like a green text on black background. Also - a great reference for the different colors and font-type (underscore, etc...) https://wiki.archlinux.org/index.php/Color_Bash_Prompt I found this example on the web and I wish I could recall where so that I could provide credit to that person. #!/bin/bash #define the filename to use as output motd="/etc/motd" # Collect useful information about your system # $USER is automatically defined HOSTNAME=`uname -n` KERNEL=`un