
What a Cluster... VCS 5.1 on ESX (vSphere)

I had originally planned on this being a lengthy post detailing my experience building a 2-node Veritas Cluster Server environment on VMware vSphere 4.1, hosting Red Hat Enterprise Linux 5.7 and VCS 5.1 SP1GA.

Once I had the cluster up and running, it seemed somewhat flaky, and I decided to focus on more important things.  I might get back to this someday.

I guess my greatest surprise is that fencing appears to be working (even though the "hardware test" failed).
Output:
Preempt and abort key KeyA using key KeyB on node vcs02 ................ Failed
An even greater surprise is that fencing does not seem to work in more environments...
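A quick way to confirm what the fencing driver thinks it is doing is `vxfenadm -d`, which reports the fencing mode and membership. Below is a minimal sketch of pulling the mode out of that output; the `fencing_mode` helper is hypothetical, and the sample text is an abridged, assumed layout rather than a capture from this cluster (on a live node you would pipe the real command in):

```shell
#!/bin/sh
# Hypothetical helper: extract the fencing mode from `vxfenadm -d` output.
# Live usage would be:  vxfenadm -d | fencing_mode
fencing_mode() {
    awk -F: '/Fencing Mode/ {gsub(/^[ \t]+/, "", $2); print $2}'
}

# Canned sample (assumed layout) so the helper can be demonstrated anywhere.
sample='I/O Fencing Cluster Information:
================================
 Fencing Protocol Version: 201
 Fencing Mode: SCSI3
 Fencing SCSI3 Disk Policy: dmp'

echo "$sample" | fencing_mode
```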

Regardless, here it is...
[root@vcs01 rhel5_x86_64]# hastatus -sum

-- SYSTEM STATE
-- System State Frozen

A vcs01 RUNNING 0
A vcs02 RUNNING 0

-- GROUP STATE
-- Group System Probed AutoDisabled State

B ClusterService vcs01 Y N ONLINE
B ClusterService vcs02 Y N OFFLINE
B cvm vcs01 Y N ONLINE
B cvm vcs02 Y N ONLINE
[root@vcs01 rhel5_x86_64]# gabconfig -a
GAB Port Memberships
===============================================================
Port a gen 5dfc01 membership 01
Port b gen 5dfc15 membership 01
Port d gen 5dfc08 membership 01
Port f gen 5dfc21 membership 01
Port h gen 5dfc18 membership 01
Port u gen 5dfc1e membership 01
Port v gen 5dfc1a membership 01
Port w gen 5dfc1c membership 01

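My reading of those port letters (worth verifying against the Veritas docs for your release): a = GAB membership, b = I/O fencing, d = ODM, f = CFS, h = the VCS engine (had), and u/v/w = CVM. A hypothetical sanity check that all the ports an SFCFS/CVM stack needs are registered could look like this; the canned sample just replays the output above so the logic can run anywhere:

```shell
#!/bin/sh
# Hypothetical check: confirm every expected GAB port is registered.
# On a live node:  ports=$(gabconfig -a | awk '/^Port/ {print $2}' | tr '\n' ' ')
required_ports="a b d f h u v w"

sample='Port a gen 5dfc01 membership 01
Port b gen 5dfc15 membership 01
Port d gen 5dfc08 membership 01
Port f gen 5dfc21 membership 01
Port h gen 5dfc18 membership 01
Port u gen 5dfc1e membership 01
Port v gen 5dfc1a membership 01
Port w gen 5dfc1c membership 01'

ports=$(echo "$sample" | awk '/^Port/ {print $2}' | tr '\n' ' ')

missing=""
for p in $required_ports; do
    case " $ports " in
        *" $p "*) ;;                  # port registered
        *) missing="$missing $p" ;;   # port absent
    esac
done

if [ -n "$missing" ]; then
    echo "missing ports:$missing"
else
    echo "all ports present"
fi
```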
I believe I will be able to test the fencing by removing a disk or two from a node and letting it run its course.
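Before pulling disks it may help to snapshot the SCSI-3 registration keys on the coordinator disks so the before/after states can be compared. On a live node the input would come from something like `vxfenadm -s all -f /etc/vxfentab`; the `key_count` helper and the sample layout below are assumptions for illustration, not captures from this cluster:

```shell
#!/bin/sh
# Hypothetical probe: count the SCSI-3 registration keys reported for a disk.
# Live usage would be:  vxfenadm -s all -f /etc/vxfentab | key_count
key_count() {
    grep -c '^key'
}

# Canned sample (assumed layout) for demonstration.
sample='Device Name: /dev/vx/rdmp/sdb
Total Number Of Keys: 2
key[0]:
        Key Value [Numeric Format]: 86,70,66,69,65,68,48,48
key[1]:
        Key Value [Numeric Format]: 86,70,66,69,65,68,48,49'

echo "$sample" | key_count
```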

And... here is the key:

First, create a directory on a datastore visible to the ESX server:
# mkdir /vmfs/volumes/datastore1/SharedDisk-VCS01-VCS02/

Then run the following commands to create the volumes:
#!/bin/sh
# Three 512 MB quorum (coordinator) disks and three 2 GB shared data disks.
# eagerzeroedthick provisioning is required for disks shared between VMs,
# and lsilogic matches the virtual SCSI adapter presented to the guests.
vmkfstools -c 512m -a lsilogic -d eagerzeroedthick /vmfs/volumes/datastore1/SharedDisk-VCS01-VCS02/quorum-1.vmdk
vmkfstools -c 512m -a lsilogic -d eagerzeroedthick /vmfs/volumes/datastore1/SharedDisk-VCS01-VCS02/quorum-2.vmdk
vmkfstools -c 512m -a lsilogic -d eagerzeroedthick /vmfs/volumes/datastore1/SharedDisk-VCS01-VCS02/quorum-3.vmdk
vmkfstools -c 2048m -a lsilogic -d eagerzeroedthick /vmfs/volumes/datastore1/SharedDisk-VCS01-VCS02/sharedDB-1.vmdk
vmkfstools -c 2048m -a lsilogic -d eagerzeroedthick /vmfs/volumes/datastore1/SharedDisk-VCS01-VCS02/sharedDB-2.vmdk
vmkfstools -c 2048m -a lsilogic -d eagerzeroedthick /vmfs/volumes/datastore1/SharedDisk-VCS01-VCS02/sharedDB-3.vmdk

Attach them to your 2 VCS-VMs.
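For both VMs to open the same VMDKs, the shared disks typically need to sit on a dedicated virtual SCSI controller with bus sharing enabled. A sketch of the relevant .vmx entries for one of the disks follows; the option names are standard VMware settings, but the controller number and slot are assumptions, so verify against your ESX build:

```ini
# Second SCSI controller dedicated to the shared disks.
# Use sharedBus = "physical" if the two VMs run on different hosts.
scsi1.present = "TRUE"
scsi1.virtualDev = "lsilogic"
scsi1.sharedBus = "virtual"
# Allow both VMs to open the shared VMDKs.
disk.locking = "FALSE"

scsi1:0.present = "TRUE"
scsi1:0.deviceType = "scsi-hardDisk"
scsi1:0.fileName = "/vmfs/volumes/datastore1/SharedDisk-VCS01-VCS02/quorum-1.vmdk"
```

Note that `disk.locking = "FALSE"` disables locking for all of the VM's disks, so it is a blunt instrument; test carefully before relying on it.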
