
GPT and Software RAID on Linux (RHEL)

I was rebuilding one of my lab boxes, which has 4 x SATA drives (2 x 500GB and 2 x 1TB). During the install I configured Software RAID on the two 500GB drives for the OS; afterwards I wanted to use Software RAID to mirror the two 1TB drives.
2 x 500GB - OS, swap, Virtual Machines
2 x 1TB - iSCSI and NFS shares to be used by my RHEV 3 lab

I finally acknowledged (just today) that GPT is the future... so I decided to use GPT for all of this.

parted -s /dev/sdc -- mklabel gpt mkpart primary ext4 1 -1 set 1 raid on
parted -s /dev/sdd -- mklabel gpt mkpart primary ext4 1 -1 set 1 raid on
mdadm --create /dev/md127 --level=mirror --raid-devices=2 /dev/sdc1 /dev/sdd1
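
Once the mirror is created, it's worth recording the array in /etc/mdadm.conf so it assembles consistently at boot, and keeping an eye on the initial resync. A quick sketch, using the same device names as above:

mdadm --detail --scan >> /etc/mdadm.conf   # append; don't clobber an existing config
cat /proc/mdstat                           # watch the initial resync
mdadm --detail /dev/md127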


Alternatively, if you are still using the MSDOS partition scheme (which is perfectly fine for drives under 2TB), the following works as well:

echo -e "o\nn\np\n1\n\n\nt\nfd\nw\n" | fdisk /dev/sdc
echo -e "o\nn\np\n1\n\n\nt\nfd\nw\n" | fdisk /dev/sdd
mdadm --create /dev/md127 --level=mirror --raid-devices=2 /dev/sdc1 /dev/sdd1
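
If mdadm complains about a stale filesystem or RAID superblock left over from a previous build, clearing the old metadata and re-running the create is the usual fix. A sketch, assuming the same partitions:

# wipes only the md superblock on each partition
mdadm --zero-superblock /dev/sdc1 /dev/sdd1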

With the mirror in place, the rest is carving it up with LVM for the iSCSI and NFS shares:


pvcreate /dev/md127
vgcreate vg_USS /dev/md127
# iSCSI
lvcreate -L300g -nlv_tgtd vg_USS
mkfs.ext4 /dev/mapper/vg_USS-lv_tgtd
yum install scsi-target-utils iscsi-initiator-utils
echo "/dev/mapper/vg_USS-lv_tgtd        /var/lib/tgtd   ext4 defaults 0 0" >> /etc/fstab
mkdir -p /var/lib/tgtd
mount /var/lib/tgtd
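
The scsi-target-utils side still needs a target definition pointing at a backing store on the new filesystem before anything can log in. A minimal sketch - the IQN and backing file name below are made up, so adjust to taste:

# sparse 100GB backing file for the LUN (example name/size)
dd if=/dev/zero of=/var/lib/tgtd/lun1.img bs=1M count=0 seek=102400
cat >> /etc/tgt/targets.conf << 'EOF'
<target iqn.2012-06.lab.example:tgtd.lun1>
    backing-store /var/lib/tgtd/lun1.img
</target>
EOF
service tgtd start
chkconfig tgtd on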
# NFS
lvcreate -L100g -nlv_nfs vg_USS
mkfs.ext4 /dev/mapper/vg_USS-lv_nfs
echo "/dev/mapper/vg_USS-lv_nfs        /export/nfs   ext4 defaults 0 0" >> /etc/fstab
mkdir -p /export/nfs
mount /export/nfs
setsebool -P nfs_export_all_rw on   # -P makes the boolean persist across reboots
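
The export itself still needs to be defined and the NFS services started. A sketch, assuming a made-up 192.168.122.0/24 lab network - RHEV also expects NFS storage domains to be owned by vdsm:kvm (36:36):

chown 36:36 /export/nfs
echo "/export/nfs 192.168.122.0/24(rw,sync,no_root_squash)" >> /etc/exports
exportfs -ra
service nfs start
chkconfig nfs on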


