
I just parted...

The recent disk management utilities in the Fedora/RHEL installers are a bit challenging. Since I don't know why things changed, I won't bother commenting, but I don't find them nearly as easy or convenient as before... Anyhow...

I installed a 500GB SSD in my Intel NUC to build up as a "NAS" host (NFS and iSCSI). The OS will take around 20GB, plus swap, /boot... whatever. So I wanted the remainder of the disk in its own VG.

[root@rhel7-nas registry]# parted -l | grep Disk
Disk /dev/sda: 480GB
Disk Flags:
Disk /dev/mapper/vg_exports-lv_registry: 21.5GB
Disk Flags:
Disk /dev/mapper/rhel7--nas-home: 1074MB
Disk Flags:
Disk /dev/mapper/rhel7--nas-swap: 8389MB
Disk Flags:
Disk /dev/mapper/rhel7--nas-root: 32.2GB
Disk Flags:
[root@rhel7-nas ~]# parted -s /dev/sda print free
Model: ATA INTEL SSDSC2BP48 (scsi)
Disk /dev/sda: 480GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Disk Flags:

Number  Start   End     Size    File system  Name                  Flags
        17.4kB  1049kB  1031kB  Free Space
 1      1049kB  211MB   210MB   fat16        EFI System Partition  boot
 2      211MB   735MB   524MB   xfs
 3      735MB   42.4GB  41.7GB                                     lvm
        42.4GB  480GB   438GB   Free Space

[root@rhel7-nas ~]# parted -s /dev/sda mkpart pri ext3 42.4GB 100% set 4 lvm on
[root@rhel7-nas registry]# parted /dev/sda print free
Model: ATA INTEL SSDSC2BP48 (scsi)
Disk /dev/sda: 480GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Disk Flags:

Number  Start   End     Size    File system  Name                  Flags
        17.4kB  1049kB  1031kB  Free Space
 1      1049kB  211MB   210MB   fat16        EFI System Partition  boot
 2      211MB   735MB   524MB   xfs
 3      735MB   42.4GB  41.7GB                                     lvm
 4      42.4GB  480GB   438GB                pri                   lvm
        480GB   480GB   860kB   Free Space
[root@rhel7-nas ~]# partprobe /dev/sda
[root@rhel7-nas ~]# ls -l /dev/sda*
brw-rw----. 1 root disk 8, 0 Dec 15 18:53 /dev/sda
brw-rw----. 1 root disk 8, 1 Dec 15 18:53 /dev/sda1
brw-rw----. 1 root disk 8, 2 Dec 15 18:53 /dev/sda2
brw-rw----. 1 root disk 8, 3 Dec 15 18:53 /dev/sda3
brw-rw----. 1 root disk 8, 4 Dec 15 18:53 /dev/sda4
[root@rhel7-nas ~]# pvcreate /dev/sda4
  Physical volume "/dev/sda4" successfully created
[root@rhel7-nas ~]# vgcreate vg_exports /dev/sda4
  Volume group "vg_exports" successfully created


