
Showing posts from November, 2011

My other blog...

I decided to blog about my experience with Red Hat Enterprise Virtualization and Satellite: http://rhevdup.blogspot.com I think creating online docs motivates you to complete something, as though you are being held accountable.  Even though I am probably the only person who ever reads this crap anyhow.  It will be populated with (hopefully) useful bits regarding RHEV (specifically RHEV 3), conjecture and opinion on the product, and comparisons with VMware ESX/ESXi.  I found the vcritical.com blog offensive and childish - mostly because the individual actually works for VMware (EMC) and identifies this on his blog.  As a result, I have decided that RHEV deserves an audience as well ;-)  That said, I will not go on tirades trying to trash VMware. So - for the record: I like VMware, a lot.  I realize that I have barely scratched the surface of the capabilities that VMware offers.  For the most part, the experience I have with VMware has been practical...

Sun USS 7110 foo

TIP: put ALL of your LUNs into a designated TARGET and INITIATOR group when you create them.  If you leave them in the "default" group, then everything that does a discovery against the array will find them :-( I'm struggling to recognize a reason a default group should even be present on the array. Also - who, exactly, is Sun trying to kid?  The USS is simply a box.. running Solaris .. with IPMP and ZFS.  Great.  If you have ever attempted to "break-in" or "p0wn" your IBM HMC, you know that there are people out there who can harden a box - then.. there's Sun.  After a recent meltdown at the office I had to get quite intimate with my USS 7110 and learned quite a bit.  Namely: there's a shell ;-) My current irritation is how they attempt to "warn you" away from using the shell (my coverage expired too long ago for me to worry about that) and then how they try to hide things, poorly. I was curious as to what version of SunOS it ...
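To see what that means in practice: any initiator on the network that runs a plain sendtargets discovery gets offered whatever is sitting in the default group. A quick sketch using open-iscsi on a Linux host (the array IP here is just a placeholder):

    # Ask the array what targets it is willing to hand out
    iscsiadm -m discovery -t sendtargets -p 192.168.1.50

    # Log in to every node that discovery just recorded - with a
    # sloppy default group, that means ALL of the LUNs
    iscsiadm -m node --login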

Fedora 16, Gnome 3, Grub2

Within hours of receiving my new Lenovo T520 I had ripped the restore media and removed the 500 GB drive to keep in case of an apocalypse, or something. With a new 750 GB drive installed I executed the Lenovo restore to the new drive (installing Windows 7). Post-restore I had to remove the restore partition and resize the Windows partition. I shrunk it to leave 250 GB dedicated to Windows, leaving around 500 GB for Linux. Initially I found many things about the new Fedora release unsettling. I didn't care for the Gnome 3 interface. As I went to attempt to customize my Grub I became more irritated. I still have a number of things to become comfortable with again, but within days I am digging the Gnome 3 interface (although I think a lot of work still needs to be done, I think I understand why it has gone through significant changes and I believe those are good changes). Fedora 16 has some fundamental changes - and like Gnome, there is still some work to be done, but I beli...

Grub 2, progress?

Solution : be more open-minded...  In hindsight, Grub2 is not all bad.  Still not sure it was necessary, but.. whatever.  I can now manage to do most everything I need/want to do with it. Issue : I have installed Fedora 16 on my recently acquired laptop. Overall I'm starting to appreciate the direction Fedora seems to be going. They apparently eliminated /etc/inittab (the boot process is handled by systemd now) and moved the desktop to Gnome 3. One thing I cannot comprehend is Grub 2. Grub was not exactly difficult to begin with. It was a flat file with a bunch of similarly configured stanzas. Now it is a bunch of directories and config files and dependencies. At this point in time, Grub2 seems far less configurable than its predecessor. For example, I cannot find a decent explanation of how to remove the Windows System partition from showing up in my boot menu. Nor can I locate how to update the legacy "splash.xpm.gz" to use a cool graphic background at boot time. I'd like to know what ...
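For what it's worth, both gripes appear to be handled through /etc/default/grub rather than the old menu.lst. A minimal sketch, assuming Fedora 16 paths and a placeholder image name (note that GRUB_DISABLE_OS_PROBER suppresses ALL os-prober entries, so you would re-add the one Windows entry you actually want via /etc/grub.d/40_custom):

    # /etc/default/grub
    GRUB_DISABLE_OS_PROBER=true               # hide os-prober entries (incl. the System partition)
    GRUB_BACKGROUND="/boot/grub2/splash.png"  # Grub2's replacement for splash.xpm.gz

Then regenerate the config so the changes take effect:

    grub2-mkconfig -o /boot/grub2/grub.cfg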

My Lenovo T520 experience

Summary:  An overview of my experience with a Lenovo T520.  This machine is no joke.  It's seriously fast/powerful, and everything seems to work between Windows 7 Pro x64 and Fedora 16 x64.  I have not booted into Windows very much, but that seems to perform flawlessly.

My system:

Intel(R) Core(TM) i7-2760QM CPU @ 2.40GHz
Integrated Intel VGA, secondary Nvidia VGA (Optimus)
750 GB Scorpio Black, 7200 RPM, 16MB cache
http://browse.geekbench.ca/geekbench2/516398 - GeekBench score of 13338

Why: I'm having a bit of a career focus shift. My primary goal/focus is to become as proficient as I once was with Solaris, or even more so. This will provide credibility and experience to leverage with my customers. I felt it wouldn't represent my undying devotion to Linux if I walked in and plopped a MacBook Pro on the table. I needed a laptop to run a reasonably new Red Hat (read: Fedora) release. I talked to a friend who works for Red Hat to see what his experience was. ...

User Management - sudo, network logins

This is a Work In Progress. I'm still disappointed at how great the user management tools are for Windows and even Netware... and how lacking they are for Unix.

USERS (compat entries in /etc/passwd to allow network/netgroup logins):

    +user::::::
    +@netgroup::::::

SUDO (/etc/sudoers):

    Defaults:esshscr !requiretty
    Cmnd_Alias HSCR = /sbin/fdisk -l, \
        /sbin/iscsi-ls, \
        /sbin/iscsiadm --mode session, \
        /usr/sbin/vgdisplay -v, \
        /sbin/dmsetup ls, \
        /sbin/multipath -l, \
        /bin/netstat -s, \
        /opt/QLogic_Corporation/SANsurferCLI/scli -i all, \
        /opt/QLogic_Corporation/SANsurferCLI/scli -t all, \
        /opt/QLogic_Corporation/SANsurferCLI/scli -l *, \
        /usr/sbin/hbacmd listhbas, \
        /usr/sbin/hbacmd listHBAs, \
        /usr/sbin/hbacmd HBAAttrib *, \
        /usr/sbin/hbacmd portattrib *, \
        /usr/sbin/hbacmd TargetMapping *, \
        /sbin/vxprint -h, \
        /sbin/vxprint -l, \
        /sbin/vxdisk path, \
        /sbin/vxdisk list, \
        /sbin/vxdisk list *, \
        /usr/sbin/vxassist list, \
        /etc/powermt display dev=all, \
        /opt/DynamicLinkManager/bin/dlnkmgr view -lu
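Before trusting a pasted list like that on a production box, a sanity check costs nothing. Nothing exotic here - just stock sudo tooling (esshscr is the account from the Defaults line above):

    # Validate /etc/sudoers syntax without locking yourself out
    visudo -c

    # Show exactly which commands a given user may run
    sudo -l -U esshscr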

MOTD with colors! (also applies to shell profiles)

I'm not sure why I had never looked into this before, but this evening I became obsessed with discovering how to present different colored text in the /etc/motd. A person had suggested creating a shell script (rather than using special editing modes in vi, or something) and I agree that is the simplest way of getting this accomplished quickly. The most noteworthy portion of this script is the following: RESET="\033[0m" - that puts the user's shell back to the original color. I typically like green text on a black background. Also - a great reference for the different colors and font types (underscore, etc...): https://wiki.archlinux.org/index.php/Color_Bash_Prompt I found this example on the web and I wish I could recall where, so that I could provide credit to that person.

    #!/bin/bash
    # define the filename to use as output
    motd="/etc/motd"

    # Collect useful information about your system
    # $USER is automatically defined
    HOSTNAME=`uname -n`
    KERNEL=`un...
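Since the excerpt cuts off, here is a minimal sketch of the same idea - the greeting text is my own placeholder, nothing from the original script - that writes a green banner into /etc/motd:

    #!/bin/bash
    # Minimal colored-MOTD sketch (run as root)
    motd="/etc/motd"
    GREEN="\033[0;32m"   # ANSI escape: green text
    RESET="\033[0m"      # put the shell back to its original color

    HOSTNAME=$(uname -n)
    KERNEL=$(uname -r)

    # echo -e interprets the escape sequences; redirect the
    # whole group into the motd file
    {
        echo -e "${GREEN}Welcome to ${HOSTNAME}${RESET}"
        echo -e "${GREEN}Kernel: ${KERNEL}${RESET}"
    } > "$motd"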

Add a mirror with ZPOOL/ZFS

Recently I had a huge debacle involving a USS 7110, which happens to be a Sun x4240 with ZFS, moderately over-glorified ;-) As a result of the disaster I was essentially pulling the mirror disk from my alternate system (uss02) and putting it into my primary system (uss01) to get that system running again. Simple enough... I guess. The problem I ran into, and this will be specific to the USS only, is that the drives in the array are identified as either "system" or "data" (which you can see from the BUI). During my drive swapping, I had pulled the "data" spare and mirrored the "system" OS disk to it. Somehow it left the data disk-type stamp on it. The drive mirrored fine and all, but when I would go to create a new zpool, the OS drive would appear as a valid disk to put the data on. I obviously didn't go through with that step, but I can't imagine what would happen if I had. Chances are you will never be in the position ...
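The excerpt cuts off before the commands, but the core of "add a mirror" on ZFS is zpool attach. A sketch with hypothetical pool and device names (attach, not add - zpool add would create a new top-level vdev rather than a mirror):

    # Attach c1t1d0 to the vdev containing c1t0d0, converting it
    # into a two-way mirror; resilvering starts automatically
    zpool attach mypool c1t0d0 c1t1d0

    # Watch the resilver progress
    zpool status mypool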

USS 7110 apocalypse

We utilize a USS 7110 for the ability to have shared storage for our two ESX servers, via iSCSI. For the most part the device is rather tremendous. The other part... well, not so much. As it turns out the USS 7110 is actually just a Sun x4240 running Solaris 11, with 16 HDDs: 2 for the OS, and the rest for data.  Our array was configured to use double parity (RAID-Z2) with a spare. Very good resiliency with minimal sacrifice of space, IMO. The city of Wayzata sent out their Dream Team to do something involving machinery and digging, etc.. in front of our building. Even though there are literally hundreds of those little flags marking whatever it is they mark, they somehow managed to go through the power feed, apparently. The UPS did not signal the ESX servers or the array to shut down.. so everything simply crashed. This happens quite frequently, so I figured it was not a big deal. I was horribly mistaken. The actual significance of what took place is probably quite min...
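If you end up at the shell after an ungraceful power loss like this, the first stop is a pool health check. A sketch (pool1 is a placeholder; on the 7110 the data pool carries whatever name was configured in the BUI):

    # Print only pools with problems; no output means healthy
    zpool status -x

    # Full detail, including any resilver or scrub in flight
    zpool status -v pool1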