
Repo Sync to non-current CentOS release

WARNING:  This might actually screw up your Spacewalk system :-(  I think the correct path would be to sync 6.3 first, then 6.4, etc...

Feel free to attempt it though.

I'm attempting to build out a FOSS (emphasis on FREE ;-) replica of my RHEL work environment at home.  Seems simple enough.  One significant difference is how Satellite and Spacewalk differ in their initial sync.

My goal is to use spacewalk-clone-by-date to show how you can work with "rolling" channel release dates: clone the channel up to a chosen date, then patch all of your hosts "current" to that date.  The problem is that I built my Spacewalk server in November of 2013, and the only release it grabbed was 6.4.  So any arbitrary date I select simply falls back to 6.4, because that is all I have.
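As a sketch of that workflow (the admin credentials and the clone channel label here are hypothetical placeholders; --channels takes the source channel followed by the destination label):

```shell
#!/bin/sh
# Sketch: clone the base channel as it existed on a chosen date.
# DRY_RUN=echo only prints the command; drop it to actually clone.
# "admin"/"REDACTED" and the clone label are placeholders.
DRY_RUN=echo
CLONE_DATE=2012-12-01   # any date between the 6.3 and 6.4 release dates

$DRY_RUN spacewalk-clone-by-date \
    --username=admin --password=REDACTED \
    --channels=centos6-x86_64 clone-centos6-x86_64 \
    --to_date="$CLONE_DATE"
```

Of course, this only produces a meaningfully different clone if the source channel actually contains packages older than the chosen date, which is the whole point of the vault sync below.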

So - I don't want to get terribly in-depth, but I concluded I could finish my testing if I had two minor-release versions: 6.3 and 6.4.

My Spacewalk repo for "base" or "centos6-x86_64" points to http://mirrorlist.centos.org/?release=6&arch=x86_64&repo=os (as it should).

CentOS 6.3 released 2012-07-09
CentOS 6.4 released 2013-03-09

So - to populate my Spacewalk repo with the previous version, I ran:
spacewalk-repo-sync --type=yum --url='http://vault.centos.org/6.3/os/x86_64/' --channel='centos6-x86_64'

spacewalk-repo-sync --type=yum --url='http://vault.centos.org/6.3/extras/x86_64/' --channel='centos6-x86_64-extras'

spacewalk-repo-sync --type=yum --url='http://vault.centos.org/6.3/updates/x86_64/' --channel='centos6-x86_64-updates'
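Since the three syncs differ only in the repo path and the channel suffix, the same thing can be written as a small loop (a sketch; DRY_RUN=echo just prints each command, drop it to really sync, and adjust RELEASE to pull a different point release from the vault):

```shell
#!/bin/sh
# Sketch: sync the os, extras, and updates repos of a given CentOS
# point release from vault.centos.org into the matching channels.
# DRY_RUN=echo only prints the commands; drop it to actually sync.
RELEASE=6.3
ARCH=x86_64
DRY_RUN=echo

for repo in os extras updates; do
    case $repo in
        os) channel="centos6-$ARCH" ;;        # base channel has no suffix
        *)  channel="centos6-$ARCH-$repo" ;;  # -extras / -updates
    esac
    url="http://vault.centos.org/$RELEASE/$repo/$ARCH/"
    $DRY_RUN spacewalk-repo-sync --type=yum --url="$url" --channel="$channel"
done
```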

