
Red Hat Satellite: Cleanup "Actions" history

I had run into a situation where my Satellite had several hundred thousand previously run scheduled jobs sitting in the history. In the web UI I could only delete them 25 at a time, and if I did "Select All" it would puke.

So, I created the following script:

#!/usr/bin/env python
import xmlrpclib
import time

SATELLITE_URL = "http://rhnsat01.company.com/rpc/api"
SATELLITE_LOGIN = "satadmin"
SATELLITE_PASSWORD = "notMyPassword"

client = xmlrpclib.Server(SATELLITE_URL, verbose=0)
delete = 1  # set to 0 to archive only and skip the deleteActions pass at the end

key = client.auth.login(SATELLITE_LOGIN, SATELLITE_PASSWORD)

###############################
# CHANGE THINGS AFTER THIS LINE
#

## Archive all Failed Actions (they are removed for good in the "Archived" pass below)
print("Archiving all Failed Actions")
failed_list = client.schedule.listFailedActions(key)
action_ids = []
for action in failed_list:
    action_ids.append(action['id'])

archive_result = client.schedule.archiveActions(key, action_ids)

## Archive all Completed Actions
## Generate an Array of size (max_num_ids) and purge it
x = "Archiving all Completed Actions"
print(x)
counter = 0
max_num_ids = 100
archived_list = client.schedule.listCompletedActions(key)
action_ids=[]
for action in archived_list:
    print action.get('id')
    action_ids.append(action['id'])
    counter = counter + 1
    if counter == max_num_ids:
        del_result=client.schedule.archiveActions(key,action_ids)
        time.sleep(.1)
        action_ids=[]
        counter = 0

## Delete all Archived Actions
## Generate an Array of size (max_num_ids) and purge it
x = "Deleting all Completed Actions"
print(x)
counter = 0
max_num_ids = 100
archived_list = client.schedule.listArchivedActions(key)
action_ids=[]
## Traverse Array in reverse [::-1]
#for action in archived_list[::-1]:
## Traverse Array in forward [:]
for action in archived_list:
    print action.get('id')
    action_ids.append(action['id'])
    counter = counter + 1
    if delete == 1:
        if counter == max_num_ids:
          del_result=client.schedule.deleteActions(key,action_ids)
          time.sleep(.1)
          action_ids=[]
          counter = 0

##
## CHANGE THINGS BEFORE THIS LINE
###############################
client.auth.logout(key)
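
One note: xmlrpclib only exists on Python 2. On a Python 3 box the same Satellite API calls are available through xmlrpc.client; here is a minimal sketch of just the archived-actions cleanup, assuming the same URL and credentials as above.

#!/usr/bin/env python3
# Minimal sketch of the delete pass on Python 3, where xmlrpclib has been
# renamed xmlrpc.client. Same Satellite URL/credentials as the script above.
import xmlrpc.client

SATELLITE_URL = "http://rhnsat01.company.com/rpc/api"
client = xmlrpc.client.ServerProxy(SATELLITE_URL, verbose=False)
key = client.auth.login("satadmin", "notMyPassword")

archived = client.schedule.listArchivedActions(key)
ids = [action['id'] for action in archived]
# Delete in batches of 100, same idea as the Python 2 script above
for i in range(0, len(ids), 100):
    client.schedule.deleteActions(key, ids[i:i + 100])

client.auth.logout(key)

Either way, the point of the batching is to keep each XML-RPC call small, so the server doesn't choke the way the web UI did on "Select All".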
