Sync Red Hat repositories to a local deployment server [RHEL6/7]

Red Hat offers a great tool with Red Hat Satellite, especially with Satellite 6. But the price is quite high, and if you need to provide packages and updates to multiple separated environments, it's out of the picture.
(For those who are interested: Katello is the upstream project of Red Hat's Satellite 6.)

So the question now is how to get the Red Hat repos to your deployment server.
First, a subscription is needed, and only channels/repos included in this subscription can be synchronized.
If the channels/repos are available on the system we want to use for syncing, we can use reposync to pull all the repositories from the internet.
The following example shows a list of repositories which are available on my machine:

[pastacode lang="bash" message="Available channels" highlight="" provider="manual"]

[root@server~]# yum repolist
Loaded plugins: downloadonly, refresh-packagekit, rhnplugin, security
This system is receiving updates from RHN Classic or RHN Satellite.
repo id                                    repo name                                                          status
epel                                       Extra Packages for Enterprise Linux 6 - x86_64                     11,588
rhel-x86_64-server-6                       Red Hat Enterprise Linux Server (v. 6 for 64-bit x86_64)           14,872
rhel-x86_64-server-6-rhscl-1               Red Hat Software Collections 1 (RHEL 6 Server x86_64)               3,003
rhel-x86_64-server-dts-6                   Red Hat Developer Toolset (for RHEL 6 Server for x86_64)               84
rhel-x86_64-server-dts2-6                  Red Hat Developer Toolset 2 (for RHEL 6 Server for x86_64)            469
rhel-x86_64-server-extras-6                RHEL Server Extras (v. 6 for 64-bit x86_64)                            17
rhel-x86_64-server-ha-6                    RHEL Server High Availability (v. 6 for 64-bit x86_64)                423
rhel-x86_64-server-optional-6              RHEL Server Optional (v. 6 64-bit x86_64)                           8,313
rhel-x86_64-server-rh-common-6             Red Hat Common (for RHEL 6 Server x86_64)                              59
rhel-x86_64-server-supplementary-6         RHEL Server Supplementary (v. 6 64-bit x86_64)                        514
repolist: 39,342
[root@server~]#

[/pastacode]

 

The easiest way to sync a repo is to just call the following command:

[pastacode lang="bash" message="Sync repos" highlight="" provider="manual"]

reposync -p /path/to/storage/dir/RHserv6-general_64/ --repoid=rhel-x86_64-server-6 -l

[/pastacode]

It is important to add the -l option to the command: it enables the yum plugins, and those plugins have to be enabled to be able to sync directly from Red Hat!

But I think most people don't want to run the synchronization by hand, and some metadata would also be nice – so you can use/adapt the following script and add it as a cronjob.

[pastacode lang="bash" message="automated sync script" highlight="" provider="manual"]

#!/bin/bash
############################################################################
#
#       @Author: fweixelb@fawcs.info
#
#       @Description:
#               Script synchronizes the RH-repos
#
###########################################################################


datetime=$(date +"%F_%T")

#General
reposync -p /path/to/storage/dir/RHserv6-general_64/ --repoid=rhel-x86_64-server-6 -l --download-metadata > /var/log/reposync/general_$datetime.log
compsfile="$(ls -d /path/to/storage/dir/RHserv6-general_64/* | grep rhel)/comps.xml"
createrepo /path/to/storage/dir/RHserv6-general_64/ -g "$compsfile" --update

#Common
reposync -p /path/to/storage/dir/RHserv6-common_64/ --repoid=rhel-x86_64-server-rh-common-6 -l --download-metadata > /var/log/reposync/common_$datetime.log
compsfile="$(ls -d /path/to/storage/dir/RHserv6-common_64/* | grep rhel)/comps.xml"
createrepo /path/to/storage/dir/RHserv6-common_64/ -g "$compsfile" --update

#Optional
reposync -p /path/to/storage/dir/RHserv6-optional_64/ --repoid=rhel-x86_64-server-optional-6 -l --download-metadata > /var/log/reposync/optional_$datetime.log
compsfile="$(ls -d /path/to/storage/dir/RHserv6-optional_64/* | grep rhel)/comps.xml"
createrepo /path/to/storage/dir/RHserv6-optional_64/ -g "$compsfile" --update

#RHEL Software Collections
reposync -p /path/to/storage/dir/RHserv6-rhscl_64/ --repoid=rhel-x86_64-server-6-rhscl-1 -l --download-metadata > /var/log/reposync/scl_$datetime.log
compsfile="$(ls -d /path/to/storage/dir/RHserv6-rhscl_64/* | grep rhel)/comps.xml"
createrepo /path/to/storage/dir/RHserv6-rhscl_64/ -g "$compsfile" --update

[/pastacode]

So, what does this script do?
Because it's intended to be used as a cronjob, it logs the sync process to /var/log/reposync. I don't do any log rotation because the files are quite small and the cronjob runs only once a week – so the logfiles don't use too much space and I clean them up by hand. But if it's run more often, you should think about log rotation.
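As a sketch, a weekly crontab entry for the script could look like this (the script path /usr/local/bin/sync-rh-repos.sh is a hypothetical example – adapt path and schedule to your environment):

[pastacode lang="bash" message="weekly sync cronjob" highlight="" provider="manual"]

# /etc/crontab entry: run the sync script every Sunday at 03:00 (hypothetical path)
0 3 * * 0 root /usr/local/bin/sync-rh-repos.sh

[/pastacode]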

Reposync itself creates a subfolder in the specified directory which is named like the repoid and contains the metadata (comps & updateinfo, plus a directory called getPackage which contains the rpm packages). The comps file contains all the group infos, and updateinfo is needed if you want to use yum just for security updates.
For this, the yum plugin "yum-plugin-security" is a good choice.
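For example, with yum-plugin-security installed on a client, yum can be limited to security errata:

[pastacode lang="bash" message="security updates only" highlight="" provider="manual"]

yum install yum-plugin-security
# list only pending security updates
yum --security check-update
# apply only security updates
yum --security update

[/pastacode]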

 

UPDATE:
With the "--download-metadata" parameter in the reposync command, the updateinfo.xml files are also downloaded (as far as they exist for the repo). Normally the file is downloaded in the following format: <HASH>-updateinfo.xml.gz

To use the updateinfo file in your synchronized repo, the file has to be gunzipped and renamed to updateinfo.xml – so the hash at the beginning of the filename needs to be stripped away. Otherwise it is likely that the updates are not recognized correctly! So convert your downloaded updateinfo metadata file to updateinfo.xml and run modifyrepo to update your repository with the security update information from the updateinfo.xml.
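A minimal sketch of that conversion, assuming the metadata was downloaded into the repo subfolder created by reposync (the actual hash prefix will differ):

[pastacode lang="bash" message="prepare updateinfo.xml" highlight="" provider="manual"]

cd /path/to/storage/dir/RHserv6-general_64/rhel-x86_64-server-6/
# unpack and strip the hash prefix from the filename
# (assumes a single updateinfo archive in this directory)
gunzip *-updateinfo.xml.gz
mv *-updateinfo.xml updateinfo.xml

[/pastacode]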

[pastacode lang="bash" message="update the metadata of the local repo" highlight="" provider="manual"]

modifyrepo updateinfo.xml /path/to/storage/dir/RHserv6-general_64/repodata

[/pastacode]

 

Cisco SFP – useful DOM

For some of its SFP modules, Cisco implemented DOM (Digital Optical Monitoring) capabilities. This means that you can debug fibre links to a certain degree with those DOM-enabled fibre modules. For example, the GLC-LH-SMD is such a module.

With DOM-capable modules the command

[pastacode lang="bash" message="Cisco – Show transceiver status" highlight="" provider="manual"]

sh interface transceiver

[/pastacode]
can be executed on the switch, and the RX & TX power on the module's links is displayed. It's a cool feature to quickly debug a fibre link and check the attenuation on a link.

Further info and an example can be found at:
https://supportforums.cisco.com/document/75181/digital-optical-monitoring-dom

Getting vCenter alarms to Zabbix

VMware is a really nice product, but there is one little problem: it's really hard to monitor VMware products with SNMP or any other "old school" technologies.
The actual problem is to get an alarm into Zabbix if an error occurs on the vCenter, so that Zabbix can be used as umbrella monitoring for the whole environment.
All this could also be done with SNMP traps, which would be a lot easier – at first appearance. But Zabbix is … how do I say … not the best tool to monitor events. It's designed to monitor statuses.

So it's designed to continuously monitor a specific value – if this value rises over a defined alert threshold, an alert is displayed, and when it falls below the threshold again, the problem disappears.
With events there is the problem that we get only one single value which describes the error. So firstly we have to analyze the received value/message, and secondly – how do we know when the problem is okay again? And that's one of the design flaws of Zabbix – you do not have any possibility to reset such events to "OK" once such an event has happened.
So we need to monitor the vCenter alarms, because these alarms are raised when a problem occurs and disappear when the problem changes to OK again.

So how do we get all the vCenter alarms into Zabbix? I don't want to copy/create all the alarms by hand, because it's a dynamic environment and alarms could be added or deleted, so the system has to "import" the alarms on the fly from the vCenter.
Since Zabbix 2.0 there are discovery rules, which are kind of helpful to import dynamic values. So I'm using a discovery to periodically pull the data from the vCenter and create an item for every alarm. All the alarms in the vCenter need to be configured to run a custom alarm action when an alarm becomes active, which sends the current status to Zabbix, and voilà – we are done.
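As a rough sketch, such a custom alarm action could push the status to the discovered item with zabbix_sender (server, host and item key below are hypothetical – they have to match your discovery rule):

[pastacode lang="bash" message="push alarm status to Zabbix" highlight="" provider="manual"]

# send value 1 (alarm active) for the alarm's discovered item
zabbix_sender -z zabbix.example.com -s "vcenter01" -k "vcenter.alarm[Datastore usage on disk]" -o 1

[/pastacode]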

Continue reading Getting vCenter alarms to Zabbix

Get RAID information from a Fujitsu RX200S8/RX300S8 via BMC

With the S8 generation of Fujitsu's RX300/RX200 servers, which use the iRMC S4, Fujitsu implemented the ability to poll SNMP data from the BMC (iRMC). To enable polling, the BMC has to be flashed with an up-to-date firmware version and SNMP polling has to be enabled in the iRMC web GUI. I've tested it with 7.69F from Dec. 2014.
The first shipments of the RX300/RX200 servers came with an older firmware version which did not implement the needed Fujitsu MIBs to query the HW status (see http://manuals.ts.fujitsu.com/file/11470/irmc-s4-ug-en.pdf, page 18, for details about the supported MIBs).
The problem is that there is no possibility to query the status of the RAID controller and its disks via SNMP (or I haven't found it yet), but it is displayed in the iRMC web GUI. So I wrote a script which extracts the useful information (controller status, disk status + details, and logical drive status) from the web interface.

At the moment it's just an alpha release, but I'll modify the script to be usable by Zabbix for auto discovery and push all the data into Zabbix:

Source:

[pastacode lang="php" message="web-iRMCS RAID-query" highlight="" provider="manual"]

#!/usr/bin/php
<?php
/****************************************
################################################################
#   DESCRIPTION: Script to query RAID information from the Fujitsu iRMC S4 web interface.
#   COPYRIGHT ©: 2015 by fawcs, GPL freeware
#        AUTHOR: fawcs, weixeflo@fawcs.info
#       LICENSE: GPL freeware
# CREATION DATE: 2015-Mar-19
################################################################
*****************************************/
$username="admin";
$password="admin";
$host="127.0.0.1";

$siteIDs=array('controller'=>87,'disks'=>88,'logicalVolumes'=>89);
//get Raid-Controller-Data
//$returnController=getWebsiteContent($host."/".$siteIDs['controller'],$username,$password,true);
$returnDisks=getWebsiteContent($host."/".$siteIDs['disks'],$username,$password,true);
//$returnVolumes=getWebsiteContent($host."/".$siteIDs['logicalVolumes'],$username,$password,true);
//$id['controller']=parseWebSiteToIDs($returnController);
$id['disks']=parseWebSiteToIDs($returnDisks);
//$id['logicalVolumes']=parseWebSiteToIDs($returnVolumes);
//print_r($id);


$ctrl=$id['disks'][0]['id'][1];
foreach($id['disks'][0]['data'] AS $disk)
{
	//print_r($disk);
	$diskWebContent=getWebsiteContent($host."/pdrive?ctrl=".$ctrl."&pd=".$disk['detailID'][1],$username,$password,true);
	//print_r($diskWebContent);
	$diskTableArray=parseWebSiteToIDs($diskWebContent);
	print_r($diskTableArray);	
}
/**/

/**
retrieves an html-site
	
	@return: String -> whole HTML site
**/
function getWebsiteContent($url,$username,$password,$measuretime=false)
{
	if($measuretime===true) 
	{
		$timer=microtime(true);
	}
	$process = curl_init($url);
	curl_setopt($process, CURLOPT_USERPWD, $username . ":" . $password);  
	curl_setopt($process, CURLOPT_HTTPHEADER, array('Content-Type: application/xml'));
	curl_setopt($process, CURLOPT_HEADER, 1);
	curl_setopt($process, CURLOPT_TIMEOUT, 30);
	curl_setopt($process, CURLOPT_POST, 1);
	//curl_setopt($process, CURLOPT_POSTFIELDS, $payloadName);
	curl_setopt($process, CURLOPT_RETURNTRANSFER, TRUE);
	$return = curl_exec($process);
	curl_close($process);
	if($measuretime===true) 
	{
		echo "Content for ".$url." retrieved in ".round(microtime(true)-$timer,3)."s".chr(10);
	}
	return $return;
}

/**
retrieves the table-content of an IRMC Site / parses the listed sites for interesting data
possible sites are:
	http://$host/87 -> Controller Infos
	http://$host/88 -> Physical Disks
	http://$host/89 -> Logical Drives
	
	@return -> 	array which contains the ID of the controller, disks, LVs
				the table headings
				the table data
**/


function parseWebSiteToIDs($return)
{
	$ret=array(); 
	$cutFrom=strpos($return,"<!-- Menu end-->");
	$cutTo=strpos($return,'<div id="bottom">');
	$return=str_replace('</td>','</td>'.chr(10),substr($return,$cutFrom,$cutTo-$cutFrom));
	$return=str_replace('</th>','</th>'.chr(10),$return);
	$Loop=$return;
	
	//echo $table;
	$count=0;
	//go through all tables which are found in the passed web-content string & extract the tables
	while(strpos($Loop,'<table')!==false)
	{
		
		//$table[]=substr($return, strpos($return,"<table"),strpos($return,"</table>")-strpos($return,"<table>"));
		$Loop=substr($Loop, strpos($Loop,'<table'),strlen($Loop)-strpos($Loop,'<table'));
		//get the table id
		$id=substr($Loop,strpos($Loop,"summary=\"")+9,strpos($Loop,"\">")-(strpos($Loop,"summary=\"")+9));
		$ret[$count]['id']=explode("_",$id);
		$Loop=substr($Loop, strpos($Loop,'<tr'),strlen($Loop)-strpos($Loop,'<tr'));
		//save just the html-table to the var & add ***###*** as a delimiter after every table row
		$currentTable=str_replace("</tr>","</tr>***###***",substr($Loop,0,strpos($Loop,"</table>")));
		$currentTable=substr($currentTable,0,strlen($currentTable)-9);
		//explode all table rows by the added delimiter
		$rows=explode("***###***",$currentTable);
		$count2=0;
		//iterate through the table rows and extract useful information
		foreach($rows AS $row)
		{
			unset($detailID);
			//check if submit button for details is listed in the table - if yes - get the submit-value for correct id -> is appended to the URL when querying the details page
			if(strpos($row,'<input class="submit" type="submit" value="Details"')!==false)
			{
				$detailID=trim(substr($row,strpos($row,'<input class="submit" type="submit" value="Details"')+strlen('<input class="submit" type="submit" value="Details"'),strpos(strtolower($row),'onclick=')-(strpos($row,'<input class="submit" type="submit" value="Details"')+strlen('<input class="submit" type="submit" value="Details"'))));
				//eg: extracted name="pd_10" from row
				$detailID=str_replace("\"","",$detailID);
				$detailID=explode("=",$detailID);
				$detailID=$detailID[1];
			}
			$row=trim(strip_tags($row));
			//if $count2==0 -> first table row, which stores only the headings/descriptions -> saved in a separate node in the array
			if($count2==0)
			{
				$ret[$count]["description"]=explode(chr(10),$row);
			} else {
			//all the other rows contain data -> also store them in the array
				$rowArray=explode(chr(10),$row);
				$ret[$count]["data"][]=$rowArray;
				if(isset($detailID))
				{
					$ret[$count]["data"][$rowArray[0]]["detailID"]=explode("_",$detailID);
				}
			}
			$count2++;
		}	
		$count++;
	}
	return $ret;
}



?>

[/pastacode]
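To try it, adapt $username, $password and $host at the top of the script and run it (the filename is just how I saved it):

[pastacode lang="bash" message="run the script" highlight="" provider="manual"]

chmod +x getRaidFromIrmc.php
./getRaidFromIrmc.php

[/pastacode]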

UNIX/DOS-encoding – ^M

Ever had the problem that you tried running a script on a Linux machine and got an error message like the following one?
-bash: ./getRaidFromIrmc.php: /usr/bin/php^M: bad interpreter: No such file or directory

The ^M indicates that the file you are trying to run is DOS-encoded. This means that it's using CR+LF (CHAR 13 + CHAR 10) instead of just LF (CHAR 10) for a line break, and Linux does not like the extra carriage return. That often happens if you write a script or config file on a Windows machine and transfer it to a Linux machine.
If you try to run/parse the file on Linux – wrong encoding, and BAM.

The first time I ran into this problem was while deploying a RHEL machine with a faulty kickstart file. But if you know what the problem is, it's quite easy to fix: just run "dos2unix" over your file and everything should work again as expected.
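For example (the sed line is an alternative sketch for machines where dos2unix is not installed):

[pastacode lang="bash" message="convert DOS line endings" highlight="" provider="manual"]

dos2unix getRaidFromIrmc.php
# alternative without dos2unix: strip the trailing carriage returns
sed -i 's/\r$//' getRaidFromIrmc.php

[/pastacode]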

If you often have to modify Linux files on Windows machines, I'd recommend using Notepad++, because it has a feature to set the correct EOL conversion and lots of other cool and useful plugins, like the built-in FTP/SFTP plugin.

Windows performance counters and Zabbix

There are already quite a lot of blog posts out there which describe how to add performance counters to Zabbix. In fact, it's not that hard – the tricky thing is to have one template which gets performance counters from systems which have different languages installed.

In this case it's a good idea to use the index of a counter. The indexes of all performance counters can be obtained from the registry:
HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Perflib
In this path there should be a key named "009" (English; a German system would use "007") or similar, which contains all the key-value pairs.
The easiest way is to copy everything to an editor and look up the keys you need.

I tried to verify the counter IDs by using them with "typeperf", e.g. typeperf \234(_Total)\202, but most times the result was:
Error: No valid counters.
If I used the names instead of the keys, everything worked fine.

At the moment I still don't know the reason for this error, but if you try it with zabbix_get, everything works fine:
zabbix_get -s server-01 -k "perf_counter[\234(_Total)\202]"
0.000000

Further details about performance counters in Zabbix can be found at:
https://www.packtpub.com/books/content/monitoring-windows-zabbix-18

Unable to mount the WIM, so the update process cannot continue.

I got the following error after installing the MDT on my system and trying to update a deployment share which is located on my NAS.

On TechNet I read a tip about restarting the machine after installing the MS AIK, but that didn't fix my problem. After another 5 minutes of investigation I found out that wrong permissions could also be the reason for the error.

As I'm using the MDT on my private PC, which has a non-admin user as the default user, I retried it with admin rights and now it works.
So if you're also encountering this error – check your permissions.

Useful tools for PXE

This post introduces a collection of useful tools which can be used with PXE.

Continue reading Useful tools for PXE

PXE-Boot on a Vigor2130/on your local network

Today I wanted to configure my router to support PXE booting in my home network. For this, the following components are required:

  • DHCP server configured to distribute the boot server option
  • TFTP server which provides the PXE boot files

Continue reading PXE-Boot on a Vigor2130/on your local network

Slow SSH connection / SSH needs some time to establish connection

When connecting with PuTTY (or any other SSH client) it can happen that the initial connection to the target system needs some time (up to a lot of time) to be established.

The problem can occur if the target system tries to look up the DNS entry of the connecting system. If you have not configured DNS in your environment, or the system can't be found, it always takes some time until the connection is established.

To avoid this problem, the following line has to be changed/uncommented in the sshd_config of the target system:

#UseDNS yes

to

UseDNS no
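
A quick sketch of the whole change on a typical RHEL system (config path and service name may differ on other distributions):

[pastacode lang="bash" message="disable DNS lookups in sshd" highlight="" provider="manual"]

# set UseDNS to no (works whether the line was commented out or not)
sed -i 's/^#\?UseDNS.*/UseDNS no/' /etc/ssh/sshd_config
# restart sshd so the change takes effect
service sshd restart

[/pastacode]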