Red Hat offers a great tool with Red Hat Satellite Server, especially with Satellite 6. But the price is quite high, and if you need to provide packages and updates to multiple separated environments, it's out of the picture.
(For those who are interested: Katello is the upstream project of Red Hat's Satellite 6.)
So the question now is: how do you get the Red Hat repos to your deployment server?
First, a subscription is needed, and only channels/repos included in this subscription can be synchronized.
If the channels/repos we want to sync are available on our system, we can use reposync to pull all the repositories from the internet.
The following example shows a list of repositories which are available on my machine:
The easiest way to sync a repo is to just call the following command:
It is important to add the -l option to the command: it enables the yum plugins, and those plugins have to be enabled to be able to sync directly from Red Hat.
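A minimal invocation might look like this; the repo ID and target path are just examples, pick a repo ID from the list shown by `yum repolist`:

```
# -l loads the yum plugins (needed for Red Hat subscription auth),
# --download-metadata also fetches the comps and updateinfo files
reposync -l --download-metadata \
    --repoid=rhel-6-server-rpms \
    --download_path=/var/www/html/repos
```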
But I suppose most people don't want to run the synchronization by hand, and some metadata would also be nice – so you can use/adapt the following script and add it as a cronjob.
So, what does this script do?
Because it's intended to be used as a cronjob, it logs the sync process to /var/log/reposync. I don't do any log rotation because the files are quite small and the cronjob runs only once a week – so the logfiles don't use much space and I clean them up by hand. But if it runs more often, you should think about log rotation.
Reposync itself creates a subfolder in the specified directory which is named after the repo ID and contains the metadata (comps & updateinfo) plus a directory called getPackage which contains the RPM packages. The comps file contains all the group infos, and updateinfo is needed if you want to use yum just for security updates.
For this, the yum plugin "yum-plugin-security" is a good choice.
With the "--download-metadata" parameter in the reposync command, the updateinfo.xml files are also downloaded (as far as they exist for the repo). Normally the file is downloaded in the following format: <HASH>-updateinfo.xml.gz
To use the updateinfo file in your synchronized repo, the file has to be gunzipped and renamed to updateinfo.xml – so the hash at the beginning of the filename needs to be stripped away. Otherwise it is likely that the updates are not recognized correctly. So convert your downloaded updateinfo metadata file to updateinfo.xml and run modifyrepo to update your repository with the security update information from updateinfo.xml.
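Assuming the repo was synced to a directory like the one below (the path is hypothetical), the conversion can be sketched like this; `modifyrepo` ships with the createrepo package:

```
REPODIR=/var/www/html/repos/rhel-6-server-rpms   # hypothetical sync target
cd "$REPODIR"

# <HASH>-updateinfo.xml.gz -> updateinfo.xml (decompress + strip hash prefix)
gunzip -c ./*-updateinfo.xml.gz > updateinfo.xml

# inject the security metadata into the repo's repodata
modifyrepo updateinfo.xml "$REPODIR/repodata"
```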
For some of its SFP modules, Cisco implemented DOM (digital optical monitoring) capabilities. This means that you can debug fibre links to a certain degree with those DOM-enabled fibre modules. For example, the GLC-LH-SMD is such a module.
With DOM-capable modules, the command
can be executed on the switch, and the RX & TX power on the module's links are displayed. It's a cool feature to quickly debug a fibre link and check the attenuation on a link.
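On IOS switches, the command in question is typically the following (exact availability depends on platform and module):

```
show interfaces transceiver detail
```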
Further infos and an example can be found at:
VMware is a really nice product, but there is one little problem: it's really hard to monitor VMware products with SNMP or any other "old school" technologies.
The actual problem is to get an alarm in Zabbix if an error occurs on the vCenter. So Zabbix is used as an umbrella monitoring system for the whole environment.
All this could also be done with SNMP traps, which would be a lot easier – at first appearance. But Zabbix is … how do I say … not the best tool to monitor events. It's designed to monitor statuses.
So it's designed to continuously monitor a specific value – if this value rises over a defined alert threshold, an alert is displayed, and when it falls below the threshold, the problem disappears.
With events, there is the problem that we get only one single value which describes the error. So firstly, we have to analyze the received value/message, and secondly – how do we know when the problem is okay again? And that's one of the design flaws of Zabbix – you don't have any possibility to reset such events to "OK" once such an event has happened.
So we need to monitor the vCenter alarms, because these alerts are raised when a problem occurs and disappear when the problem changes back to OK.
So how do we get all the vCenter alarms into Zabbix? I don't want to copy/create all the alarms by hand, because it's a dynamic environment and alarms could be added or deleted, so the system has to "import" the alarms on the fly from the vCenter.
Since Zabbix 2.0 there are discovery rules, which are quite helpful for importing dynamic values. So I'm using a discovery rule to periodically pull the data from the vCenter and create an item for every alarm. All the alarms in the vCenter need to be configured to run a custom alarm action when an alarm becomes active, which sends the current status to Zabbix – and voilà, we are done.
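The custom alarm action can be as simple as a small script that pushes the alarm name and status via zabbix_sender. The Zabbix server name, host name, and item key below are assumptions for illustration; vCenter exposes the alarm details to alarm scripts via environment variables such as VMWARE_ALARM_NAME and VMWARE_ALARM_NEWSTATUS:

```
#!/bin/sh
# Sketch of a vCenter alarm action: push the alarm's current status
# into the item created by the discovery rule (key name is an assumption).
zabbix_sender -z zabbix.example.com \
    -s "vcenter-01" \
    -k "vmware.alarm[${VMWARE_ALARM_NAME}]" \
    -o "${VMWARE_ALARM_NEWSTATUS}"
```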
Continue reading Getting vCenter alarms to Zabbix
With the S8 generation of Fujitsu RX300/RX200 servers, which use the iRMC S4, Fujitsu implemented the ability to poll SNMP data from the BMC (iRMC). To enable this, the BMC has to be flashed with an up-to-date firmware version, and SNMP polling has to be enabled in the iRMC web GUI. I've tested it with 7.69F from Dec. 2014.
The first shipments of the RX300/RX200 servers came with an older firmware version which did not implement the needed Fujitsu MIBs to query the HW status (see http://manuals.ts.fujitsu.com/file/11470/irmc-s4-ug-en.pdf, page 18, for details about the supported MIBs).
The problem is that there is no possibility to query the status of the RAID controller and its disks via SNMP (or I haven't found it yet), but it's displayed in the iRMC web GUI. So I wrote a script which extracts the useful information (controller status, disk status + details, and logical drive status) from the web interface.
At the moment it's just an alpha release, but I'll modify the script to be used by Zabbix for an auto discovery and push all the data into Zabbix:
Ever had the problem that you tried running a script on a Linux machine and got an error message like the following one?
-bash: ./getRaidFromIrmc.php: /usr/bin/php^M: bad interpreter: No such file or directory
The ^M indicates that the file you are trying to run is DOS-encoded. This means that it's using CR+LF (CHAR 13 + CHAR 10) instead of just LF (CHAR 10) for a line break, and Linux does not like that kind of line break. That often happens if you write a script or config file on a Windows machine and transfer it to a Linux machine.
If you try to run/parse the file on Linux -> wrong encoding and BAM.
The first time I ran into this problem was while deploying a RHEL machine with a faulty kickstart file. But if you know what the problem is, it's quite easy to fix: just run "dos2unix" over your file and everything should work again as expected.
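A quick way to reproduce and fix the problem; if dos2unix is not installed, stripping the carriage returns with tr has the same effect:

```shell
# create a file with DOS (CR+LF) line endings to demonstrate the problem
printf '#!/bin/sh\r\necho hello\r\n' > script.sh

# remove the CR (CHAR 13) bytes; `dos2unix script.sh` does the same in place
tr -d '\r' < script.sh > script_fixed.sh
```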
If you often have to modify Linux files on Windows machines, I'd recommend using Notepad++, because it has a feature to set the correct EOL conversion and lots of other cool and useful plugins, like the built-in FTP/SFTP plugin.
There are already quite a lot of blog posts out there which describe how to add performance counters to Zabbix. In fact, it's not that hard – the tricky thing is to have one template which gets performance counters from systems which have different languages installed.
In this case it's a good idea to use the index of a counter. The indexes of all performance counters can be obtained from the registry:
HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Perflib
In this path there should be a key named "009" (which would be English; "007" is German) or similar, which contains all the key-value pairs.
The easiest way is to copy everything to an editor and look up the keys you need.
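On an English system, the list can also be dumped on the command line; the Counter value of the "009" key holds the index/name pairs:

```
reg query "HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Perflib\009" /v Counter
```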
I tried to verify the counter IDs by using them with "typeperf" – e.g.: typeperf \234(_Total)\202 – but most of the time the result was:
Error: No valid counters.
If I used the names instead of the indexes, everything worked fine.
At the moment I still don't know the reason for this error, but if you try it with zabbix_get, everything works fine:
zabbix_get -s server-01 -k "perf_counter[\234(_Total)\202]"
Further details about performance counters in Zabbix can be found at:
I got the following error after installing the MDT on my system and trying to update a deployment share which is located on my NAS.
On TechNet I read a tip about restarting the machine after installing the MS AIK, but that didn't fix my problem. After another five minutes of investigation, I found out that wrong permissions could also be the reason for the error.
As I'm using the MDT on my private PC, which has a non-admin user as the default user, I retried it with admin rights, and now it works.
So if you're also encountering this error – check your permissions.
This post introduces a collection of useful tools which can be used with PXE.
Continue reading Useful tools for PXE
Today I wanted to configure my router to support PXE booting in my home network. For this, the following components are required:
- A DHCP server configured to distribute the boot server option
- A TFTP server which provides the PXE boot files
Continue reading PXE-Boot on a Vigor2130/on your local network
When connecting with PuTTY (or any other SSH client), it can happen that the initial connection needs some time (up to a lot of time) to be established to the target system.
The problem can occur if the system is trying to look up the DNS entry of the connecting system. If you have not configured DNS in your environment, or the system can't be found, it always takes some time until the connection is established.
To avoid this problem, the following line has to be added/uncommented in the sshd config of the target system:
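The option in question is UseDNS, which controls the reverse DNS lookup of connecting clients:

```
# /etc/ssh/sshd_config
UseDNS no
```

Restart the sshd service afterwards so the change takes effect.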