Category Archives: Linux

Some Linux topics, hacks, fixes, bugs and so on – everything that seems important to me.

Dovecot: convert mdbox to mbox format

Dovecot comes with its own mailbox formats (sdbox/mdbox) that provide some benefits with regard to compression and performance, but that cannot simply be read with a plain text editor.

It seems there is very little information on how to convert the sdbox/mdbox format to mbox (at least nothing copy-and-paste ready if you aren’t a Dovecot admin and have no clue ;))

So here is one possible way it can be achieved:

Instead of adapting the Dovecot configuration and running a Dovecot service for dsync to use, we can also pass configuration parameters directly to dsync (by default dsync reads the Dovecot configuration file at /etc/dovecot/dovecot.conf).
With that trick there is no need to configure and start a Dovecot service.

An example command to convert a mailbox in mdbox format, located in the user’s home directory under mail, to mbox format would look like:

dsync -o "mail_location=mdbox:~/mail" backup mbox:~/dstest

Converting to other formats like Maildir might require additional configuration parameters within the /etc/dovecot/dovecot.conf file, such as namespaces or similar.
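For example, a rough sketch of a Maildir conversion along the same lines (the target path ~/Maildir-converted is just my example, and extra namespace settings may still be needed for your setup):

dsync -o "mail_location=mdbox:~/mail" backup maildir:~/Maildir-converted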

IMPORTANT:
Override options specified via “-o” must be passed directly after the dsync command and before the “backup” keyword.


For converting a mailbox the “backup” command is recommended, as it performs a one-way sync.

Kiwix – Make Wikipedia and other websites available offline

It can be handy to have a website available offline for situations where no internet connection is available.

With Kiwix and the ZIM package format it’s quite easy to do so. It can easily be run on a Raspberry Pi and made accessible on the local network.

To automate updates of ZIM packages I wrote some little scripts, which are available in the following GitHub repo: https://github.com/fawcs-at/zim-downloader

Information on how to use the scripts can be found in the readme in the Git repo.
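A quick way to get the scripts onto the machine is a standard clone (the target directory is up to you):

git clone https://github.com/fawcs-at/zim-downloader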

To automate the process of updating the ZIM packages once a month, the “updateZim.sh” script should be added as a cron job to your crontab:

e.g.:

#cat /etc/crontab
45 2    1 * *  <username> /<path_to_script>/updateZim.sh

This will start an update on every 1st day of the month at a quarter to 3 in the morning.
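To actually serve the downloaded ZIM files on the local network, kiwix-serve can be used; a minimal sketch (port and ZIM path are my assumptions, adjust to your setup):

kiwix-serve --port=8080 /<path_to_zim_files>/*.zim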

(35) error:141A318A:SSL routines:tls_process_ske_dhe:dh key too small

When using Zabbix on a CentOS 8/RHEL 8 machine, the following error occurred while trying to monitor an HTTPS website via the built-in web scenarios:

(35) error:141A318A:SSL routines:tls_process_ske_dhe:dh key too small 

The error itself also shows up when trying to use curl to connect to the website:

$ curl -D - https://<some-legacy-website> -k
curl: (35) error:141A318A:SSL routines:tls_process_ske_dhe:dh key too small

That error occurs if the server uses an older cipher suite that’s considered unsafe by the default crypto policy used in CentOS 8/RHEL 8.

https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/8/html/security_hardening/using-the-system-wide-cryptographic-policies_security-hardening

To work around the problem, the legacy cipher suites must be enabled with:

# update-crypto-policies --set LEGACY

Although a restart is recommended after issuing the command, for me it also worked without rebooting.
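To check which policy is currently active, and to switch back once the legacy website is no longer needed, the same tool can be used (standard update-crypto-policies sub-commands on RHEL 8, shown here as a reminder rather than a tested transcript):

# update-crypto-policies --show
# update-crypto-policies --set DEFAULT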

Bash – Monitor directory size for change

A simple bash script to easily monitor whether a directory has grown or shrunk in size:

# initialize oldSize so the first comparison does not fail
oldSize=0; while true; do result=$(du -s * | grep -E "bitcoin-0.21.1$"); echo -e "\e[95m$result\e[0m"; curSize=$(echo "$result" | cut -d" " -f1); if [ "$curSize" -lt "$oldSize" ]; then echo -e "\e[92mShrunk: $curSize\e[0m"; else echo -e "\e[91mGrown: $curSize\e[0m"; fi; oldSize=$curSize; sleep 5; done

The script needs to be executed in the parent directory of the monitored directory, and the directory name must be adapted: replace bitcoin-0.21.1$ with whatever you want to grep for.
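A slightly more reusable variant of the same idea, as a small script that takes the directory name as its first argument (the script name watchDirSize.sh is made up, the logic is the same as in the one-liner above):

#!/bin/bash
# usage: ./watchDirSize.sh <directory_name>   (run it from the parent directory)
dir="$1"
oldSize=0
while true; do
    # du -s prints "<size>\t<dir>", the first tab-separated field is the size
    curSize=$(du -s "$dir" | cut -f1)
    if [ "$curSize" -lt "$oldSize" ]; then
        echo -e "\e[92mShrunk: $curSize\e[0m"
    else
        echo -e "\e[91mGrown: $curSize\e[0m"
    fi
    oldSize=$curSize
    sleep 5
done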

Batch import beats dashboards

cd /usr/share/
# loop over all installed beats (filebeat, metricbeat, ...) located in /usr/share
for BEAT in $(ls | grep -e "beat$"); do
  echo -e "\e[92mBEAT: $BEAT\e[0m"
  # load the ingest pipelines and the Kibana dashboards using the beat's config from /etc
  ./$BEAT/bin/$BEAT setup --pipelines -c /etc/$BEAT/$BEAT.yml -path.home /usr/share/$BEAT/
  ./$BEAT/bin/$BEAT setup --dashboards -c /etc/$BEAT/$BEAT.yml -path.home /usr/share/$BEAT/
done
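For a single beat, the equivalent manual call would look roughly like this (filebeat as the example):

/usr/share/filebeat/bin/filebeat setup --dashboards -c /etc/filebeat/filebeat.yml -path.home /usr/share/filebeat/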

Zabbix SELinux policy generation

Commands to query the audit log for Zabbix-related entries and to create and import a compiled SELinux policy module for Zabbix.

This could be adapted to generate policies for any other service.

The suggestion is to set SELinux to permissive (setenforce 0), execute the action, and afterwards create the policy based on the logged events. If the policy does not work on the first try after re-enabling SELinux, it could be that a call was blocked (which is also logged in the audit log) that was not blocked while SELinux was in permissive mode. In that case it can help to create a second human-readable policy (.te file), compare it against the first version, and merge the two.

filename=zabbix-server
# generate a human-readable type enforcement (.te) module from the Zabbix-related audit log entries
cat /var/log/audit/audit.log | grep zabbix | audit2allow -m $filename >> $filename.te
# compile the .te file into a policy module
checkmodule -M -m -o $filename.mod $filename.te
# package the module into a .pp policy package
semodule_package -o $filename.pp -m $filename.mod
# install/load the policy package
semodule -i $filename.pp
 
 
#restorecon -R -v /run/zabbix/zabbix_server_alerter.sock    # suggested by the policy generator
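A rough sketch of the iterate-and-compare step mentioned above, i.e. generating a second version of the policy after switching back to enforcing mode and re-testing (the -v2 filename is my own naming):

setenforce 1
# re-run the failing Zabbix action, then build a second .te from the new audit entries
cat /var/log/audit/audit.log | grep zabbix | audit2allow -m zabbix-server > zabbix-server-v2.te
# compare both versions and merge any missing rules into the .te before recompiling
diff zabbix-server.te zabbix-server-v2.te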

Ansible handlers (within roles) – run multiple tasks

Sometimes we want to run multiple tasks after a configuration file has changed, instead of just one.

My specific use case is a role that configures an SSL certificate and additional SSL settings for an Apache webserver, which can run either standalone on a server or as a Pacemaker resource.

If Apache runs directly on the server, it’s quite simple and a handler is sufficient to restart the Apache service after the role has run. If Apache runs as a Pacemaker resource, the resource should be restarted instead of the service itself, to make sure Pacemaker does not get confused. It is therefore necessary to first check whether the service runs as a Pacemaker resource and then execute the corresponding task, so a single task within the handler is not sufficient.
Using a block statement in the handler also does not work; it fails with: ERROR! The requested handler 'Apply Apache Config' was not found in either the main handlers list nor in the listening handlers list

My approach to tackle this issue was to “misuse” the handler to only set a variable if something changed.
At the end of the role I check if the variable is true and, if so, include and execute my “handler block”.

defaults/main.yml

common_linux_zabbix_server_web_ssl_path_private_apply_apache_config: no

handlers/main.yml

---
# handlers file for common_linux_zabbix_server_web_certificate

#set a fact which is checked at the end of the role-tasks
- name: "Apply Apache Config"
  set_fact:
    common_linux_zabbix_server_web_ssl_path_private_apply_apache_config: yes

tasks/main.yml

---
# ... all your other role tasks run before the following tasks ...
# any task that changes the configuration notifies the "Apply Apache Config" handler

# workaround to run multiple tasks within a handler:
# flush the handlers so the fact gets set if any of the tasks above changed something
- meta: flush_handlers

- name: "Run multiple tasks as a handler"
  include_tasks: ./restart_apache_resource_or_service.yml
  when: common_linux_zabbix_server_web_ssl_path_private_apply_apache_config

tasks/restart_apache_resource_or_service.yml


---
- name: "Check if cluster resource exists"
  shell: "pcs resource | grep zbx_srv_httpd"
  ignore_errors: yes
  register: check

- name: "Restart cluster resource"
  shell: "pcs resource restart zbx_srv_httpd"
  register: resource_restart
  when: check.rc == 0

# only evaluate the restart result if the restart task actually ran
- fail:
    msg: "An error occurred when restarting the cluster resource!"
  when: check.rc == 0 and resource_restart.rc != 0

# just restart the httpd service if no cluster resource was found
- name: "Restart httpd service"
  service:
    name: "httpd"
    state: restarted
  when: check.rc != 0

Get CVE information from NIST NVD and RHEL

Just two little scripts that come in handy if you want to download all the CVE info in JSON format for offline use.

Download NIST NVD CVEs in JSON:

#!/bin/bash
urls=$(curl https://nvd.nist.gov/vuln/data-feeds#JSON_FEED | grep 'https://' | grep -i json.gz | sed 's/.*href=//g' | cut -d\' -f2)

mkdir -p ./nistNvdJson
cd nistNvdJson
for l in $urls;
do
wget $l
done
gunzip *

 

Get CVE info for RHEL:

#!/bin/bash

loopVar=1
dataDir="rhelCveData"
mkdir $dataDir -p
echo "getting data:"
T="$(date +%s)"
# page through the Red Hat security data API until an empty result ("[]") is returned
while [[ $loopVar -ne 0 ]];
do
        echo -n "-$loopVar- "
        data=$(curl -s https://access.redhat.com/labs/securitydataapi/cve.json?page=$loopVar)
        if [[ "$data" == "[]" ]]; then
                loopVar=0
        else
                # strip the surrounding brackets of each page and concatenate the entries
                toFile=$toFile${data:1:-1}", "
                let loopVar=loopVar+1
        fi
done
T="$(($(date +%s)-T))"
# write everything as one big JSON array and drop any lines that only contain an empty array
echo "[${toFile::-2}]" >> "$dataDir/rhelCve.json"
sed -i 's/^\[\]$//g' "$dataDir/rhelCve.json"
printf "Got data in: %02dd:%02dh:%02dm:%02ds\n" "$((T/86400))" "$((T/3600%24))" "$((T/60%60))" "$((T%60))"

 

Additional information:
If you query the NIST NVD data and search for RHEL CPEs you won’t get a lot of hits, as only a small percentage of the CVEs that affect Red Hat software have the correct CPE attached. However, NIST NVD is still nice to have, because the Red Hat CVEs only list the total CVSS score and no detailed vulnerability metrics are included.
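Once the feeds are downloaded they can be queried offline, e.g. with jq; a small sketch assuming the NVD JSON 1.1 feed layout, with an example CVE ID and filename of my choosing:

jq '.CVE_Items[] | select(.cve.CVE_data_meta.ID == "CVE-2021-44228")' nvdcve-1.1-2021.json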

vCenter alarm Polling (v2) – Cross platform version (Windows & Appliance)

As it has been some time since my last post about getting vCenter alarms into Zabbix, and VMware has also evolved, I followed a new approach: my initial post only supports Windows-based vCenters and the solution was not as stable as I wished, so the new approach is to query all alarms from a vCenter via its SDK.
Initially all alarms are discovered and created in Zabbix, and in a second step the values for the discovered alarms are polled.
Currently the script uses the data center object of the vCenter to discover alarms, so it can’t be used on a standalone ESXi server. However, if the code is changed to use whatever object is needed to get the alarms directly from the ESXi server, it should also be possible to get alarms from a server without a vCenter (I didn’t implement that so far as there wasn’t the need/time).

vCenter alarms – SDK (tested with ESXi 6.0+ and Zabbix 3.0 on RHEL 7)

To install the vCenter alarm monitoring, the attached zip needs to be downloaded and the VMware Perl SDK must be installed on the Zabbix server.
The template needs to be imported into Zabbix, and the vCenter username and password need to be set in the username/password macros of the template.

The other two files (vcenterAlarms.pl & vcenterAlarms.wrapper) need to be extracted to the externalscripts folder of the Zabbix server. The wrapper is just a shell script that is executed by a Zabbix item to call the Perl script and send the data to Zabbix via zabbix_sender. As the VMware API is quite slow, the wrapper also re-launches itself with nohup, because otherwise the timeout defined in the Zabbix server configuration would cause the script to be killed. In my setup it always took longer than 30 seconds until all data was gathered, so the Zabbix server would kill the script in the middle of its execution and no data would be sent to Zabbix; that’s why I added this workaround. Furthermore the wrapper only starts if fewer than eleven vcenterAlarms.wrapper processes are already running, to ensure that Zabbix does not spawn hundreds of nohup processes.
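The wrapper itself is in the attached zip, but the core idea described above boils down to something like the following sketch (process limit and paths are illustrative, not the original script):

#!/bin/bash
# only continue if fewer than 11 wrapper processes are already running
if [ "$(pgrep -fc vcenterAlarms.wrapper)" -lt 11 ]; then
    # detach from the calling Zabbix item so the server-side timeout does not kill the poll
    nohup /usr/lib/zabbix/externalscripts/vcenterAlarms.pl > /dev/null 2>&1 &
fi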


Apache – force autocomplete=off for password fields

If 3rd-party software is installed, it is quite likely that the autocomplete attribute for password fields is not set to off. Editing such settings directly in the source code is possible most of the time, but it’s not the nicest way, and you also run into the problem that everything could be gone again after an update of the software.

A nice workaround is to use Apache’s mod_substitute to accomplish that.

Disable autocomplete for password fields:

<Location "/">
    AddOutputFilterByType SUBSTITUTE text/html
    Substitute "s|<input type=password|<input type=password autocomplete=off|i"
</Location>
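Whether mod_substitute is actually loaded can be checked before reloading Apache; a quick check on RHEL-style systems (adjust the binary name for your distribution):

httpd -M | grep -i substitute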