Sysadmin (4)


Optimizing Ansible for high performance

So, I have built a custom Ansible setup for more than 4,000 servers in 12 different countries across the planet, and that gave me some insight into how to make it perform better.

First of all, sadly Ansible doesn’t yet support “proxy / caching servers”, meaning intermediate servers that you could execute playbooks through. You can configure an SSH proxy server, but that won’t help with performance. The only way to execute a playbook from another server is to install Ansible there as well, sync the playbooks to it somehow, and execute them from that host.
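If you go that route, a minimal sketch looks like this (the host name and paths are made up for illustration, adjust to your environment):

# push the playbooks to a secondary Ansible host and run them from there
rsync -az --delete /etc/ansible/ runner.example.com:/etc/ansible/
ssh runner.example.com "ansible-playbook /etc/ansible/playbooks/site.yml"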

Anyway, now for the performance hacks.

Redis caching

Major boost in performance. Simply install a Redis server on the same host as Ansible and put this into the Ansible configuration:

[defaults]
fact_caching = redis
fact_caching_timeout = 86400
fact_caching_connection = localhost:6379:0
gathering = smart

This will put the facts of every server you connect to into the Redis cache, and the next time you execute anything on that server (within 1 day) Ansible will not gather facts again, but will take them from the Redis cache instead.
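Note that the control host also needs the Python Redis bindings for this cache plugin to work. A minimal sketch for a Debian/Ubuntu control host (the package manager and package names are assumptions, adjust for your distro):

apt-get install redis-server        # local Redis instance on the Ansible host
pip install redis                   # Python bindings Ansible uses to talk to Redis
systemctl enable --now redis-server # make sure it is running and starts on boot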

Pipelining

Minor boost, but it helps a bit, since pipelining reduces the number of SSH operations Ansible needs per task:

[ssh_connection]
retries=3
pipelining=True
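One caveat: pipelining only works together with privilege escalation if sudo on the managed hosts does not require a TTY. A hedged sketch of a sudoers drop-in that disables requiretty (run on the managed hosts; the file name is just an example):

echo 'Defaults !requiretty' > /etc/sudoers.d/99-ansible-pipelining
chmod 0440 /etc/sudoers.d/99-ansible-pipelining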

Multithreading

Major boost, but not very stable and it often causes trouble. Setting more than 20 forks made Ansible quite unstable for me.

[defaults]
forks = 10
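The value can also be overridden per run from the command line, which is handy for experimenting before you commit a higher number to the config (the playbook name is just an example):

ansible-playbook -f 20 site.yml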

Example config

This config works pretty well for me:

[defaults]
fact_caching = redis
fact_caching_timeout = 86400
fact_caching_connection = localhost:6379:0
gathering = smart
host_key_checking = False
timeout = 20
retry_files_save_path = /home/ansible/retry/
forks = 10
log_path=/var/log/ansible.log

[ssh_connection]
retries=3
pipelining=True

 




How to install mdadm on XenServer 7

Based on https://discussions.citrix.com/topic/378478-xenserver-7-raid1-mdadm-after-install-running-system/

# 1. Install XenServer 7 with a normal single disk configuration, don't create SR storage
# 2. copy the partition table from sda to sdb

# !!! Important: don't get the order wrong, copying from sda to sdb looks like this
sgdisk /dev/sda -R /dev/sdb


# Important! The partition layout may differ between XenServer versions; basically there are 2 partitions of the same size for the OS and its backup,
# and at least 1 for GRUB and 1 for swap

parted /dev/sdb
# print
# quit

# You should see a list of partitions; one of them will be flagged as legacy_boot, grub; let's call it BOOT

## flag them as raid disks for sdb partitions
sgdisk --typecode=1:fd00 /dev/sdb # OS
sgdisk --typecode=2:fd00 /dev/sdb # Backup OS
sgdisk --typecode=3:fd00 /dev/sdb # ??
sgdisk --typecode=4:ef02 /dev/sdb # BOOT
sgdisk --typecode=5:fd00 /dev/sdb # LOGS
sgdisk --typecode=6:fd00 /dev/sdb # SWAP

## note that the BOOT partition is not typed the same as the others, because the new disk layout changed the boot partition.

# 5. create the software raid partitions

mdadm --create /dev/md0 --level=1 --raid-devices=2 --metadata=0.90 /dev/sdb1 missing
mdadm --create /dev/md1 --level=1 --raid-devices=2 --metadata=0.90 /dev/sdb2 missing
mdadm --create /dev/md2 --level=1 --raid-devices=2 --metadata=0.90 /dev/sdb3 missing
mdadm --create /dev/md3 --level=1 --raid-devices=2 --metadata=0.90 /dev/sdb4 missing
mdadm --create /dev/md4 --level=1 --raid-devices=2 --metadata=0.90 /dev/sdb5 missing
mdadm --create /dev/md5 --level=1 --raid-devices=2 --metadata=0.90 /dev/sdb6 missing
mkswap /dev/md5

# 6. copy the contents of / and /var/log directories to the new partitions

mkfs.ext3 /dev/md0
mkfs.ext3 /dev/md4

# 7. mount newly created/formatted partitions
mount /dev/md0 /mnt
mkdir -p /mnt/var/log
mount /dev/md4 /mnt/var/log

# 8. copy contents to the newly mounted directory

cp -xR --preserve=all / /mnt
# /var/log lives on its own partition, so -x skips its contents; per step 6, copy it separately as well
cp -xR --preserve=all /var/log/. /mnt/var/log

# 9. create an mdadm config file for the boot process (!!! if you forget this file the MD devices will get different names)

### the head of the file should include these lines
echo "MAILADDR root" > /mnt/etc/mdadm.conf
echo "auto +imsm +1.x -all" >> /mnt/etc/mdadm.conf
echo "DEVICE /dev/sd*[a-z][1-9]" >> /mnt/etc/mdadm.conf
mdadm --detail --scan >> /mnt/etc/mdadm.conf
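# The resulting /mnt/etc/mdadm.conf should then look roughly like this; the UUIDs are
# placeholders for whatever "mdadm --detail --scan" prints on your system:
#
#   MAILADDR root
#   auto +imsm +1.x -all
#   DEVICE /dev/sd*[a-z][1-9]
#   ARRAY /dev/md0 metadata=0.90 UUID=<uuid-of-md0>
#   ARRAY /dev/md1 metadata=0.90 UUID=<uuid-of-md1>
#   ... (one ARRAY line per md device, md0 through md5)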

# 10. copy the contents to the root folder
cp /mnt/etc/mdadm.conf /etc

# 11. configure mount points
sed -i 's/LABEL=root-[a-zA-Z\-]*/\/dev\/md0/' /mnt/etc/fstab
sed -i 's/LABEL=swap-[a-zA-Z\-]*/\/dev\/md5/' /mnt/etc/fstab
sed -i 's/LABEL=logs-[a-zA-Z\-]*/\/dev\/md4/' /mnt/etc/fstab
sed -i '/md5/ a\/dev/md5          swap      swap   defaults   0  0 ' /mnt/etc/fstab
cp /mnt/etc/fstab /etc

# 12. change the label name for /dev/sdb1 partition
e2label /dev/sda1 |xargs -t e2label /dev/sdb1

# 13. bind mount dev sys proc to the mnt folder
mount --bind /dev /mnt/dev
mount --bind /sys /mnt/sys
mount --bind /proc /mnt/proc
chroot /mnt  /bin/bash

# 14. install grub on /dev/sdb
grub-install /dev/sdb

# 15. backup initrd
cp /boot/initrd-$(uname -r).img /boot/initrd-$(uname -r).img.bck

# 16. create new initrd for raid
dracut --mdadmconf --fstab --add="mdraid" --filesystems "ext3 tmpfs devpts sysfs proc" --add-drivers="raid1 raid456 mdraid1x mdraid09" --force /boot/initrd-$(uname -r).img $(uname -r) -M

### never change the boot configuration via grub-mkconfig.. it will kill XenServer.. change the GRUB configuration by hand inside the files

# 17. change grub configuration
sed -i 's/quiet/rd.auto rd.auto=1 rhgb quiet/' /boot/grub/grub.cfg
sed -i 's/LABEL=root-[a-zA-Z\-]*/\/dev\/md0/' /boot/grub/grub.cfg
sed -i '/search/ i\   insmod gzio' /boot/grub/grub.cfg
sed -i '/search/ i\   insmod part_msdos' /boot/grub/grub.cfg
sed -i '/search/ i\   insmod diskfilter mdraid09' /boot/grub/grub.cfg
sed -i '/search/ c\   set root=(hd0,gpt1)' /boot/grub/grub.cfg

# 18. exit from chroot
exit

# 19. change the same things on the sda1 partition so that after the reboot you don't need to boot from the second disk
cp /mnt/boot/initrd-3.10.0+10.img /boot/

sed -i 's/quiet/rd.auto rd.auto=1 rhgb quiet/' /boot/grub/grub.cfg
sed -i 's/LABEL=root-[a-zA-Z\-]*/\/dev\/md0/' /boot/grub/grub.cfg
sed -i '/search/ i\   insmod gzio' /boot/grub/grub.cfg
sed -i '/search/ i\   insmod part_msdos' /boot/grub/grub.cfg
sed -i '/search/ i\   insmod diskfilter mdraid09' /boot/grub/grub.cfg
sed -i '/search/ c\   set root=(hd0,gpt1)' /boot/grub/grub.cfg

# 20. reboot

# !!! reboot the server; it will boot from the software raid !!!
# 21. After the reboot, add /dev/sda to the new MD devices.

sgdisk /dev/sdb -R /dev/sda

mdadm -a /dev/md0 /dev/sda1
mdadm -a /dev/md1 /dev/sda2
mdadm -a /dev/md2 /dev/sda3
mdadm -a /dev/md3 /dev/sda4
mdadm -a /dev/md4 /dev/sda5
mdadm -a /dev/md5 /dev/sda6  # md5 (swap) was created from sdb6, so its mirror is sda6

# This will take a while for resync of all disks
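# you can watch the resync progress with:
cat /proc/mdstat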

grub-install /dev/sda

# Create SR

xe sr-create content-type=user device-config:device=/dev/md2 host-uuid=<host-uuid> name-label="SRRaid1-Local" shared=false type=lvm

This post is basically just a backup of that forum post in case it becomes a dead link.




Letsencrypt kung-fu

The Let’s Encrypt CLI client is by far the shittiest software ever invented, there is probably no doubt about it, but sadly it’s the only interface that is supported, and unless you want to pay money for an SSL certificate you need to live with that.

First of all – yes, their client (without asking or telling you) WILL run sudo, WILL use root and most likely WILL install garbage on your server that you don’t want to have there. If you have never used the letsencrypt client before, run it on a testing VM first, before it desecrates your favorite web server with random garbage you don’t want there.

The letsencrypt client is written for dumb people, and it is based on undocumented black magic that I will try to uncover here a bit. The client, also distributed as “certbot”, is software that runs on your server and does something to prove that you really own the domains for which you want to generate your SSL certificate. Because the letsencrypt staff don’t want to bother you with technicalities, they created this crap of a software to deal with them for you, in their own way, like it or not. It uses the so-called ACME (Automatic Certificate Management Environment) protocol to verify that you are the owner. This thing is not rocket science; in a nutshell, all it does is publish some data used to prove your ownership through your webserver, usually located under webroot/.well-known. Their counter-party server will try to fetch these by accessing your.domain/.well-known, and to make it possible to verify your domains without modifications to your webserver, all you need to do is create a central webroot and then symlink it from every domain’s webroot (just ln -s /var/www/letsencrypt_shite/.well-known /var/www/your.uber.tld/.well-known).
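A minimal sketch of that setup, assuming one directory per vhost under /var/www (the central webroot name just follows the one used above):

mkdir -p /var/www/letsencrypt_shite/.well-known
# symlink the shared .well-known directory into every vhost webroot
for vhost in /var/www/*/; do
    [ "$vhost" = "/var/www/letsencrypt_shite/" ] && continue
    ln -sfn /var/www/letsencrypt_shite/.well-known "${vhost}.well-known"
done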

Once you do that, always pass these 2 parameters to their “software”:

--webroot --webroot-path /var/www/letsencrypt_shite

I also strongly recommend maintaining a comma-separated list of all domains for which you want to get your certificate and storing it somewhere like /etc/letsencrypt/domains, because you will need to provide this list very often.
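For example (the domains are just illustrative, borrowed from the Apache config further below):

echo "bena.rocks,www.bena.rocks" > /etc/letsencrypt/domains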

Now a little cheat sheet:

Renewing all domains

This can even be in your cron

./letsencrypt-auto renew --webroot --webroot-path /var/www/letsencrypt_shite

You may need to restart / reload your web server after doing this, since the certificate files will be overwritten and Apache seems to cache them somehow.
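A hedged sketch of such a cron entry; the path to letsencrypt-auto and the reload command are assumptions that depend on where you unpacked the client and on your distro:

# weekly renewal, followed by an Apache reload so the new certificate gets picked up
0 3 * * 1 /opt/letsencrypt/letsencrypt-auto renew --webroot --webroot-path /var/www/letsencrypt_shite && service apache2 reload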

Adding or removing a domain and regenerating the certificate

Modify your /etc/letsencrypt/domains list and run

./certbot-auto certonly --webroot --webroot-path /var/www/letsencrypt_shite/ --agree-tos --expand -d `cat /etc/letsencrypt/domains`

Common locations:

/etc/letsencrypt – root of this thing’s config

/etc/letsencrypt/live – symlinks to current certificates, that’s where you can find chains for your domains

Example Apache config that uses a letsencrypt cert

<VirtualHost *:80>
    ServerName bena.rocks
    DocumentRoot /var/www/bena.rocks
</VirtualHost>

<VirtualHost *:443>
    SSLEngine on
    SSLCertificateFile /etc/letsencrypt/live/insw.cz/cert.pem
    SSLCertificateKeyFile /etc/letsencrypt/live/insw.cz/privkey.pem
    SSLCertificateChainFile /etc/letsencrypt/live/insw.cz/chain.pem
    ServerName bena.rocks
    DocumentRoot /var/www/bena.rocks
</VirtualHost>

 




How to install mdadm on Citrix XenServer 6.5

For some reason Citrix doesn’t like mdadm, so they do everything possible to stop it from working on their XenServer.

Here is a guide that will make it work there, but the result may not survive system patching.

Setup

Connect at least 2 disks to your box. Install XenServer without local storage on the first disk.

Installing mdadm

The default install contains mdadm but it doesn’t load the raid modules into the kernel. In order to enable it, the following needs to be done:

echo "modprobe raid1" > /etc/sysconfig/modules/raid.modules
modprobe raid1
chmod a+x /etc/sysconfig/modules/raid.modules
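You can verify that the module is actually loaded:

lsmod | grep raid1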

Partitioning the disks

Now we create the final schema we want to use on our server on disk /dev/sdb. Xen needs at least 3 partitions: one for the boot loader, a second for the OS (I recommend 20 GB or more, because this partition is pretty much impossible to extend later, although Citrix defaults it to 4 GB), and a last one for local storage, which should take all remaining space on the disk.

Note: Citrix by default creates 3 partitions: one for the OS, a second that is empty, the same size as the first and probably used for system upgrades, and a third used for the local storage LVM. You don’t have to create the second partition for this to work, but system upgrades may not be available if you don’t. On the other hand, system upgrades will likely not work anyway, as Citrix doesn’t support mdadm installations.

In this guide I will use the old MS-DOS partition table because, although it’s old, it’s much better supported and it just works. You can also use GPT partitions if you want, but I had some issues getting them to work with mdadm and syslinux.

We will have a separate /boot partition for the boot loader, because the syslinux shipped with Xen has trouble booting from a raid device for some reason.

So this is how the layout of sdb should look after we finish the partitioning:

  • /dev/sdb1 (2 GB) for bootloader
  • /dev/sdb2 (20 GB) for OS
  • /dev/sdb3 (rest) for LVM
# Configure disk
sgdisk --zap-all /dev/sdb
# Now run fdisk and create a new dos partition table
# make 3 partitions as in the layout above: boot (/dev/sdb1, 2gb), Dom0 (/dev/sdb2, 20gb) and local storage (/dev/sdb3, rest of the disk)
#### after it's done create md device
mdadm --create /dev/md0 --level=1 --raid-devices=2 missing /dev/sdb2
mkfs.ext3 /dev/md0
mkfs.ext3 /dev/sdb1
mount /dev/md0 /mnt
cp -vxpR / /mnt
cd /mnt
mv boot /tmp/old_boot
mkdir boot
mount /dev/sdb1 /mnt/boot
mv /tmp/old_boot/* boot
# Fix /mnt/etc/fstab - replace LABEL with /dev/md0 and insert a record for /boot
# EXAMPLE:
head -2 /etc/fstab
/dev/md0    /         ext3     defaults   1  1
/dev/sda1   /boot     ext3     defaults   1  1
# Update boot loader 
# You need to open /boot/extlinux.conf and replace all references to old disk with root=/dev/md0
mkdir /mnt/root/initrd-raid
mkinitrd -v --fstab=/mnt/etc/fstab /mnt/root/initrd-raid/initrd-`uname -r`-raid.img `uname -r`
cd /mnt/root/initrd-raid
zcat initrd-`uname -r`-raid.img | cpio -i
mdadm --detail --scan >> etc/mdadm.conf
find . -print | cpio -o -Hnewc | gzip -c > /mnt/boot/initrd-`uname -r`-raid.img
rm /mnt/boot/initrd-3.10-xen.img
cd /mnt/boot
ln -s initrd-`uname -r`-raid.img initrd-3.10-xen.img
extlinux -i boot/
cat /usr/share/syslinux/mbr.bin > /dev/sdb
# Open /mnt/boot/extlinux.conf
# change the absolute path to xen.gz to a relative one (/boot will be the root device for the bootloader, so it becomes just xen.gz), and replace LABEL with /dev/md0

###### example conf file for syslinux that works ######
# location mbr
ui vesamenu.c32
serial 0 115200
default xe
prompt 1
timeout 50

label xe
menu label XenServer
kernel mboot.c32
append xen.gz dom0_mem=752M,max:752M watchdog dom0_max_vcpus=2 crashkernel=128M@32M cpuid_mask_xsave_eax=0 console=vga vga=mode-0x0311 --- /boot/vmlinuz-3.10-xen root=/dev/md0 ro hpet=disable xencons=hvc console=hvc0 console=tty0 --- initrd-3.10-xen.img 
#######################EOF###################### 

reboot

Now you should be able to boot from /dev/sdb. If you can’t, something is wrong with the setup and you need to figure out whether your problem is with

  • MBR (No bootable device)
  • Boot loader (Missing operating system.)
  • /boot (Linux will start booting but die in progress – try removing quiet and splash from parameters)

Syncing the disks

Now, if you were able to boot up, you need to set up the sda disk.

Create the same 3 partitions on sda as you did on sdb, and then:

dd if=/dev/sdb1 of=/dev/sda1
# mbr
cat /usr/share/syslinux/mbr.bin > /dev/sda
mdadm --add /dev/md0 /dev/sda2

Wait for the disks to sync; meanwhile you can create the new local storage:

mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sda3 /dev/sdb3
xe sr-create content-type=user device-config:device=/dev/md1 name-label="Local Storage" shared=false type=lvm