MySQL / MariaDB: “Table ./mysql/proc is marked as crashed and should be repaired”


Eventually you will run into this: your server crashed. The server was running a MySQL or MariaDB instance with software that uses it to store data. After the server comes back up, that software constantly complains: “Table ./mysql/proc is marked as crashed and should be repaired”.

How To fix

A fix for that problem is very easy. First, ensure that your server and your MySQL daemon are running normally again. You can check the MySQL daemon like this:

root@server:~# service mysql status

If you are running MariaDB, you have to use the following command:

root@server:~# service mariadb status

If the daemon is not running, you can start it with this command:

root@server:~# service mysql start

Or, if you use MariaDB:

root@server:~# service mariadb start

When you’re sure that MySQL / MariaDB is running, you can issue the following command to get rid of the problem:

root@server:~# mysqlcheck --auto-repair -A -u root -p

Of course, this command requires your MySQL / MariaDB root password. The process can take a long time: it checks every single table in your databases for corruption, and any corruption found will be fixed. Afterwards, you should restart your MySQL daemon:

root@server:~# service mysql restart

Or for MariaDB:

root@server:~# service mariadb restart

From now on, everything should work as expected again.

How To get your Realtek RTL8111/RTL8168 working (updated guide)


A lot of people will remember my guide on how to get an RTL8111/RTL8168 running under Linux. That guide is almost 5 years old now, and I wanted to give it a complete overhaul because a lot of things have changed since then.

Why do I need this driver anyway?

Some people asked me, “Why do I need this driver anyway? Doesn’t the Linux kernel ship it?”. This is of course a valid question. As far as I can tell, the RTL8111/RTL8168 driver is not open source, which is presumably why it isn’t included in the Linux kernel. As long as the driver isn’t open-sourced, we have to build it on our own.

The installation methods

A lot of things have changed since I wrote the initial article about compiling the driver under Ubuntu / Debian. Today we can use two methods for installing the driver. The following sections describe both of them.

The automatic way

NOTE: Thanks to the user “Liyu” who gave me this hint!
NOTE2: For this method you need a working internet connection. You could use WLAN or a USB ethernet adapter to get a temporary internet connection. You could also download every single needed package onto a USB drive from another PC and install them in the right order.

As I said earlier, 5 years is a long time, and today Ubuntu and Debian have the driver included in their repositories. For Debian you have to enable the non-free package sources; for Ubuntu you have to enable the universe package sources. You can easily do this by opening /etc/apt/sources.list as root with your editor of choice and adding non-free or universe at the end of each line starting with “deb”. For example, if you use Debian, a line like:

deb jessie main contrib

would become

deb jessie main contrib non-free

The same for Ubuntu:

deb xenial main restricted

would become

deb xenial main restricted universe
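
The edit above can also be scripted. A minimal sketch, assuming you work on a copy of your sources.list first (the mirror URL below is just an example, not taken from your system):

```shell
# add_component FILE COMPONENT: append COMPONENT to every "deb" line in FILE.
add_component() {
  sed -i "/^deb /s/\$/ $2/" "$1"
}

# Demo on a throwaway file instead of the real /etc/apt/sources.list.
f=$(mktemp)
echo "deb http://deb.debian.org/debian jessie main contrib" > "$f"
add_component "$f" non-free
cat "$f"   # -> deb http://deb.debian.org/debian jessie main contrib non-free
rm -f "$f"
```

Only replace the real file once the result (e.g. checked with diff) looks right.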

After this you have to run:

sudo apt-get update

You can of course use graphical tools to enable non-free or universe. After you have enabled the missing package repository, you are ready to install the driver. This can easily be done with the following command:

sudo apt-get install r8168-dkms

The procedure will take some time depending on your CPU, because the driver is built against your running kernel. The upside is that whenever a kernel update happens on your machine, the driver will automatically be rebuilt against the new kernel thanks to DKMS.
After the procedure is finished, you should be able to use your network card instantly. If not, consider rebooting your PC.

The manual way

Well, the manual way is almost the same as in the initial article. Anyway, I want to write down the steps here again. This is also tested against newer kernels (>= 4.0), which caused a lot of trouble for some people in the past.

  • 1. Install dependencies: Once more you need a working internet connection for this. You could also use the Debian or Ubuntu DVD, which includes the needed packages. To install the dependencies, just enter the following command:
    sudo apt-get install build-essential
  • 2. Download the driver: You can download the driver from the official Realtek homepage. From the download table, select the “LINUX driver for kernel 3.x and 2.6.x and 2.4.x”.
  • 3. Blacklisting the r8169 driver: The r8169 driver is loaded when the r8168 is not found on your system. This will give you a network and internet connection, but with the r8169 driver your RTL8168 card will be very unstable. This means slow download rates, pages taking ages to load, and so on. To prevent the r8169 from being loaded, we blacklist it. This is done by entering the following command:
    user@linux:~$ sudo sh -c 'echo blacklist r8169 >> /etc/modprobe.d/blacklist.conf'
  • 4. Untar the archive: After you successfully downloaded the driver, cd into the directory where the driver is downloaded and untar the driver with the following command:
    user@linux:~$ tar xfvj 0005-r8168-8.042.00.tar.bz2

    NOTE: Your tar filename can of course differ if you download a newer version in the future.

  • 5. Compiling and installing the driver: Now we have to start compiling the driver. For this you cd into the extracted directory:
    user@linux:~$ cd r8168-8.042.00

    NOTE: Don’t forget to change your version number in the future here.
    Now that you are in the right directory, we can start the real compiling process. For this, Realtek ships an easy-to-use script called autorun.sh. So, to start compiling and installing the driver, enter:

    user@linux:~/r8168-8.042.00$ sudo ./autorun.sh

    You should see output which looks like this:

    Check old driver and unload it.
    rmmod r8168
    Build the module and install
    At main.c:222:
    - SSL error:02001002:system library:fopen:No such file or directory: bss_file.c:175
    - SSL error:2006D080:BIO routines:BIO_new_file:no such file: bss_file.c:178
    sign-file: certs/signing_key.pem: No such file or directory
    Backup r8169.ko
    rename r8169.ko to r8169.bak
    DEPMOD 4.4.0-31-generic
    load module r8168
    Updating initramfs. Please wait.
    update-initramfs: Generating /boot/initrd.img-4.4.0-31-generic

    You can ignore the SSL error for now. The driver should be successfully compiled and installed into your system. The driver is already loaded and should work.

  • 6. Check the driver: As a final step, you can check whether the driver is really loaded into your kernel. For this you can use the command lsmod, which lists all modules currently loaded by your kernel. So, if everything was successful, you should see output like this:
    user@linux:~/r8168-8.042.00$ lsmod | grep r8168
    r8168                 491520  0

    If your driver still isn’t loaded at this point, try a reboot before investigating further.
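
The blacklist step from above can also be sanity-checked with a small helper. This is just a sketch; the demo uses a temp directory standing in for /etc/modprobe.d:

```shell
# is_blacklisted MODULE DIR: check whether MODULE is blacklisted in DIR/*.conf.
is_blacklisted() {
  grep -qs "^blacklist[[:space:]][[:space:]]*$1\$" "$2"/*.conf
}

# Demo against a throwaway directory; use /etc/modprobe.d on a real system.
d=$(mktemp -d)
echo "blacklist r8169" > "$d/blacklist.conf"
is_blacklisted r8169 "$d" && echo "r8169 is blacklisted"
rm -rf "$d"
```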

That’s it

And that’s it. Now you’re ready to use your RTL8168/RTL8111 with the official Realtek drivers. If you have any questions and / or suggestions, please let me know in the comments.

My new helper Ansible


A few weeks ago, I decided to use Ansible as my central configuration tool of choice. The following text should give you a short intro into how to deploy files with Ansible.

What exactly is Ansible?

Now, what exactly is Ansible? Ansible is a configuration management tool which helps you keep your configuration files centrally managed. You benefit from easier configuration and you save a lot of time. For example, if you have 3 DNS servers and want to ensure that all of these systems use the same db and configuration files, you could either use network storage (which is obviously over the top here) or keep the files central and push them to all DNS servers.

And that is exactly what Ansible does: it pushes your configuration files to given hosts. Another big pro for Ansible is that it uses SSH to do so. This means you don’t have to install a service to get your machines configured. Ansible is agent-less.

How to install Ansible

For almost every distribution out there in the wild, Ansible is available in the system repositories. For Ubuntu / Debian you can easily do:

sudo apt-get install ansible

and on openSUSE you can just do:

sudo zypper install ansible

After this, you can find the standard Ansible configuration under /etc/ansible.

Make SSH ready for Ansible

Every host which will be managed via Ansible needs to have your public key installed, and a user which is allowed to use that key to log in to the host.

For my personal purposes I created a user which is allowed to change the files coming from Ansible. This is of course optional; you could also do this as the user root, even if that means a bit more of a security risk. In any case, you have to generate a key pair, which is done with the following command as the user ant:

ant@ansible:~$ ssh-keygen -t rsa -b 4096

For our scenario, you shouldn’t set a password for the key. The other questions can be confirmed by pressing ENTER without any changes. After this your SSH key is ready and you can push the public key to the host which will be configured with Ansible later. After the following command is issued, the user ant on your Ansible system will be able to log in as user ant on the system target without entering a password. As described above, you could also do this with the user root:

ant@ansible:~$ ssh-copy-id -i ~/.ssh/ ant@target

Now you should be able to log in via ssh as the user ant on the system target without entering a password.
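
For unattended setups, the key can also be generated non-interactively. A sketch (the temp directory is only for the demo; for real use let ssh-keygen write to ~/.ssh):

```shell
# Generate a passwordless (-N '') 4096-bit RSA key pair without any prompts.
d=$(mktemp -d)
ssh-keygen -t rsa -b 4096 -N '' -f "$d/id_rsa" -q
ls "$d"   # id_rsa (private key) and id_rsa.pub (public key)
rm -rf "$d"
```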

Configure Ansible

Now that you have fed the target system with a public key, you can make the target system known to Ansible. The Ansible configuration files are located at /etc/ansible. First of all, add the target system to the /etc/ansible/hosts file:

[testsystems]
target.local.dom
Let me explain this file a little bit. You enter one hostname per line in this file; Ansible then knows them and will deploy the given files to them. To make configuration easier and more readable for the human eye, groups are available. A group starts with [ and ends with ]. So in this case the host target.local.dom would be in the group testsystems. It is possible to put one host into multiple groups.
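
For illustration, here is an inventory where one host sits in two groups at once (all hostnames in this snippet are made-up examples):

```ini
[testsystems]
ns1.example.dom

[dnsservers]
ns1.example.dom
ns2.example.dom
```

ns1.example.dom would then be targeted by Playbooks addressing either testsystems or dnsservers.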

So now the host is known to Ansible. Next we need to define which files will be pushed and where they are going to be deployed in the root filesystem of the target host.
For this, Ansible uses so-called Playbooks. As the name implies, a Playbook is a collection of things which have to be done on the target system. You can do almost everything here which you would also be able to do by hand in the console. There are plenty of modules which can be used to e.g. update your system, set file permissions, and so on. And even if there is no module available which fits your needs, you can always use the shell module and write down yourself what the system should do. A complete list of the modules and how to use them can be found in the official Ansible documentation. In this example we will push files to the target system, so we have to define this in the Playbook. You can use either the copy or the synchronize module to push the wanted files to the target. The following example will use the synchronize module:

- hosts: testsystems
  vars:
    files: /etc/ansible/files/testsystems/
  gather_facts: false
  become: false

  tasks:
    - name: copy files
      synchronize: src={{ files }} dest=/opt
      notify: restart ssh

  handlers:
    - include: handlers.yml

So what does this Playbook do now? Let me explain this file step by step:

  • hosts: Here we insert the group(s) which have been declared in the /etc/ansible/hosts file. You can name single hosts here as well as groups. It’s always recommended to use groups here.
  • vars: In vars we declare variables which are used within this Playbook. There is one variable defined here, called files, which is used later in the tasks section.
  • gather_facts: This is true or false; the default is true. gather_facts collects information about the target system which can be used within the modules. Here it is disabled because we know that this Playbook will run fine with the settings we give Ansible.
  • become: In earlier versions of Ansible this was called sudo. become decides whether this Playbook needs root/sudo privileges to run or not. How the system becomes root is defined in the central ansible.cfg. If set to true, you have to ensure that the “become user” is available on the target system and has sudo permissions.
  • tasks: In tasks we define what to do on the target system. In this case we have one task, named “copy files”. It uses the synchronize module. The source path is the path defined in the variable files at the beginning of the Playbook file; the destination is the absolute path on the target system, in this case /opt. At the end we use a notifier. This notifier calls “restart ssh” whenever a file has actually changed on the target system. “restart ssh” is written down in the handlers.yml file, which has to be in the same directory as this Playbook.

This is a really short and easy example of the capabilities of Ansible. As I said earlier, you can do a lot more; for that you should consider reading the official Ansible documentation.

You can save the file wherever you want; I recommend a place like /etc/ansible/conf.d. The filename ending should be .yml. So, for example, we could save the file “testservers.yml” under /etc/ansible/conf.d.

The handlers.yml file

As mentioned before, the handlers.yml file is just an addition to your existing Playbook. In the handlers file you can write down commands which can be reused many times. For example the “restart ssh” handler, which is called in our Playbook, may also be needed by other upcoming hosts. To avoid writing down the same commands again and again, we use an extra external file which holds all the reusable commands. This file is called handlers.yml here and has to be stored in the same directory as your Playbooks. An example handlers file looks like this:

- name: restart bind
  service: name=named state=restarted

- name: restart ssh
  service: name=ssh state=restarted

So as you can see, we use the service module to restart the services ssh and bind. The services are restarted on the target system when they are called from a Playbook. In our Playbook example above, the “restart ssh” handler is triggered after copying files.

Testing our new configuration

We have a valid hosts file and Playbook, and our target system has the needed public key, so we should be ready to go. To start or “call” a Playbook, issue the following command on your command line:

ant@ansible:~$ ansible-playbook /etc/ansible/conf.d/testservers.yml

NOTE: Don’t forget that you have to issue the command on your Ansible system, with the user whose public key is stored on the target system.
Now your Ansible server should start pushing the data to the target system. An output like this should be shown to you:

PLAY [testservers] ************************************************************

TASK: [copy files] **************************************************
changed: [target.dom.local ->]

NOTIFIED: [restart ssh] ******************************************************
changed: [target.dom.local]

PLAY RECAP ********************************************************************
target.dom.local : ok=2 changed=2 unreachable=0 failed=0

This means the files were successfully copied and the ssh service was restarted, so your first Ansible Playbook is running fine without issues. Now you can go on adding more tasks.
Don’t forget the official documentation when doing so 🙂

Linux Mint: MDM fails to load after login


An annoying problem occurred on one of the client machines I work with. Every time I tried to log in, MDM threw an error saying that it was unable to log in due to undefined commands and variables. The client runs Linux Mint 17, but the problem happens with 18 as well.

The problem

Whenever I tried to log in, MDM failed to load Cinnamon, Gnome, MATE or whatever else I tried to use. If I expanded the login error message, it reported that it was unable to use the additional profile scripts stored in /etc/profile.d/. These scripts, which I have maintained myself for quite some time now, use a lot of bash-specific features (variables, built-in commands, and so on).
Sadly, the MDM XSession file comes with a /bin/sh shebang, which does not offer the same spectrum of commands as bash.

The solution

The solution is rather simple. If MDM complains that it is unable to load the desktop because of a script error in a file located in /etc/profile.d, you simply have to modify the XSession script which comes with MDM.
The XSession script for MDM is located at /etc/mdm/XSession. A simple change of the shebang solves the problem. Just change the first line of the XSession file from

#!/bin/sh

to

#!/bin/bash

and you should be able to successfully log in again.
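
The shebang swap can also be scripted, for instance when you manage several machines. A sketch, shown here on a throwaway file instead of the real /etc/mdm/XSession:

```shell
# Replace a /bin/sh shebang with /bin/bash, touching only the first line.
f=$(mktemp)
printf '#!/bin/sh\n# rest of the XSession script ...\n' > "$f"
sed -i '1s|^#!/bin/sh$|#!/bin/bash|' "$f"
head -n 1 "$f"   # -> #!/bin/bash
rm -f "$f"
```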

You should always keep an eye on possible MDM updates. If the MDM login manager is updated on your system, it is very likely that the XSession file gets overwritten and you will have to do this change again.

OpenVPN Error: Linux route add command failed


Everybody knows OpenVPN: a powerful and easy-to-configure VPN client which is available cross-platform for BSD, Linux, macOS and Windows.
A lot of my Linux boxes are OpenVPN clients, from virtual machines to physical boxes. When I use my OpenVPN server as a default gateway, some machines have trouble creating the necessary route. In most cases the output is something like this:

Sun Jun 19 14:03:20 2016 /bin/ip route add via
RTNETLINK answers: No such device
Sun Jun 19 14:03:20 2016 ERROR: Linux route add command failed: external program exited with error status: 2

So this means that OpenVPN tried to create a new route with the help of the ip command, which failed (exit code 2). But how do we fix this?

Add the route on your own

I’ve searched around the internet and nobody really had an answer to this. Well, the solution is rather simple: directly after the successful connection to your OpenVPN server, add the route on your own. The following example does this for the error shown above:

sudo route add -host dev enp4s0

As you can see, there is no gateway address used to reach the host; it is simply the Ethernet device that is stated here (enp4s0 is the name of the first wired Ethernet device under openSUSE when using the predictable interface names, formerly known as eth0).
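
If you are unsure what your device is called, you can list all interface names first. lo (the loopback) always exists; look for names like enp4s0, eth0 or venet0:

```shell
# Print just the network interface names, one per line.
ip -o link show | awk -F': ' '{print $2}'
```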

This error also occurs if you want to use an OpenVZ container as an OpenVPN client. By default, the first virtual network device of an OpenVZ container is called venet0. So you would have to enter the following command to fix the error there:

sudo route add -host dev venet0

After you have added the host to your routing table with the correct outgoing network device, you are ready to use the VPN as your default gateway.

Permanent Fix

To be honest, I haven’t been able to find a permanent fix for this yet. This means you have to redo the route add command every time you connect to your VPN.
If you know a permanent fix for this problem, just let me know in the comments below. Your help is appreciated 🙂
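
One way to at least automate the workaround is OpenVPN’s route-up hook, which runs a script after the connection comes up (script-security 2 and route-up are standard OpenVPN client options; the address, device name and paths below are assumptions you must adapt). The sketch writes the hook to /tmp only for demonstration purposes; on a real system you would register it in the client config with `script-security 2` and `route-up /etc/openvpn/add-route.sh`:

```shell
# Write a hook script that re-adds the missing host route on every connect.
# 198.51.100.10 and enp4s0 are placeholders for your own server IP and device.
cat > /tmp/add-route.sh <<'EOF'
#!/bin/sh
route add -host 198.51.100.10 dev enp4s0
EOF
chmod +x /tmp/add-route.sh
sh -n /tmp/add-route.sh && echo "hook script syntax OK"
```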

Convert IMG (raw) to QCOW2


Most of you will know the Kernel-based Virtual Machine (KVM). It is already included in recent Linux kernels and gives you full virtualization under Linux, providing the capability to run almost every x86 OS you want inside a virtual machine.

Some versions ago, if you created a new virtual machine in KVM, the virtual hard disk was a RAW .img container. The newer container type is QCOW2, and one of its main features is enabling KVM’s snapshot functionality.
This means that if you have virtual machines with an IMG HDD attached, you will not be able to create snapshots of them. Luckily, the KVM developers provide tools which help you convert existing IMG HDDs to QCOW2 HDDs.

The convert process

First of all: this will take some time, depending of course on the size of the HDD. Also, you should shut down the virtual machine so that the convert process has exclusive access to the HDD while converting. The following example converts a .img HDD to a .qcow2 HDD:

qemu-img convert -f raw -O qcow2 /path/to/your/hdd/vm01.img /path/to/your/hdd/vm01.qcow2

To explain the command a little bit more:

  • qemu-img is the command which should be executed
  • convert tells qemu-img that we want to convert an existing HDD
  • the switch -f raw lets qemu-img know that the existing format of the HDD is RAW (in this case with a .img filename ending)
  • the -O qcow2 switch tells qemu-img that the destination HDD should be QCOW2
  • the first file is the existing raw HDD, the second one is the filename of the new QCOW2 HDD

So, let us say we want to convert a raw HDD which is located in /var/lib/libvirt/images (standard path for new KVM machines) to a QCOW2 HDD:

qemu-img convert -f raw -O qcow2 /var/lib/libvirt/images/machine01.img /var/lib/libvirt/images/machine01.qcow2

After you have done this, you just have to change the HDD path in your virtual machine’s configuration from the raw .img to the .qcow2 file. NOTE: The .img file is not deleted after a successful conversion; you have to do this on your own.
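
If you have several raw disks, you can generate the convert commands in a loop and review them before running anything. A sketch (the demo uses a throwaway directory with dummy files; point it at /var/lib/libvirt/images for real use):

```shell
# convert_cmds DIR: print one qemu-img convert command per raw .img in DIR.
convert_cmds() {
  for img in "$1"/*.img; do
    [ -e "$img" ] || continue
    echo "qemu-img convert -f raw -O qcow2 $img ${img%.img}.qcow2"
  done
}

# Demo with dummy files; nothing is actually converted here.
d=$(mktemp -d)
touch "$d/vm01.img" "$d/vm02.img"
convert_cmds "$d"
rm -rf "$d"
```

Once the printed commands look right, you can pipe them to sh or run them one by one.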

In the end, you should be able to create snapshots of your virtual machine, which is one of the best features of using virtual machines at all 😉

Unitymedia, Amazon Fire TV and Netflix …

NOTE: This post was originally written in German, since its content is only relevant for people living in Germany.

Unitymedia, Amazon Fire TV and Netflix … how much trouble these three components have given me over the last few days. But why, actually?

To explain that, I have to back up a little. I have been a Unitymedia customer for a while now, about half a year to be exact. Roughly a month ago, a “silent update” must have been pushed to my router by Unitymedia. That in itself is nothing new since providers started mandating their own routers. The bad thing about this update, however, is that as a Netflix customer and Amazon Fire TV user I have not been able to establish any connectivity to the Netflix servers through the official Netflix app on the Amazon Fire TV ever since. Every time the app starts, all I get is the error “ui-113” with the information that no connection to the Netflix servers could be established.

The workaround as a solution

I have spent quite a few hours combing through the net, since the problem comes up again and again in various forums.
A very helpful discussion took place in one forum, from which I gathered that adjusting the MTU helps. In many cases 1450 bytes instead of the default 1500 bytes is suggested. I have meanwhile set my MTU to 1400, and since then Netflix works again on my Amazon Fire TV.

Many people in the forum wrote that, due to the router they got from Unitymedia, it was not possible for them to change the MTU; the value is fixed by Unitymedia.
I myself have a “Connect Box” from Unitymedia. Unfortunately I cannot say exactly which device it is, but the option to change the MTU is available to me.


Alternatively, I tried two other methods, both of which also got Netflix working again via the Amazon Fire TV, even though the MTU on the Unitymedia router is still set to 1500:

  1. Forwarding the traffic through a home server: This of course only works if you have a home server at home. In that case this server can be used as the gateway for the Fire TV; configuring the server accordingly beforehand goes without saying.
     By adjusting the MTU on the server’s network interface, this also affects the MTU of the Fire TV. The server’s IP address has to be set as the gateway on the Fire TV.
  2. Buying a router which is placed behind the Unitymedia box: In this case you buy, for example, a small FritzBox model from AVM. Alternatively there are of course other routers, e.g. from TP-Link, many of whose models also support changing the MTU. In this case, however, the (new) router has to be configured so that it no longer acts as a DHCP server (the Unitymedia box already does that). In addition, its IP address either has to be determined, if the router is assigned an address via DHCP by the Unitymedia box, or set manually (recommended). If these points are taken care of, the MTU can finally be set on the (new) router and the IP address of that very router entered as the gateway on the Amazon Fire TV.

Are there any downsides to changing the MTU?

Basically, no. More packets are sent than before, since the maximum packet size is now 1400 bytes instead of 1500 bytes, but in no case could I observe my ping or my download rate suffering from it. So all in all it is “fine”, and in a way a workaround which can safely be used permanently.

What do Unitymedia, Amazon and Netflix actually say about this problem?

As you can read in various places on the net, the three make no great secret of the fact that there are problems at the moment. However, they keep passing the buck to one another. At least one forum mentions that all three are sitting down together to solve the problem.

The whole thing is reminiscent of the incidents in the PlayStation Network, when logging in was simply no longer possible. Back then, the fix was likewise to lower the MTU on the PS4 from 1500 to 1450 and the login worked again … funny what a difference 50 bytes one way or the other can sometimes make 🙂