KVM VM Not starting – could not get access to acl tech driver ‘ebiptables’

Issue

A KVM VM fails to start, showing the following error:

could not get access to acl tech driver 'ebiptables'

libvirt ships an nwfilter module. If that module runs into a problem for some reason, the above error appears. To fix it, update (if an update is available) or reinstall (if no update is available) the following package using yum:

libvirt-daemon-config-nwfilter

The command would be like the following:

yum update libvirt-daemon-config-nwfilter
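If yum reports that no update is available, you can reinstall the same package instead:

yum reinstall libvirt-daemon-config-nwfilter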

That should fix the issue.
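If the VM still refuses to start after that, restarting the libvirt daemon before trying again is worth a shot (assuming libvirtd runs as a regular system service on your host):

service libvirtd restart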

The name org.fedoraproject.FirewallD1 was not provided by any .service files – OpenVZ 7 KVM VM Start Error

Problem

When starting an OpenVZ 7-based KVM VM, the following error appears:

The name org.fedoraproject.FirewallD1 was not provided by any .service files

How to solve the problem?

Resolution

The error appears because OpenVZ 7 expects the firewalld daemon to be running. If it can't find it, it throws the above error and prevents the VM from starting. To solve the problem, just start the firewalld daemon:

service firewalld start

If you now try to start the VM, it should start.
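To make sure firewalld also comes back automatically after a reboot, you can enable it as well (assuming a systemd-based host, as OpenVZ 7 / CentOS 7 normally is):

systemctl enable firewalld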

How to Add RAW Image into An Existing OpenVZ 7 VM

OpenVZ 7 supports both KVM VMs and OpenVZ containers. For a customer, we were trying to import a CentOS 7-based KVM VM into an OpenVZ 7 KVM VM. The problem was that the VM had been created in RAW format, and OpenVZ 7 does not support RAW disk images for KVM; it supports QCOW2. Hence, we first had to convert the RAW image into a QCOW2 image.

How to convert RAW KVM VM Image to QCOW2 VM Image

To convert RAW to QCOW2, you can use the qemu-img tool, which comes with the KVM setup. Let's say your RAW image is 'harddisk.hdd.raw' and you would like to produce a QCOW2 image called 'harddisk.hdd.qcow2'. To do that, you can run the following:

qemu-img convert -f raw -O qcow2 /root/harddisk.hdd.raw /root/harddisk.hdd.qcow2

Make sure the paths point to where your harddisk.hdd.raw is actually stored; adjust them if it lives somewhere other than /root.
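Before importing the image, you can check that the conversion produced a valid QCOW2 file (using the example path from above):

qemu-img info /root/harddisk.hdd.qcow2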

Replace the disk in OpenVZ 7

In our case, the VM files are stored under /vz/vmprivate/. Under this OpenVZ directory, you will find one folder per VM, named after the VM's UUID. In our case, it was the following:

/vz/vmprivate/9d07cfef-42bf-4fbe-ac6a-7af9c85c9475

You can also see the list of VM UUIDs by typing the following command:

prlctl list --all
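If you need more detail about a specific VM, such as where its disk image lives, prlctl can print the full configuration (UUID taken from this example):

prlctl list -i 9d07cfef-42bf-4fbe-ac6a-7af9c85c9475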

First, stop the VM:

prlctl stop 9d07cfef-42bf-4fbe-ac6a-7af9c85c9475

Now, all you need to do is replace 'harddisk.hdd' with the image we converted:

cd /vz/vmprivate/9d07cfef-42bf-4fbe-ac6a-7af9c85c9475
mv harddisk.hdd harddisk.hdd_backup
mv /root/harddisk.hdd.qcow2 harddisk.hdd

Now start the VM, and you should be good to go:

prlctl start 9d07cfef-42bf-4fbe-ac6a-7af9c85c9475
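You can confirm the VM is actually running with (again using the UUID from this example):

prlctl status 9d07cfef-42bf-4fbe-ac6a-7af9c85c9475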

How To Run a Command in All OpenVZ Containers

You can run a single command in a container using the following:

vzctl exec 201 service httpd status

To list all the VZ containers:

vzlist -a
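If you prefer a cleaner overview, vzlist can also print selected fields only (these are standard vzlist field names):

vzlist -a -o ctid,hostname,status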

Is there another way? Yes. The list of running containers is stored in the file /proc/vz/veinfo, and with a bit of shell we can use it to run a command in each container, like this:

for i in `cat /proc/vz/veinfo | awk '{print $1}'|egrep -v '^0$'`; \
do echo "Container $i"; vzctl exec $i <your command goes here>; done

An example can be the following:

for i in `cat /proc/vz/veinfo | awk '{print $1}'|egrep -v '^0$'`; \
do echo "Container $i"; vzctl exec $i service httpd status; done

This should show the httpd status for every running container.
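An alternative that avoids parsing /proc/vz/veinfo directly is to feed the loop from vzlist itself; by default vzlist shows only running containers, so the result is the same as above:

for i in `vzlist -H -o ctid`; \
do echo "Container $i"; vzctl exec $i service httpd status; done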