error: Unable to create cgroup for vm**: No such file or directory

The error can appear for any type of KVM VM installation, whether it is managed by Virtualizor, SolusVM or Proxmox. You may face the same error if you are simply using virt-manager to manage your KVM installation. The error appears when you try to start/create the VM from the XML file:

[root@vps8 addvs]# virsh create /etc/libvirt/qemu/v1015.xml
error: Failed to create domain from /etc/libvirt/qemu/v1015.xml
error: Unable to create cgroup for v1015: No such file or directory

It appears because, for some reason, CentOS starts unmounting the cgroup controllers, which breaks libvirt. The easy way to fix this is to restart libvirtd:

service libvirtd restart
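
If restarting libvirtd alone does not help, it is worth confirming that the cgroup controllers libvirt expects are still mounted. A quick check (a sketch only; on CentOS 6 the controllers are normally mounted under /cgroup by the cgconfig service, on CentOS 7 under /sys/fs/cgroup by systemd):

grep cgroup /proc/mounts
service cgconfig restart    # CentOS 6 only, remounts the controllers
service libvirtd restart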

The error is more common on CentOS 7 than on CentOS 6, as systemd is known to have the bug:

https://bugzilla.redhat.com/show_bug.cgi?id=678555

device-mapper: cache: You have created a cache device with a lot of individual cache blocks

To be exact, the error looks similar to the following:

[172593.817178] device-mapper: cache: You have created a cache device with a lot of individual cache blocks (3276800)
[172593.817182] All these mappings can consume a lot of kernel memory, and take some time to read/write.
[172593.817185] Please consider increasing the cache block size to reduce the overall cache block count.

It usually appears because you have created a large cache pool while using a small chunk size.

Here is what the lvmcache manual says about chunk size:

The size of data blocks managed by a cache pool can be specified with
the --chunksize option when the cache LV is created. The default
unit is KiB. The value must be a multiple of 32KiB between 32KiB and
1GiB.

Using a chunk size that is too large can result in wasteful use of
the cache, where small reads and writes can cause large sections of
an LV to be mapped into the cache. However, choosing a chunk size
that is too small can result in more overhead trying to manage the
numerous chunks that become mapped into the cache. Overhead can
include both excessive CPU time searching for chunks, and excessive
memory tracking chunks.

Basically, the chunk size determines the block size at which your cached data is stored. To find your current chunk size, run:

lvs -o+chunksize vgname/cachevolumename

It should return something like the following:

# lvs -o+chunksize vg0/newvz
  LV    VG   Attr       LSize Pool    Origin Data%  Meta%  Move Log Cpy%Sync Convert Chunk
  newvz vg0  Cwi-aoC--- 1.50t [cache]        14.14  0.75            5.12             64.00k

A good chunk size for a large cache device (200GB+) would be 256k. You may set the chunk size while setting up the device:

lvcreate --type cache-pool --chunksize 256K -L CacheSize -n CachePoolLV VG FastPVs
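
For reference, here is a minimal sketch of building such a cache from scratch with lvconvert, assuming a volume group named vg0, an existing slow origin LV named newvz and a fast SSD PV at /dev/sdb1 (all names and sizes here are examples only):

# create the cache data LV and the cache metadata LV on the SSD PV
lvcreate -L 200G -n cache0 vg0 /dev/sdb1
lvcreate -L 2G -n cache0meta vg0 /dev/sdb1

# combine them into a cache pool with a 256k chunk size
lvconvert --type cache-pool --chunksize 256K --poolmetadata vg0/cache0meta vg0/cache0

# attach the cache pool to the slow origin LV
lvconvert --type cache --cachepool vg0/cache0 vg0/newvz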

Backing up LVM Cache Volume?

I have been trying to explore what options we have for using an SSD cache with HDD-driven servers to get faster writes. There are both software and hardware solutions. The hardware solution comes down to CacheCade, which isn't really costly at all (roughly $250 extra per license), though I was interested to explore all the software solutions currently available in the market.

There are bcache, flashcache & lvm cache, which are the ones mostly used on production servers. I discarded bcache first because it requires you to format the disk for bcache, which fails my flexibility check for a module. I have tried flashcache before, and I don't want to go with it on a production server any longer as the module is discontinued (it still works, don't get me wrong). All in all, lvm cache seems to be the only one which is stable and going to keep improving.

LVM Cache does work great. With the smq cache policy, writeback cache mode & the deadline scheduler, you can reach 220MBps write speed with Intel SSDs in RAID 1, which is what a RAID 1 Intel SSD array normally delivers. You can double the speed by putting a RAID 10 SSD array behind the cache. However, after all the testing was done, I realized that lvm cache unfortunately doesn't support snapshots, at least not yet at the time I am writing this post. Without the snapshot facility, the performance benchmark actually goes in vain.

Snapshots of cache type volume vg0/newvz is not supported.
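
For what it is worth, the smq policy, writeback cache mode and deadline scheduler mentioned above can be set roughly like this (a sketch only; the lvchange options may differ between LVM versions, and sda stands for the SSD backing your cache):

lvchange --cachepolicy smq vg0/newvz
lvchange --cachemode writeback vg0/newvz
echo deadline > /sys/block/sda/queue/scheduler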

Hardware solutions are always useful as the backend setup stays transparent to the OS, which allows us to use our own tools without worrying about the caching setup. CacheCade is probably the only solution available right now with all the facilities needed for SSD caching on production servers.

Check File System for Errors with Status/Progress Bar

A file system check can be tedious sometimes. The user may want to check the progress of the fsck, which is not enabled by default. To do that, add -C (capital C) to the fsck command:

fsck -C /dev/sda1

The option actually takes a file-descriptor argument (0 means standard output):

fsck -C0 /dev/sda1

It will also work without the number if you put -C before the other arguments, like -f (forcing the file system check) and -y (yes to auto repair). A usable fsck command could be the following:

fsck -fy -C0 /dev/sda1

or

fsck -C -fy /dev/sda1

Please note, -c (small c) would result in a read-only test. This test will try to read all the blocks on the disk and see whether it is able to read them or not. It is done through a program called 'badblocks'. If you are running a badblocks test on a large file system, be ready to spend a large amount of time on it.
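
If you only want the read-only surface scan, badblocks can also be invoked directly with its own progress display; a minimal sketch, to be run against an unmounted device:

badblocks -sv /dev/sda1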

How to create /var/cpanel/userdata files using /var/cpanel/users files

While reverting from a reverse proxy installation like nginx/varnish, you might end up with inconsistent /var/cpanel/userdata and /var/cpanel/users files. To be noted, /var/cpanel/userdata is used to create the httpd.conf file, while /var/cpanel/users is used to create the DNS zone files. If you have an inconsistency between these two, you will have different IPs in named and httpd, which is undesirable. As the reverse proxy plays with /var/cpanel/userdata, it is usually the userdata folder that ends up containing incorrect data. cPanel comes with a tool to reset the userdata directory, but that tool uses a valid httpd.conf file, or a backup of one, to recreate the userdata directory. Interestingly, an httpd.conf that is itself invalid because it was created from inconsistent userdata cannot be used backwards.

This is when you might need to create the /var/cpanel/userdata directory from the valid /var/cpanel/users directory. We ended up in this situation and wrote the following bash script to do the job for us:

#!/bin/sh

# list the cPanel usernames (file names only, not full paths)
ls /var/cpanel/users/ > users.txt

while read line; do

    # pull the account's main IP from /var/cpanel/users/<user>
    UIP=$(grep "^IP=" /var/cpanel/users/$line | cut -d'=' -f2)

    # rewrite the "ip:" line in every userdata file of the account
    sed -i -e "/^ip: / c ip: $UIP" /var/cpanel/userdata/$line/*

    # alternative: sed -i "/^ip:/ s/.*/ip: $UIP/" /var/cpanel/userdata/$line/*

    # drop the stale cache files so they can be rebuilt
    rm -f /var/cpanel/userdata/$line/*.cache

done < users.txt

 

The script deletes the cache files as well. cPanel ships a script to rebuild the cache files, but for some reason it didn't work for us, so we deleted the cache manually using rm. Please remember to back up the /var/cpanel/userdata and /var/cpanel/users directories before running the script.
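
A simple way to take that backup is to tar both directories somewhere safe first (the archive path below is just an example):

tar -czf /root/cpanel-userdata-backup.tar.gz /var/cpanel/userdata /var/cpanel/users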

The cPanel-provided cache rebuild command is:

/scripts/updateuserdatacache

Once the cache removal/rebuild is completed, you may rebuild your httpd.conf file and restart Apache:

/scripts/rebuildhttpdconf

service httpd restart

Why does Your New Site Take Ages to Load?

I was trying to track down a couple of website slowdown reports lately. There is an interesting change in slowdown behaviour these days in web applications. From a conventional standpoint, people firmly believe that their static content is not going to affect the performance of their websites, other than images being heavy.

In reality, they are ignoring the fact that they are using jQuery plugins of many kinds from multiple developers. The cumulative number & size of JS files is therefore pretty large these days compared to when all the plugins came from a single developer. Once the number of static files per page goes beyond 100, a cookie-carrying domain can take a serious performance penalty. Serving all these small files from a single, possibly distant, location can also increase load time geometrically. The large market for CDNs undoubtedly exists because of this trend in web application development.

I have seen that people these days are more aware of handling large data wisely than before. If you are using a cloud instance from any provider, you are possibly on one or more E5 cores (Mellowhost uses only E5 nodes right at this moment), which usually come with access to a 16/24/32MB cache. On this kind of resource, how you handle static files matters more for performance than your database, as threading is more of a concern than single-process handling on these virtualised resources.

How to install fusecompress in CentOS 6

What is fusecompress?

FuseCompress provides a mountable Linux filesystem which transparently compresses its contents. Files stored in this filesystem are compressed on the fly, and FUSE allows it to present a transparent interface between the compressed files and user applications. FuseCompress supports different compression methods: LZO, gzip, bzip2, and LZMA.

How to install fusecompress in CentOS 6?

Follow the steps below to install fusecompress in CentOS 6.

Install the prerequisite tools:

# yum groupinstall 'Development Tools'
# yum install boost boost-devel boost141-iostreams
# yum install fuse fuse-devel zlib-devel bzip2-devel lzo-devel

Download and install fusecompress using git:

# git clone git://github.com/tex/fusecompress.git
# cd fusecompress
# ./configure --with-z --with-bz2 --with-lzo2 --without-lzma --with-boost-serialization=boost_serialization --with-boost-iostreams=boost_iostreams --with-boost-program-options=boost_program_options --with-boost-file --with-boost-filesystem=boost_filesystem
# make
# make install
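
Once installed, the filesystem is mounted by pointing fusecompress at a backing directory and a mount point, roughly as below (a sketch only; the directory names are examples and the exact invocation can differ between builds, so check fusecompress --help):

# mkdir -p /data/compressed /mnt/compressed
# fusecompress /data/compressed /mnt/compressed
# fusermount -u /mnt/compressed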

How to install node.js in a shared cpanel hosting account

What is node.js?

Node.js is an event-driven, asynchronous I/O, server-side JavaScript environment based on the V8 engine.

Pre-Requirements

You don't need full root access to install node.js in a cPanel hosting account. However, you do need access to the following from your hosting provider:

1. Compiler Access
2. SSH Access
3. An open port
4. Automatic Process Killing Exclusion

You need to contact your provider first to ensure you have access to '1' & '2'. For 3 and 4, mention the port you want to use for your node.js app; the provider will then exclude that port and your cPanel username from the firewall.

Installation: Step 1

First, find out whether you have Python 2.6 or above and bzip2-devel installed on the server. You can do so using the following commands:

# which python
/usr/bin/python
# python -V
Python 2.6.6
# rpm -qa|grep bzip2-devel
bzip2-devel-1.0.5-7.el6_0.x86_64

Most up-to-date cPanel servers will have Python 2.6.6 or 2.6.7, so you should be fine as long as your provider isn't running some old version of cPanel. If you have Python 2.6, you don't need to recompile Python; the Python on a cPanel server is executable by the users. You might not see bzip2-devel installed. If it is not, your provider will need to install it using yum:

# yum install bzip2-devel

Step 2

Once the items from 'Step 1' are available, you can download the latest node.js source and compile it. While writing this tutorial, node.js is at version 0.10.24. You can download the latest node.js source from here:

http://nodejs.org/download/

# wget http://nodejs.org/dist/v0.10.24/node-v0.10.24.tar.gz
# tar -xvzf node-v*
# cd node*
# ./configure --prefix=$HOME
# make
# make install

Of all the commands above, the one to note is the 'prefix'. It should be $HOME if you are running from the user's shell. A root user can use the path to the cPanel user's home directory, like /home/cpanelusername, in place of $HOME to install node.js into that user's account.

Step 3

If you have installed node.js from the root account, change the ownership of /home/cpanelusername/bin, /home/cpanelusername/include and /home/cpanelusername/lib to the cPanel username:

# cd /home/cpanelusername
# chown -Rf cpanelusername:cpanelusername node* bin include lib

Now you can run a node.js service. A simple script to open a port could be the following:

# nano sock.js

Insert the following:

var net = require('net');
var server = net.createServer(function (socket) {
    // greet the client and echo back whatever it sends
    socket.write('Open Server\r\n');
    socket.pipe(socket);
});
server.listen(674, '203.20.20.20');
console.log('Server running at http://203.20.20.20:674/');

Now you can run the node server as follows:

# node sock.js
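
The node process above dies as soon as you close your SSH session. One simple way to keep it running in the background (a sketch; the log file name is an example, and screen or a cron @reboot entry would work equally well):

# nohup node sock.js > sock.log 2>&1 &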

 

‘ca-certificates’ update error

Since the last “ca-certificates” release for RHEL/CentOS 6 or Fedora 14, with a version starting with 2013, there has been a recurring error that prevents updating “ca-certificates” from the 2010 version to the latest 2013 one. Through yum, the error looks somewhat like the following:

Running Transaction
Updating : ca-certificates-2013.1.94-65.0.el6.noarch 1/2
Error unpacking rpm package ca-certificates-2013.1.94-65.0.el6.noarch
warning: /etc/pki/java/cacerts created as /etc/pki/java/cacerts.rpmnew
warning: /etc/pki/tls/certs/ca-bundle.crt created as /etc/pki/tls/certs/ca-bundle.crt.rpmnew
warning: /etc/pki/tls/certs/ca-bundle.trust.crt created as /etc/pki/tls/certs/ca-bundle.trust.crt.rpmnew
error: unpacking of archive failed on file /etc/ssl/certs: cpio: rename
Verifying : ca-certificates-2013.1.94-65.0.el6.noarch 1/2
ca-certificates-2010.63-3.el6_1.5.noarch was supposed to be removed but is not!
Verifying : ca-certificates-2010.63-3.el6_1.5.noarch 2/2

Failed:
ca-certificates.noarch 0:2010.63-3.el6_1.5 ca-certificates.noarch 0:2013.1.94-65.0.el6

The part of the error to note is the following:

error: unpacking of archive failed on file /etc/ssl/certs: cpio: rename

It looks like the new “ca-certificates” package uses a new structure. In old versions, /etc/ssl/certs is a directory which contains the old certificates. The new package replaces that directory with a symlink to /etc/pki/tls/certs/, and removing the old directory to create the symlink is the step that fails under yum.

To solve the error, run the following:

$ mv /etc/ssl/certs /etc/ssl/certs.back
$ ln -s /etc/pki/tls/certs /etc/ssl/certs
$ yum update ca-certificate*

This should solve the error.
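
You can confirm that /etc/ssl/certs is now a symlink pointing at /etc/pki/tls/certs:

$ ls -ld /etc/ssl/certs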

How to Solve Force File System Quota Check on Every Boot RHEL/CentOS

I had been seeing a recurring issue of a forced file system quota check on every boot after migrating to RHEL 6 or CentOS 6. I hadn't seen this issue before. I had been tackling it by renaming the quotacheck binary to something else on each boot. The quotacheck binary is located at:

$ ls /sbin/quotacheck

I couldn't get enough information on what was changed in RHEL 6 that triggered this behaviour, so I went to trace out the root cause starting from the Linux boot init scripts.

Continue reading “How to Solve Force File System Quota Check on Every Boot RHEL/CentOS”