Backing up LVM Cache Volume?

I have been exploring what options we have for using an SSD cache in front of HDD-driven servers to get faster writes. There are both software and hardware solutions. The hardware solution comes down to CacheCade, which isn't really costly at all (roughly $250 extra per license), but I was interested in exploring all the software solutions currently available on the market.

There are bcache, flashcache & lvm cache, which are the ones mostly used in production servers. I discarded bcache first because it requires you to format the disk for bcache, which makes the module less flexible. I have tried flashcache before and no longer want to use it on a production server, as the module is discontinued (it still works, don't get me wrong). It seems lvm cache is the only one that is stable and likely to keep improving.

LVM cache does work great. With the smq cache policy, writeback cache mode & the deadline scheduler, you can reach 220MBps write speed with Intel SSDs in RAID 1 backing the cache, which is roughly what a RAID 1 Intel SSD array delivers natively. You can double that by backing the cache with a RAID 10 SSD array. However, after all the testing was done, I realized that lvm cache unfortunately doesn't support snapshots, at least not at the time of writing this blog. Without the snapshot facility, the performance benchmarks go in vain.

Snapshots of cache type volume vg0/newvz is not supported.
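
For reference, this is roughly how such a cached LV is assembled. The names below (vg0, newvz, /dev/md2 as the SSD RAID 1 array, /dev/sdb as one SSD member) are placeholders for illustration, and the SSD array is assumed to already be a PV inside vg0.

Create the cache data and metadata LVs on the SSD array (sizes are examples):

lvcreate -L 100G -n cachedata vg0 /dev/md2
lvcreate -L 1G -n cachemeta vg0 /dev/md2

Combine them into a cache pool and attach it to the origin LV in writeback mode:

lvconvert --type cache-pool --poolmetadata vg0/cachemeta vg0/cachedata
lvconvert --type cache --cachepool vg0/cachedata --cachemode writeback vg0/newvz

Select the smq policy and set the deadline scheduler on each SSD member:

lvchange --cachepolicy smq vg0/newvz
echo deadline > /sys/block/sdb/queue/scheduler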

Hardware solutions are always useful as the backend setup stays transparent to the OS, which allows us to use our own tools without worrying about the caching setup. CacheCade is probably the only solution available right now with all the facilities for SSD caching in production servers.

Check File System for Errors with Status/Progress Bar

A file system check can be tedious sometimes. You may want to see the progress of the fsck, which is not shown by default. To do that, add -C (capital C) to the fsck command.

fsck -C /dev/sda1

The full form of the argument (with a file descriptor number) is:

fsck -C0 /dev/sda1

It will also work without the number if you put -C before the other arguments, such as -f (force the file system check) and -y (answer yes to auto repair). A usable fsck command could be either of the following:

fsck -fy -C0 /dev/sda1

or

fsck -C -fy /dev/sda1

Please note, -c (small c) results in a read-only test. This test tries to read all the blocks on the disk and checks whether it can read them. It is done through a program called 'badblocks'. If you are running a badblocks test on a large disk, be prepared to spend a large amount of time on it.
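
For example, a read-only badblocks pass over the same partition would be:

fsck -c /dev/sda1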

How to create /var/cpanel/userdata files using /var/cpanel/users files

While reverting from a reverse proxy installation like nginx/varnish, you might end up with inconsistent /var/cpanel/userdata and /var/cpanel/users files. To be noted, /var/cpanel/userdata is used to create the httpd.conf file, while /var/cpanel/users is used to create the DNS zone files. If there is an inconsistency between the two, named and httpd will end up using two different IPs, which is undesirable. Since the reverse proxy plays with /var/cpanel/userdata, it is usually the userdata folder that contains the incorrect data. cPanel comes with a tool to reset the userdata directory, but that tool relies on a valid httpd.conf file (or a backup of one) to recreate it. Unfortunately, an httpd.conf that was itself generated from inconsistent userdata is not valid and cannot be used to work backward.
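
For context, the main IP is stored in slightly different formats in the two locations (the account name and IP below are made up). In /var/cpanel/users/exampleuser you will find a line like:

IP=192.0.2.10

while the files under /var/cpanel/userdata/exampleuser/ carry it as:

ip: 192.0.2.10

The script below simply copies the former over the latter for every account.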

This is when you might need to recreate the /var/cpanel/userdata directory using the valid /var/cpanel/users directory. We ended up in this situation and wrote the following bash script to do the job for us:

#!/bin/sh

# list the account names from /var/cpanel/users
ls /var/cpanel/users > users.txt

while read line; do

    # pick up the main IP from the account's users file
    UIP=$(grep "^IP=" /var/cpanel/users/$line | cut -d'=' -f2)

    # rewrite the ip: line in every userdata file for that account
    sed -i "/^ip: / c ip: $UIP" /var/cpanel/userdata/$line/*

    # drop the stale cache files
    rm -f /var/cpanel/userdata/$line/*.cache

done < users.txt

 

The script deletes the cache files as well. cPanel ships a script to rebuild the cache files, but for some reason it didn't work for us, so we manually deleted the cache with rm. Please remember to back up the /var/cpanel/userdata and /var/cpanel/users directories before running the script.
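
A quick way to take those backups (the destination names are just examples):

cp -a /var/cpanel/userdata /var/cpanel/userdata.bak
cp -a /var/cpanel/users /var/cpanel/users.bak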

The cPanel-provided cache rebuild command is:

/scripts/updateuserdatacache

Once the cache removal/rebuild is complete, you can rebuild your httpd.conf file:

/scripts/rebuildhttpd.conf

service httpd restart

Backend log: PHP Warning: File upload error – unable to create a temporary file in Unknown on line 0

This is a common PHP error that appears when the PHP handler is unable to detect the server's temporary folder automatically. A solution is to define PHP's upload_tmp_dir explicitly. You can do that by editing the php.ini file. Here is an example of how to do it.

To find the running php.ini file, use phpinfo:

root@chicago1 [~]# php -r 'phpinfo();' | grep php.ini
Configuration File (php.ini) Path => /opt/cpanel/ea-php56/root/etc
Loaded Configuration File => /opt/cpanel/ea-php56/root/etc/php.ini

As the output suggests, your php.ini file is located at the following path:

/opt/cpanel/ea-php56/root/etc/php.ini

Now, edit the file using your favorite editor:

nano /opt/cpanel/ea-php56/root/etc/php.ini

Find the location of upload_tmp_dir, which is usually commented out like the following:

;upload_tmp_dir =

Uncomment it by removing the semicolon from the front and define the temporary directory location:

upload_tmp_dir = /tmp

Restart Apache.
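
On a cPanel server, that is typically either of the following:

/scripts/restartsrv_httpd

or

service httpd restart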

There is another way to do it: go to WHM >> MultiPHP INI Editor >> Editor Mode, find upload_tmp_dir and edit it accordingly.

This would create a user.ini file to be included with the original php.ini file.

If none of the above solutions works for you, you are probably seeing the error from ModSecurity. Try the following steps to see if that resolves it.

If you are using ModSecurity's 'inspectFile', you may face an error like the following on a cPanel server:

ModSecurity: Multipart parsing error: Multipart: Failed to create file: /root/tmp/20170526-122120-WSfJYO2KhTvEz5johZF8UQAAAEw-file-9mmG15

This error appears because mod_security fails to detect the tmp folder automatically. To fix the problem, all you need to do is add the following line to your modsec_user.conf file on cPanel servers:

SecTmpDir /tmp

This file is available under:

/etc/apache2/conf.d/modsec
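
From the shell, that amounts to appending the directive to that file and restarting Apache:

echo 'SecTmpDir /tmp' >> /etc/apache2/conf.d/modsec/modsec_user.conf
service httpd restart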

Alternatively, you may add the line from WHM >> ModSecurity Tools >> Add Rules.

Why does Your New Site Take Ages to Load?

I have been trying to track down a couple of website slowdown reports lately. There is an interesting change in slowdown behaviour these days in web applications. From a conventional standpoint, people firmly believe that their static content is not going to affect the performance of their websites, apart from images being heavy.
 
In reality, they are ignoring the fact that they are using jQuery plugins of many kinds from multiple developers. Hence the cumulative number & size of JS files is much larger these days compared with when all the plugins came from a single developer. Once the number of static files per page goes beyond 100, serving them from a cookied domain can carry a serious performance penalty. The geographic distance to these small files, and fetching them all from a single origin, can also increase load time dramatically. There is undoubtedly a large market for CDNs due to this nature of web application development.
 
I have seen that people these days are more aware of handling large data wisely than before. If you are using a cloud from any provider, you are possibly using one or more E5 cores (Mellowhost uses only E5 nodes at the moment), which usually come with access to a 16/24/32MB cache. Static file handling is going to matter more for performance on these types of resources than your database, as threading is more of a concern than single-process handling on these virtualised resources.

How to change WHM reseller password!

After all these years, it never came to my mind that when somebody purchases a reseller, they usually do not change their WHM password for a long time. They keep it 'as is', as generated by WHMCS when the reseller package was purchased. The most interesting fact is that they don't change it because they fail to find an option to change it in WHM.

Unfortunately, WHM doesn't come with a distinct option saying 'Change WHM Password'. That makes a percentage of resellers believe they cannot change their WHM password. In recent times, while investigating a couple of reseller hacks, I could determine that one of the primary reasons for password leakage is not changing the WHM password for a long period of time and keeping it 'saved' in the browser. At a certain point, when the browser gets exposed to a hacker, the user loses control over their WHM account.

Now the question comes: how do you change a WHM password? Your WHM username is basically a cPanel username; it is simply granted the ability to own multiple cPanel accounts, and that is the only difference. To change the WHM password, simply log in to cPanel with the WHM details and use the 'Change Password' option. So if your WHM URL is http://something.com/whm with username 'something' and password 'anything', you log in with the same details at http://something.com/cpanel instead of whm. Once logged in, just visit Change Password to change your WHM/reseller password.

It is highly recommended for all users to change the password as soon as they receive their reseller welcome email. You should also change the reseller password regularly to limit the damage from any unnoticed leakage. It is also advised not to save the WHM password in your browser. Please keep in mind that a leaked password exposes the cPanel accounts under you and poses a great threat to their websites & domain reputation, through no fault of their own.

How to install fusecompress in CentOS 6

What is fusecompress?

FuseCompress provides a mountable Linux filesystem which transparently compresses its content. Files stored in this filesystem are compressed on the fly, and FUSE makes it possible to present a transparent interface between the compressed files and user applications. FuseCompress supports different compression methods: LZO, gzip, bzip2, and LZMA.

How to install fusecompress in CentOS 6?

Follow the steps below to install fusecompress on CentOS 6.

Install the prerequisite tools:

# yum groupinstall 'Development Tools'
# yum install boost boost-devel boost141-iostreams
# yum install fuse fuse-devel zlib-devel bzip2-devel lzo-devel

Download and install fusecompress using git:

# git clone git://github.com/tex/fusecompress.git
# cd fusecompress
# ./configure --with-z --with-bz2 --with-lzo2 --without-lzma --with-boost-serialization=boost_serialization --with-boost-iostreams=boost_iostreams --with-boost-program-options=boost_program_options --with-boost-file --with-boost-filesystem=boost_filesystem
# make
# make install
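
Once installed, basic usage is to point fusecompress at a backing directory and a mount point; anything written through the mount point is stored compressed in the backing directory. The paths below are only examples, and the option syntax (for instance, for choosing LZO over gzip) can differ between versions, so check fusecompress -h on your build:

# mkdir -p /data/compressed.store /mnt/compressed
# fusecompress /data/compressed.store /mnt/compressed

To unmount when done:

# fusermount -u /mnt/compressed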

How to Download a Backup of Gmail?

What is Gmail?

Gmail is a free, advertising-supported email service provided by Google. Users may access Gmail as secure webmail, as well as via the POP3 or IMAP4 protocols.

How to Download/Take Backup of Full Gmail Account?

Google offers a service called "Takeout". It allows you to back up any Google product you are using. For Gmail, Google lets you take an MBOX copy of your mail archive. You need to generate it from https://www.google.com/settings/takeout/.

Once the generation is done, you can download the backup and load it into any MBOX-capable mail client.