device-mapper: cache: You have created a cache device with a lot of individual cache blocks

To be exact, the message looks like the following:

[172593.817178] device-mapper: cache: You have created a cache device with a lot of individual cache blocks (3276800)
[172593.817182] All these mappings can consume a lot of kernel memory, and take some time to read/write.
[172593.817185] Please consider increasing the cache block size to reduce the overall cache block count.

It usually appears because you have created a large cache pool while using a small chunk size.
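The number in the warning is simply the cache size divided by the chunk size. Assuming the 3276800 above came from a roughly 200GiB cache pool at the default 64KiB chunk size (the numbers fit exactly):

200GiB / 64KiB = 209715200KiB / 64KiB = 3276800 cache blocks

Bumping the chunk size to 256KiB would bring that down to 819200 blocks, a quarter of the mappings the kernel has to keep in memory.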

Here is what the lvmcache manual says about chunk size:

The size of data blocks managed by a cache pool can be specified with
the --chunksize option when the cache LV is created. The default
unit is KiB. The value must be a multiple of 32KiB between 32KiB and
1GiB.

Using a chunk size that is too large can result in wasteful use of
the cache, where small reads and writes can cause large sections of
an LV to be mapped into the cache. However, choosing a chunk size
that is too small can result in more overhead trying to manage the
numerous chunks that become mapped into the cache. Overhead can
include both excessive CPU time searching for chunks, and excessive
memory tracking chunks.

Basically, the chunk size determines the block size at which your cached data is stored. To find your current chunk size, run:

lvs -o+chunksize vgname/cachevolumename

It should return something like the following:

# lvs -o+chunksize vg0/newvz
  LV    VG  Attr       LSize Pool    Origin Data%  Meta% Move Log Cpy%Sync Convert Chunk
  newvz vg0 Cwi-aoC--- 1.50t [cache]        14.14  0.75           5.12             64.00k
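If you also want to see the cache policy and cache mode next to the chunk size, lvs can report those fields as well (the field names below may vary between lvm2 versions):

lvs -a -o name,size,chunksize,cache_policy,cache_mode vg0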

A good chunk size for a large cache device (200GB+) would be 256K. You can set the chunk size while setting up the device:

lvcreate --type cache-pool --chunksize 256K -L CacheSize -n CachePoolLV VG FastPVs
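If the cache is already attached and you are seeing the warning, one way out (a sketch, reusing the vg0/newvz volume from the output above; the pool name, size and the /dev/sdb PV are made up) is to drop the old cache pool and recreate it with a larger chunk size:

lvconvert --uncache vg0/newvz
lvcreate --type cache-pool --chunksize 256K -L 200G -n cachepool vg0 /dev/sdb
lvconvert --type cache --cachepool vg0/cachepool vg0/newvz

lvconvert --uncache flushes any dirty blocks back to the origin before deleting the cache pool, so the data stays intact; only the cached copies are lost and the cache has to warm up again.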

Backing up LVM Cache Volume?

I have been exploring what options we have for using an SSD cache with HDD-driven servers to achieve faster writes. There are both software and hardware solutions. The hardware solution comes down to CacheCade, which isn’t really costly at all (roughly $250 extra per license), though I was interested in exploring all the software solutions currently available on the market.

There are bcache, flashcache & lvm cache, which are the ones mostly used in production servers. I discarded bcache first because it requires you to format the disk for bcache, which fails my flexibility requirement for such a module. I have tried flashcache before, and I don’t want to run it in a production server any longer as the module is discontinued (it still works, don’t get me wrong). It seems lvm cache is the only one that is stable and will keep improving over time.

LVM cache does work great. With the smq cache policy, writeback cache mode & the deadline scheduler, you can reach 220MBps write speed with Intel SSDs in RAID 1, which is about what a RAID 1 Intel SSD array normally delivers. You can double the speed by putting a RAID 10 SSD array behind the cache. Although, after all the testing was done, I realized that lvm cache unfortunately doesn’t support snapshots. At least not yet, at the time I am writing this blog. Without snapshot support, the performance benchmark actually goes in vain.

Snapshots of cache type volume vg0/newvz is not supported.
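For reference, the policy, cache mode and IO scheduler mentioned above can be set roughly like this (a sketch on the vg0/newvz volume from earlier; /dev/sda stands in for the backing HDD):

lvchange --cachepolicy smq --cachemode writeback vg0/newvz
echo deadline > /sys/block/sda/queue/scheduler

The lvchange options assume a reasonably recent lvm2 (and, as far as I know, kernel 4.2 or newer for smq), and the scheduler change does not persist across reboots unless you put it in a udev rule or boot parameter.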

Hardware solutions are always useful as the backend setup is transparent to the OS, which allows us to use our own tools without worrying about the caching setup. CacheCade is probably the only solution available right now with full facilities for SSD caching in production servers.

Experience with Varnish!

When Mellowhost first launched her servers, all of them were using the “DSO” module for serving PHP. As I can remember, the most commonly used caching plugins were either eAccelerator or XCache. eAccelerator was preferable as cPanel EasyApache has it as an option and it can be compiled automatically while rebuilding Apache. As time passed, we had to choose suPHP instead of DSO due to many factors involved; I hope to write down at a later time why we had to move to suPHP. But that cut off the idea of using a dynamic cacher on the server: suPHP kills the PHP process after serving a request, which renders all the opcode cachers worthless. After dropping the dynamic cacher, the servers started showing noticeably higher load averages, although it was a fair trade of IO and CPU usage for security. I was searching for a cacher that would work with suPHP in the same way Litespeed (a paid web server) does.
