How to Disable Clamd & SpamAssassin in Virtualmin


My clamscan process is consuming a lot of CPU and memory on my VPS. I use Virtualmin as the control panel. How can I stop ClamAV and SpamAssassin from taking excessive CPU and memory?


If you are receiving an inbound spam attack or an email bounce attack on your VPS, and you use ClamAV and SpamAssassin to filter incoming mail, it is quite possible that you will run out of RAM and CPU. To solve the problem, you need to disable ClamAV and SpamAssassin, at least while you are dealing with the attack or until you have developed a way to handle it. To disable Clamd and SpamAssassin, you may follow these steps.

How to Disable Clamd and SpamAssassin

First, log in to Virtualmin on port 10000 and go to System Settings >> Features and Plugins

Now uncheck the ‘Spam filtering’ and ‘Virus filtering’ options and save

Now go to the Webmin tab >> System >> Bootup and Shutdown

Now select the services named ‘clam’ and ‘spamassassin.service’, then click the ‘Disable now and on boot’ option. Once saved, also make sure they say ‘No’ under Running. If not, select them again, go to the bottom of the page, and press ‘Stop’ to shut them down immediately.
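The same steps can also be done over SSH; this is a minimal sketch assuming a systemd-based distro and the unit names clamd and spamassassin, which may differ on your system:

```shell
# Unit names vary by distro; verify yours first with:
#   systemctl list-units --type=service | grep -Ei 'clam|spam'

# Stop the daemons immediately
systemctl stop clamd spamassassin

# Keep them disabled across reboots
systemctl disable clamd spamassassin

# Confirm neither is still running
systemctl is-active clamd spamassassin
```

This mirrors the ‘Disable now and on boot’ action in the Webmin interface.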

How To: Release Zimbra Email Quarantined by ClamAV as Heuristics.Encrypted.PDF

Zimbra Mail Server automatically quarantines emails that are flagged by the ClamAV antivirus scan when the mail is received. Instead of delivering the original email with its attachment to the recipient's inbox, it delivers a virus-detected notification with an error message like the following:

Virus (Heuristics.Encrypted.PDF) in mail to YOU

Virus Alert
Our content checker found
virus: Heuristics.Encrypted.PDF

by Zimbra

It actually means the original mail is now quarantined. Zimbra maintains a virus quarantine email account that is not normally visible in the ‘Manage Accounts’ list of the Zimbra Admin panel. You can find it by searching for ‘virus’ in the ‘Search’ box of the admin panel. In a quarantine situation, Zimbra pushes the mail to this quarantine account instead of the original recipient.

Now, to get the mail delivered to the original recipient, we first need to find the quarantine email account, get the message ID, and then inject the mail into the LMTP pipe, which bypasses any scanning. Here are the steps:

# First get to the zimbra user
$ su - zimbra

# Get the email account that is used to store virus detected mails
$ zmprov gcf zimbraAmavisQuarantineAccount
zimbraAmavisQuarantineAccount: [email protected]

# [email protected] is our quarantine email account; now we need to get its mailbox id
$ zmprov gmi [email protected]
mailboxId: 73
quotaUsed: 644183

# The mailbox id for the quarantine account is 73. Now go to the message storage for this id: /opt/zimbra/store/0/<mailboxId>/msg/0
$ cd /opt/zimbra/store/0/73/msg/0

# list the messages
$ ls *

These are your quarantined emails. Suppose the complaining user is ‘[email protected]’. To search for the emails addressed to this account, you can use the following:

$ grep -l [email protected] *

This should return all the emails that were quarantined for the above user.

Now the question is how we can get these emails delivered to the designated user while bypassing the antivirus/antispam tools. To do this, you need to inject the mail into the LMTP pipe. You can do this with the ‘zmlmtpinject’ command as follows:

$ zmlmtpinject -r [email protected] -s [email protected] 281-1216.msg

Remember to change [email protected] to the original recipient. [email protected] will be the newly rewritten sender for this delivery, and ‘281-1216.msg’ is the file name of the original email that you found with the grep command. You can inject only one email per command, so you will need to repeat this for each email.
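If many messages were quarantined for the same user, the grep and inject steps can be combined in one loop. This is a sketch assuming you are still inside the quarantine message directory; recipient@example.com and sender@example.com are placeholders for the real addresses:

```shell
# Re-inject every quarantined message addressed to the affected user.
# recipient@example.com / sender@example.com are placeholders -- substitute
# the real recipient and a valid sender address for your domain.
for msg in $(grep -l recipient@example.com *); do
    zmlmtpinject -r recipient@example.com -s sender@example.com "$msg"
done
```

Each file that grep matches is injected one at a time, exactly as the single-message command above does.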

Softlayer & Blocking Mail Transport!

I remember when I first entered the hosting business back in 2004, LayeredTech was an unbeatable datacenter in the market. They mostly competed with ThePlanet at that time, and both were market leaders among users who rented datacenter space on a monthly basis. Then Softlayer started rolling out automated systems like OS reinstall, IP addition, and port control, using shared VLANs and BIOS-level control on almost all servers through KVM. (Supermicro's IPMI was still fresh in the market, and SL started giving away IPMIView, a Java app developed by Supermicro, which I believe they still maintain, that had access to both the console and a fast tty; Dell's DRAC used to fill that role before.) After that, everything started falling apart for LT. LT gradually shifted to an ‘Enterprise Only’ focus, and I eventually stopped following them after 2008.

As Softlayer kept growing, which they did extremely fast, they merged with multiple companies (ThePlanet being the most notable and talked about) and became the largest consumer-oriented datacenter in the world, beating OVH. Since then, I have only seen Softlayer grow, even with the very high-end price range they maintain in the market.

Since they were acquired by IBM, there have been complaints that Softlayer is focusing more on enterprise customers. They have introduced several restrictions over the years. The most recent is blocking Mail Transport and traffic from US-sanctioned countries across the entire Softlayer network. (Remember that Softlayer is used by many as IP transit, which possibly means you will lose customers or visitors from a non-sanctioned country if their ISP, which may not be Softlayer itself, utilises Softlayer IP transit.)

Mellowhost, all the way back in 2004 (it had a different branding before; the ‘Mellowhost’ name came into operation in January 2007), started with 3 servers from LayeredTech. Over 2005-06, we moved to Softlayer as our primary datacenter. We expanded in Softlayer for 8 straight years before we realised that Softlayer doesn't really offer enough hardware options (I will possibly post in detail about what they are) to keep your web hosting technology current, which helps improve web server performance even for old clients.

Then we basically started focusing on many other providers and geographically spreading our options over the last 2 years. We have chosen providers that let us configure servers to our own specification. Not necessarily colocation, but if we want, we can purchase the hardware we want to use for our servers (like a Crucial MX200 instead of a Samsung Evo, an LSI controller with FastPath or CacheCade, or a premium 8-bay hot-swappable chassis that providers don't usually offer). We now run a fully cloud-like system where we can move our IPs from one piece of hardware to another whenever we want, with only a restart of the virtual network device. Our system also allows us to use DRBD for network mirroring at any point, for example when a client expects high traffic for a very specific period and wants to pay only for that.

Even though we are almost done moving away from Softlayer, we haven't completely left their premises yet. We still have two servers with them: one in Houston (the facility previously owned by ThePlanet, used for Mellowhost's Houston-based shared hosting) and the other in Dallas. The Dallas server wasn't a concern, as we have been using Mandrill for some time now to relay its mail, so if Softlayer blocks Mail Transport for that server, it won't be a problem at all. The problem was with the Houston server. It was indeed on my mind to switch this server to another provider, but to be honest, I have been a great fan of Softlayer over the years, and I have literally been with them since the start of that company, so I wasn't at all interested in leaving them completely for my customers' purposes.

Then again, it was impossible to justify the investment in a hosted smarthost like Mandrill or SendGrid for this server, as it sends a large number of emails per day on average. This server has been on board for the last 6 straight years, hosting a decent number of long-term clients, so you can guess the volume of email sent every day. This is basically why we deployed an MTA as a smarthost in our Psychz Dallas facility and started relaying mail from the Houston server over port 587 with TLS. This worked, to be honest, better than expected. We employed various types of spam protection on the relay server, as it has a completely separate CPU to process everything, most notably ASSP with MailScanner. We were able to reduce spam by a great amount over the last couple of weeks through the use of this remote Mail Transport. We will have to calculate how feasible it is to deploy this across all our other servers. The most important problem with this setup is SPF: each user's SPF TXT record must include both the relay server and the MTA. We made the addition using ‘sed’ for all the current users on the Softlayer server and notified the clients, but we later realised that some people use Cloudflare for DNS, and we had to find them and ask them to update manually. The process has a lot of pros and cons, but a survey will hopefully tell us how we can use this as an option for our other cPanel premises. While that's for the future, this system is essential right now for our Houston server, as local mail transport has not worked since 2nd February 2016, when Softlayer blocked outbound Mail Transport!
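For illustration, an SPF TXT record that authorises both the hosting server and the relay could look like the following. The domain, IP, and relay hostname here are all made-up placeholders, not our actual records:

```
example.com.  IN  TXT  "v=spf1 a mx ip4:203.0.113.10 include:relay.example.net ~all"
```

The `ip4:` mechanism covers the origin server and the `include:` pulls in the relay's own SPF policy, so mail sent via either path passes the check.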

If you are a Softlayer client going through the same pain of blocked mail transport, then you are in the same boat as we are, and probably want to use a relay like we did, through a channel more cost-effective than Mandrill or SendGrid.

How to track all outgoing mails in Exim

If you are a mail server administrator, possibly using Exim, one of the most widely used open source mail servers, you may need to monitor outgoing mail to track down a spammer. On shared web servers, you can use regular expressions on the mail logs to trace a spammer. But sometimes you might fail to find the spammer if the server has a huge number of users and many of them are actually spamming. In most cases, users' accounts are compromised and intruders use them to send out spam.

Sometimes a better approach is to store a copy of each mail sent through Exim and run regular expressions on the header details to track down the original spammer. Just for the record, storing email data may raise privacy concerns on a public server, and this should only be used to track the original spammer.

Exim can utilize 3 levels of mail filtering. I have used a system filter to deliver a copy of each sent mail to a local mailbox. A system filter applies to all accounts and users under Exim. In cPanel, you can set the Exim filter from cPanel >> Service Configuration >> Exim Configuration Manager >> Filter

From the command line, open /etc/exim.conf and find the line that starts with “system_filter”.

By default, cPanel uses a system filter located at “/etc/cpanel_exim_system_filter”. Just for the record, this file will always get reverted to the default on each cPanel update, so we need to make a customized filter for our own use. I did the following:

cp /etc/cpanel_exim_system_filter /etc/exim_system_filter_mellowhost

I made a copy of the original system filter as exim_system_filter_mellowhost. Now open the copy with your favorite text editor; mine is always nano.
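Exim also has to be told to use the new file. The system_filter line in /etc/exim.conf should point at the copy we just made, something like this:

```
system_filter = /etc/exim_system_filter_mellowhost
```

With that in place, cPanel updates that overwrite /etc/cpanel_exim_system_filter no longer touch our customizations.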

Now, you need to add a small rule inside this custom filter, using the filtering commands documented in the Exim filter specification. Here is the short rule I have used:

if first_delivery
and ("$h_from:" does not contain "[email protected]")
and not ("$h_X-Spam-Checker-Version:" begins "SpamAssassin")
then
unseen deliver "[email protected]"
endif

Just for the record, “” is an account I created on the same server. I don't own the domain, nor do I operate it; I have only used it to create a local inbox that collects the mails for me. You just need to make sure the domain is listed in /etc/localdomains. That file is Exim's local resolver: if a domain is listed there, Exim skips the DNS resolution check, which serves our purpose. You also need to create the email account itself; in my case, I created an individual inbox as “[email protected]”.

Now, here is the breakdown of the rule. “first_delivery” means the mail has just been sent; it hasn't been queued or relayed. “$h_from:” is an Exim variable holding the From address from the mail header. So I am checking whether the mail has just been dispatched by a mail user, and the 2nd line checks whether the From header “does not contain” (an Exim filter condition) our local delivery address. Without that line, the filter would fall into an infinite loop, forwarding copies of its own forwards back to itself.

The third condition is needed if you have SpamAssassin installed to scan your mail. SpamAssassin is a separate daemon that checks every first delivery, adds its spam score to the header, and resubmits the mail. That makes Exim see the mail as another “first delivery”. So if the SpamAssassin header is already present, we safely skip the mail, as we have already received a copy in our local inbox.

Now the result of all these clauses is very simple: it delivers a copy of the mail with “unseen deliver” (an Exim filter command; “unseen” makes the delivery non-significant, so normal delivery to the original recipient still takes place) to our local delivery inbox.

How can you trace down the spammer from an aggregated inbox?

It depends on how you would like to use regular expressions and tools like “grep, awk, cut” etc. Let me give you some basic insight.

First of all, all these mails are actually getting stored as text files under the local mail directory. In my case, it is under “/home/localdel/mail/”.

Now move your shell prompt to the “cur” folder (current mails in the mail directory). If you check the files, you should see each mail stored as an individual text file.

In my case, I usually sort the subjects first and check whether there is a spammer out there. You can do that using the following:

grep -i "Subject: " *

This will print every matching subject line along with the file name it appears in.

One of my favorite ways to track down a spammer is to check for duplicate subjects. You can do it as follows:

cat *|grep "Subject: "|cut -d":" -f2|sort|uniq -c|sort -n

cut is a tool that splits each line on a delimiter and prints the part you want. In my case, I am splitting the Subject lines on “:” and printing the 2nd field, which is the subject itself. Then we sort the results alphabetically with “sort”, count the unique values with “uniq -c”, and sort them again from low to high using “sort -n”.
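To see the pipeline in action, here is a tiny self-contained demo using three fake mail files (the directory and subjects are made up for illustration):

```shell
# Build three fake mail files, two sharing the same subject
mkdir -p /tmp/maildemo && cd /tmp/maildemo
printf 'Subject: Cheap pills\n\nbody\n' > 1.msg
printf 'Subject: Cheap pills\n\nbody\n' > 2.msg
printf 'Subject: Hello\n\nbody\n' > 3.msg

# Count duplicate subjects, least common first;
# the repeated subject floats to the bottom with its count
cat * | grep "Subject: " | cut -d":" -f2 | sort | uniq -c | sort -n
```

The subject repeated across the most files ends up on the last line, which is exactly the line to inspect first when hunting a spammer.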

This was just the basics of parsing mail to trace a spammer. The more you work with spam mail, the more you will understand; parsing skill comes with experience.

Happy Troubleshooting!

Form Spam

I have been monitoring this for a long time, and it is really becoming a headache now. It continuously consumes a lot of CPU and MySQL resources for no good reason. Form spam, like WordPress comment spam, directory registration/submission spam, and forum spam, consumes around 33% of the total daily CPU usage on one of Mellowhost's old servers, according to a calculation I did a couple of minutes ago. This consumption is pretty huge and grows as the server grows.

Partly due to auto-installer scripts like Softaculous and Fantastico, users tend to try every script and leave them unattended. This leaves form exploits open for botnet attackers. A WordPress blog without Akismet is vulnerable to form spam attacks, and most phpBB forums have no protection at all in their initial installation. These let auto-bot spammers post their links in unattended forums/blogs to gain backlinks.

This is not only harmful for the server in real time, but it also threatens the reputation of the shared IP. I have been working to develop server-wide protection to stop these spammers, but every attempt so far seems inadequate.

In many cases, it is hard to control or check manually, as resellers add users and users add many addon domains; it grows almost every day. Every user is advised not to keep an unattended blog/forum/script. It is always better to add a captcha to every sort of registration form. Nowadays, spammers have broken many captchas as well, so some people have started using solutions like random questions. In any case, there should be some verification at registration, and comments shouldn't be allowed without registration. You can also add the Akismet plugin, which is available for almost all blogs and forums; it drastically reduces the amount of spam and acts pretty quickly.

Protecting against form spam is not only good for the server, but also good for your site's reputation. If you are hosting an unattended blog script inside one of your main sites, it may take serious SEO reputation damage if the unattended blog is regularly spammed by malicious users. So check now: if you have any unattended script inside a folder, you should double-check and delete it if it is not essential, or protect it from auto botnets.