How To: Send Email Alert on Different HAProxy Status or Based on HAProxy Stats

HAProxy is a great, simple load balancing tool written in C. It is extremely efficient as a software load balancer and highly configurable as well. However, HAProxy lacks programmable automated monitoring tools. It has a 'mailers' directive, but that is only supported from 1.8 onwards. The default CentOS 7 repo comes with HAProxy 1.5, which has no mailer alert support either. Even with 1.8, the feature doesn't offer many configuration options, nor does it make the tool programmable.

That is where I thought of triggering code based on HAProxy stats. This can be done in many ways; in my case, I did it with per-minute cron jobs. If you want it to react faster, say every 5 seconds, you would have to run it as a daemon, which isn't rocket science and should be easy and short. The whole idea is to help you understand how to create programmable third-party tools by fetching data from the HAProxy socket and triggering monitors.

HAProxy Stats through Unix Socket

First, we need to enable the HAProxy stats that are available through a socket. To turn on stats through a unix socket, put the following line in the global section of your haproxy.cfg file:

stats socket /var/lib/haproxy/stats

An example of the global settings section would look like the following:

global
    log         127.0.0.1 local2     #Log configuration

    chroot      /var/lib/haproxy
    pidfile     /var/run/haproxy.pid
    maxconn     40000
    user        haproxy             #Haproxy running under user and group "haproxy"
    group       haproxy
    daemon

    # turn on stats unix socket
    stats socket /var/lib/haproxy/stats
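
As a side note, the socket can also accept admin commands (for example, disabling a server) if you allow it. A hedged sketch, in case you later want that; 'level admin' grants full access, so use with care:

    # stats socket with write access for admin commands
    stats socket /var/lib/haproxy/stats mode 660 level admin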

Note how the stats socket line is placed, so you can fit it into your own cfg file.

Once this is done, you can restart HAProxy to start serving stats through the socket. The output is basically a CSV version of the HAProxy stats page, so the values will arrive in comma-separated format. To understand the HAProxy stats page and how to unpack its values, you can visit the following:

https://www.haproxy.com/blog/exploring-the-haproxy-stats-page/

Now, how do you read the unix socket from bash? There is a tool called 'socat' that can read data from a unix socket. 'socat' means 'socket cat'; you may read more details about it here:

https://linux.die.net/man/1/socat

If socat is not available in your CentOS yum repos, you may get it from epel-release.

yum install epel-release
yum install socat

Once 'socat' is available on your system, you can use it to redirect the I/O and show the output as follows:

echo "show stat" | socat unix-connect:/var/lib/haproxy/stats stdio

Now, as you can see, you can retrieve the whole HAProxy stats output in CSV format, so you may easily manipulate the data using a shell script. I have created a basic shell script that gets the status of the HAProxy backends and sends an email alert using 'ssmtp'. Remember, ssmtp is a highly configurable mail tool; you can customize SMTP authentication with it as well. You may use any other tool, like sendmail, or curl against any email API like SendGrid; the possibilities are endless here. Remember, the data is available on the socket as soon as HAProxy generates the event, so this can be as efficient as HAProxy's built-in features like 'mailers'.
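
Since the script below picks values by column number, a quick way to map column numbers to field names is to number the CSV header line (the first line of the 'show stat' output lists the field names, starting with pxname and svname):

echo "show stat" | socat unix-connect:/var/lib/haproxy/stats stdio | head -1 | tr ',' '\n' | nl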

#!/bin/bash

# Pull the stats rows for the 'app-main' backend into a temp file
STAT_FILE=/root/haproxy_stat.txt
rm -f "$STAT_FILE"
echo "show stat" | socat unix-connect:/var/lib/haproxy/stats stdio | grep app-main > "$STAT_FILE"

SEND_EMAIL=0
MESSAGE=""

while IFS= read -r line
do
    APP=$(echo "$line" | cut -d"," -f2)       # svname: backend server name
    STAT=$(echo "$line" | cut -d"," -f18)     # status: UP/DOWN etc.
    SESSION=$(echo "$line" | cut -d"," -f34)  # rate: current session rate
    if [ "$STAT" != "UP" ]; then
        SEND_EMAIL=1
        MESSAGE+="$APP $STAT $SESSION
"
    fi
done < "$STAT_FILE"

if [ "$SEND_EMAIL" -eq 1 ]; then
    # repeat the line below for each additional recipient
    echo -e "Subject: Haproxy Instance Down \n\n$MESSAGE" | sudo ssmtp -vvv [email protected]
fi

The data I am interested in are the status of the backend, the name of the backend, and its session rate. So if the load balancer sees any backend down, this triggers the email delivery. You can use the same approach to catch anything in HAProxy, like a frontend attack, or to tune the delivery of your load balancer. As you can now pull data directly from HAProxy into your own 'programming' console, you can program it whatever way you want. Hope this helps somebody! For any help, shoot a comment!
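
As mentioned earlier, I run this from a per-minute cron. Assuming the script above is saved as /root/haproxy_alert.sh (a name I am making up here) and made executable, the crontab entry would be:

* * * * * /root/haproxy_alert.sh >/dev/null 2>&1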

How To: Force HTTPS in HAProxy

In HAProxy, the frontend has to listen on both ports 80 and 443 for HTTP and HTTPS. But what if we want to force-redirect all requests to HTTPS? HAProxy doesn't support anything like htaccess/mod_rewrite, so we have to do it using HAProxy directives and ACLs.

HAProxy has a fetch method called 'ssl_fc'. It returns true if the connection to the HAProxy frontend was made over SSL/TLS. We can use it to force-redirect requests to HTTPS as follows:

#redirect to HTTPS if ssl_fc is false / off.
redirect scheme https code 301 if !{ ssl_fc }

You can add this line to the section where you have defined the frontend for port 80.
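
For context, here is a minimal sketch of a combined frontend; the frontend name, certificate path, and backend name are illustrative:

frontend www
    bind *:80
    bind *:443 ssl crt /etc/haproxy/certs/yourdomain.pem
    # plain-HTTP connections get a 301 to the HTTPS scheme
    redirect scheme https code 301 if !{ ssl_fc }
    default_backend app-main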

Now, you can also redirect requests to HTTPS based on the requested domain, as follows:

redirect scheme https code 301 if { hdr(Host) -i www.yourdomain.com } !{ ssl_fc }

Replace www.yourdomain.com with your expected domain name.

How to Add Openlitespeed Server to Haproxy – Avoid 503 Haproxy Error

For the past month, OpenLiteSpeed has been my favorite web server. LiteSpeed has always outperformed the other web servers, including Nginx, in all of my production environments. I have recently switched to OLS, the open-source version of LiteSpeed with some limited features. I get LiteSpeed-grade performance with nothing to pay. How much better could it be?

OLS does come with some weird problems, and as OLS is less widely used, finding solutions for such cases can be difficult. I faced one such issue yesterday.

I added an OLS-based server to my HAProxy cluster, but HAProxy could not see the OLS server as working. When I tried to access the web app hosted on the OLS server using local IP masking, I could see the website without a problem. That means OLS was serving the domain-to-IP mapping fine, but it was failing to respond when HAProxy requested it by IP address.

The problem is that OLS is not configured to respond to 'default' requests on '127.0.0.1', 'localhost', or the server's main IP. To find this out, I enabled the 'High' debug mode of OLS. To do so, first visit the OLS WebAdmin console, which can be accessed at https://IP:7080

After login, go to Server Configuration >> Log >> Edit Server Log >> Set ‘Debug Level’ to High and Save

Set high debug level in openlitespeed

Once saving is done, you may gracefully restart OLS.

Gracefully restart Openlitespeed

Once this is done, you can monitor the error.log file, usually located under /usr/local/lsws/logs. Now tail the output of error.log while HAProxy sends requests:

tail -f /usr/local/lsws/logs/error.log

You will see that OLS has returned a 404 error for the 'localhost/' request. That means HAProxy is hitting the server with 'localhost' as the host, and the server should return a 200 response to show HAProxy it is in business.

What we need to do is make OLS answer requests for the bare IP and localhost with a 200 (serving the main site) instead of a 404 error. To do this, go to the OLS web console again >> Listeners.

You will see you have two listeners: one Default listener for plain HTTP and one for HTTPS/SSL. In my case, HAProxy was talking to the origin without SSL, i.e., on port 80, so I selected Default.

Open 80 Listener View in Openlitespeed

In the listener's virtual host mappings list, find your virtual host and click its 'Edit' link.

Virtualhost Edit Openlitespeed

Now you can map the virtual host. You will see your primary domain as the 'Virtual Host', which can't be changed here. But what you can do is map this virtual host to several domains. The trick is to add your server's IP and localhost to the 'Domains' list, comma-separated, as follows:

localhost mapping to OLS

Once this is saved, restart OLS, and HAProxy should now be able to read responses and start forwarding requests to your OLS server.
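
To confirm from the HAProxy machine that the origin now answers on the bare IP, a quick check (the IP placeholder is illustrative) is:

curl -I http://SERVER_IP/

You should now get a 200 response instead of the 404.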

How to Make Cloudflare Work with HAProxy for TLS Termination

Remember:
This is part of my dirty hack series. It is not the only way to achieve what we want, and it should only be used when you can trust the connections between your HAProxy and the origin servers. Otherwise, you should not use this technique.

One common problem with using HAProxy behind Cloudflare is that the SSL Cloudflare gives us is terminated at the HAProxy L7 load balancer. Cloudflare then cannot verify the origin server and drops the connection, and your HAProxy setup stops working. What do you do in that case? There are two ways to handle it.

The first: Cloudflare gives you an origin certificate that you can install on HAProxy. I won't dig deep into that in this blog post.

But if you can trust the connections between HAProxy and the backend origin servers, as well as the connections between Cloudflare and HAProxy, you can choose the second. In this case, Cloudflare encrypts only the connections between the visitors and Cloudflare; it won't matter what you are doing behind Cloudflare. This option is called 'Flexible', and you can select it from your Cloudflare dashboard >> SSL/TLS tab.

Fix TLS Termination by HAProxy with Flexible Encryption Mode of Cloudflare

Once you set this to Flexible, it should start working right away. Remember, this is not the best way to do this, only the quickest, and only if load balancing is more important to you than data integrity.

Lost connection after starttls from Hostname (IP) – Virtualmin – Postfix

Problem Definition:

I have some VPS clients using Virtualmin as their LAMP/LEMP stack. After some recent updates to Virtualmin, they started seeing a Postfix error. The error is the following:

lost connection after STARTTLS from unknown[0.0.0.0]

Virtualmin used to configure Postfix to allow non-TLS connections to port 587, which it recently stopped doing. Now, if you connect to port 587, you have to use TLS, no matter what. My clients hadn't bothered with TLS/SSL before, which caused the error.

Virtualmin comes with Let's Encrypt support, which makes it easy to solve the TLS problem.

Solution Summary:

Here are the basics of the solution: first, have Virtualmin install a Let's Encrypt SSL certificate for the domain you want to use for SMTP. Virtualmin will primarily install this for Apache. Once done, copy the same certificate to Postfix; Virtualmin lets you do that with a single click.

Detailed Steps:

First, log in to Virtualmin on port 10000, then select the domain you use for SMTP. Next, go to Edit Virtual Server and expand the 'Enabled Features' option. Here, check the option that says 'Apache SSL Website Enabled?'

Check Apache SSL Website Enabled

Next, go to Server Configuration >> SSL Certificate. You will see two tabs, 'Current Certificate' and 'Let's Encrypt'; both are important. First go to Let's Encrypt:

Let’s Encrypt Virtualmin

In the Let's Encrypt tab, select 'Domain names listed here' and enter only the domains that have valid A records pointing at this server; otherwise, remember, Let's Encrypt won't proceed past a single failure, unlike cPanel or CyberPanel.

Let’s Encrypt Virtualmin Add Domains

Once done, request the certificate. After the certificate installation finishes, go back to the 'Current Certificate' tab. At the bottom of the tab there are a couple of 'Copy to' service options. Here you should see the option that says 'Copy to Postfix'. Use it to copy the certificate to Postfix so it is used during TLS/SSL transactions.

Copy SSL to Services (Postfix) Virtualmin.

In my case, I have already copied the SSL to Postfix, which is why the 'Copy to Postfix' option is not showing; it should appear just above the ProFTPD one.

Once done, recheck, and SMTP should now work with TLS on port 587.
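
To verify, you can test the STARTTLS handshake from a shell (the hostname is illustrative):

openssl s_client -starttls smtp -connect mail.yourdomain.com:587

If the certificate chain prints without errors, TLS on 587 is working.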

How to Use Sticky Session for CSRF submission on Highly Scalable Cloud App in Haproxy

HINT: If you are an Nginx fan and have used it at scale, you must have done this using ip_hash (Nginx documentation). It serves the same purpose in HAProxy. The differences and benefits of using HAProxy over Nginx for L7 proxying in a highly scalable and reliable cloud app are a discussion for another day.

Case Discussion:

Suppose you have a cloud app that is load balanced and scaled across multiple servers using HAProxy, for example:

101.101.101.101
202.202.202.202
303.303.303.303

Now, if your app has a submission form, for example a poll submission for your users, then there is a problem with this HAProxy setup.

Let's say a user A requests the app and gets a response from the server 101.101.101.101; the CSRF token he receives in his browser for the poll submission belongs to the app hosted on 101.101.101.101. But when he presses the submit button, HAProxy puts him on the 202.202.202.202 app, which instantly rejects the token because the session is not registered with that app. For such cases, we need to maintain a 'sticky' session based on the cookie set by the right server. That means if the cookie is set by 101.101.101.101, HAProxy should obey it and keep giving the user 101.101.101.101 until the cookie or the session is reset or regenerated.

How To Do That:

What we need to do is let HAProxy write a server ID into a cookie and make the 'server' directive follow that cookie. Please remember, there are a couple of other ways to achieve this. One is called 'IP affinity', where you make the session sticky based on the user's IP; a sketch of that follows below. Another is based on the PHP session value; setting stickiness based on the PHP session should also work. I preferred the cookie-based sticky session, purely by random choice.
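
As an aside, if you wanted the IP affinity route instead, it is a one-line change in the backend; a minimal sketch:

backend app-main
balance source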

So, to write the server ID into the cookie, you need to add the following to your HAProxy 'backend' section:

backend app-main
balance roundrobin
cookie SERVERID insert indirect nocache

In the cookie line, 'SERVERID' is simply the name of the cookie HAProxy will insert; 'insert indirect nocache' tells HAProxy to add the cookie itself, hide it from the backend, and avoid caching the response that sets it. Now, all you need to do is configure your balanced servers to follow the cookie, like the following:

backend app-main
balance roundrobin
cookie SERVERID insert indirect nocache
server nginx1 101.101.101.101 cookie S1
server nginx2 202.202.202.202 cookie S2
server nginx3 303.303.303.303 cookie S3

S1, S2, and S3 are just three different cookie values for the specific servers. After the above is done, restart HAProxy and you will see it maintains stickiness based on the session.
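
A quick way to confirm the stickiness is to look for the inserted cookie in the response headers (the URL is illustrative):

curl -sI http://your-load-balancer/ | grep -i set-cookie

You should see a SERVERID cookie carrying S1, S2, or S3.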

Just to test: if you are using Laravel, try regenerating the session using the session() helper as follows:

session()->regenerate();   // regenerates the session ID (and the CSRF token with it)
$token = csrf_token();     // fetch the fresh token

You should see the content loading from a different web server when the session regenerates, but it will persist on one server as long as the regenerate method is not called.

How to Skip WHM Initial Setup Wizard When Stuck After Upcp

If you have recently run upcp and the WHM initial setup wizard is stuck at a URL like the following:

https://yourhostname.com:2087/cpsess*****/scripts/initial_setup_wizard1

And you can not get past it, here is the easy way out. Each setup wizard page has a skip button, and that button goes to initial_setup_wizard1_do, so simply adding _do at the end of your initial_setup_wizard1 URL should do the job, like the following:

https://yourhostname.com:2087/cpsess*****/scripts/initial_setup_wizard1_do/

This should take you to the WHM home, letting you skip saving the new WHM feature settings, and it will not ask for the initial setup again.

How to Do Full Page Caching in Laravel / How to Cache Views in Laravel

Most developers use Laravel Cache for caching database query results. Although that is efficient, the ultimate caching performance for web apps is achieved through FPC, or Full Page Caching. Laravel doesn't give any hint of how to do this, nor describe it in the documentation, which is why this article exists.

What is Full Page Caching?

Technically, full page caching means caching the entire HTML response from an app. In FPC, it is generally accepted to use the route/view as the cache key, concatenated or mixed with the VERB from the request header.

When a user requests a route, we usually invoke a controller behind that route to process and prepare data before sending it to a view for the response. But what if the data hasn't changed since the last request? That technically means the response hasn't changed, right? Which means you can cache the full response and skip the whole controller processing, even skip rendering the view, and simply put the cached data in the response. Theoretically, this is the best form of caching for 'web based' solutions like ecommerce sites, newspapers, blogs, etc. This technique is known as FPC, or Full Page Caching.

Laravel Cache

Laravel is best known for its documentation. However, the Laravel cache documentation only covers caching database queries, not views. To understand how to do FPC in Laravel, let's first look at how our views are usually formed.

class NewsController extends Controller {
    public function index() {
        $news = News::all();
        return view('news.index')->with('news', $news);
    }
}

Here the view() helper returns a Laravel View instance. It doesn't return the HTML or render anything. So who does? Laravel does it for you under the hood and passes the result to the Response class. Now, to cache views, you have to render the HTML yourself and save it to the cache. There are basically two ways of doing this.

The easiest way is to use the render() method available on the View class, which returns the HTML of the created View instance. Here is how you might convert the above controller method to return from cache:

use Illuminate\Support\Facades\Cache;

class NewsController extends Controller {
    public function index() {
        if ( Cache::has('news_index') ) {
            return Cache::get('news_index');
        }
        $news = News::all();
        // render() turns the View instance into an HTML string we can cache
        $cachedData = view('news.index')->with('news', $news)->render();
        Cache::put('news_index', $cachedData);
        return $cachedData;
    }
}

That should be it. Simple, huh?
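
As a side note, the same logic can be written more tersely with Cache::remember, which returns the cached value or stores the closure's result on a miss. A sketch (the 600-second TTL is an arbitrary choice of mine; the TTL argument is in seconds on recent Laravel versions):

    public function index() {
        // return the cached HTML, or render and cache it for 600 seconds
        return Cache::remember('news_index', 600, function () {
            return view('news.index')->with('news', News::all())->render();
        });
    }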

Here is more! I looked at the Laravel documentation a bit further and found another way to do the above, using the Response class. The view() method returns a View instance, while a Response instance can return rendered HTML based on a view. Here is how:

Response::view('news.index', ['news' => $news]);

This also means our idea that Laravel does the rendering under the hood was slightly off: view() hands a View instance to a Response instance (which it has to), and the Response puts the rendered HTML in the final output. We can now cache that output and serve future requests without ever entering the controller's processing!

How to Add a Zimbra User to Allow Distribution List Creation

NB: This is going to be another documentation purpose post.

A distribution list allows you to create a mailing list. For example, if you have a CRM with 100 members, you can add them to a list so that you do not need to email each of them individually when required; instead, you keep a list with an address like [email protected] and let somebody send to that address, which ensures all the CRM members get the email. In Zimbra, this feature is called a 'Distribution List'. By default, only the admin user is permitted to create one, but if you want to permit a regular user to create distribution lists, you need to use the 'zmprov' command. Here is the reference:

# su - zimbra
# zmprov grantRight domain yourzimbradomain.com usr [email protected] createDistList

Fairly simple!

How To Renew & Deploy Let’s Encrypt SSL on Zimbra Server – 2020

Note: This does not seem to work in 2021. I have written another article on how to do this now: How to manually install/renew let's encrypt ssl in Zimbra

OK, there is a reason to put 2020 in the title: the process has changed over time. At the moment, I manage a Zimbra server with multiple domains, and it won't deploy the 'other' domains if they are not specified. The process is fairly simple, but I am keeping this for documentation purposes so that I don't miss it next time.

Renewing the certificate for the attached domains using certbot is fairly simple; just do:

# certbot renew

Once done, you want to use the pre-hook and deploy-hook to do the patching and deploying with certbot_zimbra.sh as follows:

# certbot_zimbra.sh -p
# certbot_zimbra.sh -r -d 'your_domain'

Update: certbot_zimbra no longer accepts this. '-n' used to be taken as new and '-r' for replacing; now '-r' is removed. Instead, you can use '-e' to specify additional domains. So the commands for replacement and deployment become as simple as:

# certbot_zimbra.sh -p
# certbot_zimbra.sh -d -e 'mail.yourdomain.com'
# certbot_zimbra.sh -d -e 'mailapp.yourdomain.com'


… and so on. At the moment, I couldn't find a way to tell certbot_zimbra to take a list of domains instead of one, but this is probably possible by hacking on the script.
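
For unattended renewals, certbot's --deploy-hook can chain the Zimbra deployment after a successful renew. A sketch, assuming the '-d -e' flags above match your certbot_zimbra version; the script path and domain are illustrative:

# run nightly at 03:00; the deploy-hook only fires when a cert was actually renewed
0 3 * * * certbot renew --quiet --deploy-hook "/usr/local/sbin/certbot_zimbra.sh -d -e 'mail.yourdomain.com'"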