How to Install Let’s Encrypt in cPanel

Let’s Encrypt is a popular way to get free SSL certificates for your website. cPanel ships with a free Sectigo SSL service that works through a request-and-queue system. You might, however, want your SSL issued immediately without the queue-based approach, and that is why you would prefer Let’s Encrypt.

There are two ways you may install Let’s Encrypt in cPanel.

1. Using the cPanel Plugin

The first option is to use the plugin created by cPanel. Log in to your server as root:

ssh root@server_ip

Then run the following to install Let’s Encrypt in your cPanel system:

/usr/local/cpanel/scripts/install_lets_encrypt_autossl_provider

It might take a couple of minutes; after that, Let’s Encrypt should be installed as a provider in AutoSSL.

Now, go to WHM >> Manage AutoSSL and select Let’s Encrypt as the provider instead of the default cPanel (Sectigo) one. You need to accept the agreement terms under the Let’s Encrypt selection, and you may create the Let’s Encrypt account from the same screen.

Once done, your new SSL certificates will be issued by Let’s Encrypt through the cPanel AutoSSL plugin.
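
If you do not want to wait for the next scheduled AutoSSL run, recent cPanel versions ship a script to trigger the check manually. The path and flag below are based on cPanel’s documentation, so verify them on your version; replace ‘exampleuser’ with a real cPanel username:

/usr/local/cpanel/bin/autossl_check --user=exampleuser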

2. Using FleetSSL

The second option is FleetSSL, a third-party tool that existed before cPanel provided a plugin for Let’s Encrypt. One key benefit of FleetSSL is that it lets cPanel end users control issuing and renewing SSL certificates from cPanel themselves. One key drawback is that it is not free of charge; it comes with a one-time fee of $30. Most hosting providers would not mind this, as it is a nice addition to the end-user feature set from a hosting provider’s point of view.

You may check for details here:

https://letsencrypt-for-cpanel.com/

Once you have completed installing Let’s Encrypt, you may use it for other cPanel services as well, like webmail, cPanel, WHM, calendars, and the MTA services.

How to Install Odoo 13 on CentOS 7

Odoo is currently one of the most popular business tools. It has a community edition that allows managing an ERP at very low cost. Odoo was previously known as OpenERP. Odoo needs to be installed on a dedicated server or VPS. Odoo 13 came out in October 2019, and Odoo 14 has not yet been released for production. This is a straightforward how-to on installing the latest Odoo 13 on CentOS 7.

Log in to your system and update

The first step is to log in to your system and then update it using yum.

ssh root@server_ip

You may check the CentOS version from the redhat release file using the following:

cat /etc/redhat-release

It should show you something like the following:

CentOS Linux release 7.8.2003 (Core)

Now, you may try updating the system with yum

yum update -y

Once done, install the EPEL repository, as we need it to satisfy a couple of dependencies:

yum install epel-release

Install Python 3.6 packages and Odoo dependencies

We need at least Python 3.6 to run Odoo 13. Odoo 12 supported Python 3.5; unfortunately, Odoo 13 doesn’t. We will use the Software Collections (SCL) repository to install and use Python 3.6. To find the available Python versions in SCL, you may check the following:

SCL Repository for Python

Now, to install Python 3.6 using SCL, we first need to install the SCL repository for CentOS:

yum install centos-release-scl

Once the SCL repository is installed, you may install Python 3.6 using the following command:

yum install rh-python36
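
To verify that the collection installed correctly, you can run the Python binary through scl; it should print Python 3.6.x:

scl enable rh-python36 -- python3 --version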

Once Python is installed, we will install several tools and packages for the Odoo dependencies with the following command:

yum install git gcc nano wget nodejs-less libxslt-devel bzip2-devel openldap-devel libjpeg-devel freetype-devel

Create Odoo User

We now need to create a system user and group for Odoo and set its home directory to /opt/odoo:

useradd -m -U -r -d /opt/odoo -s /bin/bash odoo

You may use any username here, but remember to use the same username for PostgreSQL as well.

Install PostgreSQL

Unfortunately, the CentOS base repository comes with PostgreSQL 9.2, but we want PostgreSQL 9.6 for our Odoo installation. You may check the PostgreSQL packages available for CentOS 7 using the following command:

yum list postgresql*

As CentOS 7 does not provide PostgreSQL 9.6 by default, we will use the official PostgreSQL repository to download and install version 9.6.

First, we install the Postgres Yum Repository using the following command:

yum install https://download.postgresql.org/pub/repos/yum/9.6/redhat/rhel-7-x86_64/pgdg-redhat-repo-latest.noarch.rpm

Now, you may install PostgreSQL 9.6 and related required packages using the following command:

yum install postgresql96 postgresql96-server postgresql96-contrib postgresql96-libs

Now, we need to initialize the postgres database and start it. You may do that using the following:

# Initialize the DB
/usr/pgsql-9.6/bin/postgresql96-setup initdb

# Start the database
systemctl start postgresql-9.6.service

Now you may enable Postgres to start when booting up using the systemctl enable command:

systemctl enable postgresql-9.6.service

Now, we need to create a database user for our Odoo installation. You may do that using the following:

su - postgres -c "createuser -s odoo"

Note: If you have created a different user for the Odoo installation other than ‘odoo’, then you should change the username here as well.
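
You can confirm the role was created by listing the PostgreSQL roles; ‘odoo’ should appear in the output:

su - postgres -c 'psql -c "\du"'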

Install Wkhtmltopdf

Wkhtmltopdf is an open source tool that renders HTML into PDF so that Odoo can print PDF reports. It is used by Odoo and needs to be installed as a dependency. The CentOS 7 repository does not provide the latest version of this tool, and Odoo requires the latest version, so we need to download it from the wkhtmltopdf website and install it manually. To do that, you may first visit the page:

https://wkhtmltopdf.org/downloads.html

The page gives you the direct rpm download link for each version of CentOS/Ubuntu/Mac etc. Download the stable version for CentOS 7. At the time of writing, the URL for CentOS 7 x86_64 is the following:

https://github.com/wkhtmltopdf/packaging/releases/download/0.12.6-1/wkhtmltox-0.12.6-1.centos7.x86_64.rpm

You may install this using the following:

cd /opt/
wget https://github.com/wkhtmltopdf/packaging/releases/download/0.12.6-1/wkhtmltox-0.12.6-1.centos7.x86_64.rpm
yum localinstall wkhtmltox-0.12.6-1.centos7.x86_64.rpm
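
To confirm the installation succeeded, you may check the version of the installed binary:

wkhtmltopdf --version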

Install and Configure Odoo 13

If you have come this far, you are done with all the dependency installations and ready to download the Odoo 13 source code. We will download Odoo 13 from its GitHub repository and use virtualenv to create an isolated Python environment to install this Python software.

First, switch from root to the odoo user:

su - odoo

Clone the Odoo source code from Github repository:

git clone https://www.github.com/odoo/odoo --depth 1 --branch 13.0 /opt/odoo/odoo13

This will fetch the 13.0 branch from the Odoo repository and put it inside the folder /opt/odoo/odoo13.

Now, we need to enable software collections in order to access the Python 3.6 binaries:

scl enable rh-python36 bash

Then we need to create a virtual environment to complete the installation:

cd /opt/odoo
python3 -m venv odoo13-venv

Now, you may activate the virtual environment you have just created:

source odoo13-venv/bin/activate

Now, upgrade pip and install the wheel library:

pip3 install --upgrade pip
pip3 install wheel

Once done, we can use pip3 to install all the required Python modules from the requirements.txt file:

pip3 install -r odoo13/requirements.txt

Once the installation is complete, we can deactivate the virtual environment and get back to the root user:

deactivate && exit ; exit

If you think you will create custom modules, you may create a directory for them now and give the odoo user ownership accordingly:

mkdir /opt/odoo/odoo13-custom-addons
chown odoo: /opt/odoo/odoo13-custom-addons

Now, we can fill in the Odoo configuration file. First, open the odoo.conf file:

nano /etc/odoo.conf

You may paste the following inside:

[options]
; This is the password that allows database operations:
admin_passwd = set_the_password_to_create_odoo_database
db_host = False
db_port = False
db_user = odoo
db_password = False
addons_path = /opt/odoo/odoo13/addons,/opt/odoo/odoo13-custom-addons
; You can enable the log file by uncommenting the next line
; logfile = /var/log/odoo13/odoo.log

Please do not forget to replace ‘set_the_password_to_create_odoo_database’ with a new, strong password. It will be used to create Odoo databases from the login screen.

Create the systemd service file and start Odoo 13

Now, we will create a service file so that we can start, stop, and restart the Odoo daemon. To do that, first create a service file using the following:

nano /etc/systemd/system/odoo13.service

and paste the following:

[Unit]
Description=Odoo13
Requires=postgresql-9.6.service
After=network.target postgresql-9.6.service

[Service]
Type=simple
SyslogIdentifier=odoo13
PermissionsStartOnly=true
User=odoo
Group=odoo
ExecStart=/usr/bin/scl enable rh-python36 -- /opt/odoo/odoo13-venv/bin/python3 /opt/odoo/odoo13/odoo-bin -c /etc/odoo.conf
StandardOutput=journal+console

[Install]
WantedBy=multi-user.target

Now, save the file and exit.

Now, you need to reload the systemd daemon so that it picks up the latest changes you have made to the service files. To do that, run:

systemctl daemon-reload

Finally, we can start the Odoo 13 instance using the following command:

systemctl start odoo13

If you want to check the status of the instance, you may do this:

systemctl status odoo13
● odoo13.service - Odoo13
   Loaded: loaded (/etc/systemd/system/odoo13.service; enabled; vendor preset: disabled)
   Active: active (running) since Sun 2020-09-13 08:26:46 EDT; 23h ago
 Main PID: 24502 (scl)
   CGroup: /system.slice/odoo13.service
           ├─24502 /usr/bin/scl enable rh-python36 -- /opt/odoo/odoo13-venv/bin/python3 /opt/odoo/odoo13/odoo-bin -c /etc/odoo.conf
           ├─24503 /bin/bash /var/tmp/sclSWH04z
           └─24507 /opt/odoo/odoo13-venv/bin/python3 /opt/odoo/odoo13/odoo-bin -c /etc/odoo.conf

It should show green ‘active (running)’ if everything worked out. If you see no errors, you may now enable Odoo to start at boot:

systemctl enable odoo13

If you would like to see the logs, you may either use journalctl like the following:

journalctl -u odoo13

or uncomment the following line in /etc/odoo.conf to write the logs to a file:

logfile = /var/log/odoo13/odoo.log

After making any change to /etc/odoo.conf, do not forget to restart the Odoo 13 instance using systemctl.
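
Also note that if you enable the logfile line, the log directory must already exist and be writable by the odoo user. A minimal sketch of preparing it:

# create the log directory and give it to the odoo user
mkdir -p /var/log/odoo13
chown odoo: /var/log/odoo13

# then apply the config change
systemctl restart odoo13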

Test the Installation

You may now test the installation by visiting http://your_server_ip:8069. If everything worked, the Odoo setup screen should come up. If it doesn’t, you may try stopping ‘firewalld’ to see whether the firewall is blocking the port:

systemctl stop firewalld
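
Stopping firewalld is only for testing. If the firewall turns out to be the culprit, a better permanent fix is to open just the Odoo port instead of leaving the firewall off:

firewall-cmd --permanent --add-port=8069/tcp
firewall-cmd --reload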

At Mellowhost, we provide Odoo installation and configuration assistance absolutely free of charge. If you would like to try any of our VPS plans for Odoo, you may do so and reach us through live chat or a ticket for Odoo assistance.

Good luck!

How to Fix zmconfigd failed in Zimbra – Starting zmconfigd…failed.

Sometimes, when you restart Zimbra, you may see that zmconfigd is not starting or is reported as failed. You may also see the zmconfigd service not running in the Zimbra admin panel. There are a couple of common reasons why zmconfigd fails to start.

Disable IPv6

One reason zmconfigd fails to start is IPv6: for some reason it fails to route over IPv6 and does not come up. A quick solution to this problem is to disable IPv6 and restart zmconfigd. You may do this like the following:

# Edit your sysctl.conf file
nano /etc/sysctl.conf

# paste the following inside the file
net.ipv6.conf.all.disable_ipv6 = 1
net.ipv6.conf.default.disable_ipv6 = 1
net.ipv6.conf.lo.disable_ipv6 = 1

# Save the file, and update sysctl in realtime
sysctl -p

# now try to restart zmconfigd
su - zimbra
zmconfigdctl restart

Now you can check whether zmconfigd is running with the following:

[root@server ~]# cat /opt/zimbra/log/zmconfigd.pid
19722

If it returns a PID, it means zmconfigd is running.
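
The PID file only proves that zmconfigd wrote a PID at startup. If you want to double-check that the process is actually alive, you can look it up in the process table:

ps -p $(cat /opt/zimbra/log/zmconfigd.pid)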

Netcat is not installed

Another reason for the error could be that nc is not installed on your system. Zimbra’s zmconfigd depends on the netcat package. Netcat is available through nmap-ncat on CentOS systems. You may run the following to install netcat:

yum install nc
# or
yum install nmap-ncat

How to Recover an InnoDB Table When ib_logfile / ibdata Are Crashed, Deleted, or Lost Without Backup

If you are here, you have probably panicked the same way I did around 12 years back. I lost ib_logfile0, ib_logfile1, and ibdata1 all at once on a server that made heavy use of InnoDB tables. Today I had to recover vital data from the same situation for a random request from someone with no backups, and I thought it better to keep this documented for the future.

One key reason to use InnoDB tables instead of MyISAM is write performance: InnoDB consistently outperforms MyISAM on writes due to its more efficient buffers. But this also makes InnoDB more vulnerable to crashes. Because InnoDB stores some sensitive data in three specific files, losing them also loses the mapping the database engine needs to recognize InnoDB table structure and data.

Who can follow this technique?

You can use it if you have lost any of ib_logfile0, ib_logfile1, or ibdata1, or all of them, but still have the database folder intact with the .frm and .ibd files (which you would, if you only accidentally deleted the log files or the data file), and the option ‘innodb_file_per_table’ is NOT disabled in your MySQL configuration. This option is enabled by default, unless you explicitly disabled it to increase performance. A suggestion: only disable it if you keep real-time backups of your databases; otherwise, it is better left enabled.
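
You can quickly check whether the option is enabled on your server from the shell:

mysql -e "SHOW VARIABLES LIKE 'innodb_file_per_table';"
# it should report: innodb_file_per_table | ON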

What is ‘innodb_file_per_table’?

InnoDB primarily stores and uses tablespace data from the system tablespace. But since that creates a single point of failure around ibdata and the log files, InnoDB by default also stores the tablespace in each table’s own data file, the .ibd file. That means if I lose the ibdata/logfile mappings, I can still use the .ibd file to restore my tablespace and rebuild the schema-to-data mapping, as long as I allowed InnoDB to store this information in the table’s own .ibd file. You may read more about the parameter in the MySQL documentation:

File-Per-Table Tablespaces at dev.mysql.com

How to Recover an Innodb Table from database files only?

There are two steps to this process: first identify and recover the table schema from the .frm file, then find a way to import the tablespace from the .ibd file and introduce it to the InnoDB system tablespace.

First Step First: How to get the schema from .frm files?

First, you must install the mysql-utilities package to get access to the mysqlfrm tool; installation instructions are available in the MySQL documentation.

Once this is done, you have two options to read .frm files with mysqlfrm. My favorite way is to use the ‘--diagnostic’ option. To use it, run the following:

mysqlfrm --diagnostic /var/lib/mysql/your_database/assets.frm

I assumed your database name is ‘your_database’ and the table you are trying to recover is ‘assets’. The above command will return the ‘CREATE TABLE’ schema you need to use. First create a new database, then run the schema on the SQL console to create the table in the new database.
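
For example, assuming you saved the CREATE TABLE statement printed by mysqlfrm into a file named assets_schema.sql (a hypothetical name), the new database and table could be created like this:

mysql -e "CREATE DATABASE new_database;"
mysql new_database < assets_schema.sql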

Second Step: Get your data and mapping back from .ibd to system tablespace

Once the new database has the table, MySQL will also have created fresh .frm and .ibd files for it. What we need to do is: first make InnoDB forget the .ibd file it just created, then copy in the .ibd file from our collapsed database, and finally make the InnoDB engine recognize the tablespace from this backup .ibd file and store and use it in the system tablespace. That sounds complex and a bit difficult. No worry, let’s do it.

Run the following command first to let it forget the .ibd it has created now:

alter table assets discard tablespace;

Remember, our table name is ‘assets’; if you have a different table name, make sure to replace it accordingly. What this has done is remove the assets.ibd file created in the /var/lib/mysql/new_database/ folder, as we asked InnoDB to forget the existing .ibd file. Now we need to copy the backup/old .ibd file to this location with the correct permissions. I would use rsync to make sure permissions remain intact here:

rsync -vrplogDtH /var/lib/mysql/your_database/assets.ibd /var/lib/mysql/new_database/

Once this is done, we know the .ibd file contains a backup of our original tablespace. We only need to make MySQL and InnoDB recognize it. To achieve this, run the following from the SQL console:

alter table assets import tablespace;

If it throws a warning about not being able to find the .cfg file, you may ignore it; a .cfg file is not essential for the tablespace to be recognized.

If everything runs well, you should see your rows are back. InnoDB has now fetched your tablespace data from the .ibd file into the system tablespace and can recognize the mapping to your data. Voila! All you need now is to repeat the process for all of your InnoDB tables to recover the whole database.
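
If you have many tables, a rough sketch like the following can at least dump all the schemas in one go, assuming the damaged database folder still sits at /var/lib/mysql/your_database:

for f in /var/lib/mysql/your_database/*.frm; do
    mysqlfrm --diagnostic "$f"
done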

Can You Cancel/Abort/Rollback an HTTP Request?

There are cases where frontend developers want to cancel REST requests they submit using Ajax. It is one of the popular questions in the frontend developer community. Let’s try to focus on this case today.

To answer the question in short: no, you cannot cancel an HTTP request once it is submitted. But there are catches that require a clear understanding. Let’s elaborate a bit.

HTTP is Stateless by Design

HTTP is stateless by design. So what does it mean when a request has no state? It means we do not know the destination or what is happening with that specific request at the back end; we can only acknowledge the response. For example, suppose you have a REST API behind a load balancer. If you send a POST request to the API, you will not know which of the many HTTP servers behind the load balancer serves your request. For such cases we create ‘cookies’ or ‘sessions’ at the application level in our full-fledged web applications, extending HTTP to store the state of a request and follow it. With each form request, for example, we submit a state along with it so the load balancer can recognize the state and route us to the same destination. But we have to remember: this is not an HTTP feature; it is done at the application level to add statefulness on top of HTTP requests.

What does this mean? Since there is no state in REST, we are not aware of the destination, only of the response. So how can we cancel a request whose actual destination we do not know? Now you might think: how about adding a cookie and cancelling with that? Yes, we do that at the web application layer, but can REST HTTP requests themselves be stateful? Not really; that is not the design of HTTP or REST, unfortunately.
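
You can observe this behavior yourself from a shell. In the sketch below (the endpoint is hypothetical), curl gives up after one second, but the server-side handler will typically keep processing the request to completion; aborting the client does not abort the work:

# client aborts after 1 second; the backend continues processing regardless
curl --max-time 1 https://api.example.com/v1/long-running-job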

So, Does That Mean We Can Never Cancel an API Call?

If you are using REST, you can never cancel the API call. Once it is submitted, it will continue processing in the backend even if you stop it in the frontend. If ‘cancelling’ is important for your application’s API calls, you must consider other alternatives, like SOAP or RPC. RPC is a stateful API architecture, and it is possible to design a cancel request with it. Please note that RPC doesn’t implement ‘cancel’ by default, but because it is stateful, you are able to design a ‘cancel’ request on top of an RPC call. Google has an RPC framework called ‘gRPC’, which is a stateful API architecture. That means it is possible for you to implement cancel/abort, or even roll back or restore a state, with gRPC.

Google’s ‘Firestore’ database, for example, supports gRPC, which is basically a stateful alternative to Firestore’s stateless REST API.

Can You Use Elasticsearch as a Primary Database?

Well, this is an interesting topic. Earlier today, I answered this question in an Elasticsearch community group on Facebook, and thought I would keep it documented as well.

Primarily, if you look at how Elasticsearch stores data (document-like), you might think it is a full-fledged NoSQL database, but you need to know that it is not. If you have looked at several NoSQL databases, you might already be thinking there is probably no standard definition of a NoSQL database, which is partially true; then again, there is still a definition of a database management system that Elasticsearch doesn’t fit. Does that mean we can’t use ES as the primary data store? No, it doesn’t. You certainly can, but there are use cases. Let’s look at a couple of points before we conclude.

Database Transactions

ES has no transactions. Lucene, the search library that ES is built on, has transactions in its original design, but ES itself doesn’t expose any. With no transactions, the first thing to remember is that there is no rollback facility. It also means every operation is atomic by design, and there is no way to cancel, abort, or revert it. Furthermore, there is no locking standard in the system. If you are doing parallel writes, you need to be very careful, because ES gives you no guarantees about parallel writes.

ES Caching Strategy

Elasticsearch is a ‘near real-time’ search engine, not a ‘real-time’ one. It has a caching layer that refreshes every 1 second; this is ‘by design’ of Elasticsearch. So what’s the problem with that? The problem is that the ES cache may trick you if you read immediately after a write, within that 1 second. Because most people do not write to ES very often (since most do not use it as primary storage), most people think it is real time. But it is not: its cache can trick you in intense cases, where writes won’t be immediately available for read.
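
A quick way to see the refresh lag, assuming a local ES instance on port 9200 and a throwaway index name:

# index a document
curl -X POST 'localhost:9200/items/_doc' -H 'Content-Type: application/json' -d '{"name":"test"}'

# an immediate search may not return it yet (default refresh interval is 1s)
curl 'localhost:9200/items/_search?q=name:test'

# forcing a refresh makes it visible right away
curl -X POST 'localhost:9200/items/_refresh'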

User Management

ES user management is nowhere near that of a full-fledged DBMS, simply because ES doesn’t need it to be. They have never announced themselves as a NoSQL DBMS, hence they are not required to add that functionality in full. They only implement the facilities the ES ecosystem requires.

So … ?

So, what does all this mean for you? If you have an extremely read-heavy application, where updates are rare or only occasional, then you can definitely go with ES as your primary document storage engine. But if you have a write-oriented application, need complex user management, or have parallel threads requiring write access, then you had better choose a standard DBMS for your business and use ES as a secondary data store for analytics and search.

How To: Manually Add Support of SSL for WWW on CyberPanel

Hmm, it’s a weird topic to write a blog post on, because CyberPanel comes with a built-in certbot and can automatically detect the www and non-www versions of a domain to install SSL for. Then why am I writing this up? Because I found a VPS client today facing this issue: even though CyberPanel was telling me the SSL was issued, it was only issued for the non-www domain, and the www domain was left behind. Let’s see how we can resolve this.

First problem

The first problem came up when I tried to locate the CyberPanel certbot binaries.

[root@server /]# find . -name "certbot"
./usr/local/CyberCP/bin/certbot
./usr/local/CyberCP/lib/python3.6/site-packages/certbot
./usr/local/CyberPanel/bin/certbot
./usr/local/CyberPanel/lib/python3.6/site-packages/certbot

[root@server live]# /usr/local/CyberCP/bin/certbot --version
certbot 0.21.1
[root@server live]# /usr/local/CyberPanel/bin/certbot --version
certbot 0.21.1

Both certbot binaries I could find in CyberPanel were very old. Certbot is at version 1.4 in EPEL, which supports the ACMEv2 challenge, while the one CyberPanel uses doesn’t. I therefore decided to install a separate certbot for our case:

yum install epel-release
yum install certbot

That should be all it takes for the latest version of certbot to start working on your CyberPanel host. Once done, you may now generate the SSL using the following:

certbot certonly --webroot -w /home/yourdomain.com/public_html -d yourdomain.com -d www.yourdomain.com

Remember to replace yourdomain.com with the actual domain that is having the problem. CyberPanel creates the home directory using the primary domain, so remember to give the correct document root as the value of the ‘-w’ option.

Once this is done, certbot should automatically verify the challenge and obtain the issued certificate for you. Let’s Encrypt certificates are usually stored in the following directory:

/etc/letsencrypt/live/yourdomain.com/

Files are:
/etc/letsencrypt/live/yourdomain.com/privkey.pem
/etc/letsencrypt/live/yourdomain.com/fullchain.pem

If you had already created the SSL using CyberPanel (which you must have done if you are viewing this post), then remember that certbot will place the new SSL files in the /etc/letsencrypt/live/yourdomain.com-001/ folder. The name of the folder is shown when certbot finishes issuing the SSL.
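
If you are unsure which folder certbot used, you can list every certificate certbot manages along with its path and expiry:

certbot certificates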

There are a couple of ways you may use the SSL now: either replace the old directory with the new one, or change the settings in the vhost conf or the OpenLiteSpeed SSL settings. I find the easiest way is to just replace the old directory with the new. Something like this should work:

mv /etc/letsencrypt/live/yourdomain.com /etc/letsencrypt/live/old_yourdomain.com
mv /etc/letsencrypt/live/yourdomain.com-001 /etc/letsencrypt/live/yourdomain.com

Once this is done, remember to restart your openlitespeed:

service lsws restart

Now HTTPS on the www domain should work without any problem. If not, try clearing your browser cache and retrying.

How to: Use WINMTR to Diagnose Network Issues

MTR is a great tool for understanding whether there is a routing issue. Many times a customer says the website/web server is slow, or that they cannot access the network, etc. After some basic checks, if nothing conclusive is found, it is important to get an MTR report from the client. As most users run Windows, it is common to use WinMTR.

To run WINMTR, you need to first download it from here:

https://sourceforge.net/projects/winmtr/

or here

https://winmtr.en.uptodown.com/windows

Once the app is downloaded, double-clicking it will open it. WinMTR is a portable executable binary; it doesn’t require installation.

Once opened, you can enter the ‘domain name’ that is having trouble in the ‘Host’ section and press start.

Start WinMTR by entering your domain in the Host section

Once you start, it will begin reaching out to the domain you entered, hitting each node it passes through for routing and showing the percentage of packet drops at each node.

WinMTR running – (I have hidden two hops for privacy)

If you see drops above 2-5% at a node, that node is potentially problematic. However, if a node drops a lot but the next nodes don’t, that node is likely configured to transparently hide or deprioritize packet responses for security, and it is not actually problematic. So if your packets aren’t reaching the destination and are dropping or looping at a node, the problem is within that node. Now you can locate the node and see where it belongs. If it belongs within your territory, the issue is with your ISP or IIG. If it is outside your territory but at the tail end of the route, the issue is with the host.

In most cases, we ask customers to run the MTR for 5 minutes, then export it to text and send it over for us to analyze. You can export the report by stopping the MTR and clicking ‘Export TEXT’ in the WinMTR window.
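
If the client happens to be on Linux or macOS instead of Windows, the same kind of report can be produced with the mtr command-line tool; for example, a 100-cycle wide text report:

mtr -rw -c 100 yourdomain.com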

How to Add Openlitespeed Server to Haproxy – Avoid 503 Haproxy Error

For the past month, OpenLiteSpeed has been my favorite web server. LiteSpeed has always outperformed all the other web servers, including Nginx, in any of my production environments. I have recently switched to OLS, the open source version of LiteSpeed with a somewhat limited feature set. I get LiteSpeed-level performance with no licensing fees to worry about. How much better could it be?

OLS does come with some weird problems, and as OLS is less widely used, finding solutions can be difficult. I faced exactly such an issue yesterday.

I added an OLS-based server to my HAProxy cluster, but HAProxy could not detect the OLS server as healthy. When I accessed the web app hosted on the OLS server using local IP masking, the website loaded without a problem. That means OLS was resolving the domain-to-IP relation correctly, yet it failed to respond when HAProxy requested the bare IP address.

The problem is that OLS is not configured to respond to ‘default’ requests on ‘127.0.0.1’, ‘localhost’, or the server’s main IP. To find this out, I enabled the ‘High’ debug mode in OLS. To do this, first visit the OLS WebAdmin console, which can be accessed at https://IP:7080

After login, go to Server Configuration >> Log >> Edit Server Log >> Set ‘Debug Level’ to High and Save

Set high debug level in openlitespeed

Once saving is done, you may gracefully restart the OLS

Gracefully restart Openlitespeed

Once this is done, you may monitor the error.log file, usually located under /usr/local/lsws/logs. Tail the output of error.log while HAProxy sends its requests:

tail -f /usr/local/lsws/logs/error.log

You can see that OLS has returned a 404 error for the ‘localhost/’ request. That means HAProxy is requesting the IP with a ‘localhost’ Host header, and the server should return a 200 response to convince HAProxy that the server is in business.
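
For reference, this is the kind of health check HAProxy performs. A minimal sketch of the relevant backend section (server name and IP are examples); the ‘option httpchk’ line sends the HTTP request that OLS must answer with a 200:

backend ols_backend
    option httpchk GET /
    server ols1 203.0.113.10:80 check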

What we need to do is make OLS respond to requests for the bare IP and localhost with the main site and a 200 code instead of a ‘404’ error. To do this, go to the OLS web console again >> Listeners.

You will see you have two listeners, one for Default/non-HTTPS and one for HTTPS/SSL. In my case, I was running HAProxy to the origin with no SSL, i.e. port 80, so I selected the Default one.

Open 80 Listener View in Openlitespeed

In the listener list, you can find your virtual host; click ‘Edit’ next to it.

Virtualhost Edit Openlitespeed

Now, you can map the virtual host. You will see your primary domain as the ‘Virtual Host’, which can’t be changed here. But what you can do is map this virtual host to several domains. The trick is to add your server’s IP and localhost to the ‘Domains’ list, comma separated, as follows:

localhost mapping to OLS

Once this is saved, restart OLS, and now HAProxy should be able to read responses and start forwarding requests to your OLS server.

How to Make Cloudflare Work with HAProxy for TLS Termination

Remember:
This is part of the dirty hack series. It is not the only way to achieve what we want here, and it should only be used when you can trust the connections between your HAProxy and the origin servers. Otherwise, you should not use this technique.

One common problem with using HAProxy and Cloudflare is that the SSL Cloudflare gives us gets terminated at HAProxy, the L7 load balancer. In that case, Cloudflare cannot verify the origin server and drops the connection, and your HAProxy setup will not work. What can you do about it? There are two ways to handle this.

The first: Cloudflare gives you an origin certificate that you can install on HAProxy. I won’t dig deep into that in this blog post.

But if you can trust the connections between HAProxy and your backend origin servers, as well as the connections between Cloudflare and HAProxy, you can choose the second way. In this case, Cloudflare allows you to encrypt only the connections between the visitors and Cloudflare; it won’t matter what you do behind Cloudflare. This option is called ‘Flexible’, and you can select it from your Cloudflare >> SSL/TLS tab.
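
With Flexible mode, Cloudflare connects to your origin over plain HTTP, so HAProxy only needs an HTTP frontend. A minimal sketch (names are examples, reusing the backend from earlier):

frontend http_in
    bind *:80
    default_backend ols_backend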

Fix TLS Termination by HAProxy with Flexible Encryption Mode of Cloudflare

Once you set this to Flexible, it should start working right away. Remember, this is not the best way to do this, only the quickest, and only when load balancing is more important to you than data integrity between Cloudflare and the origin.