How to Fix zmconfigd failed in Zimbra – Starting zmconfigd…failed.

Sometimes, when you restart Zimbra, you may see that zmconfigd is not starting or is reported as failed. You may also see that the zmconfigd service is not running in the Zimbra admin panel. There are a couple of common reasons why zmconfigd fails to start.

Disable IPv6

One reason zmconfigd fails to start is IPv6: for some reason it fails to route over IPv6 and therefore fails to start. A quick solution to this problem is to disable IPv6 and restart zmconfigd. You may do this as follows:

#Edit your sysctl.conf file
nano /etc/sysctl.conf

# paste the following inside the file
net.ipv6.conf.all.disable_ipv6 = 1
net.ipv6.conf.default.disable_ipv6 = 1
net.ipv6.conf.lo.disable_ipv6 = 1

# Save the file, and update sysctl in realtime
sysctl -p

# now try to restart zmconfigd
su - zimbra
zmconfigdctl restart

Now you can check the zmconfigd status with the following to see whether it’s running:

[root@mailapp ~]# cat /opt/zimbra/log/zmconfigd.pid
19722

If it returns a process ID, it means zmconfigd is running.
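
Since the PID file only records the last known process ID, you may additionally want to confirm that a process with that ID is actually alive. A minimal check, assuming the default PID file location shown above:

ps -p $(cat /opt/zimbra/log/zmconfigd.pid)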

Netcat is not installed

Another reason for the error could be that nc is not installed on your system. Zimbra’s zmconfigd has a dependency on the netcat package. Netcat is available through nmap-ncat on CentOS systems. You may run the following to install netcat:

yum install nc
# or 
yum install nmap-ncat
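
Once installed, a quick sanity check to confirm the binary is available on the PATH (nothing Zimbra-specific here):

command -v nc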

How to Redirect HTTP to HTTPS in Zimbra 8.8.*

Zimbra Supports HTTPS by Default:

By default, Zimbra uses HTTPS only and disables plain HTTP on the webmail client. But users will often hit the plain HTTP port to access the web client, because nobody likes to type https:// before the domain each time they open webmail. Zimbra uses Nginx to run the proxy services in front of its web client, and it supports 5 proxy modes through Nginx:

  1. redirect
  2. both
  3. http
  4. https
  5. mixed

You may check the following for details:

Enabling_Zimbra_Proxy_and_memcached

How to Redirect HTTP to HTTPS automatically in Zimbra 8.8.*

The most popular of the 5 proxy modes is redirect. To set it, you can run the following as the zimbra user:

zmprov ms `zmhostname` zimbraReverseProxyMailMode redirect

This will redirect plain HTTP requests to the HTTPS URL based on your Zimbra hostname.

Now, restart the proxy services:

su - zimbra
zmproxyctl restart
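
To confirm the new mode took effect after the restart, you may read the attribute back while still logged in as the zimbra user (same single-server assumption as above):

zmprov gs `zmhostname` zimbraReverseProxyMailMode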

Hope this helps.

Odoo Controller JSON Route Returns 404 – werkzeug.exceptions.NotFound

Even though you have defined your routes properly, you may be seeing an error like the following:

{
    "id": null,
    "jsonrpc": "2.0",
    "error": {
        "http_status": 404,
        "code": 404,
        "data": {
            "name": "werkzeug.exceptions.NotFound",
            "debug": "Traceback (most recent call last):\n  File \"/opt/odoo/odoo12/odoo/http.py\", line 656, in _handle_exception\n    return super(JsonRequest, self)._handle_exception(exception)\n  File \"/opt/odoo/odoo12/odoo/http.py\", line 314, in _handle_exception\n    raise pycompat.reraise(type(exception), exception, sys.exc_info()[2])\n  File \"/opt/odoo/odoo12/odoo/tools/pycompat.py\", line 87, in reraise\n    raise value\n  File \"/opt/odoo/odoo12/odoo/http.py\", line 1460, in _dispatch_nodb\n    func, arguments = self.nodb_routing_map.bind_to_environ(request.httprequest.environ).match()\n  File \"/opt/odoo/odoo12-venv/lib64/python3.5/site-packages/werkzeug/routing.py\", line 1563, in match\n    raise NotFound()\nwerkzeug.exceptions.NotFound: 404: Not Found\n",
            "message": "404: Not Found",
            "exception_type": "internal_error",
            "arguments": []
        },
        "message": "404: Not Found"
    }
}

This error does not appear when you use http.route with type ‘http’ (the default), but it does appear when you use type ‘json’. One cause of the error is having multiple Odoo databases: Odoo fails to detect which database to use for the JSON request. For type ‘http’, Odoo can usually work out which database to serve, while for type ‘json’ it cannot. In such cases, you need to use the db-filter option to tell Odoo which database to load by default, by appending it to the odoo-bin command. If you are using a systemd service file, append the following to the ExecStart line:

--db-filter=^my_prod$

where ‘my_prod’ is your database name.

So the service ExecStart would look like the following:

ExecStart=/usr/bin/scl enable rh-python35 -- /opt/odoo/odoo12-venv/bin/python3 /opt/odoo/odoo12/odoo-bin -c /etc/odoo.conf --limit-time-real 1009999999 --limit-time-cpu 1009999999 --limit-memory-hard 89179869184000000 --limit-memory-soft 57179869184000000 --db-filter=^my_prod$

After making the change, reload systemd and restart your Odoo service:

systemctl daemon-reload
service odoo12 restart
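
Alternatively, if you prefer to keep the filter out of the service file, the same setting can go into the Odoo configuration file instead. A minimal sketch, assuming /etc/odoo.conf is the config file your service already loads (as in the ExecStart above):

# /etc/odoo.conf
[options]
dbfilter = ^my_prod$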

This should do the job.

Grep Non ASCII Character Sets

I had an interesting challenge today: filtering a list with grep, where the data looked like the following:

senegalese footballer|সেনেগালীয় ফুটবলার
species of insect|কীটপতঙ্গের প্রজাতি
indian cricket player|ভারতীয় ক্রিকেটার
Ajit Manohar Pai|অজিত পৈ|অজিত মনোহর পাই|অজিত ভরদ্বাজ পাই
ajit pai|অজিত পাই

The goal was to match these lines based on the pipe character, specifically to match lines that had multiple pipes, at least 2. I took a slightly greedy approach, mainly to learn how to match Bengali characters using grep: instead of just counting how many pipes each line has, I matched an alphanumeric segment followed by a pipe, and then a Bengali segment followed by a pipe.

As you may be aware, modern regex engines can match Unicode scripts; for example, if you want to match the ‘Greek’ script, you can use \p{Greek}. But for some reason, this wasn’t matching the Bengali text in the following grep:

grep -Ei "\p{Bengali}" test.txt

I then looked at the grep manual and found a key piece of information. Grep uses POSIX regex by default, and -E is just the extended flavor of POSIX grep. Unfortunately, this engine does not support the \p{...} Unicode script classes, which are a PCRE feature. With POSIX you can only work with hex code-point ranges, which can get pretty awkward for non-ASCII characters. To make it simpler, you can use Perl-compatible regex (PCRE) by switching grep to -P:

grep -Pi "\p{Bengali}" test.txt

To see all the Unicode scripts available in a PCRE-supporting regex engine, you may check the following:

Regex Unicode Scripts

Now, let’s come back to the original match, where we have to match at least 2 pipes. The first segment, basic alphanumerics with whitespace, is the simpler one:

grep -Pi "^[A-Za-z0-9\s]+\|" test.txt

Then, we add the first Bengali part with whitespace and a pipe:

grep -Pi "^[A-Za-z0-9\s]+\|[\p{Bengali}\s]+\|" test.txt

This serves our purpose here: the first segment is alphanumeric followed by a pipe, and the second is the Bengali Unicode set followed by a pipe, at least two pipes in total.
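
If all you actually need is the pipe count rather than matching specific scripts, a simpler hedged alternative (using the same test.txt) is to match any line containing at least two pipes:

# any line with at least two pipes
grep -E '\|.*\|' test.txt

# or, treating | as a field separator and requiring at least 3 fields
awk -F'|' 'NF >= 3' test.txt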

[ERROR] Can’t open and lock privilege tables: Table ‘mysql.servers’ doesn’t exist in engine – Resolution

There are times when you may see the following error on your MySQL/MariaDB based cPanel server:

[ERROR] Can't open and lock privilege tables: Table 'mysql.servers' doesn't exist in engine

The issue is most likely that your InnoDB tablespace got corrupted, and hence some tables under the mysql database got locked out, as some of them use the InnoDB storage engine. One symptom is that when you try to add a user to a database in the cPanel MySQL Databases section, it no longer completes or shows the green notification; it just stops.

The best way to properly fix this would be to restore the ‘mysql’ database, or just the ‘servers’ table, from your backup. If you don’t have one, you may simply recreate the ‘servers’ table (inside the mysql database) using the following SQL statement:

CREATE TABLE `servers` (
`Server_name` char(64) NOT NULL,
`Host` char(64) NOT NULL,
`Db` char(64) NOT NULL,
`Username` char(64) NOT NULL,
`Password` char(64) NOT NULL,
`Port` int(4) DEFAULT NULL,
`Socket` char(64) DEFAULT NULL,
`Wrapper` char(64) NOT NULL,
`Owner` char(64) NOT NULL,
PRIMARY KEY (`Server_name`)
) ENGINE=MyISAM DEFAULT CHARSET=utf8;
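
If a broken copy of the table is still present, you may need to drop it before running the CREATE statement above. A minimal example, assuming shell access as root; if the drop itself fails because of the corruption, fall back to the reinstall route described next:

mysql -e "DROP TABLE IF EXISTS mysql.servers;"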

If you can’t do this either, then the only way left is to uninstall your MariaDB installation and let cPanel/WHM install it for you.

Get a Backup First:

cp -Rf /var/lib/mysql/mysql /root/
rm -Rf /var/lib/mysql/mysql

Uninstall MariaDB:

yum remove MariaDB*

Now you may install the latest MariaDB from WHM >> MariaDB/MySQL Upgrade and proceed accordingly. This will install the latest version with a fresh ‘mysql’ database. It will not alter your other data files, which means your other databases should be fine.

One thing to remember: after a fresh mysql installation running against the old data files, the authorizations will be missing. You will have to recreate the database users manually to get the privilege tables filled up again.
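
A minimal sketch of recreating one such user from the shell; the user, password, and database names below are placeholders only, and on a cPanel server you would normally re-add the user from the MySQL Databases section instead so cPanel keeps its own mapping:

mysql -e "CREATE USER 'example_user'@'localhost' IDENTIFIED BY 'strong_password';"
mysql -e "GRANT ALL PRIVILEGES ON example_db.* TO 'example_user'@'localhost';"
mysql -e "FLUSH PRIVILEGES;"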

How to Recover Innodb Table when ib_logfile / ibdata is/are crashed/deleted/lost without backup

If you are here, that means you have probably panicked the same way I did around 12 years back. I lost my ib_logfile0/ib_logfile1/ibdata1 all at once on a server that made heavy use of InnoDB tables. Today I had to recover vital data from the same situation, on a request from someone who does not keep backups, and thought it would be better to document this for the future.

One key reason for using InnoDB tables instead of MyISAM is write performance: InnoDB consistently outperforms MyISAM on writes thanks to its more efficient buffering. But this also makes InnoDB more vulnerable to crashes. Because InnoDB stores some sensitive data in 3 specific files, losing them also loses critical mapping information the engine needs to recognize InnoDB table structures and data.

Who can follow this technique?

You can follow this if you have lost any of ib_logfile0, ib_logfile1, ibdata1, or all of them, but still have the database folder intact with its .frm and .ibd files (which you would, if you only deleted the log files or the data file by accident), and the option ‘innodb_file_per_table’ is NOT disabled in your MySQL configuration. This option is enabled by default, unless you have explicitly disabled it to gain performance. A suggestion: only disable it if you keep real-time backups of your databases; otherwise it is better to leave it enabled.

What is ‘innodb_file_per_table’?

By default, InnoDB keeps its central bookkeeping in the system tablespace. But since that creates a single point of failure around the ibdata and log files, InnoDB (with innodb_file_per_table enabled) also stores each table’s tablespace in the table’s own data file, the .ibd file. That means that if I lose the ibdata/logfile mappings, I can still use the .ibd file to restore the tablespace and rebuild the schema-to-data mapping, provided InnoDB was allowed to store that information in the table’s own .ibd file. You may read more about the parameter in the MySQL documentation:

File-Per-Table Tablespaces at dev.mysql.com
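
A quick way to confirm whether the option is enabled on your server, assuming the MySQL server is still able to start:

mysql -e "SHOW VARIABLES LIKE 'innodb_file_per_table';"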

How to Recover an Innodb Table from database files only?

There are two steps to this process: first, identify and recover the table schema from the .frm file; then, import the tablespace from the .ibd file so that the InnoDB engine registers it in the system tablespace.

First Step First: How to get the schema from .frm files?

First, you must install the mysql-utilities tools to get access to the mysqlfrm tool; you may get the instructions to install it here:

Once this is done, you have two options for reading the .frm files with mysqlfrm. My favorite is the ‘diagnostic’ mode. To use it, run the following:

mysqlfrm --diagnostic /var/lib/mysql/your_database/assets.frm

I assumed your database name is ‘your_database’ and the table you are trying to recover is ‘assets’. The above command will print the ‘CREATE TABLE’ schema you need to use. Now create a new database and run that statement on the SQL console to generate the table in the new database.
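
For example, a rough sketch of that step from the shell; ‘new_database’ is just a placeholder name used for the rest of this walkthrough:

mysql -e "CREATE DATABASE new_database;"
# then open a console on the new database and paste the CREATE TABLE printed by mysqlfrm
mysql new_database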

Second Step: Get your data and mapping back from .ibd to system tablespace

Once the new database has the table, MySQL will also create a .frm and a .ibd file for it. What we need to do is: first, make it forget the .ibd file it just created; then copy in the .ibd file from our collapsed database; and finally make the InnoDB engine recognize the tablespace in that .ibd file and register it in the system tablespace. That might sound a bit complex; no worries, let’s do it.

Run the following command first, so the table forgets the .ibd file it has just created:

alter table assets discard tablespace;

Remember, our table name is ‘assets’; if you have a different table name, make sure to replace it accordingly. What this has done is remove the assets.ibd file that was created in the /var/lib/mysql/new_database/ folder, since we asked MySQL to forget the existing .ibd file. Now we need to copy the backup/old .ibd file into this location with the correct permissions. I would use rsync to make sure permissions remain intact:

rsync -vrplogDtH /var/lib/mysql/your_database/assets.ibd /var/lib/mysql/new_database/

Once this is done, the .ibd file now in place holds a copy of our original tablespace; we only need to make MySQL and InnoDB recognize it. To achieve this, run the following from the SQL console:

alter table assets import tablespace;

If it throws a warning about not being able to find the .cfg file, you may ignore it; the .cfg file is not essential for the import to work.

If everything runs well, you should see your rows are back. That is because InnoDB has now registered the tablespace from the .ibd file in the system tablespace and can once again map it to your data, voila! All you need to do now is repeat the process for all of your InnoDB tables and recover the whole database.
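
To avoid typing the ALTER statements by hand for every table, a rough helper sketch could generate the discard/import pair for each .ibd file in the old data directory (the paths are the same placeholders used above; the rsync step must still happen between the two statements for each table, so review the output rather than piping it straight into mysql):

# print the discard/import statement pair for each table of the old database
for f in /var/lib/mysql/your_database/*.ibd; do
    t=$(basename "$f" .ibd)
    echo "ALTER TABLE \`$t\` DISCARD TABLESPACE;"
    echo "ALTER TABLE \`$t\` IMPORT TABLESPACE;"
done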

How to Install Mysqlfrm / Mysql-utilities in CentOS 7

MySQL provides a set of utility tools that can be used to recover your data from MySQL data files. One of them is ‘mysqlfrm’. This tool is not shipped in the main MySQL packages; instead, it comes with mysql-utilities.

This package can be installed from the ‘mysql-tools-community’ repo, which is available through the MySQL Yum repositories.

The command would be:

yum install mysql-utilities

This would also automatically install another Python package called ‘mysql-connector-python’ from the ‘mysql-connectors-community’ repo. There is one catch: sometimes, due to Python version dependencies, mysql-utilities may fail to work with the connector version that gets pulled in automatically. You will know this is the case if you see the following error when you type mysqlfrm on the command line:

# mysqlfrm
Traceback (most recent call last):
  File "/usr/bin/mysqlfrm", line 27, in <module>
    from mysql.utilities.common.tools import (check_python_version,
ImportError: No module named utilities.common.tools

In these cases, you may install an older version of the MySQL connector for Python by running the following before installing mysql-utilities:

yum install mysql-connector-python.noarch

This would install an older version of mysql connector that works better with Python 2.7 or similar.

Once the above is done, you may install mysql-utilities again using the following:

yum install mysql-utilities

As you have already installed the connector, this won’t try to reinstall it from the dependency chain and will use the one you installed.
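
To confirm the tool and its Python imports are now working, a quick check; it should print a version string instead of the ImportError shown above:

mysqlfrm --version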

Now you may use the mysqlfrm tool to read your .frm files and recover the table structures if required. Here is a great article from 2014, still valid, on mysqlfrm use cases:

How to recover table structure from .frm files with MySQL Utilities

How to use Postfix as Relay for Mailgun

Mailgun is a popular SMTP relay/API service, and one of my favorites. For transactional emails I favored Mandrill before they announced their shutdown and later merged into Mailchimp; to this date, Mandrill has a cleaner network than any other transactional email service. But what if you need an SMTP relay along with transactional emails? Mandrill falls short there, as it can’t really be used as a general SMTP relay. For those cases, I prefer Mailgun over SendGrid; one of the main reasons is that SendGrid’s network quality is poorer than Mailgun’s.

If you try to configure SendGrid with Postfix, you will see that it works without smtp_sasl_auth_enable set to yes. That won’t be the case with Mailgun. To use Mailgun as an SMTP relay, you need to set the following in your main.cf file:

# set the relayhost to use 587 port of mailgun:
relayhost = [smtp.mailgun.org]:587

# set the authentication using static method:
smtp_sasl_password_maps = static:your_smtp_login:your_smtp_password
# you can get the smtp authentication from Sending >> Domain Settings >> Select Domain >> SMTP Credentials

# set sasl authentication on
smtp_sasl_auth_enable = yes

# set sasl security options to no anonymous:
smtp_sasl_security_options = noanonymous
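
After saving main.cf, a typical way to validate and apply the change on a systemd-based CentOS server (postfix reload would also work):

postfix check
systemctl restart postfix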

Once Postfix is restarted, it should start relaying through Mailgun. If, however, you see the following error:

SASL authentication failed; cannot authenticate to server smtp.mailgun.org[34.224.137.116]: no mechanism available

Along with the following:

warning: SASL authentication failure: No worthy mechs found

This means you are lacking the SASL authentication libraries for Postfix, or libsasl2 alone is not enough to cover the dependencies. To resolve this, you can install the Cyrus SASL libraries using the following:

yum install cyrus-sasl cyrus-sasl-lib cyrus-sasl-plain

That should be it; after another Postfix restart, your server should now send mail using Mailgun as the relay.
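
A quick way to verify the relay end to end, assuming the mailx package is installed and a CentOS-style log path; replace the recipient with an address you control:

echo "Mailgun relay test" | mail -s "Mailgun relay test" you@example.com
tail -n 50 /var/log/maillog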

How to Restore MySQL Database Using SSH

This is a simple step-by-step post to help my customers. You require two tools for this.

i) PuTTY – SSH client
ii) WinSCP – file transfer client over SSH (you may use FTP or a file manager as well)

Once you have downloaded PuTTY, open it and enter your server’s IP in the Host Name section:

Now click Open. At the ‘login as’ prompt, type ‘root’ and press Enter, then type the password and press Enter. If the root password is correct, it will log you in to the command shell prompt; otherwise it will show an authentication error.

Now open WinSCP and log in to your server the same way over SSH. Once connected, transfer your SQL file to root’s home directory.

Login to your server using WinSCP
WinSCP Transfer

To transfer the file, locate the SQL file from your desktop on the left side and drag it to the right side of the WinSCP window; that will start the transfer. Remember the destination path on the server; it is shown in the red-marked area in the image.

Once the file is transferred, come back to PuTTY and change directory to the path you noted in WinSCP; in our case, it is /root/:

cd /root/
mysql -uusername -ppassword your_database_name < your_sql_file_name.sql

Remember the following: your MySQL username goes right after ‘-u’ without any space, and the password goes right after ‘-p’ in the same way, without a space. Replace ‘your_database_name’ with your actual database name on the server, and ‘your_sql_file_name.sql’ with the exact name of the SQL file you uploaded.

Now, given a little time, the command should complete the restore. If it returns any errors, you will need to address them to get a complete restore.
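
A quick hedged check that the restore actually populated the database, using the same placeholder credentials as above:

mysql -uusername -ppassword -e "SHOW TABLES;" your_database_name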

Can You Cancel/Abort/Rollback an HTTP Request?

There are cases where frontend developers want to cancel REST requests they have submitted using Ajax. It is one of the popular questions in the frontend developer community. Let’s focus on this case today.

To answer the question in short: no, you cannot cancel an HTTP request once it is submitted. There are catches here that require a clear understanding, so let’s elaborate a bit.

HTTP is Stateless by Design

HTTP is stateless by design. So what does it mean that a request has no state? It means we do not know the destination or what is happening with that specific request at the back end; we can only acknowledge the response. For example, if you have a REST API behind a load balancer and you send a POST request to the API, you will not know which of the many HTTP servers behind the load balancer serves your request. For such cases, in full-fledged web applications we create ‘cookies’ or ‘sessions’ at the application level and extend HTTP’s functionality to store the state of a request and follow it. For each form request, for example, we submit state along with it so the load balancer can recognize that state and route us to the same destination. But we have to remember: this is not an HTTP feature, it is done at the application level to add statefulness on top of HTTP requests.

What does this mean? Since REST has no state, we are not aware of the request’s destination, only of its response. So how can we cancel a request whose actual destination we do not know? Now you might think: how about adding a cookie and cancelling it that way? Yes, we do that at the web application layer, but can plain REST HTTP requests be stateful? Not really; unfortunately that is not part of the design of HTTP or REST.

So, Does That Mean We Can Never Cancel an API Call?

If you are using REST, you can never truly cancel the API call: once it is submitted, it will continue processing in the backend even if you abort it in the frontend. But if cancelling is important for your application, you should consider alternatives such as SOAP or RPC. RPC is a stateful API architecture, and it is possible to design a cancel request on top of it. Note that RPC doesn’t implement ‘cancel’ by default, but because it is stateful, you are able to design a ‘cancel’ request as part of the RPC call. Google has an RPC framework called ‘gRPC’, which is a stateful API architecture. That means it is possible for you to implement cancel/abort, or even rollback/restore of a state, with gRPC.

Google’s ‘Firestore’ database, for example, has gRPC support, which is basically a stateful counterpart to Firestore’s stateless REST API.