Install Elasticsearch in cPanel – CentOS / java.lang.NoClassDefFoundError: Could not initialize class com.sun.jna.Native

If you are trying to install Elasticsearch 7.x on CentOS with cPanel, you are probably seeing that Elasticsearch installs fine but fails to start, with an error like the following:

java.lang.NoClassDefFoundError: Could not initialize class com.sun.jna.Native
        at org.elasticsearch.systemd.Libsystemd.lambda$static$0(Libsystemd.java:23) ~[?:?]
        at java.security.AccessController.doPrivileged(AccessController.java:318) ~[?:?]
        at org.elasticsearch.systemd.Libsystemd.<clinit>(Libsystemd.java:22) ~[?:?]
        at org.elasticsearch.systemd.SystemdPlugin.sd_notify(SystemdPlugin.java:115) ~[?:?]
        at org.elasticsearch.systemd.SystemdPlugin.onNodeStarted(SystemdPlugin.java:126) ~[?:?]
        at java.util.ArrayList.forEach(ArrayList.java:1511) ~[?:?]
        at org.elasticsearch.node.Node.start(Node.java:998) ~[elasticsearch-7.15.1.jar:7.15.1]
        at org.elasticsearch.bootstrap.Bootstrap.start(Bootstrap.java:313) ~[elasticsearch-7.15.1.jar:7.15.1]
        at org.elasticsearch.bootstrap.Bootstrap.init(Bootstrap.java:408) ~[elasticsearch-7.15.1.jar:7.15.1]
        at org.elasticsearch.bootstrap.Elasticsearch.init(Elasticsearch.java:167) ~[elasticsearch-7.15.1.jar:7.15.1]
        at org.elasticsearch.bootstrap.Elasticsearch.execute(Elasticsearch.java:158) ~[elasticsearch-7.15.1.jar:7.15.1]
        at org.elasticsearch.cli.EnvironmentAwareCommand.execute(EnvironmentAwareCommand.java:75) ~[elasticsearch-7.15.1.jar:7.15.1]
        at org.elasticsearch.cli.Command.mainWithoutErrorHandling(Command.java:114) ~[elasticsearch-cli-7.15.1.jar:7.15.1]
        at org.elasticsearch.cli.Command.main(Command.java:79) ~[elasticsearch-cli-7.15.1.jar:7.15.1]
        at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:123) ~[elasticsearch-7.15.1.jar:7.15.1]
        at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:81) ~[elasticsearch-7.15.1.jar:7.15.1]

The error appears when the temporary directory Elasticsearch uses is not writable and executable by the elasticsearch user. You need to make two changes to get Elasticsearch starting.

First, edit the elasticsearch sysconfig file:

nano /etc/sysconfig/elasticsearch

and add the following line:

ES_TMPDIR=/tmp

Next, you need to remount /tmp with the exec option, as cPanel normally mounts it with noexec:

mount -o remount,exec /tmp
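
Note that a manual remount does not survive a reboot (cPanel typically manages /tmp through its securetmp script). If you would rather not keep /tmp executable, a hedged alternative sketch is to give Elasticsearch a private temp directory on an exec-mounted filesystem:

# a sketch: dedicated temp directory owned by the elasticsearch user
mkdir -p /usr/share/elasticsearch/tmp
chown elasticsearch:elasticsearch /usr/share/elasticsearch/tmp
# then point ES_TMPDIR at it in /etc/sysconfig/elasticsearch instead of /tmp:
# ES_TMPDIR=/usr/share/elasticsearch/tmp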

You should be done now and can start Elasticsearch:

service elasticsearch start

Please also remember that Elasticsearch needs at least 3GB of RAM to start comfortably. If you have less RAM than that, you are probably hitting the system's OOM killer.
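
If RAM is tight, you can also cap the JVM heap so the node fits in the memory you have. On Elasticsearch 7.7 and later, a drop-in file under /etc/elasticsearch/jvm.options.d/ is the supported place for this; the 2g figure below is only an illustration:

# /etc/elasticsearch/jvm.options.d/heap.options
# fixed heap size; keep -Xms and -Xmx equal
-Xms2g
-Xmx2g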

How to decide your InnoDB Buffer Pool Size? – Why is the MySQL Buffer Pool awe-inspiring?

One key bottleneck almost every software engineer who uses MySQL fails to realize is how to utilize the MySQL caches. I have seen people treat the MySQL query cache as “the MySQL cache”. Sadly, it is not. The query cache is probably one of the worst cache features you can enable when optimizing a MySQL instance for a high-traffic website.

MySQL Buffers

MySQL utilizes buffers for caching, and these buffers live in RAM. The key benefit is that changes can also be held in RAM before the data is flushed to disk and synced to any replica. Let’s look at an example.

Let’s say you select a post with id 1001:

select * from posts where id = 1001

If you have a MySQL buffer, the data pages backing this query are placed in memory.

Now comes a subsequent call, say the following:

update posts
set title = 'This is a new title'
where id = 1001

Now, how is the above query handled in MySQL? MySQL writes the change in RAM and marks the page as dirty, to be flushed to disk later. That makes the update feel instant to interactive queries, because writing to RAM is far faster than writing to disk.

If you live under heavy load, this is an awe-inspiring tweak, because it effectively turns your MySQL instance into a RAM-based database with an IO layer that tracks your dirty pages and flushes them when needed. The downside is that if the system crashes, dirty pages are lost. Although “lost” is not quite accurate: with journaling enabled (InnoDB’s redo log), the changes are tracked and can be replayed or rolled back on recovery.

What is the right InnoDB Buffer Pool Size?

There is much misinformation about buffer pool sizing. The MySQL documentation suggests using up to 80% of your RAM for the pool, which can be significantly off. If your database is 500MB and you hold far more RAM than that, using 80% is quite a waste; the memory would serve you better in other buffers such as the join buffer, temporary tables, or heap tables. Blindly handing it to the InnoDB buffer pool is not worthwhile.

To start with, set the buffer pool size to the total size of your database and let the system run for a day or two. Then analyze a few MySQL status variables to decide whether you should increase or decrease it. Let’s look at the details.
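
To find that starting figure, a quick sketch against information_schema reports the data-plus-index footprint per schema. The SET GLOBAL value below is purely illustrative; online resizing exists in MySQL 5.7+ / MariaDB 10.2+, while older versions need a my.cnf change and a restart:

-- rough data + index footprint per schema, in MB
SELECT table_schema,
       ROUND(SUM(data_length + index_length) / 1024 / 1024, 1) AS size_mb
FROM information_schema.tables
GROUP BY table_schema;

-- then size the pool to match, e.g. 2GB
SET GLOBAL innodb_buffer_pool_size = 2147483648;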

To get the MySQL status variables, run the following from your MySQL console:

SHOW GLOBAL STATUS LIKE 'Innodb_buffer_pool%';

Now, this should give you details like the following:

MariaDB [(none)]> SHOW GLOBAL STATUS LIKE 'Innodb_buffer_pool%';
+---------------------------------------+--------------------------------------------------+
| Variable_name                         | Value                                            |
+---------------------------------------+--------------------------------------------------+
| Innodb_buffer_pool_dump_status        |                                                  |
| Innodb_buffer_pool_load_status        | Buffer pool(s) load completed at 210915 13:30:05 |
| Innodb_buffer_pool_resize_status      |                                                  |
| Innodb_buffer_pool_load_incomplete    | OFF                                              |
| Innodb_buffer_pool_pages_data         | 117007                                           |
| Innodb_buffer_pool_bytes_data         | 1917042688                                       |
| Innodb_buffer_pool_pages_dirty        | 557                                              |
| Innodb_buffer_pool_bytes_dirty        | 9125888                                          |
| Innodb_buffer_pool_pages_flushed      | 80192071                                         |
| Innodb_buffer_pool_pages_free         | 7339                                             |
| Innodb_buffer_pool_pages_misc         | 6726                                             |
| Innodb_buffer_pool_pages_total        | 131072                                           |
| Innodb_buffer_pool_read_ahead_rnd     | 0                                                |
| Innodb_buffer_pool_read_ahead         | 7627222                                          |
| Innodb_buffer_pool_read_ahead_evicted | 41991                                            |
| Innodb_buffer_pool_read_requests      | 480918989539                                     |
| Innodb_buffer_pool_reads              | 117010959                                        |
| Innodb_buffer_pool_wait_free          | 1206                                             |
| Innodb_buffer_pool_write_requests     | 290588732                                        |
+---------------------------------------+--------------------------------------------------+
19 rows in set (0.001 sec)

We can use a few metrics from this output to determine whether the buffer pool needs a change.

  1. Free Pages

    First, look at the variable ‘Innodb_buffer_pool_pages_free’. It reports how many free pages you have. If you have set a fairly large buffer pool, the total page count will also be large, so the raw ‘Innodb_buffer_pool_pages_free’ number alone will not tell you whether you are running out of pool pages. Instead, calculate a ratio like the following (see the query sketch after this list):

    free_page_ratio = ( Innodb_buffer_pool_pages_free / Innodb_buffer_pool_pages_total ) * 100

    In my experience, free_page_ratio should not drop below 10% unless you are genuinely running out of RAM. If you have RAM to spare and the ratio sits below 10%, you are short on free pages and may want to increase the pool size.

    One thing to remember: the value might stay just below 10% all the time, which does not necessarily mean you are running out of pool space; you may simply be near the optimization sweet spot, close to the cutoff. To determine whether you are truly out of pool space, monitor the variable several times a day and watch how often it changes. If it keeps dipping lower, you are hitting the boundary and it is time to increase the buffer pool size.
  2. Read Ratio

    Next, look at the total number of read requests against the number of reads that had to come from disk. If read requests are plentiful but the ratio of disk reads to read requests is high, the buffer pool is not large enough to serve your working set from RAM, and you should increase it.

    read_ratio = (Innodb_buffer_pool_reads / Innodb_buffer_pool_read_requests) * 100

    Remember, Innodb_buffer_pool_reads counts the requests InnoDB had to satisfy from disk. The ratio should stay below 1% for maximum efficiency. In my example above it is below 0.1%, which is about as good as InnoDB performance gets.
  3. Flushed & Wait Free

    The other thing we need to track is Innodb_buffer_pool_wait_free. The variable should be zero or near zero. A higher value means situations occurred where InnoDB had to wait for pages to be flushed before it could free them. Even with a good read_ratio, if this variable shows a large number, you should increase the buffer pool size so pages do not need to be freed at peak times.
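
Here is a hedged sketch that computes all three indicators in one query, assuming MariaDB, where status counters are exposed via information_schema.GLOBAL_STATUS (MySQL 5.7+ keeps them in performance_schema.global_status instead):

-- free_page_ratio and read_ratio in percent, plus the raw wait_free counter
SELECT
  (SELECT VARIABLE_VALUE FROM information_schema.GLOBAL_STATUS
    WHERE VARIABLE_NAME = 'Innodb_buffer_pool_pages_free') * 100 /
  (SELECT VARIABLE_VALUE FROM information_schema.GLOBAL_STATUS
    WHERE VARIABLE_NAME = 'Innodb_buffer_pool_pages_total') AS free_page_ratio,
  (SELECT VARIABLE_VALUE FROM information_schema.GLOBAL_STATUS
    WHERE VARIABLE_NAME = 'Innodb_buffer_pool_reads') * 100 /
  (SELECT VARIABLE_VALUE FROM information_schema.GLOBAL_STATUS
    WHERE VARIABLE_NAME = 'Innodb_buffer_pool_read_requests') AS read_ratio,
  (SELECT VARIABLE_VALUE FROM information_schema.GLOBAL_STATUS
    WHERE VARIABLE_NAME = 'Innodb_buffer_pool_wait_free') AS wait_free;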

Continuous monitoring is essential to get the most out of the InnoDB buffer pool. It is a critical part of any MySQL-based stack and should be tuned wisely; proper usage can dramatically cut your server's CPU and IO cost and maximize overall performance.

[ERROR] We are not able to run systemctl start mysql return code: 5. Fatal error, see /var/log/installLogs.txt for full details – CyberPanel

If you are trying to install CyberPanel 2.1.2 on CentOS 7, you are probably seeing the installation fail with the following on the screen:

[ERROR] We are not able to run systemctl start mysql return code: 5.  Fatal error, see /var/log/installLogs.txt for full details.

This appears because the CyberPanel installation script, located here:

/usr/local/cyberpanel/install/installCyberPanel.py

tries to start MariaDB 10.4 with the following:

systemctl start mysql

Unfortunately, that unit name only existed for MariaDB 10.3 and below. Since 10.4, MariaDB uses the following command:

systemctl start mariadb

CyberPanel already uses that command on CentOS 8 and Ubuntu, but it looks like CentOS 7 now installs MariaDB 10.4 by default, where it previously defaulted to 10.3.

Even if you edit this command, CyberPanel re-clones the script, so it cannot be fixed from there. The nasty trick is to put a mysql.service unit file in place before starting the CyberPanel installation. Here is what you need to do.

  1. First, reload your VPS with CentOS 7.
  2. Run yum update and install nano:
yum update -y
yum install nano -y

  3. Now, create the following file with the content given below:

nano /usr/lib/systemd/system/mysql.service

Fill the file with the following content:

# It's not recommended to modify this file in-place, because it will be
# overwritten during package upgrades.  If you want to customize, the
# best way is to create a file "/etc/systemd/system/mariadb.service",
# containing
#       .include /usr/lib/systemd/system/mariadb.service
#       ...make your changes here...
# or create a file "/etc/systemd/system/mariadb.service.d/foo.conf",
# which doesn't need to include ".include" call and which will be parsed
# after the file mariadb.service itself is parsed.
#
# For more info about custom unit files, see systemd.unit(5) or
# https://mariadb.com/kb/en/mariadb/systemd/
#
# Copyright notice:
#
# This file is free software; you can redistribute it and/or modify it
# under the terms of the GNU Lesser General Public License as published by
# the Free Software Foundation; either version 2.1 of the License, or
# (at your option) any later version.

[Unit]
Description=MariaDB 10.4.21 database server
Documentation=man:mysqld(8)
Documentation=https://mariadb.com/kb/en/library/systemd/
After=network.target

[Install]
WantedBy=multi-user.target
Alias=mysql.service
Alias=mysqld.service


[Service]

##############################################################################
## Core requirements
##

Type=notify

# Setting this to true can break replication and the Type=notify settings
# See also bind-address mysqld option.
PrivateNetwork=false

##############################################################################
## Package maintainers
##

User=mysql
Group=mysql

# CAP_IPC_LOCK To allow memlock to be used as non-root user
# CAP_DAC_OVERRIDE To allow auth_pam_tool (which is SUID root) to read /etc/shadow when it's chmod 0
#   does nothing for non-root, not needed if /etc/shadow is u+r
# CAP_AUDIT_WRITE auth_pam_tool needs it on Debian for whatever reason
CapabilityBoundingSet=CAP_IPC_LOCK CAP_DAC_OVERRIDE CAP_AUDIT_WRITE

# PrivateDevices=true implies NoNewPrivileges=true and
# SUID auth_pam_tool suddenly doesn't do setuid anymore
PrivateDevices=false

# Prevent writes to /usr, /boot, and /etc
ProtectSystem=full

# Doesn't yet work properly with SELinux enabled
# NoNewPrivileges=true

# Prevent accessing /home, /root and /run/user
ProtectHome=true

# Execute pre and post scripts as root, otherwise it does it as User=
PermissionsStartOnly=true



# Perform automatic wsrep recovery. When server is started without wsrep,
# galera_recovery simply returns an empty string. In any case, however,
# the script is not expected to return with a non-zero status.
# It is always safe to unset _WSREP_START_POSITION environment variable.
# Do not panic if galera_recovery script is not available. (MDEV-10538)
ExecStartPre=/bin/sh -c "systemctl unset-environment _WSREP_START_POSITION"
ExecStartPre=/bin/sh -c "[ ! -e /usr/bin/galera_recovery ] && VAR= || \
 VAR=`cd /usr/bin/..; /usr/bin/galera_recovery`; [ $? -eq 0 ] \
 && systemctl set-environment _WSREP_START_POSITION=$VAR || exit 1"

# Needed to create system tables etc.
# ExecStartPre=/usr/bin/mysql_install_db -u mysql

# Start main service
# MYSQLD_OPTS here is for users to set in /etc/systemd/system/mariadb.service.d/MY_SPECIAL.conf
# Use the [Service] section and Environment="MYSQLD_OPTS=...".
# This isn't a replacement for my.cnf.
# _WSREP_NEW_CLUSTER is for the exclusive use of the script galera_new_cluster
ExecStart=/usr/sbin/mysqld $MYSQLD_OPTS $_WSREP_NEW_CLUSTER $_WSREP_START_POSITION

# Unset _WSREP_START_POSITION environment variable.
ExecStartPost=/bin/sh -c "systemctl unset-environment _WSREP_START_POSITION"



KillSignal=SIGTERM

# Don't want to see an automated SIGKILL ever
SendSIGKILL=no

# Restart crashed server only, on-failure would also restart, for example, when
# my.cnf contains unknown option
Restart=on-abort
RestartSec=5s

UMask=007

##############################################################################
## USERs can override
##
##
## by creating a file in /etc/systemd/system/mariadb.service.d/MY_SPECIAL.conf
## and adding/setting the following under [Service] will override this file's
## settings.

# Useful options not previously available in [mysqld_safe]

# Kernels like killing mysqld when out of memory because its big.
# Lets temper that preference a little.
# OOMScoreAdjust=-600

# Explicitly start with high IO priority
# BlockIOWeight=1000

# If you don't use the /tmp directory for SELECT ... OUTFILE and
# LOAD DATA INFILE you can enable PrivateTmp=true for a little more security.
PrivateTmp=false

# Set an explicit Start and Stop timeout of 900 seconds (15 minutes!)
# this is the same value as used in SysV init scripts in the past
# Galera might need a longer timeout, check the KB if you want to change this:
# https://mariadb.com/kb/en/library/systemd/#configuring-the-systemd-service-timeout
TimeoutStartSec=900
TimeoutStopSec=900

##
## Options previously available to be set via [mysqld_safe]
## that now needs to be set by systemd config files as mysqld_safe
## isn't executed.
##

# Number of files limit. previously [mysqld_safe] open-files-limit
LimitNOFILE=32768

# Maximium core size. previously [mysqld_safe] core-file-size
# LimitCore=

# Nice priority. previously [mysqld_safe] nice
# Nice=-5

# Timezone. previously [mysqld_safe] timezone
# Environment="TZ=UTC"

# Library substitutions. previously [mysqld_safe] malloc-lib with explicit paths
# (in LD_LIBRARY_PATH) and library name (in LD_PRELOAD).
# Environment="LD_LIBRARY_PATH=/path1 /path2" "LD_PRELOAD=

# Flush caches. previously [mysqld_safe] flush-caches=1
# ExecStartPre=sync
# ExecStartPre=sysctl -q -w vm.drop_caches=3

# numa-interleave=1 equalivant
# Change ExecStart=numactl --interleave=all /usr/sbin/mysqld......

# crash-script equalivent
# FailureAction=
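
Once the file is saved, it's worth telling systemd to pick up the new unit (a hedged extra step; the installer may reload units itself, but it does no harm):

systemctl daemon-reload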

Now start your CyberPanel installation as usual with the following:

sh <(curl https://cyberpanel.net/install.sh || wget -O - https://cyberpanel.net/install.sh)

Hope this solves the problem. I am hoping CyberPanel will soon address the issue the way they did for CentOS 8 and Ubuntu.

ERROR: Cannot open TUN/TAP dev /dev/net/tun: No such file or directory (errno=2) – LXC/LXD

If you are seeing the above error in LXC, you need to do two things.

  1. Make sure the LXC container is running in privileged mode (see the sketch after this list).
  2. Run the following commands inside the container:
# /dev/net may not exist inside the container; create it first
mkdir -p /dev/net
mknod /dev/net/tun c 10 200
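
For the first step, on an LXD-managed host the privileged flag can be toggled like this ('mycontainer' is a placeholder name):

lxc config set mycontainer security.privileged true
lxc restart mycontainer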

Now, you may run the OpenVPN command to start the VPN client:

openvpn --pull-filter ignore redirect-gateway --config ovpn.ovpn

# assuming your VPN config file is ovpn.ovpn

failed to open db file /var/spool/exim/db/ratelimit: permission denied

cPanel incoming mails are failing, with an error like the following in the exim_mainlog:

failed to open db file /var/spool/exim/db/ratelimit: permission denied

The error appears due to permission issues with the Exim db files, or because the files are corrupted. These files are recreated when Exim restarts, hence we can do the following:

# delete the db files
rm -rf /var/spool/exim/db/*

# restart exim
service exim restart

# fix permission of exim spool
chown -Rf mailnull.mail /var/spool/exim
chmod 0750 /var/spool/exim
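
To verify, the spool should now be owned by mailnull:mail:

ls -la /var/spool/exim/db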

You should be done now.

How to Configure Postfix Relay

Open your main.cf file; in my case, it's a Zimbra main.cf:

nano /opt/zimbra/common/conf/main.cf

Now change the following settings:

relayhost = [smtp.yourrelayserver.com]:587
smtp_sasl_auth_enable = yes
smtp_sasl_security_options = noanonymous
smtp_sasl_password_maps = static:relayusername:relaypassword
smtp_sasl_mechanism_filter = login

You need to replace 3 things here:

  1. smtp.yourrelayserver.com should be your original relay server.
  2. relayusername should be the relay authentication username.
  3. relaypassword should be the relay authentication password.

Once done, restart Postfix and verify that mail is relaying through the new relay host you have added.
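
The restart command depends on your setup: on a plain Postfix box a reload is enough, while Zimbra (assumed above) ships its own MTA control script:

# plain Postfix
postfix reload

# Zimbra, run as the zimbra user
su - zimbra -c 'zmmtactl restart'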

How to Set Timezone in LXC/LXD container CentOS 7

LXC takes UTC as the default timezone for the guest. To set a custom timezone, you need to run this manually:

ln -sf /usr/share/zoneinfo/Asia/Dhaka /etc/localtime

In our case, we set Asia/Dhaka as the default timezone. On Ubuntu-based systems, you may use the ‘tzdata’ package.
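
If systemd is running inside the container, which is usual for a CentOS 7 guest, timedatectl should achieve the same thing:

timedatectl set-timezone Asia/Dhaka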

How to aggregate a column with non-aggregable values into an array using GROUP BY in PostgreSQL

Let’s imagine a scenario like the following:

We have three columns with the following values in a table named ‘sale_line’:

----------------------------------------
product_id	qty_done	sale_id
----------------------------------------
1010		5.0		101
1010		1.0		102
1010		2.0		103
1012		1.0		104
1012		4.0		105
1012		4.0		106
1012		9.0		107

The product_id column identifies which product we are referring to, qty_done holds the quantity sold, and sale_id refers to the sale order in which the product was dispatched. So the example table records how many units of each product were dispatched across different sale orders. We can write a query that groups by product_id and aggregates the total quantity sold:

select product_id, sum(qty_done) as total_sold from sale_line group by product_id;

The resulting output would be the following:

---------------------------
product_id	total_sold
---------------------------
1010		8.0		
1012		17.0	

We can see how GROUP BY gives you the benefit of aggregation. But notice that we have a column called ‘sale_id’ that doesn’t fit any usual aggregation function like ‘count’, ‘sum’, or ‘avg’. Can we still use it for anything?

Sometimes you may want to track down the orders behind each product’s total: a grouped result where each product carries the list of orders that contributed to its total sales. Can we accomplish that in one query?

We actually can. Instead of a familiar aggregation function like ‘count’ or ‘sum’, we can collect the ids into a resulting array. Postgres provides an array aggregation function called ‘array_agg’ that produces exactly such a result. Have a look at the following query:

select product_id, sum(qty_done) as total_sold, array_agg(sale_id) as sale_ids from sale_line group by product_id;

The resulting output would be the following:

-----------------------------------------
product_id	total_sold	sale_ids
-----------------------------------------
1010		8.0		{101, 102, 103}
1012		17.0		{104, 105, 106, 107}

As you can see, we have made Postgres return an array containing the sale_ids that make up each total_sold, even though the sale_id column wasn’t conventionally aggregable, or at least didn’t fit the usual numeric pattern.
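
As a side note, array_agg also accepts DISTINCT and ORDER BY inside the call, which helps when a join would otherwise duplicate ids; a small variation of the query above:

select product_id,
       sum(qty_done) as total_sold,
       array_agg(distinct sale_id order by sale_id) as sale_ids
from sale_line
group by product_id;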

This technique has many use cases. For example, suppose you keep separate models/tables for sale orders, purchase orders, pickings, and invoices, where the sale orders hold the price information and the pickings do not, but the pickings record the quantities actually dispatched. You may first aggregate total_sold from the picking table, then compute an average price from the sale_ids you collected via GROUP BY and array aggregation, and finally merge the two. It sounds complicated, I understand, but it is all about cutting your reporting time and keeping your application scalable: a constant-cost reporting algorithm in a single SQL query, even if you process ten times more orders in the future.

How to get purchase_id from stock_picking table in Odoo?

stock_picking is one of the core models in Odoo if you are using Odoo for ‘Inventory’. Now, if you use the ‘Sales’ module, stock_picking is extended with a new field on the model and a matching column on the database table, named ‘sale_id’. It can be used to detect whether a picking originated from a sale order. But if you install the ‘Purchase’ module, the stock_picking model is extended with ‘purchase_id’, the purchase counterpart of ‘sale_id’, yet this time the database table is not expanded with a column the way it is for the ‘Sales’ module.

What does this mean?

This means you can use purchase_id on a stock_picking only through the Odoo ORM. An example: say we would like to fetch the pickings that originated from purchase orders, aka GRNs; we could use something like this:

purchase_pickings = self.env['stock.picking'].search([('purchase_id', '!=', False)])

This works, but only as long as you are not building a report over a huge set of pickings, purchase orders, and sale orders, where you would want raw SQL with efficient joins to generate the report quickly.

Let me demonstrate what I mean.

We know stock_picking has a field called sale_id that is also backed by a database column. Hence, to get all pickings belonging to a sale order, we may first use the ORM:

sale_pickings = self.env['stock.picking'].search([('sale_id', '!=', False)])

or direct PostgreSQL:

query = """select * from stock_picking where sale_id is not null"""
self.env.cr.execute(query)
result = self.env.cr.fetchall()

The second example is not only faster; it also lets you go further with joins or select specific fields of the result. With the ORM, selecting specific fields is only possible via the ‘read’ method, which in turn does not accept a domain the way ‘search’ does.
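
Since ‘read’ came up, here is a minimal sketch of projecting specific fields with the ORM; it still needs a prior search(), since read() takes no domain:

# search() filters, read() projects selected fields
pickings = self.env['stock.picking'].search([('sale_id', '!=', False)])
data = pickings.read(['name', 'sale_id'])
# data is a list of dicts; many2one values come back as (id, display_name) pairs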

With SQL, we are able to do things like the following:

query = """select sale_order.name, stock_picking.name from stock_picking left join on stock_picking.sale_id = sale_order.id where stock_picking.sale_id is not null"""
self.env.cr.execute(query)
result = self.env.cr.fetchall()

This gives you each sale order together with its picking name. Producing such a result through the ORM is costly, as it follows an ‘N+1’ query pattern, which is inefficient for reporting and for scaling the software.

So we can use such a field and make reports efficient with SQL, because sale_id exists as a real column in the database. But what if you want to trace how a product was purchased and then sold? Then we also need the purchase_order model joined into the query above, right? Unfortunately, since the ‘Purchase’ module doesn’t add a purchase_id column, we cannot use it directly.

So, how can we still use purchase_id in an SQL query to generate reports in Odoo?

First, we need to see how purchase_id is added in Odoo.

purchase_id is added to the stock_picking model in the ‘purchase_stock’ module. If you open the following file:

purchase_stock/models/stock.py

you can see how purchase_id is defined as a related Many2one field:

class StockPicking(models.Model):
    _inherit = 'stock.picking'

    purchase_id = fields.Many2one('purchase.order', related='move_lines.purchase_line_id.order_id',
        string="Purchase Orders", readonly=True)

A related field in Odoo is like a pointer: syntactic sugar over foreign keys, intended for less-used fields. If such a field is heavily used, it can become a performance issue, since Odoo must perform multiple lookups instead of a single direct one. Back to the point: purchase_id is related through ‘move_lines.purchase_line_id.order_id’. That is a long chain, so let me walk through it one hop at a time:

  1. move_lines: stock_picking has a One2many relation to the stock.move model, giving the moves that belong to the picking.
  2. purchase_line_id: Each move line derives from a purchase order line and keeps that line's ID in a foreign key on the stock.move model, namely purchase_line_id.
  3. order_id: Each purchase_order_line keeps a foreign key to the purchase.order model in its order_id field.

Now we know how purchase_id reaches the purchase order through the following chain:

Picking > Moves > Purchase Order Line > Purchase Order

We can therefore join along the same chain to find the purchase order for a stock picking:

select purchase_order.name, stock_picking.name
from stock_picking
left join stock_move on stock_move.picking_id = stock_picking.id
left join purchase_order_line on purchase_order_line.id = stock_move.purchase_line_id
left join purchase_order on purchase_order.id = purchase_order_line.order_id
where stock_move.purchase_line_id is not null
group by stock_picking.name, purchase_order.name

Here we get pickings and purchases related in a single query. The same concept can derive plenty of other data; say you would like to see how many of your products were purchased, then sold, then returned: all of it can be done in a few queries, without the N+1 problem.

How to Display Select Result Vertically in Postgres psql Console

MySQL has a pretty cool feature in \G, which renders a query result vertically. If a table has a lot of columns, displaying them horizontally gets unwieldy, so you want to see the result vertically.

In Postgres, this is called ‘expanded display’. To toggle it on or off, use the following command in the psql console:

my_prod=# \x
Expanded display is on.
my_prod=# \x
Expanded display is off.
my_prod=#

\x toggles expanded display: if it is on, \x turns it off, and vice versa. With expanded display on, each row is shown as vertical key-value pairs of column and value, which is much easier to read for wide tables.
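
As a side note, psql also supports an automatic mode, which switches to expanded output only when a row is too wide for the screen:

my_prod=# \x auto
Expanded display is used automatically.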