How to Set Timezone in Postgres Databases

Postgres allows you to set a timezone per database. To view the list of available timezones, you may use the following query from psql:

select * from pg_timezone_names;
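
The list is long; you can filter it with a WHERE clause, for example:

select name from pg_timezone_names where name like 'Asia/%';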

To set the timezone for, say, a database called ‘inventory_report’, you may use the following query:

alter database inventory_report set timezone to 'Asia/Dhaka';
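
The setting applies to new connections. To verify, reconnect to the database and check the timezone:

\c inventory_report
show timezone;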

Tips:

To List Postgres Databases, you may use the following command from psql:

\l

To List Postgres Tables in a Database:

Connect to the database first:

\c my_prod

Now, list:

\dt
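
And to inspect the structure of a particular table (my_table here is a placeholder name):

\d my_table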

How to Allow Remote User Access in PostgreSQL

Configuring PostgreSQL is a bit tricky and complicated compared to MySQL in a few cases. One such case is allowing remote user access in PGSQL. You might want to allow your home network to connect to your test or production database to run a few queries. If you are not using a tool like Percona Distribution for PostgreSQL or a cluster tool like ClusterControl, that can get a bit complicated.

I have therefore put together the list of steps we need to follow to achieve the goal.

Set PostgreSQL to Allow Remote Addresses

To do that, open up your postgresql.conf file. It is usually located at /var/lib/pgsql/9.6/data/postgresql.conf. I am running PGSQL 9.6, hence the folder is named 9.6. If you are running any other version of PGSQL, the path will change based on the version.

nano /var/lib/pgsql/9.6/data/postgresql.conf

Find the directive ‘listen_addresses’ and uncomment it, if it is commented. Also, set it to the wildcard, like the following:

listen_addresses = '*'
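
If you would rather not listen on all interfaces, you can list specific addresses for the server to bind to instead (192.168.1.5 is a placeholder for one of your server’s own IPs):

listen_addresses = 'localhost,192.168.1.5'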

Create a PostgreSQL User with Encrypted Password

Open up your psql console from the user postgres and run the query given below:

su - postgres
psql
create user new_user with encrypted password 'testu1234';

You should always use ‘with encrypted password’, as this avoids storing your password in plaintext; it is stored as an md5 hash instead.

Now, grant the user privileges on the database you would like it to access:

grant all privileges on database my_prod to new_user;
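
Note that in PostgreSQL, a database-level grant covers connect, create, and temp rights only; it does not grant access to tables owned by another role. If new_user also needs to read and write existing tables, a grant like the following may be needed as well (a sketch; adjust the schema name to yours):

grant all privileges on all tables in schema public to new_user;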

Add Remote Access for User in PostgreSQL

PostgreSQL manages a host-based authentication file to control access from remote sources. The file is located at /var/lib/pgsql/9.6/data/pg_hba.conf. To allow remote access from an IP, we need to add a line to this file.

Edit the file in nano:

nano /var/lib/pgsql/9.6/data/pg_hba.conf

Add the following line:

host    all             new_user           27.147.176.2/32       md5

Here, 27.147.176.2 is the IP from which I would like to access the database.
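
If you would like to allow your whole home network rather than a single IP, you may use a subnet instead (192.168.0.0/24 is a placeholder; use your own network range):

host    all             new_user           192.168.0.0/24        md5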

Restart PostgreSQL

Now, restart your PostgreSQL instance

systemctl restart postgresql-9.6.service

You should now be set.
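
From the remote machine, you can verify access with psql (replace ip.ip.ip.ip with your server’s IP):

psql -h ip.ip.ip.ip -U new_user -d my_prod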

A Few Additional Things for Reference

  1. How to list the users of PostgreSQL from the console?

    From the Postgres console (psql), run the following command:

    \du

  2. How to change the password for a PostgreSQL user?

    From the Postgres console (psql), run the following query:

    ALTER USER username WITH ENCRYPTED PASSWORD 'password';

    Here, username is the user whose password you would like to change. Remember to use the ‘encrypted password’ keyword so the password is saved as an md5 hash instead of plaintext.

  3. How to change a user to a superuser?

    You may use the following query:

    ALTER USER new_user WITH SUPERUSER;

CentOS 6: Yum Error: All Mirror URLs Are Not Using FTP, HTTP[S] or FILE

If you are still using CentOS 6, and trying to update the system using the following command:

yum update -y

You will probably end up with the following:

Loaded plugins: fastestmirror, refresh-packagekit, security
Setting up Update Process
Determining fastest mirrors
YumRepo Error: All mirror URLs are not using ftp, http[s] or file.
Eg. Invalid release/repo/arch combination/
removing mirrorlist with no valid mirrors: /var/cache/yum/x86_64/6/base/mirrorlist.txt
Error: Cannot find a valid baseurl for repo: base

The error comes up because CentOS 6 has reached end of life, and the repository URLs have been moved to the CentOS Vault. To update CentOS now, you need to replace the URLs.

How to Fix

To fix the error, you need to replace your CentOS-Base.repo file. Open up the following file using nano:

nano /etc/yum.repos.d/CentOS-Base.repo

Now, replace the full contents with the following:

[base]
name=CentOS-$releasever - Base
# mirrorlist=http://mirrorlist.centos.org/?release=$releasever&arch=$basearch&repo=os&infra=$infra
# baseurl=http://mirror.centos.org/centos/$releasever/os/$basearch/
baseurl=https://vault.centos.org/6.10/os/$basearch/
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-6

# released updates
[updates]
name=CentOS-$releasever - Updates
# mirrorlist=http://mirrorlist.centos.org/?release=$releasever&arch=$basearch&repo=updates&infra=$infra
# baseurl=http://mirror.centos.org/centos/$releasever/updates/$basearch/
baseurl=https://vault.centos.org/6.10/updates/$basearch/
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-6

# additional packages that may be useful
[extras]
name=CentOS-$releasever - Extras
# mirrorlist=http://mirrorlist.centos.org/?release=$releasever&arch=$basearch&repo=extras&infra=$infra
# baseurl=http://mirror.centos.org/centos/$releasever/extras/$basearch/
baseurl=https://vault.centos.org/6.10/extras/$basearch/
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-6

Notice how we replaced mirror.centos.org with vault.centos.org in the repo file.
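
If you would rather script the change than edit the file by hand, a sed sketch like the following should produce the same result on a stock CentOS-Base.repo (review the file afterwards):

sed -i -e 's/^mirrorlist=/# mirrorlist=/' \
       -e 's|^#baseurl=http://mirror.centos.org/centos/$releasever|baseurl=https://vault.centos.org/6.10|' \
       /etc/yum.repos.d/CentOS-Base.repo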

Now, clean the yum cache and update:

yum clean all
yum update -y

You should be good to go now.

How to Backup/Dump and Restore PostgreSQL Databases

Here is the situation. We have a customer who runs a POS backed by PostgreSQL. The customer wants a second POS with the SQL updated up to last night. Point to note: he didn’t want to use Master/Replica or Master/Master replication; instead, a day-earlier backup was to be restored.

A similar situation probably applies to developers who move databases between staging and production environments and vice versa. In such a situation, you need a technique for dumping and restoring databases.

Backup/Dump PostgreSQL Databases

To dump a PostgreSQL database, you need to use pg_dump. First, ssh to your server, change the user to postgres, and then run the dump:

su - postgres
pg_dump your_database > your_database_2021_13_11.sql

Here, your_database is the name of the database you are trying to dump.
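
If the database is large, you may prefer pg_dump’s compressed custom format, which is restored with pg_restore instead of psql — a quick sketch:

# dump in the compressed custom format
pg_dump -Fc your_database > your_database_2021_13_11.dump
# later, on the destination server:
pg_restore -d your_database /var/lib/pgsql/your_database_2021_13_11.dump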

Copy the SQL to the Remote Server

Once the dump is done, now you may copy the SQL file to the remote server. You may do that using rsync:

rsync -vrplogDtH --progress your_database_2021_13_11.sql root@ip.ip.ip.ip:/var/lib/pgsql/

Replace your_database_2021_13_11.sql with your SQL dump name and ‘ip.ip.ip.ip’ with the IP of your remote destination.

Restore the Dump

Now, ssh to the remote server, and continue with the following command:

# change to postgres user
su - postgres
# drop the old database
psql -c 'drop database your_database;'
# create database (replace your_username with the one you want to use for the database)
psql -c 'create database your_database with owner your_username;'
# restore database
psql your_database < /var/lib/pgsql/your_database_2021_13_11.sql

Once done, you should be able to use the newly restored database. Good luck.

How to Add RAW Image into An Existing OpenVZ 7 VM

OpenVZ 7 supports both KVM VMs and OpenVZ containers. For a customer, we were trying to import a CentOS 7 based KVM VM into an OpenVZ 7 KVM VM. The problem was that the VM had been created in RAW format, and OpenVZ 7 does not support RAW files for KVM; it supports QCOW2. Hence, we had to convert the RAW image into a QCOW2 image first.

How to Convert a RAW KVM VM Image to a QCOW2 VM Image

To convert raw to qcow2, you may use the qemu-img tool, which comes with the KVM setup. Let’s say your raw image is ‘harddisk.hdd.raw’, and you would like to produce a qcow2 image called ‘harddisk.hdd.qcow2’. To do that, you may run the following:

qemu-img convert -f raw -O qcow2 /root/harddisk.hdd.raw /root/harddisk.hdd.qcow2

Make sure the paths in the command point to where your harddisk.hdd.raw is stored.
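
You can sanity-check the converted image before swapping it in:

qemu-img info /root/harddisk.hdd.qcow2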

Replace the disk in OpenVZ 7

In our case, the files are stored under /vz/vmprivate/. Under this OpenVZ folder, you will find directories named after VM UUIDs. In our case, it was the following:

/vz/vmprivate/9d07cfef-42bf-4fbe-ac6a-7af9c85c9475

You can also see the list of VM UUIDs by typing the following command:

prlctl list --all

First, stop the VM:

prlctl stop 9d07cfef-42bf-4fbe-ac6a-7af9c85c9475

Now, all you need to do is replace ‘harddisk.hdd’ with the one we converted:

cd /vz/vmprivate/9d07cfef-42bf-4fbe-ac6a-7af9c85c9475
mv harddisk.hdd harddisk.hdd_backup
mv /root/harddisk.hdd.qcow2 harddisk.hdd

Now, start the VM, and you should be good to go:

prlctl start 9d07cfef-42bf-4fbe-ac6a-7af9c85c9475

ERROR: cannot drop the currently open database – PostgreSQL

If you are trying to drop a database and getting an error like the following in PostgreSQL:

ERROR: cannot drop the currently open database

Then the easiest way to solve the problem is to restart your PostgreSQL server. I am using PGSQL 9.6; here was my command:

systemctl stop postgresql-9.6.service
systemctl start postgresql-9.6.service

If you are using PostgreSQL 13 or above, you now have the option to drop the database with force. You may drop the database using the following:

DROP DATABASE database_to_drop WITH (FORCE);

Replace ‘database_to_drop’ with the database you would like to drop.
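
On older versions, instead of a full restart, you can also terminate the open connections first and then drop the database — a sketch, run from psql while connected to a different database (for example, postgres):

SELECT pg_terminate_backend(pid) FROM pg_stat_activity WHERE datname = 'database_to_drop';
DROP DATABASE database_to_drop;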

Install Elasticsearch in Cpanel – CentOS / java.lang.NoClassDefFoundError: Could not initialize class com.sun.jna.Native

If you are trying to install Elasticsearch 7.* on CentOS with Cpanel, you may find that Elasticsearch installs but does not start, with an error like the following:

java.lang.NoClassDefFoundError: Could not initialize class com.sun.jna.Native
        at org.elasticsearch.systemd.Libsystemd.lambda$static$0(Libsystemd.java:23) ~[?:?]
        at java.security.AccessController.doPrivileged(AccessController.java:318) ~[?:?]
        at org.elasticsearch.systemd.Libsystemd.<clinit>(Libsystemd.java:22) ~[?:?]
        at org.elasticsearch.systemd.SystemdPlugin.sd_notify(SystemdPlugin.java:115) ~[?:?]
        at org.elasticsearch.systemd.SystemdPlugin.onNodeStarted(SystemdPlugin.java:126) ~[?:?]
        at java.util.ArrayList.forEach(ArrayList.java:1511) ~[?:?]
        at org.elasticsearch.node.Node.start(Node.java:998) ~[elasticsearch-7.15.1.jar:7.15.1]
        at org.elasticsearch.bootstrap.Bootstrap.start(Bootstrap.java:313) ~[elasticsearch-7.15.1.jar:7.15.1]
        at org.elasticsearch.bootstrap.Bootstrap.init(Bootstrap.java:408) ~[elasticsearch-7.15.1.jar:7.15.1]
        at org.elasticsearch.bootstrap.Elasticsearch.init(Elasticsearch.java:167) ~[elasticsearch-7.15.1.jar:7.15.1]
        at org.elasticsearch.bootstrap.Elasticsearch.execute(Elasticsearch.java:158) ~[elasticsearch-7.15.1.jar:7.15.1]
        at org.elasticsearch.cli.EnvironmentAwareCommand.execute(EnvironmentAwareCommand.java:75) ~[elasticsearch-7.15.1.jar:7.15.1]
        at org.elasticsearch.cli.Command.mainWithoutErrorHandling(Command.java:114) ~[elasticsearch-cli-7.15.1.jar:7.15.1]
        at org.elasticsearch.cli.Command.main(Command.java:79) ~[elasticsearch-cli-7.15.1.jar:7.15.1]
        at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:123) ~[elasticsearch-7.15.1.jar:7.15.1]
        at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:81) ~[elasticsearch-7.15.1.jar:7.15.1]

The error appears if your tmp directory is not writable and executable by Elasticsearch. You need to make two changes to get Elasticsearch to start.

First, edit the elasticsearch sysconfig file:

nano /etc/sysconfig/elasticsearch

and add the following line:

ES_TMPDIR=/tmp

Next, you need to mount /tmp with exec permission, which is normally mounted with noexec permission in Cpanel:

mount -o remount,exec /tmp
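
You can confirm that the noexec flag is gone by checking the mount options:

mount | grep ' /tmp '

Note that a remount does not survive a reboot; on Cpanel servers, /tmp is typically managed by the securetmp script, so you may need to reapply this after restarts.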

You should be done now; you may start Elasticsearch:

service elasticsearch start

Please also remember that Elasticsearch needs a significant amount of RAM to start, around 3GB in this setup. If you have less RAM than that, you are probably hitting the system’s OOM killer.

How to Decide Your Innodb Buffer Pool Size – and Why the MySQL Buffer Pool Is Awe-Inspiring

One key bottleneck that many software engineers who use MySQL fail to address is how to utilize the MySQL cache. I have seen people believe the MySQL query cache is the MySQL cache. Sadly, it is not. The query cache is probably one of the worst cache features you can enable when optimizing a MySQL instance for a high-traffic website.

MySQL Buffers

MySQL utilizes buffers for caching. These buffers are stored in RAM. The key benefit of these buffers is that changes can be staged in RAM before the data is flushed to disk, along with the sync to any replica. Let’s look at an example.

Let’s say you have selected a post with id 1001:

select * from posts where id = 1001

If you have a MySQL buffer pool, the data pages read by this query are placed in memory.

Now, consider a subsequent call like the following:

update posts
set title = 'This is a new title'
where id = 1001

Now imagine how the above query is handled in MySQL. MySQL writes the change in RAM and marks the page as dirty, to be flushed to disk later. That makes the above update super fast for interactive queries, because writing to disk is far slower than writing to RAM.

If you run under heavy load, this is an awe-inspiring tweak, because it effectively makes your MySQL instance a RAM-based database with an I/O layer that keeps track of your dirty pages and flushes them when needed. The con of the approach is that if the system crashes, the dirty pages are lost. Strictly speaking, they are not always lost: with journaling enabled, the system keeps track of the changes and can roll back on restart.

What is the right Innodb Buffer Pool Size?

There is much misinformation about the right size for your buffer pool. MySQL says you should use 80% of your RAM as the pool, which is significantly incorrect. If your database is 500MB in size and you hold much more RAM than that, then using 80% is quite a waste. You could probably use that memory for other buffers, like the join buffer, temporary tables, or heap tables. Spending it blindly on the Innodb buffer pool is not worthwhile.

To start with, you can set the buffer pool size to the total size of your database and let the system run for a day or two; a minimal config sketch is below. Once done, you may analyze a few MySQL status variables to find out whether you should increase or decrease it. Let’s find out a few details.
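
A minimal my.cnf sketch, assuming a data set of roughly 2GB (the size is a placeholder; match it to your database):

[mysqld]
# start near the total size of your data set, then tune using the status variables below
innodb_buffer_pool_size = 2G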

To get the MySQL status variables, run the following from your MySQL console:

SHOW GLOBAL STATUS LIKE 'Innodb_buffer_pool%';

Now, this should give you details like the following:

MariaDB [(none)]> SHOW GLOBAL STATUS LIKE 'Innodb_buffer_pool%';
+---------------------------------------+--------------------------------------------------+
| Variable_name                         | Value                                            |
+---------------------------------------+--------------------------------------------------+
| Innodb_buffer_pool_dump_status        |                                                  |
| Innodb_buffer_pool_load_status        | Buffer pool(s) load completed at 210915 13:30:05 |
| Innodb_buffer_pool_resize_status      |                                                  |
| Innodb_buffer_pool_load_incomplete    | OFF                                              |
| Innodb_buffer_pool_pages_data         | 117007                                           |
| Innodb_buffer_pool_bytes_data         | 1917042688                                       |
| Innodb_buffer_pool_pages_dirty        | 557                                              |
| Innodb_buffer_pool_bytes_dirty        | 9125888                                          |
| Innodb_buffer_pool_pages_flushed      | 80192071                                         |
| Innodb_buffer_pool_pages_free         | 7339                                             |
| Innodb_buffer_pool_pages_misc         | 6726                                             |
| Innodb_buffer_pool_pages_total        | 131072                                           |
| Innodb_buffer_pool_read_ahead_rnd     | 0                                                |
| Innodb_buffer_pool_read_ahead         | 7627222                                          |
| Innodb_buffer_pool_read_ahead_evicted | 41991                                            |
| Innodb_buffer_pool_read_requests      | 480918989539                                     |
| Innodb_buffer_pool_reads              | 117010959                                        |
| Innodb_buffer_pool_wait_free          | 1206                                             |
| Innodb_buffer_pool_write_requests     | 290588732                                        |
+---------------------------------------+--------------------------------------------------+
19 rows in set (0.001 sec)

We can use a few metrics from this output to determine whether we need a change in the buffer or not; a query that computes the key ratios directly follows the list.

  1. Free Pages

    First, we look at the variable ‘Innodb_buffer_pool_pages_free’. It shows the number of free pages you have. If you have set a pretty large buffer pool, you should have a large number of pages in total, so the raw ‘Innodb_buffer_pool_pages_free’ number alone does not tell you whether you are running out of pool pages. You may calculate a ratio like the following:

    free_page_ratio = ( Innodb_buffer_pool_pages_free / Innodb_buffer_pool_pages_total ) * 100

    In my experience, free_page_ratio should not fall below 10% unless you are running out of RAM. If you have enough RAM and the ratio is below 10%, you have too few free pages and might want to increase the pool size.

    One thing to remember: the value might stay below 10% all the time, and that might not mean you are out of pool space; you may simply be near the optimal cutoff. To determine whether you are actually out of pool space, monitor the variable several times a day and see how often it drops. If it gets lower too often, that suggests you are hitting the boundary, and it is time to increase the buffer pool size.
  2. Read Ratio

    Next, you need to look at the number of read requests and the number of reads performed in total. If the ratio of disk reads to read requests is high, the buffer isn’t large enough to keep your working set in RAM, and you should increase the value.

    read_ratio = ( Innodb_buffer_pool_reads / Innodb_buffer_pool_read_requests ) * 100

    Remember, Innodb_buffer_pool_reads is the number of requests for which Innodb had to go to the disk to get the data. This value should be less than 1% for maximum efficiency. In my example above, the value is less than 0.1%, which makes the Innodb performance the best.
  3. Flushed & Wait Free

    The other thing we need to track is Innodb_buffer_pool_wait_free. This variable should be 0 or near 0. If it is higher, it means situations have occurred where flushing was essential to free pages. Even if you have a good read_ratio, a large number here suggests you need to increase the buffer pool size so that pages do not have to be freed at peak times.
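
As promised, here is a sketch that computes both ratios in one query on MariaDB (where the example output above came from); on MySQL 5.7+ these counters live in performance_schema.global_status instead, so adjust the table name there:

SELECT
    ROUND(100 * f.v / t.v, 2) AS free_page_ratio_pct,
    ROUND(100 * r.v / rq.v, 4) AS read_ratio_pct
FROM
    (SELECT variable_value + 0 AS v FROM information_schema.global_status
      WHERE variable_name = 'Innodb_buffer_pool_pages_free') AS f,
    (SELECT variable_value + 0 AS v FROM information_schema.global_status
      WHERE variable_name = 'Innodb_buffer_pool_pages_total') AS t,
    (SELECT variable_value + 0 AS v FROM information_schema.global_status
      WHERE variable_name = 'Innodb_buffer_pool_reads') AS r,
    (SELECT variable_value + 0 AS v FROM information_schema.global_status
      WHERE variable_name = 'Innodb_buffer_pool_read_requests') AS rq;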

Continuous monitoring is essential to use the Innodb buffer pool well. It is an extremely important part of any MySQL-based software stack and should be sized wisely. Proper usage can significantly decrease your server’s CPU and I/O cost and maximize overall performance.

[ERROR] We are not able to run systemctl start mysql return code: 5. Fatal error, see /var/log/installLogs.txt for full details – Cyberpanel

If you are trying to install Cyberpanel 2.1.2 on CentOS 7, you will probably see the installation fail with the following on screen:

[ERROR] We are not able to run systemctl start mysql return code: 5.  Fatal error, see /var/log/installLogs.txt for full details.

This appears because the Cyberpanel installation script, located here:

/usr/local/cyberpanel/install/installCyberPanel.py

wants to restart MariaDB 10.4 with the following:

systemctl start mysql

Unfortunately, this only works for MariaDB 10.3 and below. Since 10.4, MariaDB uses the following command:

systemctl start mariadb

Cyberpanel already uses that command if you are on CentOS 8 or Ubuntu. But it looks like CentOS 7 now installs MariaDB 10.4 by default, unlike the 10.3 default previously.

Even if you edit this command, Cyberpanel will re-clone the script, so it is not possible to change it there. The nasty trick is to put a mysql.service file in place before starting the Cyberpanel installation. Here is what you need to do.

  1. First, reload your VPS with CentOS 7.
  2. Run yum update and install nano:

yum update -y
yum install nano -y

  3. Now, create the following file with the text given below:

nano /usr/lib/systemd/system/mysql.service

Fill the file with the following text:

# It's not recommended to modify this file in-place, because it will be
# overwritten during package upgrades.  If you want to customize, the
# best way is to create a file "/etc/systemd/system/mariadb.service",
# containing
#       .include /usr/lib/systemd/system/mariadb.service
#       ...make your changes here...
# or create a file "/etc/systemd/system/mariadb.service.d/foo.conf",
# which doesn't need to include ".include" call and which will be parsed
# after the file mariadb.service itself is parsed.
#
# For more info about custom unit files, see systemd.unit(5) or
# https://mariadb.com/kb/en/mariadb/systemd/
#
# Copyright notice:
#
# This file is free software; you can redistribute it and/or modify it
# under the terms of the GNU Lesser General Public License as published by
# the Free Software Foundation; either version 2.1 of the License, or
# (at your option) any later version.

[Unit]
Description=MariaDB 10.4.21 database server
Documentation=man:mysqld(8)
Documentation=https://mariadb.com/kb/en/library/systemd/
After=network.target

[Install]
WantedBy=multi-user.target
Alias=mysql.service
Alias=mysqld.service


[Service]

##############################################################################
## Core requirements
##

Type=notify

# Setting this to true can break replication and the Type=notify settings
# See also bind-address mysqld option.
PrivateNetwork=false

##############################################################################
## Package maintainers
##

User=mysql
Group=mysql

# CAP_IPC_LOCK To allow memlock to be used as non-root user
# CAP_DAC_OVERRIDE To allow auth_pam_tool (which is SUID root) to read /etc/shadow when it's chmod 0
#   does nothing for non-root, not needed if /etc/shadow is u+r
# CAP_AUDIT_WRITE auth_pam_tool needs it on Debian for whatever reason
CapabilityBoundingSet=CAP_IPC_LOCK CAP_DAC_OVERRIDE CAP_AUDIT_WRITE

# PrivateDevices=true implies NoNewPrivileges=true and
# SUID auth_pam_tool suddenly doesn't do setuid anymore
PrivateDevices=false

# Prevent writes to /usr, /boot, and /etc
ProtectSystem=full

# Doesn't yet work properly with SELinux enabled
# NoNewPrivileges=true

# Prevent accessing /home, /root and /run/user
ProtectHome=true

# Execute pre and post scripts as root, otherwise it does it as User=
PermissionsStartOnly=true



# Perform automatic wsrep recovery. When server is started without wsrep,
# galera_recovery simply returns an empty string. In any case, however,
# the script is not expected to return with a non-zero status.
# It is always safe to unset _WSREP_START_POSITION environment variable.
# Do not panic if galera_recovery script is not available. (MDEV-10538)
ExecStartPre=/bin/sh -c "systemctl unset-environment _WSREP_START_POSITION"
ExecStartPre=/bin/sh -c "[ ! -e /usr/bin/galera_recovery ] && VAR= || \
 VAR=`cd /usr/bin/..; /usr/bin/galera_recovery`; [ $? -eq 0 ] \
 && systemctl set-environment _WSREP_START_POSITION=$VAR || exit 1"

# Needed to create system tables etc.
# ExecStartPre=/usr/bin/mysql_install_db -u mysql

# Start main service
# MYSQLD_OPTS here is for users to set in /etc/systemd/system/mariadb.service.d/MY_SPECIAL.conf
# Use the [Service] section and Environment="MYSQLD_OPTS=...".
# This isn't a replacement for my.cnf.
# _WSREP_NEW_CLUSTER is for the exclusive use of the script galera_new_cluster
ExecStart=/usr/sbin/mysqld $MYSQLD_OPTS $_WSREP_NEW_CLUSTER $_WSREP_START_POSITION

# Unset _WSREP_START_POSITION environment variable.
ExecStartPost=/bin/sh -c "systemctl unset-environment _WSREP_START_POSITION"



KillSignal=SIGTERM

# Don't want to see an automated SIGKILL ever
SendSIGKILL=no

# Restart crashed server only, on-failure would also restart, for example, when
# my.cnf contains unknown option
Restart=on-abort
RestartSec=5s

UMask=007

##############################################################################
## USERs can override
##
##
## by creating a file in /etc/systemd/system/mariadb.service.d/MY_SPECIAL.conf
## and adding/setting the following under [Service] will override this file's
## settings.

# Useful options not previously available in [mysqld_safe]

# Kernels like killing mysqld when out of memory because its big.
# Lets temper that preference a little.
# OOMScoreAdjust=-600

# Explicitly start with high IO priority
# BlockIOWeight=1000

# If you don't use the /tmp directory for SELECT ... OUTFILE and
# LOAD DATA INFILE you can enable PrivateTmp=true for a little more security.
PrivateTmp=false

# Set an explicit Start and Stop timeout of 900 seconds (15 minutes!)
# this is the same value as used in SysV init scripts in the past
# Galera might need a longer timeout, check the KB if you want to change this:
# https://mariadb.com/kb/en/library/systemd/#configuring-the-systemd-service-timeout
TimeoutStartSec=900
TimeoutStopSec=900

##
## Options previously available to be set via [mysqld_safe]
## that now needs to be set by systemd config files as mysqld_safe
## isn't executed.
##

# Number of files limit. previously [mysqld_safe] open-files-limit
LimitNOFILE=32768

# Maximium core size. previously [mysqld_safe] core-file-size
# LimitCore=

# Nice priority. previously [mysqld_safe] nice
# Nice=-5

# Timezone. previously [mysqld_safe] timezone
# Environment="TZ=UTC"

# Library substitutions. previously [mysqld_safe] malloc-lib with explicit paths
# (in LD_LIBRARY_PATH) and library name (in LD_PRELOAD).
# Environment="LD_LIBRARY_PATH=/path1 /path2" "LD_PRELOAD=

# Flush caches. previously [mysqld_safe] flush-caches=1
# ExecStartPre=sync
# ExecStartPre=sysctl -q -w vm.drop_caches=3

# numa-interleave=1 equalivant
# Change ExecStart=numactl --interleave=all /usr/sbin/mysqld......

# crash-script equalivent
# FailureAction=

Now, save the file, and start your Cyberpanel installation as usual with the following:

sh <(curl https://cyberpanel.net/install.sh || wget -O - https://cyberpanel.net/install.sh)

Hope it solves the problem. I hope Cyberpanel will soon address this issue the way they solved it for CentOS 8 and Ubuntu.

ERROR: Cannot open TUN/TAP dev /dev/net/tun: No such file or directory (errno=2) – LXC/LXD

If you are seeing the above error in LXC, you need to do two things.

  1. Make sure the LXC container is running in privileged mode.
  2. Run the following commands inside the container to create the tun device node:

# create the parent directory first, in case it does not exist
mkdir -p /dev/net
mknod /dev/net/tun c 10 200

Now, you may run the OpenVPN command to start the VPN client:

# assuming your vpn config file is ovpn.ovpn
openvpn --pull-filter ignore redirect-gateway --config ovpn.ovpn
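
If you are on LXD, an alternative that survives container restarts is to pass the device through from the host (the container name c1 is a placeholder):

lxc config device add c1 tun unix-char path=/dev/net/tun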