cPanel incoming mails are failing, with an error in the exim_mainlog like the following:
failed to open db file /var/spool/exim/db/ratelimit: permission denied
The error appears because of permission issues with the Exim db files, or because the files are corrupted. These files are recreated when Exim restarts, so we can do the following:
# delete the db files
rm -rf /var/spool/exim/db/*
# restart exim
service exim restart
# fix permission of exim spool
chown -Rf mailnull.mail /var/spool/exim
chmod 0750 /var/spool/exim
We have a product_id column that indicates which product we are referring to, a qty_done column that contains the number of products sold, and a sale_id column that refers to the sale order in which the product_id was dispatched. The example table therefore records how many units of each product were dispatched in different sale orders. We can write a query that groups by product_id and returns the total number of products sold using aggregation:
select product_id, sum(qty_done) as total_sold from sale_line group by product_id;
We can see how GROUP BY gives you the benefit of aggregation. But have you noticed that we have a column called 'sale_id' that doesn't fit any aggregation function like 'count', 'sum' or 'avg'? Can we use it for any purpose?
Sometimes you may want to track down the orders responsible for a set of products using GROUP BY, so that each product also carries the list of orders that make up its total sold. Can we accomplish that in one query?
We actually can. Instead of one of the familiar aggregation functions like 'count' or 'sum', we can collect the ids into a resulting array. Postgres provides an array aggregation function called 'array_agg' that can be used to produce such a result. Have a look at the following query:
select product_id, sum(qty_done) as total_sold, array_agg(sale_id) as sale_ids from sale_line group by product_id;
As you can see, we have made Postgres return an array that contains the sale_ids making up total_sold, even though the sale_id column isn't conventionally aggregable, or let's say doesn't follow the usual numeric pattern.
This technique has many use cases. For example, suppose you have separate models/tables for managing sale orders, purchase orders, pickings and invoices, where your sale orders contain the price information but the pickings don't, while the pickings contain the actual number of products dispatched. You may first aggregate the total_sold from the picking table, then produce an average price using the sale_ids you collected from the picking table with GROUP BY and array aggregation, and then merge the two results. This may seem complicated, I understand, but again, it's all about cutting down your reporting time and keeping your application scalable, with a constant-cost reporting algorithm based on a single SQL query, even if you process 10 times more orders in the future.
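As a rough sketch of that idea, using the sale_line table from above plus a hypothetical sale_order_line table with a price_unit column (both the table and the column names are assumptions for illustration, not part of the example schema):
select agg.product_id,
       agg.total_sold,
       avg(sol.price_unit) as avg_price
from (
    select product_id, sum(qty_done) as total_sold, array_agg(sale_id) as sale_ids
    from sale_line
    group by product_id
) agg
join sale_order_line sol
  on sol.order_id = any(agg.sale_ids)
 and sol.product_id = agg.product_id
group by agg.product_id, agg.total_sold;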
stock_picking is one of the core models of Odoo if you use Odoo for 'Inventory'. Now, if you use the 'Sales' module, stock_picking is extended with a new field on the model, and a new column on the database table, named 'sale_id'. This can be used to detect whether or not a picking originated from a sale order. If you install the 'Purchase' module, the stock_picking model is extended with 'purchase_id', analogous to 'sale_id' for purchases, but the database table is not extended with a column the way the 'Sales' module does it.
What does this mean?
This means you may use purchase_id of a stock_picking only through the Odoo ORM. An example could be the following. Let's say we would like to find the pickings that originated from purchase orders, aka GRNs; we could use something like this:
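A minimal ORM sketch of that (assuming the code runs inside a model method where self.env is available, and grn_pickings is just a placeholder name):
# find all pickings that originated from a purchase order (GRNs)
grn_pickings = self.env['stock.picking'].search([('purchase_id', '!=', False)])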
This works fine, unless you are trying to build a report from a huge set of pickings, purchase orders and sale orders, where you want to use SQL statements to produce efficient joins and generate the report quickly.
Let me demonstrate what I mean.
We know stock_picking has a field called sale_id, and it also exists as a database column. Hence, to get all the pickings that belong to a sale order, we may first use the ORM:
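A sketch of the ORM version (again assuming self.env is available; the variable name is just a placeholder):
# ORM version: pickings that are linked to a sale order
sale_pickings = self.env['stock.picking'].search([('sale_id', '!=', False)])
And the same thing written as raw SQL against the database cursor: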
query = """select * from stock_picking where sale_id is not null"""
self.env.cr.execute(query)
result = self.env.cr.fetchall()
Now, the second example is not only faster, but it also allows you to go further and use joins or select specific columns of a table, which with the ORM is only possible using the 'read' method, and even then, domain filtering is not available the way it is in 'search'.
With SQL, we are able to do things like the following:
query = """select sale_order.name, stock_picking.name from stock_picking left join on stock_picking.sale_id = sale_order.id where stock_picking.sale_id is not null"""
self.env.cr.execute(query)
result = self.env.cr.fetchall()
This would give you each sale order name with its picking name. Producing a result like this using the ORM is costly, as it follows an 'N+1' query pattern, and is hence inefficient for building reports or scaling the software.
Now we understand that we are able to use such a field and make reports efficient using SQL, since sale_id is directly available in the database. But what if you want to check how a product has been purchased and then sold? Then we also need the purchase_order model connected to our query above, right? Unfortunately, as the 'Purchase' module doesn't add a purchase_id column, we are unable to use it directly.
So, how can we still use purchase_id in an SQL query to generate a report in Odoo?
First, we need to see how purchase_id is added in Odoo.
purchase_id is added to the stock_picking model in the 'purchase_stock' module. If you open the following file:
purchase_stock/models/stock.py
you may see how purchase_id is defined as a related Many2one field:
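The definition looks roughly like the following (a simplified sketch; the exact attributes such as the string label vary between Odoo versions):
purchase_id = fields.Many2one(
    'purchase.order', related='move_lines.purchase_line_id.order_id',
    string='Purchase Order', readonly=True)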
A related field in Odoo is like a pointer, a syntactic sugar over foreign keys for less-used fields. If the field is heavily used, this might cause performance issues, as Odoo has to do multiple lookups for a related field instead of a single direct lookup. Now, to get to the point: purchase_id is related to 'move_lines.purchase_line_id.order_id'. This is a long relation chain, so let me go through it one by one:
move_lines: stock_picking has a One2many relation with the stock.move model, which gives the moves belonging to the picking.
purchase_line_id: each stock move derived from a purchase order line keeps the ID of that purchase order line in a foreign key on the stock.move model, namely purchase_line_id.
order_id: each purchase_order_line keeps a foreign key to the purchase.order model in its order_id field.
Now we know how purchase_id derives the purchase order id through the following relation chain:
Picking > Moves > Purchase Order Line > Purchase Order
Now we can use the following kind of join to detect the purchase order from a stock picking:
select purchase_order.name, stock_picking.name from stock_picking left join stock_move on stock_move.picking_id = stock_picking.id left join purchase_order_line on purchase_order_line.id = stock_move.purchase_line_id left join purchase_order on purchase_order.id = purchase_order_line.order_id where stock_move.purchase_line_id is not null group by stock_picking.name, purchase_order.name
Here we are able to get the picking and its related purchase order in one query. This concept can be used to derive a lot of data; for example, if you would like to see how many of your products were purchased, then sold and then returned, all of it can be done in a few queries, without hitting the N+1 problem.
MySQL has a pretty cool feature, \G, that displays a query result in vertical mode. If you have a table with a lot of columns, you will have difficulty reading them in horizontal mode, hence you may want to see the result vertically.
For Postgres, this is called 'expanded display'. To toggle the expanded display on or off, use the following command in the psql console:
my_prod=# \x
Expanded display is on.
my_prod=# \x
Expanded display is off.
my_prod=#
If your expanded display is on, then \x will turn it off; otherwise it will turn it on. With expanded display on, each row is shown as vertical key-value pairs of column name and value, which is much easier to read.
When looking at the HTML report in Odoo, local-language fonts look OK, but if you download the Qweb report in PDF format to print, it comes out as gibberish. How do we fix that?
Resolution
Odoo uses a templating engine for reporting called 'Qweb'. Qweb can be used to generate two types of reports: HTML and PDF. Odoo primarily uses the Qweb engine to generate HTML code; after that, it uses a tool called 'wkhtmltopdf' to convert the report to PDF and make it printable. When we look at the HTML version of the report, fonts are rendered by the browser using its Unicode support or the fonts installed on your computer. But when you convert the report to PDF, wkhtmltopdf itself needs access to those fonts to convert the HTML to PDF for you. As the wkhtmltopdf command runs on the server where Odoo is installed, you need to install the font package on that server.
In my case, I needed to install Bengali fonts. For CentOS, they are available in the lohit packages, which contain several Indian fonts, including Bengali. To install the Bengali font package on CentOS 7, use the following command:
yum install lohit-bengali* -y
Once done, wkhtmltopdf should be able to read the Bengali fonts from your HTML/Qweb templates and convert them to PDF for you.
res.users and res.partner are two base models of Odoo. If you would like to inherit and extend them, remember that you can't upgrade such a module from the Odoo user interface. The reason is that an upgrade triggered from the user interface has to run on top of the base module, not on the base module itself. Hence, you will get a 500 / internal server error when trying to upgrade the module from there.
We will make a simple module for res.users that extends the model with a field called 'access_token' for each user and generates a key automatically when a user is added.
I will only post the model file and the view file here. I expect you already know how to write an Odoo module.
This is my res_users.py file
from odoo import api, fields, models
import string, random
class res_users_ex(models.Model):
    _inherit = 'res.users'

    access_token = fields.Text(string='API Access Token', default=False)

    @api.model
    def create(self, vals):
        # generate a random 48-character token for every new user
        key = ''.join(random.choices(string.ascii_lowercase + string.digits, k=48))
        user_id = super(res_users_ex, self).create(vals)
        if key:
            user_id.access_token = key
        return user_id
Now, after you have added this to a module, you cannot simply upgrade it from Apps >> Module >> Upgrade. You need to upgrade the module via the command line, like the following:
First, switch to your odoo user; in my case, it is 'odoo':
su - odoo
Now stop your currently running Odoo with the following:
service odoo14 stop
Once done, you can upgrade the module with the following:
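The exact invocation depends on where Odoo is installed; as an illustrative example (the path, database name and module name below are placeholders):
/path/to/odoo-bin -d your_database_name -u your_module_name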
The first part is the Odoo binary with its location, which should be odoo-bin. With the parameter -d, you give your database name, and with the parameter -u, you give your module name. After you run the command, you should see no 'Error' or red-marked line in your console; if there is none, the module has been upgraded. Now press Ctrl+C, and start your Odoo again to see the new field visible in your Users tab.
In one line, context managers are an efficient way of handling resources in Python. So, what kind of resources are we talking about? It could be any logical resource that your software uses; common ones are database connections, files, or in some cases, locks for concurrency control.
How exactly is a Context Manager efficient?
If we talk about efficient programs, there could be several meanings of that in computer science. In our case, we mean efficient as in writing less code, or more specifically, not writing repetitive code for managing resources. There is a concrete benefit to less repetitive code beyond simply typing less, and it is purely technical: you do not forget to perform an essential step, because it is done from one place. Let's take an example. You have code that connects to a remote FTP server, uploads some content and, once done, closes the connection. If you repeat this process in several places, you might miss closing the FTP connection in one spot that is accessed frequently by users. If so, you might exhaust the FTP connection pool on a random day. It is essential to close the connection after you are done with it, to free the resources. Context managers help you write code that does that job for you, without needing to remember to close the connection each time.
The ‘with’ statement in Python
Before we can go deeper into context managers, we need to learn something about the 'with' statement in Python. 'with' is a special statement in Python that does some jobs automatically for you. One, it calls a method of the underlying class called '__enter__' when the block starts, and two, it automatically calls the '__exit__' method when the block finishes running its code. Let's do some coding now:
with open('test.txt', 'w') as f:
    f.write("testing")
In the above code, we are opening a file using the 'with' statement of Python and then writing some text into it. But wait, we haven't closed the file, did you notice? Isn't it necessary to close a file after opening it, to free up the system's file descriptors? Absolutely, but the 'with' statement does it for you even if you don't do it in your code. In Python, the open() function used for opening a file can act as a context manager. For a context manager, two methods are essential: one for when you set things up, which is '__enter__', and one for when the code ends, which is '__exit__'. The 'with' statement in Python is designed to be used with context managers. As I said, open() can be used as a context manager, which means the file object it returns has both of these methods defined by default, so it can be used with the 'with' statement.
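To see what the 'with' statement is saving us from, the two lines above behave roughly like this hand-written version:
# manual equivalent of the 'with' block above
f = open('test.txt', 'w')
try:
    f.write("testing")
finally:
    f.close()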
Why and when do we need Context Managers?
Before jumping into how we can write context managers, let's make sure we properly understand the need for them with an example! The primary purpose of a context manager is to write cleaner, non-repetitive code. Do we need this often? Yes, we do. A common example is setting up your database connections: if you set up a database connection and clean things up once done, you may create a context manager to do exactly that. There are more complex database use cases for context managers as well; let's focus on one of them. Say you would like to use the database 'SAVEPOINT' feature in a cleaner way, managing the release and rollback for concurrent transactions based on a savepoint you create dynamically. A clean technique would be to use a context manager. Pseudocode for this kind of context manager could look like the following:
FUNCTION SAVEPOINT()
    NAME = UUID()
    EXECUTE('SAVEPOINT ' + NAME)
    TRY
        YIELD
    EXCEPTION
        EXECUTE('ROLLBACK TO SAVEPOINT ' + NAME)
        RAISE
    FINALLY
        EXECUTE('RELEASE SAVEPOINT ' + NAME)
What this context manager does is generate a savepoint with a name for you. Once done, it yields to the code you instruct it to run under the 'with' statement. If that code raises an exception, the context manager rolls back to the savepoint and re-raises the exception to the main program; otherwise it releases the savepoint and hands control back to the code after the 'with' block. This is technically a very clean way of using savepoints in your SQL code. Similarly, we can acquire and release locks for concurrency control using context managers, or wrap an API that has a setup call and an end/cleanup call.
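Here is a minimal Python rendering of that pseudocode, assuming a psycopg2-style DB-API cursor named cr inside an open transaction (the cursor, the uuid-based naming and the sample usage are illustrative assumptions); it uses the contextmanager decorator, which is explained in the next section:
import uuid
from contextlib import contextmanager

@contextmanager
def savepoint(cr):
    # create a uniquely named savepoint inside the current transaction
    name = uuid.uuid4().hex
    cr.execute('SAVEPOINT "%s"' % name)
    try:
        yield name
    except Exception:
        # undo only the work done inside the 'with' block, then re-raise
        cr.execute('ROLLBACK TO SAVEPOINT "%s"' % name)
        raise
    finally:
        # the savepoint still exists after ROLLBACK TO, so releasing it is safe
        cr.execute('RELEASE SAVEPOINT "%s"' % name)

# usage sketch: an error inside the block only rolls back to the savepoint
with savepoint(cr):
    cr.execute("UPDATE sale_line SET qty_done = qty_done + 1")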
How can we write Context Manager in Python?
There are two ways you can do it. One is using a Python class, and the other is using contextlib and its contextmanager decorator. Let's first check out how to do it using a Python class, to understand the concept better.
First, we want to emulate the way Python uses open() as a context manager, using our own context manager class. A context manager class that can be used with the 'with' statement could look like the following:
class Open_A_File():
    def __init__(self, name):
        # store the filename; the file itself is opened in __enter__
        self.name = name

    def __enter__(self):
        # open for writing so the f.write() below works
        self.f = open(self.name, 'w')
        return self.f

    def __exit__(self, exc_type, exc_val, traceback):
        # always close the file, even if the 'with' block raised an exception
        self.f.close()

with Open_A_File('test.txt') as f:
    f.write('Class Test')
In our class 'Open_A_File()', we have three methods: our constructor __init__, and the other two are __enter__ and __exit__. When we use Open_A_File() in a 'with' statement with a parameter, it sets up the filename attribute of the class via the constructor and then calls the __enter__ method, which opens the file and returns the file object. When the file object is returned, we catch it as 'f' to use in our code under the 'with' statement. Once the code inside the 'with' block ends, __exit__ is automatically called, which closes the file object by calling close(). We can technically convert any class into a context manager and use it with the 'with' statement, if we define what to do while entering and exiting the class when it is used this way.
Other than a class, we can write context managers using a function, through the contextlib library. This is the most common way of writing context managers; we used the idea of it earlier in our pseudocode. Let's rewrite the above code using contextlib below:
from contextlib import contextmanager

@contextmanager
def open_a_file(name):
    # code before the yield runs as the __enter__ step
    f = open(name, 'w')
    try:
        yield f
    finally:
        # code after the yield runs as the __exit__ step
        f.close()

with open_a_file('test.txt') as f:
    f.write('Contextlib Test')
We first import the contextmanager decorator from contextlib and then define a normal function, except that it contains a 'yield' statement. For context managers, the statements before the yield execute as the __enter__ step, and the statements after the yield execute as the __exit__ step. If you want to return anything to the 'with' call, you need to yield it, as we did with 'yield f', which hands the file object to the 'with' statement. The yield point is where the code under the 'with' statement runs, like the f.write() in our case.
Hope this makes sense. If anything is confusing, or if you would like to add something, do comment below. Thanks for reading.
You may get the password using the 'netsh' Windows command. First, go to your Windows 10 search box and type 'cmd'.
Now, from the results, there should be an option called 'Command Prompt'; right-click on it and choose 'Run as administrator'. In the black command prompt window, type the following:
netsh wlan show profile "your wifi name" key=clear
Replace the part "your wifi name" with your own. For example, if you connect to a wifi network named 'Mellowhost_Portable', the command would be the following:
netsh wlan show profile "Mellowhost_Portable" key=clear
This will show you the full profile of your wifi connection, including the password. The password is available under 'Security settings', in the 'Key Content' field.