Preamble
Have you ever wondered why VACUUM does not shrink your PostgreSQL tables? Have you ever asked yourself why it does not compact the data files? Then this may be the article you have been looking for. The key takeaway is understanding why space is usually not returned to the operating system after a cleanup: people frequently make wrong assumptions about how VACUUM works internally, so it is worth digging in and finding out what really happens.
The post below walks through the most useful facts about VACUUM.
Understanding tuple visibility
To understand VACUUM in PostgreSQL, it is crucial to understand how the database handles visibility. The whole concept is based on a set of hidden columns that are part of every row. Here is how it works:
test=# CREATE TABLE t_test (id int);
CREATE TABLE
test=# INSERT INTO t_test VALUES (5), (6), (7);
INSERT 0 3
test=# INSERT INTO t_test VALUES (8), (9), (10);
INSERT 0 3
To keep things simple, the table we just created has only one column. Note that the data was loaded in two separate transactions, each inserting three rows, and the hidden columns make this quite clear:
test=# SELECT ctid, xmin, xmax, cmin, cmax, * FROM t_test;
 ctid  | xmin | xmax | cmin | cmax | id
-------+------+------+------+------+----
 (0,1) |  764 |    0 |    0 |    0 |  5
 (0,2) |  764 |    0 |    0 |    0 |  6
 (0,3) |  764 |    0 |    0 |    0 |  7
 (0,4) |  765 |    0 |    0 |    0 |  8
 (0,5) |  765 |    0 |    0 |    0 |  9
 (0,6) |  765 |    0 |    0 |    0 | 10
(6 rows)
xmin, xmax, cmin and cmax are hidden system columns: xmin and xmax contain transaction IDs, while cmin and cmax contain command IDs within a transaction. As you can see, the first three rows were written by transaction number 764, while the rest of the data was created by transaction number 765.
These hidden columns manage visibility: based on them, PostgreSQL decides whether a given transaction is allowed to see a row.
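If you want to relate the xmin values above to your own session, you can ask PostgreSQL for the transaction ID it is currently using; a small sketch, assuming PostgreSQL 13 or later (older releases provide txid_current() instead):

-- returns the ID that will show up as xmin on any row this transaction writes;
-- note that calling it forces a transaction ID to be assigned
SELECT pg_current_xact_id();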
Running a couple of UPDATE statements shows what is really going on:
test=# BEGIN;
BEGIN
test=*# UPDATE t_test SET id = id * 10 WHERE id > 9 RETURNING *;
 id
-----
 100
(1 row)

UPDATE 1
test=*# SELECT ctid, xmin, xmax, cmin, cmax, * FROM t_test;
 ctid  | xmin | xmax | cmin | cmax | id
-------+------+------+------+------+-----
 (0,1) |  764 |    0 |    0 |    0 |   5
 (0,2) |  764 |    0 |    0 |    0 |   6
 (0,3) |  764 |    0 |    0 |    0 |   7
 (0,4) |  765 |    0 |    0 |    0 |   8
 (0,5) |  765 |    0 |    0 |    0 |   9
 (0,7) |  766 |    0 |    0 |    0 | 100
(6 rows)
One row has changed. However, let’s focus on the ctid, which identifies the physical position of a row on disk. Notice that (0,6) is gone: because PostgreSQL had to copy the row, the new version now lives at (0,7). If we run a second UPDATE, the row is copied once again:
test=*# UPDATE t_test SET id = id * 10 WHERE id > 9 RETURNING *;
  id
------
 1000
(1 row)

UPDATE 1
test=*# UPDATE t_test SET id = id * 10 WHERE id > 9 RETURNING *;
  id
-------
 10000
(1 row)

UPDATE 1
Copying the row is essential: the previous version has to be kept, because otherwise ROLLBACK could not work.
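To see the same mechanism from the other side, you could roll a change back and watch the old version remain the live one; a minimal sketch (best run after this walkthrough, since the aborted UPDATE leaves behind an extra dead row version of its own):

BEGIN;
UPDATE t_test SET id = 0 WHERE id = 5;   -- writes a new row version with a new xmin
ROLLBACK;                                -- the new version never becomes visible
SELECT id FROM t_test WHERE id = 5;      -- still returns 5: the old version stays the live one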
Let’s take another look at the table:
test=*# SELECT ctid, xmin, xmax, cmin, cmax, * FROM t_test;
 ctid  | xmin | xmax | cmin | cmax |  id
-------+------+------+------+------+-------
 (0,1) |  764 |    0 |    0 |    0 |     5
 (0,2) |  764 |    0 |    0 |    0 |     6
 (0,3) |  764 |    0 |    0 |    0 |     7
 (0,4) |  765 |    0 |    0 |    0 |     8
 (0,5) |  765 |    0 |    0 |    0 |     9
 (0,9) |  766 |    0 |    2 |    2 | 10000
(6 rows)

test=*# COMMIT;
COMMIT
The dead row versions are the ones between ctid (0,5) and (0,9), namely (0,6), (0,7) and (0,8), and they have to be cleaned out.
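Because the ctid shown above is a physical address (block number plus item number), you can even fetch a specific row version by it; a small sketch, keeping in mind that the exact ctid values depend on your own run:

-- fetch the row version stored at block 0, item 9 (adjust to the ctids you actually see)
SELECT ctid, xmin, xmax, id FROM t_test WHERE ctid = '(0,9)';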
VACUUM: Cleaning out rows
Keep in mind that COMMIT does not remove dead rows either: other transactions might still need the old versions. The cleanup therefore has to happen asynchronously, and that is exactly the job of VACUUM. Let’s run it and see what happens:
test=# VACUUM VERBOSE t_test;
INFO:  vacuuming "test.public.t_test"
INFO:  finished vacuuming "test.public.t_test": index scans: 0
pages: 0 removed, 1 remain, 1 scanned (100.00% of total)
tuples: 3 removed, 6 remain, 0 are dead but not yet removable
removable cutoff: 767, which was 0 XIDs old when operation ended
new relfrozenxid: 764, which is 1 XIDs ahead of previous value
index scan not needed: 0 pages from table (0.00% of total) had 0 dead item identifiers removed
avg read rate: 22.790 MB/s, avg write rate: 27.348 MB/s
buffer usage: 6 hits, 5 misses, 6 dirtied
WAL usage: 3 records, 3 full page images, 14224 bytes
system usage: CPU: user: 0.00 s, system: 0.00 s, elapsed: 0.00 s
VACUUM
VACUUM looks for rows that are no longer visible to any transaction. Those rows can sit anywhere in the data file, including the middle. VACUUM allows PostgreSQL to reuse that space; it does not, however, return it to the operating system. That is simply impossible: if a 1 GB data file has empty space in its middle, there is no filesystem operation that can hand “the middle of a file” back to the OS. Instead, PostgreSQL keeps track of this free space and reuses it later.
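If you want to measure how much dead and free space is sitting inside a table, the pgstattuple contrib extension can report it, assuming the extension is available on your installation:

-- pgstattuple scans the table and reports live, dead and free space in bytes
CREATE EXTENSION IF NOT EXISTS pgstattuple;
SELECT table_len, dead_tuple_count, dead_tuple_len, free_space
FROM pgstattuple('t_test');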
Exception to the VACUUM rule
There is, however, an exception to every rule. Take a look at the following code example:
test=# SELECT pg_relation_size('t_test');
 pg_relation_size
------------------
             8192
(1 row)

test=# DELETE FROM t_test;
DELETE 6
test=# SELECT pg_relation_size('t_test');
 pg_relation_size
------------------
             8192
(1 row)
Even after the DELETE statement, the table will keep its size. Recall that cleanup is carried out asynchronously. Consequently, VACUUM can be used to remove those rows:
test=# VACUUM t_test;
VACUUM
Here is the exception: VACUUM can truncate a table if ALL rows are dead from a certain position up to the end of the file. That is exactly what happened in this case:
test=# SELECT pg_relation_size('t_test');
 pg_relation_size
------------------
                0
(1 row)
Under normal circumstances, however, a large table will almost always have a few live rows near the end of its data file, so this truncation rarely kicks in. For that reason, don’t rely on VACUUM to shrink your tables.
VACUUM FULL vs. pg_squeeze
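The only built-in way to actually hand that space back to the operating system is to rewrite the table. VACUUM FULL does exactly that, but it holds an ACCESS EXCLUSIVE lock on the table while it runs, blocking all concurrent reads and writes; the pg_squeeze extension performs a comparable rewrite with much lighter locking, which makes it more practical on busy systems. A minimal sketch of the built-in approach:

-- rewrites the entire table into a new data file and returns the freed space to the OS;
-- the table is locked exclusively for the duration of the rewrite
VACUUM FULL t_test;
SELECT pg_relation_size('t_test');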
About Enteros
Enteros offers a patented database performance management SaaS platform. It proactively identifies root causes of complex business-impacting database scalability and performance issues across a growing number of clouds, RDBMS, NoSQL, and machine learning database platforms.
The views expressed on this blog are those of the author and do not necessarily reflect the opinions of Enteros Inc. This blog may contain links to the content of third-party sites. By providing such links, Enteros Inc. does not adopt, guarantee, approve, or endorse the information, views, or products available on such sites.