Preamble
Although using non-core Postgres extensions or forks is generally discouraged, there are rare cases where doing so is justified. And since the BDR project is here to stay (as far as I can tell, its main functionality will be gradually integrated into Postgres core), the following information may well remain useful in the future. When we received a client request to support them with custom replication conflict handlers, we were surprised to find that the official documentation (as well as the wider Internet) is sadly rather sparse in that area, with no working code examples. Perhaps even on purpose, as a warning that you are walking on thin ice and should exercise caution. But in short, after doing some research I became familiar with the subject, so let me give you a brief overview of what's involved.
About BDR in two sentences
First off, for those unfamiliar with the relatively new BDR (Bi-Directional Replication) project: it enables asynchronously replicated multi-master Postgres clusters of up to 48 nodes. The nodes operate largely independently and simply try to communicate with one another to fetch and apply (synchronize) each other's data changes on a "best effort" basis. The biggest restriction may be that all of your tables must have a PRIMARY KEY or UNIQUE constraint, and that exclusion constraints as well as DDL operations causing full table rewrites are disabled; otherwise there would be no realistic chance of keeping the data synchronized. In practice this shouldn't be a big issue if you adhere to good database design practices. There are, however, some additional corner-case issues (see the project's GitHub issues), so I advise extensive testing for your specific use case before going "all in" on BDR. In general, though, it is still a very potent tool and unquestionably the future.
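To make the PRIMARY KEY requirement concrete, here is a minimal sketch of a table that BDR can replicate without trouble. The table name and columns are my own invention for this walkthrough (a simple website hit counter, keyed by date):

```sql
-- A hypothetical table meeting BDR's requirements: it has a PRIMARY KEY
-- and uses no exclusion constraints.
CREATE TABLE public.hitcount (
    id      integer PRIMARY KEY,   -- e.g. yesterday's date encoded as YYYYMMDD
    counter integer NOT NULL       -- number of hits recorded for that day
);
```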
BDR conflict handling basics
But, back to conflicts. When you model your data in accordance with the BDR guidelines, the framework will typically handle conflict situations, i.e. cases where two nodes changed or inserted the same data row at the same time, on its own. This works without any global locking: all nodes simply try to sync with one another automatically, working with local node data and applying a "last timestamp wins" policy. That should cover the vast majority of use cases. Just keep in mind that some data inserts or updates will simply be discarded as a result, so one should design the applications accordingly.
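As an illustration of the default "last timestamp wins" behavior, consider two nodes inserting the same primary key at nearly the same moment (the table is the hypothetical hit-counter table from above; this is a sketch of the scenario, not runnable against a single server):

```sql
-- On node A:
INSERT INTO public.hitcount VALUES (20170317, 100);

-- Concurrently on node B:
INSERT INTO public.hitcount VALUES (20170317, 50);

-- With the default resolution, the row whose commit timestamp is later
-- wins on all nodes; the other insert is silently discarded.
-- Neither application receives an error, so the losing 50 (or 100) hits
-- are simply gone unless a custom handler merges the rows.
```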
But what should we do if we still want to tailor how conflicts are handled? The framework provides a way to do that: you can declare ordinary PL/pgSQL stored procedures with a specific signature that receive the conflicting rows as input and let you choose whether to merge column values or discard one of the rows. We'll build a sample around the merging use case, since that is likely the most popular reason for writing custom handlers. Also keep in mind that custom handling is only possible when no more than one UNIQUE constraint is violated by the conflicting rows, so try not to have too many!
Setting up a sample handler function
Because user-written conflict handler stored procedures take table-type (row) inputs, each handled table requires at least one separate function. However, since the handler function also receives the conflict type (INSERT vs INSERT, INSERT vs UPDATE, etc.) as input, you can use IF statements to cover all the conflict types required for one table in a single function.
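Such a dispatch on the conflict type could look roughly like the following function body fragment. This is only a sketch, reusing the parameter names from the full example further below; check the BDR documentation for the exact semantics of the 'IGNORE', 'ROW', and 'SKIP' handler actions in your version:

```sql
-- Sketch: one handler body covering several conflict types for one table.
IF conflict_type = 'insert_insert' THEN
    -- two nodes inserted the same key: merge the counters
    row1.counter := row1.counter + row2.counter;
    row_out := row1;
    handler_action := 'ROW';    -- use the merged row we built
ELSIF conflict_type = 'insert_update' THEN
    handler_action := 'SKIP';   -- discard the incoming remote change
ELSE
    handler_action := 'IGNORE'; -- decline; fall back to default resolution
END IF;
```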
A simple INSERT vs. INSERT conflict handler that merges, say, our website hit counters (imagine two data centers, each with a webserver and one BDR database node, plus some inflexible log monitoring tool that can only do an INSERT at midnight with a fixed primary key ID translating to yesterday's date) will look like this:
CREATE OR REPLACE FUNCTION public.hitcount_conflict_handler_ins_ins(
    row1 public.hitcount,
    row2 public.hitcount,
    table_name text,
    table_regclass regclass,
    conflict_type bdr.bdr_conflict_type, /* [insert_insert | insert_update | update_update | update_delete | delete_delete | unhandled_tx_abort] */
    OUT row_out public.hitcount,
    OUT handler_action bdr.bdr_conflict_handler_action) /* [IGNORE | ROW | SKIP] */
RETURNS record AS
$BODY$
BEGIN
    RAISE WARNING 'conflict detected for public.hitcount, old_row: %, incoming_row: %', row1, row2;
    -- code to choose the output row or to merge values
    row1.counter := row1.counter + row2.counter;
    row_out := row1;
    handler_action := 'ROW';
END;
$BODY$
LANGUAGE plpgsql;

-- after writing the handler procedure we also need to register it as a handler
SELECT * FROM bdr.bdr_create_conflict_handler(
    ch_rel := 'hitcount',
    ch_name := 'hitcount_ins_ins_handler',
    ch_proc := 'public.hitcount_conflict_handler_ins_ins(public.hitcount, public.hitcount, text, regclass, bdr.bdr_conflict_type)',
    ch_type := 'insert_insert');
After successful definition, our conflict resolution handler kicks into action, and the server log shows the following messages:
2017-03-17 09:56:20.430 UTC,,,23833,,58cba8e3.5d19,10,,2017-03-17 09:14:11 UTC,5/32,719,WARNING,01000,"conflict detected for public.hitcount, old_row: (20170317,100), incoming_row: (20170317,50)",,,,,"apply INSERT from remote relation public.t1 in commit 0/18DC780, xid 739 commited at 2017-03-17 09:56:20.425756+00 (action #2) from node (6398391759225146474,1,16385)",,,,"bdr (6398391788660796561,1,16385,): apply"

2017-03-17 09:56:20.430 UTC,,,23833,,58cba8e3.5d19,12,,2017-03-17 09:14:11 UTC,5/32,719,LOG,23000,"CONFLICT: remote INSERT: row was previously INSERTed at node 6398391788660796561:1. Resolution: conflict_trigger_returned_tuple; PKEY: id:20170317 counter[int4]:100",,,,,"apply INSERT from remote relation public.hitcount in commit 0/18DC780, xid 739 commited at 2017-03-17 09:56:20.425756+00 (action #2) from node (6398391759225146474,1,16385)",,,,"bdr (6398391788660796561,1,16385,): apply"
By setting bdr.log_conflicts_to_table=on in the server configuration, conflict "audit logging" into the dedicated bdr.bdr_conflict_history table can be enabled in addition to the server log output. There, the same information is simply better structured, but you may also need some kind of cron job to periodically clean up the table.
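Inspecting and pruning that history table could look like the sketch below. The column names follow the BDR 1.0 documentation, and the 30-day retention window is just an example; verify both against your own installation:

```sql
-- Look at the most recent recorded conflicts:
SELECT conflict_type, conflict_resolution, local_tuple, remote_tuple
FROM bdr.bdr_conflict_history
ORDER BY local_conflict_time DESC
LIMIT 10;

-- Periodic cleanup, e.g. run from a nightly cron job:
DELETE FROM bdr.bdr_conflict_history
WHERE local_conflict_time < now() - interval '30 days';
```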
Avoid conflicts if possible
Although we've seen that handling conflicts is quite manageable, the best conflict handling strategy is still to avoid conflicts altogether. So before launching your application, I'd advise trying to design a data model where all applications use UUIDs, or where each node only writes to its own partition of the data. An excerpt from the BDR documentation:
The simplest way to prevent conflicts is to only ever write to one node, or to only ever write to independent subsets of the database on each node, since conflicts can only occur when things are happening at the same time on multiple nodes.
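Both approaches are easy to express in the schema. A minimal sketch, assuming the uuid-ossp extension is available and using a made-up measurements table (a node-local key prefix would work just as well):

```sql
-- Option 1: UUID primary keys make concurrent inserts on different
-- nodes collision-free, so INSERT vs INSERT conflicts cannot occur.
CREATE EXTENSION IF NOT EXISTS "uuid-ossp";

CREATE TABLE public.measurements (
    id      uuid PRIMARY KEY DEFAULT uuid_generate_v4(),
    payload jsonb
);

-- Option 2: partition the key space so each node writes a disjoint subset,
-- e.g. node A only inserts ids of the form 'a-...', node B only 'b-...'.
```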
NB! For production use, run the most recent BDR version (1.0.2 at the time of writing); older versions had a bug that prevented the remotely changed tuple from being populated correctly for insert/insert handlers.