Wednesday, 9 November 2016

HPE DP: Solving PostgreSQL transaction failures with "out of shared memory"

This is caused by the default setting of the PostgreSQL parameter "max_locks_per_transaction", which is 64. Increase the value to 1024 in postgresql.conf (this takes up about 30 MB of shared memory) and restart the Data Protector services. This recommendation applies to Data Protector version 9.07.
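A minimal sketch of the change, assuming a Linux Cell Manager where the Data Protector internal database's postgresql.conf lives under /var/opt/omni/server/db80/pg (the path is an assumption and varies by platform and version):

```shell
# Sketch only: verify the config path on your installation before running.
PGCONF=/var/opt/omni/server/db80/pg/postgresql.conf   # assumed IDB config location

# Raise max_locks_per_transaction from the default 64 to 1024,
# uncommenting the line if it is still commented out.
sed -i 's/^#\?max_locks_per_transaction.*/max_locks_per_transaction = 1024/' "$PGCONF"

# Restart the Data Protector services so the new value takes effect
# (the parameter can only be set at server start).
omnisv -stop
omnisv -start
```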

From the PostgreSQL 9.1.24 documentation on max_locks_per_transaction (integer):

 The shared lock table tracks locks on max_locks_per_transaction * (max_connections + max_prepared_transactions) objects (e.g., tables); hence, no more than this many distinct objects can be locked at any one time. This parameter controls the average number of object locks allocated for each transaction; individual transactions can lock more objects as long as the locks of all transactions fit in the lock table. This is not the number of rows that can be locked; that value is unlimited. The default, 64, has historically proven sufficient, but you might need to raise this value if you have clients that touch many different tables in a single transaction. This parameter can only be set at server start.
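The sizing formula above is easy to work through with concrete numbers. A small illustrative sketch (not from the original post; the parameter values are PostgreSQL defaults plus the 1024 recommended here):

```python
def lock_table_slots(max_locks_per_transaction: int,
                     max_connections: int,
                     max_prepared_transactions: int) -> int:
    """Number of distinct objects that can be locked at one time,
    per the formula in the PostgreSQL documentation."""
    return max_locks_per_transaction * (max_connections + max_prepared_transactions)

# With the defaults (64 locks/transaction, 100 connections, 0 prepared
# transactions) the shared lock table holds 64 * (100 + 0) = 6400 slots.
default_slots = lock_table_slots(64, 100, 0)

# Raising max_locks_per_transaction to 1024 grows it to 1024 * 100 = 102400
# slots, which is why the table also consumes more shared memory.
raised_slots = lock_table_slots(1024, 100, 0)

print(default_slots, raised_slots)
```

Note the number is an average budget across transactions: a single transaction may lock more than max_locks_per_transaction objects as long as the table as a whole does not overflow.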
