Using Hash Structures to Avoid Deadlock
One option to avoid deadlock is to modify the table structure from B-tree to hash. Hash is the structure best suited to avoiding contention for locks on the same pages. If the modify creates enough hash buckets (minpages), each new insert hashes to a different bucket, and users do not require locks on the same pages.
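For example, a replicated table can be remodified to hash with an explicit minpages value; the table name, key column, and value below are illustrative assumptions to be replaced with your own:

    MODIFY emp TO HASH ON emp_no WITH MINPAGES = 16000;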
The actual value of minpages used in the MODIFY statement depends on the number of concurrent users and on the volume and frequency of updates to the replicated tables. The value must be as large as possible, even though more disk space is needed when the table is modified. The table does not grow in size until more records hash to a single page than fit in that page; this becomes less likely as the number of hash buckets increases, which offsets the additional disk space requirement.
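As a rough sizing sketch (all figures are illustrative assumptions, not recommendations): if a table already holds 100,000 rows, is expected to receive about 20,000 replicated inserts per day, and roughly 10 rows fit on a page, then (100,000 + 20,000) / 10 = 12,000 pages are needed to hold the data. A minpages value with headroom above that, such as the 16,000 in the example above, leaves spare buckets and delays the creation of overflow pages.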
The shadow and archive tables present a different situation. Records are continually inserted into these tables for each insert (shadow only), update, and delete (both shadow and archive) on the base replicated table. None of the inserted records are automatically removed, which means that the tables are continually growing in size (until the arcclean utility is used to remove rows that are no longer needed). This causes a problem with hash tables because no matter how many hash buckets are provided, they are eventually filled, and the tables contain unwanted overflow pages. Because each insert must lock the entire overflow chain for the relevant bucket, the longer the chain becomes, the more likely it is that the insert causes lock contention (and therefore possible deadlock).
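One way to see how much overflow has accumulated is to query the standard catalog iitables, which reports page and overflow counts per table (a hedged sketch; the counts are maintained by the DBMS and are approximate between modifies):

    SELECT table_name, number_pages, overflow_pages
    FROM   iitables
    WHERE  overflow_pages > 0
    ORDER  BY overflow_pages DESC;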
The key to the efficient use of hash tables is planning. You need to calculate the likely daily volume of replicated updates for each table and ensure that there are enough spare hash buckets to handle the new additions. The tables must be remodified each night to remove any overflow and to increase the number of hash buckets for the next day’s use. The shadow and archive tables must also be cleaned out periodically, when a logical consistency point (for replicated databases) can be found and while replication is disabled. If this type of maintenance is not possible, investigate the row-locking alternative described in the chapter “Using Advanced Features”.
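A minimal sketch of this nightly maintenance, assuming the same illustrative table as above (the shadow and archive table names and keys are generated by Replicator and vary by installation, so they appear only as comments; arcclean is run according to the Replicator documentation):

    -- 1. Disable replication and establish a logical consistency point.
    -- 2. Run arcclean to remove shadow and archive rows that are no longer needed.
    -- 3. Remodify the base table, and its shadow and archive tables on their own keys,
    --    to remove overflow and add spare buckets for the next day, for example:
    MODIFY emp TO HASH ON emp_no WITH MINPAGES = 16000;
    -- 4. Re-enable replication.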
Keep in mind that a hash table is more efficient for updates than a B-tree table (provided there is no unwanted overflow); for performance reasons, you must try to use hash tables. Remember that records are automatically deleted from the input queue table, so there is no reason to use row locking on this table. The only exception is where the distribution threads are allowed to lag behind the user updates (because there are too few threads or because rep_txq_size is set too low, neither of which is desirable).
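If that exception does apply and the input queue table becomes a point of contention, row-level locking can be requested for a session; this is a hedged sketch only (the table name dd_input_queue is an assumption, and the supported procedure is the one described in “Using Advanced Features”):

    SET LOCKMODE ON dd_input_queue WHERE LEVEL = ROW;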