Data Integrity
The following features support concurrent access while ensuring the integrity of your files in a multi-user environment:
Record Locks
Applications can explicitly lock either one record at a time (single record lock) or multiple records at once (multiple record lock). When an application specifies a record lock, the application can also apply a wait or no-wait condition. When an application requests a no-wait lock on a record that is currently unavailable (either the record is already locked by another application or the whole file is locked by an exclusive transaction), the transactional interface does not grant the lock.
When an application requests a wait lock on a record that is unavailable, the transactional interface checks for a deadlock condition. You can configure the transactional interface to wait before returning a deadlock detection status code. Doing so improves performance in multi-user situations by allowing the transactional interface to wait internally, rather than forcing the application to retry the operation.
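The wait/no-wait and deadlock-detection behavior described above can be sketched as a small lock table. This is an illustrative model only; the class, status strings, and wait-for chain walk are simplifications invented here, not the engine's actual data structures or API.

```python
OK = "ok"
RECORD_IN_USE = "record_in_use"   # no-wait lock denied
DEADLOCK = "deadlock_detected"
WAITING = "waiting"

class RecordLockTable:
    """Toy model of single-record locks with wait/no-wait semantics."""

    def __init__(self):
        self.locks = {}      # record id -> client holding the lock
        self.wait_for = {}   # client -> client it is waiting on

    def request(self, client, record, wait=True):
        holder = self.locks.get(record)
        if holder is None or holder == client:
            self.locks[record] = client
            return OK
        if not wait:
            # No-wait: the lock is simply not granted.
            return RECORD_IN_USE
        # Wait lock: walk the wait-for chain to detect a deadlock
        # cycle before letting the client block.
        seen = {client}
        cur = holder
        while cur is not None:
            if cur in seen:
                return DEADLOCK
            seen.add(cur)
            cur = self.wait_for.get(cur)
        self.wait_for[client] = holder
        return WAITING
```

In this model, a wait request that would close a cycle in the wait-for chain is refused immediately with a deadlock status, which mirrors the check the transactional interface performs before letting a client wait internally.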
Transactions
If you have a number of modifications to make to a file and you must be sure that either all or none of those modifications are made, include the operations for making those modifications in a transaction. By defining explicit transactions, you can force the transactional interface to treat multiple operations as an atomic unit. Other users cannot see the changes made to a file until the transaction ends. The transactional interface supports two types of transactions: exclusive and concurrent.
Exclusive Transactions
In an exclusive transaction, the transactional interface locks the entire data file when you insert, update, or delete a record in that file. Other applications (or other instances of the same application) can open the file and read its records, but they cannot modify the file. The file remains locked until the application ends or aborts the transaction.
Concurrent Transactions
In a concurrent transaction, the transactional interface can lock either records or pages in the file, depending on the operation you are performing. The transactional interface enables multiple applications (or multiple instances of the same application) to perform modifications inside concurrent transactions in different parts of the same file simultaneously, as long as those modifications do not affect other previously locked portions of the file. The record or page remains locked until the application ends or aborts the transaction. Concurrent transactions are available only for 6.0 and later files.
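The difference between the two transaction types can be modeled as what gets locked on the first modification: the whole file for an exclusive transaction, or only the touched record (or page) for a concurrent one. The class and status strings below are invented for illustration; the real engine's locking is considerably more involved.

```python
OK = "ok"
FILE_IN_USE = "file_in_use"
RECORD_IN_USE = "record_in_use"

class TxFile:
    """Toy model of file-level vs. record-level transaction locking."""

    def __init__(self):
        self.file_lock = None    # client holding an exclusive-transaction lock
        self.record_locks = {}   # record id -> client, for concurrent txns

    def modify(self, client, record, mode):
        if self.file_lock not in (None, client):
            return FILE_IN_USE   # whole file locked by an exclusive txn
        if mode == "exclusive":
            # First modification locks the entire file until End/Abort.
            self.file_lock = client
            return OK
        # Concurrent: lock only the touched record (or its page).
        holder = self.record_locks.get(record)
        if holder not in (None, client):
            return RECORD_IN_USE
        self.record_locks[record] = client
        return OK

    def end_transaction(self, client):
        if self.file_lock == client:
            self.file_lock = None
        self.record_locks = {r: c for r, c in self.record_locks.items()
                             if c != client}
```

Two clients in concurrent transactions can modify different records of the same file, but once a client holds the file under an exclusive transaction, every other client's modification is refused until that transaction ends.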
Exclusive vs. Concurrent
Clients can still read records from a file even if a concurrent transaction has locked the requested record. However, these clients cannot be operating from within an exclusive transaction. Also, they cannot apply a lock bias to their read operation if the file containing the requested record is currently locked by an exclusive transaction, or if a concurrent transaction has locked the requested record.
When a client reads a record using an exclusive lock, the transactional interface locks only the individual record; the rest of the page on which the record resides remains unlocked.
*Note: Simply opening a file from within a transaction does not lock any records, pages, or files. In addition, the transactional interface does not lock files that you flag as read-only or files that you open in read-only mode.
When you use exclusive transactions, the transactional database engine causes other clients to implicitly wait on the locked file unless the No Wait bias is added to the Begin Transaction (19 or 1019) operation. The application seems to hang during this implicit wait state. If these exclusive transactions are short-lived, you may not notice the wait time. However, the cumulative effect of many clients in implicit wait states can consume a large amount of CPU time. Additionally, multiple position blocks in the same file share locks.
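One common way to avoid tying clients up in implicit waits is to request the transaction with the No Wait bias and retry with a client-side backoff. The sketch below assumes a No Wait bias value of 200 and a "file in use" status code of 85; verify both against your engine's documentation. `btr_call` is a stand-in for the real API entry point, not an actual function.

```python
import time

B_BEGIN_TRAN = 19          # exclusive transaction (1019 = concurrent)
NO_WAIT_BIAS = 200         # assumed bias value; check your engine's docs
STATUS_FILE_IN_USE = 85    # assumed "file in use" status code

def begin_transaction_with_backoff(btr_call, retries=5, delay=0.05):
    """Begin an exclusive transaction without leaving the engine in an
    implicit wait: request No Wait and back off client-side instead."""
    for attempt in range(retries):
        status = btr_call(B_BEGIN_TRAN + NO_WAIT_BIAS)
        if status != STATUS_FILE_IN_USE:
            return status                    # 0 = success, or another error
        time.sleep(delay * (attempt + 1))    # linear client-side backoff
    return STATUS_FILE_IN_USE
```

This trades the engine's internal one-second polling for a delay the application controls, at the cost of extra application logic.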
Exclusive transactions involved in implicit waits also waste network bandwidth. The transactional database engine waits about one second before returning to the requester. The requester recognizes the wait condition and resubmits the operation to the transactional database engine. Thus, exclusive transactions can also cause extra network traffic.
The extra CPU usage and network traffic increase rapidly with both the number of clients waiting on the locked file and the duration of the exclusive transaction.
Transaction Durability
You can configure the transactional interface to guarantee Transaction Durability (page 4-50) and atomicity by logging all operations to a single transaction log. Transaction durability is the assurance that, when a client issues the End Transaction operation, the transactional database engine finishes writing to the log before it returns a successful status code to the client. Atomicity ensures that if a given statement does not execute to completion, the statement leaves no partial or ambiguous effects in the database, keeping the database in a stable state and thereby preserving its integrity.
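The durability guarantee amounts to "flush the log to stable storage before acknowledging End Transaction." A minimal sketch of that ordering, using an ordinary file as a stand-in for the transaction log:

```python
import os

def durable_end_transaction(log_path, tx_records):
    """Append a transaction's operations plus a COMMIT marker to the
    log and force them to stable storage before reporting success."""
    with open(log_path, "ab") as log:
        for rec in tx_records:
            log.write(rec + b"\n")
        log.write(b"COMMIT\n")
        log.flush()
        os.fsync(log.fileno())   # durability: survives a crash past here
    return 0  # success is reported only after the log is on disk
```

The essential point is the ordering: the `fsync` completes before the success status is returned, so a committed transaction can always be recovered from the log after a failure.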
If you want atomicity without the overhead of Transaction Durability, you can use the Transaction Logging feature in Pervasive PSQL V8 and later releases. See Advanced Operations Guide for more information on Transaction Logging.
By default, the transaction log is in the \MKDE\LOG subdirectory of the default Windows system directory. The log must exist on the same machine as the transactional database engine. You can change the location using the transaction log directory configuration option.
The transactional database engine maintains the transaction log in one or more physical files, called log segments. The transactional interface starts a new log segment when the current log segment reaches a user-defined size limit, no files have pending changes, and the transactional database engine has finished a system transaction.
All transaction log segments have a .LOG extension. The transactional database engine names log segments with consecutive, 8-character hexadecimal names, such as 00000001.LOG, 00000002.LOG, and so on.
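Generating the next name in that sequence is a simple hexadecimal increment. Uppercase hex is assumed here, since the documented examples use only digits:

```python
def next_log_segment(current):
    """Return the next consecutive 8-character hexadecimal segment
    name, e.g. '00000001.LOG' -> '00000002.LOG'."""
    n = int(current.split(".")[0], 16) + 1
    return f"{n:08X}.LOG"
```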
To improve performance on specific files, you can open a file in Accelerated mode. (The version 6.x transactional database engine accepted Accelerated open requests but interpreted them as Normal open requests.) When you open a file in Accelerated mode, the transactional database engine does not perform transaction logging on the file.
*Note: If a system failure occurs, some log segments may remain undeleted. These segments contain changes that were not fully written to the data files. Do not delete these log segments; you cannot tell which files are represented in them. No action is necessary, because those data files are automatically rolled forward the next time they are opened.
System Data
Pervasive PSQL uses a 7.x transaction log file format. In order for the transactional interface to log transactions on a file, the file must contain a log key, which is a unique (non-duplicatable) key that the transactional interface can use to track the record in the log. For files that have at least one unique (non-duplicatable) key, the transactional interface uses one of the unique keys already defined in the file.
For files that do not have any unique keys, the transactional interface can include system data upon file creation. The transactional interface includes system data in a file only if the file uses the 7.x file format or later and if at the time the file is created, the transactional interface is configured to include system data in files upon creation. System data is defined as an 8-byte binary value with the key number 125. Even if a file has a unique, user-defined key, you may want to use system data (also called the system-defined log key), to protect against a user dropping an index.
The database engine turns on data compression for the file if the file uses system data and the record length exceeds the limit shown in Table 12.
The transactional interface adds system data only at file creation. If you have existing files for which you want to add system data, turn on the System Data configuration option, then use the Rebuild utility.
*Note: When the transactional interface adds system data, the resulting records may be too large to fit in the file’s existing page size. In such cases, the transactional interface automatically increases the file’s page size to the next accommodating size.
Shadow Paging
The transactional database engine uses shadow paging to protect 6.0 and later files from corruption in case of a system failure. When a client needs to change a page (either inside or outside a transaction), the transactional database engine selects a free, unused physical location in the data file itself and writes a new page image, called a shadow page, to this new location. During a single transactional interface operation, the transactional database engine might create several shadow pages all the same size as the original logical pages.
When the changes are committed (either when the operation is complete or the transaction ends), the transactional database engine makes the shadow pages current, and the original pages become available for reuse. The transactional database engine stores a map, which is the Page Allocation Table, in the file to keep track of the valid and reusable pages. If a system failure occurs before the changes are committed, the transactional interface does not update the PAT, thereby dropping the shadow pages and reverting to using the current pages, which are still in their original condition.
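The commit step can be illustrated as a single atomic update of the logical-to-physical page map. Everything in this sketch (the class, its structures, the method names) is a simplified model invented for illustration, not the engine's on-disk format:

```python
class ShadowPagedFile:
    """Toy model of shadow paging: changed pages are written to free
    physical locations; commit atomically updates the Page Allocation
    Table (PAT) so the shadows become the current pages."""

    def __init__(self, pages):
        self.physical = list(pages)                   # physical page images
        self.pat = {i: i for i in range(len(pages))}  # logical -> physical
        self.shadow = {}                              # uncommitted mappings

    def write(self, logical, data):
        self.physical.append(data)          # shadow page at a free location
        self.shadow[logical] = len(self.physical) - 1

    def read(self, logical, in_writer_tx=False):
        # Only the writing client sees its own shadow pages.
        if in_writer_tx and logical in self.shadow:
            return self.physical[self.shadow[logical]]
        return self.physical[self.pat[logical]]

    def commit(self):
        self.pat.update(self.shadow)        # the single atomic step
        self.shadow.clear()

    def crash(self):
        self.shadow.clear()                 # PAT never updated: shadows dropped
```

Because a failure before commit simply never updates the PAT, the file reverts to the original pages with no repair step, which is the property that keeps the file internally consistent at all times.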
*Note: This description simplifies the shadow paging process. For improved performance, the transactional database engine does not commit each operation or user transaction individually, but groups them in a bundle called a system transaction.
When a client operates inside a transaction, the shadow pages corresponding to the original logical pages are visible only to that client. If other clients need to access the same logical pages, they see the original (unchanged) pages—that is, they do not see the first client’s uncommitted changes. Shadow paging thus enhances reliability because the original file is always valid and internally consistent.
*Note: Pre-6.0 versions of the transactional interface used pre-imaging to protect files from corruption in case of a system failure. Pervasive.SQL 7.0 also uses pre-imaging to protect pre-6.0 files. Before updating a pre-6.0 file, the transactional interface creates a temporary pre-image file, which contains the pages to be updated from the original file. The transactional interface then performs the update on the original file. If a system failure occurs during the update, the transactional interface can restore the original file using the pre-image file.
Backing Up Your Files
Backing up your files regularly is an important step in protecting your data.
For information on backing up your files, please see the following topic in Advanced Operations Guide: Logging, Backup, and Restore.