Working with Records
This chapter discusses the following topics:
Sequence of Operations
Accessing Records
Inserting and Updating Records
Multi-Record Operations
Adding and Dropping Keys
Sequence of Operations
Some Btrieve operations can be issued at any time, such as Create (14), Reset (28), and Version (26). However, most Btrieve operations require that you open a file using Open (0). Then, you must establish a position, or currency, in the file before you can operate on any records.
You can establish logical currency based on a key or physical currency by using Step operations.
Use one of the following operations to establish physical currency:
Step First (33)
Step Last (34)
Use one of the following operations to establish logical currency:
Get By Percentage (44)
Get Direct/Record (23)
Get Equal (5)
Get First (12)
Get Greater Than (8)
Get Greater Than or Equal (9)
Get Last (13)
Get Less Than (10)
Get Less Than or Equal (11)
After you establish currency, you can issue appropriate file I/O operations, such as Insert (2), Update (3), and Delete (4).
Note Always use Btrieve operations to perform I/O on Btrieve files; never perform standard I/O on a Btrieve file.
Based on your currency, you can move through the file as follows:
If you have established position based on physical currency, use the following operations: Step Next (24), Step Next Extended (38), Step Previous (35), or Step Previous Extended (39). The Step operations are useful for traversing a file quickly if your application does not need to retrieve the records in a specific order. The Extended operations are useful for working on a number of records at one time.
If you have established position based on logical currency, use the following operations: Get Next (6), Get Next Extended (36), Get Previous (7), or Get Previous Extended (37). The Get operations are useful for traversing a file in a specific order. The Extended operations are useful for working on a number of records at one time.
You cannot establish physical currency with one operation and then follow with an operation that requires logical currency. For example, you cannot issue a Step First operation and then a Get Next operation.
In data-only files, the MicroKernel does not maintain or create any index pages. You can access the records using only the Step operations and Get Direct/Record (23), all of which use the physical location to find records.
In key-only files, the MicroKernel does not maintain or create any data pages. You can access records using only the Get operations, which use logical currency to find records.
When you have finished working with a file, use Close (1) to close it. When your application is ready to terminate, issue a Stop (25).
Note Failure to perform a Stop operation prevents the MicroKernel from returning its resources to the operating system. This failure eventually results in unpredictable system behavior, including the possibility of crashing the computer on which the application is running.
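To make the sequence concrete, the following minimal C sketch strings these operations together in the order described above. It is patterned after the examples later in this chapter; the file name demo.btr and the buffer sizes are illustrative assumptions, and the operation constants (B_OPEN, B_GET_FIRST, and so on) are taken from the standard Btrieve C headers.

BTI_CHAR  posBlock[128];     /* position block for this file           */
BTI_CHAR  dataBuf[1046];     /* large enough to hold a whole record    */
BTI_ULONG dataBufLen;
BTI_CHAR  keyBuf[255];
BTI_SINT  status;

/* open the file; the Key Buffer carries the file name */
strcpy((char *)keyBuf, "demo.btr");
dataBufLen = 0;
status = BTRV(B_OPEN, posBlock, dataBuf, &dataBufLen, keyBuf, 0);

/* establish logical currency on key 0 */
dataBufLen = sizeof(dataBuf);
status = BTRV(B_GET_FIRST, posBlock, dataBuf, &dataBufLen, keyBuf, 0);

/* traverse the file in key order until status 9 (end of file) */
while (status == 0)
{
    dataBufLen = sizeof(dataBuf);
    status = BTRV(B_GET_NEXT, posBlock, dataBuf, &dataBufLen, keyBuf, 0);
}

/* close the file, then release MicroKernel resources */
status = BTRV(B_CLOSE, posBlock, dataBuf, &dataBufLen, keyBuf, 0);
status = BTRV(B_STOP, posBlock, dataBuf, &dataBufLen, keyBuf, 0);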
Accessing Records
Btrieve provides both physical and logical access to your data. With physical access, Btrieve retrieves records based on the physical record address within the file. With logical access, Btrieve retrieves records based on a key value contained in the record. In addition, Btrieve allows you to access “chunks” of data within a record.
Accessing Records by Physical Location
Accessing records by physical location is faster than accessing them by key value for the following reasons:
The MicroKernel does not have to use index pages.
The next or previous physical record is usually already in the MicroKernel’s memory cache because the page on which it resides is probably in cache.
Physical Currency
Physical currency is the effect on positioning when accessing records by physical location. When you insert a record, the MicroKernel writes that record into the first free space available in the file, regardless of any key values contained in the record. This location is referred to as the physical location, or address, of the record. The record remains in this location until you delete it from the file. The Btrieve Step operations use the physical location to access records.
The record accessed last is the current physical record. The next physical record is the record with the immediately higher address relative to the current physical record. The previous physical record is the record with the immediately lower address. There is no physical record previous to the first physical record; likewise, there is no physical record next to the last physical record.
Together, the current, next, and previous physical locations form the physical currency within a file.
Step Operations
Your application can use the Step operations to access records based on their physical location within a file. For example, the Step First operation (33) retrieves the record that is stored in the first, or lowest, physical location in the file.
Note You cannot perform Step operations on key-only files.
The Step Next (24) operation retrieves the record stored in the next higher physical location. Step Previous (35) retrieves the record stored in the next lower physical location in the file. Step Last (34) retrieves the record that is stored in the last, or highest, physical location in the file.
Step Next Extended (38) and Step Previous Extended (39) retrieve one or more records from the physical location following or preceding the current record.
Note Each of the Step operations reestablishes the physical currency but destroys the logical currency, even if one existed before.
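For example, the following sketch (reusing the declarations from the sequence example earlier in this chapter) scans an entire file in physical-record order. The Key Number parameter is not used by the Step operations, so 0 is passed as a placeholder.

/* establish physical currency at the first physical record */
dataBufLen = sizeof(dataBuf);
status = BTRV(B_STEP_FIRST, posBlock, dataBuf, &dataBufLen, keyBuf, 0);

/* step through every record until status 9 (end of file) */
while (status == 0)
{
    /* ...process the record in dataBuf here... */
    dataBufLen = sizeof(dataBuf);
    status = BTRV(B_STEP_NEXT, posBlock, dataBuf, &dataBufLen, keyBuf, 0);
}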
Accessing Records by Key Value
Accessing records by key value allows you to retrieve records based on their values for a specified key.
Logical Currency
Logical currency is the effect on positioning when accessing records by key value. When you insert a record into a file, the MicroKernel updates each B-tree index for which the appropriate key in the record has a non-null value. Each key of a file determines a logical ordering of the records. The ordering is determined by the key’s defined sort order or ACS.
The record accessed last is the current logical record. (This record is not necessarily the last record retrieved. The last record could have been retrieved by Get Direct/Chunk (23), which does not change the logical currency.) The next logical record is the record that is immediately next in the defined logical sequence. The previous logical record is the record that is immediately previous in the defined logical sequence. There is no logical record previous to the first logical record; likewise, there is no logical record next to the last logical record.
Together, the current, next, and previous logical records form the logical currency within a file.
The current logical record is also the current physical record, except when you perform an operation that uses the no-currency-change (NCC) option or when you operate on a record with a null key value. For example, you can perform an NCC Insert (2) operation and have the same logical position in the file as you had prior to the insert. The physical currency is updated.
NCC operations are useful when you must preserve your logical currency in order to perform another operation. For example, you may want to insert or update a record and then use a Get Next (6) operation based on your original logical currency.
NCC Insert Operation
status = BTRV(B_GET_FIRST, posBlock, dataBuf, &dataLen, keyBuf, keyNum);
/* gets first record in key path */

for (i = 0; i < numRecords; i++)
{
    status = BTRV(B_INSERT, posBlock, dataBuf, &dataLen, keyBuf, -1);
    /* -1 for key number indicates no currency change */
}   /* inserts several records */

status = BTRV(B_GET_NEXT, posBlock, dataBuf, &dataLen, keyBuf, keyNum);
/* gets next record after first record in key path */
 
Note When you use an NCC operation, the MicroKernel does not return any information in the Key Buffer parameter. If you want to maintain logical currency, you must not change the value in the Key Buffer following an NCC operation. Otherwise, your next Get operation can have unpredictable results.
Get Operations
Your application can use the Get operations to retrieve records based on their values for a specified key. The appropriate Get operation can retrieve a specific record from a file or retrieve records in a certain order.
For example, the Get First (12) operation retrieves the first record by the key specified in the Key Number parameter. Likewise, Get Last (13) retrieves the last record according to the logical order based on the specified key. Some Get operations, such as Get Equal (5) or Get Less Than (10), return a record based on a key value your application specifies in the Key Buffer parameter.
Get operations establish logical currency. Your application can change from one key to another by performing the following procedure:
1 Retrieve a record by issuing one of the Get operations.
2 Issue Get Position (22) to retrieve the 4-byte physical address of the record.
3 Issue Get Direct/Record (23), passing to the MicroKernel the 4-byte physical address and the Key Number of the key on which to establish the new logical currency.
In addition to establishing logical currency, all Get operations except Get Position (22) establish the physical currency. As a result, you can continue with Step Next (24) or Step Previous (35). However, using the Step operations destroys the logical currency.
To reestablish logical currency after using a Step operation, perform the following procedure:
1 Immediately after using a Step operation, issue Get Position (22) to retrieve the 4-byte physical address of the retrieved record.
2 Issue Get Direct/Record (23), passing to the MicroKernel the 4-byte position and the Key Number on which to establish logical currency.
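Both procedures reduce to the same two-call pattern, sketched below with the declarations from the earlier examples. The B_GET_POSITION and B_GET_DIRECT constant names are assumed from the standard Btrieve headers; Get Direct/Record expects the 4-byte address in the Data Buffer.

BTI_ULONG position;

/* capture the 4-byte physical address of the current record */
dataBufLen = sizeof(position);
status = BTRV(B_GET_POSITION, posBlock, &position, &dataBufLen, keyBuf, 0);

/* reestablish currency on the same record, this time along key 1 */
memcpy(dataBuf, &position, sizeof(position));
dataBufLen = sizeof(dataBuf);
status = BTRV(B_GET_DIRECT, posBlock, dataBuf, &dataBufLen, keyBuf, 1);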
Reading Variable-Length Records
Reading a variable-length record is the same as reading a fixed-length record in that you use the Data Buffer Length parameter to tell the MicroKernel how much room you have for the record to be returned. Set this parameter to the size of your entire Data Buffer, which can accommodate the maximum amount of data.
Note Do not set the Data Buffer Length to a value larger than the number of bytes allocated to your Data Buffer; doing so could lead to a memory overwrite when running your application.
After a successful read operation, the Data Buffer Length parameter is changed to reflect the size of the returned record, which is the size of the fixed-length portion plus the amount of actual data in the variable portion (not the maximum size of a record). Your application should use this value to determine how much data is in the Data Buffer.
For example, suppose you have the following records in a data file:
Key 0: Owner        Key 1: Account      Balance (Not a Key)   Comments (Not a Key)
(30-byte ZSTRING)   (8-byte INTEGER)    (8 bytes)             (1000 bytes)

John Q. Smith       263512477           1024.38               Comments
Matthew Wilson      815728990           644.29                Comments
Eleanor Public      234817031           3259.78               Comments
Following are examples of Get Equal operations.
Note While developing and debugging your application, it helps to display the Data Buffer Length just before and after the read operation to verify that it is set correctly at each point.
Get Equal Operation in C
/* get the record with key 1 = 263512477 using B_GET_EQUAL */
memset(&dataBuf, 0, sizeof(dataBuf));
dataBufLen = sizeof(dataBuf); /* this should be 1047 */
account = 263512477;
*(BTI_LONG BTI_FAR *)&keyBuf[0] = account;
status = BTRV( B_GET_EQUAL, posBlock, &dataBuf, &dataBufLen, keyBuf, 1);
/* the dataBufLen should now be 56 */
Get Equal Operation in Visual BASIC
dataBufLen = Len(dataBuf) ' this should be 1047
account& = 263512477
status = BTRV(B_GETEQUAL, PosBlock$, dataBuf, dataBufLen, account&, 1)
' the dataBufLen should now be 56
If the returned record is longer than the value specified by the Data Buffer Length, the MicroKernel returns as much data as it can (according to the size the Data Buffer Length was set to) and status code 22.
Accessing Records by Chunks
The Btrieve Data Buffer Length parameter, because it is a 16-bit unsigned integer, limits the record length to 65535. Chunk operations expand the record length well beyond this limit by allowing you to read or write portions of a record. A chunk is defined as an offset and length; the offset can be as large as 64 GB, but the length is limited to 65535 bytes. The limits that your operating system and the Data Buffer Length parameter impose also apply to the chunk operations; however, because the chunk operations can access any portion of a record, the limits have no effect on record length, only on the maximum size of a chunk accessible in a single operation.
For example, using the chunk operations, an application can read a 150,000 byte record by making three chunk retrieval calls. Each chunk in this example is 50,000 bytes long. The first chunk starts at offset zero, the next at offset 50,000, and the final at offset 100,000.
A chunk’s offset and length do not have to correspond to any of the internal structures of a record that are known to the MicroKernel, such as key segments, the fixed-length portion of a record, or a variable tail. Also, a chunk does not have to correspond to any portion of the record that your application defines (for example, a field), although you may find it useful to update such defined portions as chunks.
Note Chunks are defined only for the duration of the operation that defines them.
In some cases, the use of chunk operations in client/server environments allows the client Requester to use a smaller Data Buffer Length setting, which lowers the Requester memory requirement. For example, if your application used whole record operations and accessed records up to 50 KB long, your Requester would have to set its Data Buffer Length to at least 50 KB, thereby using 50 KB of RAM. But if your application used chunk operations and limited the size of each chunk to 10 KB, for example, then the Requester could set its Data Buffer Length to 10 KB, thereby saving 40 KB of RAM.
Intrarecord Currency
Intrarecord currency is relevant in chunk operations, because it tracks an offset within the current record. The current position is the offset that is one byte beyond the last byte of the chunk that was read or written. This is true even if in your last operation you attempted to read an entire record and the MicroKernel could return only part of that record, which can happen when the Data Buffer Length is inadequate.
The exception is for an Update Chunk (53) operation that uses the Truncate subfunction. In this case, the MicroKernel defines the current position in the truncated record as the offset that is one byte beyond the end of the record itself.
By tracking intrarecord currency, the MicroKernel can do the following:
Provide next-in-record subfunction biases for the Chunk operations.
You specify an original offset, length, and number of chunks, and the MicroKernel calculates the subsequent offsets.
Improve performance in accessing chunks.
Intrarecord currency can speed up any chunk operation, as long as it operates on the same record that was last accessed by the Position Block, and as long as the next chunk offset is greater than the current position in the record. That is, you do not have to access the next immediate byte in the record to benefit from intrarecord currency.
Note The MicroKernel maintains intrarecord currency only for the current record. When you change physical or logical currency, the MicroKernel resets the intrarecord currency, as well.
Chunk Operations
You access chunks using the Get Direct/Chunk (23) operation and the Update Chunk (53) operation. To use these operations, you must define a chunk descriptor structure that defines the offset and length of the chunk. For the Get Direct/Chunk operation, the chunk descriptor structure must also specify an address to which the MicroKernel returns the chunk.
Before you can use Get Direct/Chunk (23), you must retrieve the physical address of the current record by issuing Get Position (22). You can use a single Chunk operation to retrieve or update multiple chunks in a record by using a next-in-record subfunction bias.
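As an illustration only, here is a rough sketch of a single-chunk read. The descriptor layout, the CHUNK_RANDOM value, and the use of -2 as the Key Number to select the chunk form of operation 23 are assumptions about the API's general shape; verify every detail against the Btrieve API Guide before relying on it.

#define CHUNK_RANDOM 0x80000000UL  /* hypothetical subfunction value; check the
                                      Btrieve API Guide for the defined constants */

typedef struct                     /* assumed 32-bit chunk descriptor layout */
{
    BTI_ULONG    recordAddress;    /* 4-byte address returned by Get Position */
    BTI_ULONG    descriptor;       /* subfunction bias plus the chunk count   */
    BTI_ULONG    chunkOffset;      /* byte offset of the chunk in the record  */
    BTI_ULONG    chunkLength;      /* byte length of the chunk (up to 65535)  */
    BTI_CHAR_PTR chunkAddress;     /* where the MicroKernel returns the data  */
} ChunkBuffer;

ChunkBuffer chunkBuf;
BTI_CHAR    piece[50000];
BTI_ULONG   position;
BTI_ULONG   bufLen;

/* Get Direct/Chunk needs the record's physical address first */
bufLen = sizeof(position);
status = BTRV(B_GET_POSITION, posBlock, &position, &bufLen, keyBuf, 0);

/* read 50,000 bytes starting at offset 100,000 of the current record */
chunkBuf.recordAddress = position;
chunkBuf.descriptor    = CHUNK_RANDOM | 1;  /* one chunk definition follows */
chunkBuf.chunkOffset   = 100000;
chunkBuf.chunkLength   = sizeof(piece);
chunkBuf.chunkAddress  = piece;
bufLen = sizeof(chunkBuf);
status = BTRV(B_GET_DIRECT, posBlock, &chunkBuf, &bufLen, keyBuf, -2);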
Inserting and Updating Records
Most of the time, inserting and updating records is a simple process: You use Insert (2) or Update (3) and pass in the record using the Data Buffer. This topic discusses some special cases involving inserts and updates.
Ensuring Reliability in Mission-Critical Inserts and Updates
While the MicroKernel is an extremely reliable data management engine, it cannot prevent system failures. System failures are more common in client-server applications, because network failures can occur. You can ensure reliability by taking advantage of these MicroKernel features:
Transaction Durability. Transaction durability ensures that before the application receives a successful status code from an End Transaction operation, the changes are already committed to disk. Transactions are normally used to group multiple change operations that need to succeed or fail as a group. However, transaction durability can be useful even for single operations because the application has control over when the change is committed to disk.
Consider “wrapping” individual mission-critical insert and update operations inside Begin Transaction and End Transaction operations and using the MicroKernel’s Transaction Durability configuration option, as sketched after the note below. For more information about the Begin Transaction and End Transaction operations, see Btrieve API Guide. For more information about transaction durability, see Transaction Durability.
Note When you open a file in Accelerated mode, the MicroKernel does not perform transaction logging on the file. That is, operations performed on a file opened in Accelerated mode are not transaction durable.
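The sketch below brackets a single mission-critical insert with transaction operations; the B_BEGIN_TRAN, B_END_TRAN, and B_ABORT_TRAN constant names (operations 19, 20, and 21) are assumed from the standard Btrieve headers, and the record in dataBuf is whatever your application has prepared.

status = BTRV(B_BEGIN_TRAN, posBlock, dataBuf, &dataBufLen, keyBuf, 0);

/* the change is not committed until End Transaction succeeds */
status = BTRV(B_INSERT, posBlock, dataBuf, &dataBufLen, keyBuf, 0);

if (status == 0)
    /* with Transaction Durability on, a successful status here
       means the insert has already been committed to disk */
    status = BTRV(B_END_TRAN, posBlock, dataBuf, &dataBufLen, keyBuf, 0);
else
    status = BTRV(B_ABORT_TRAN, posBlock, dataBuf, &dataBufLen, keyBuf, 0);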
System Transaction Frequency. An alternative to using Transaction Durability is to use the MicroKernel settings Operation Bundle Limit and Initiation Time Limit to control the frequency of system transactions. For each open file, the MicroKernel bundles a set of operations into a single system transaction. If a system failure occurs, all changes made before the current system transaction completes are lost; however, the file is restored to a consistent state, enabling the operations to be attempted again after resolving the cause of the system failure.
If you set both the Operation Bundle Limit and Initiation Time Limit to 1, the MicroKernel commits each operation as a separate system transaction. Doing so decreases performance, so this method is useful only for applications that can accept a performance decrease. One way to determine this is to measure the CPU utilization while your application runs. Applications that utilize 50 to 100 percent of the CPU are not good candidates for this approach.
Inserting Non-Duplicatable Keys
If you are inserting a record with a key value that may already exist and for which you do not allow duplicates, you can proceed in one of two ways:
Perform Insert (2). If the MicroKernel returns status code 5, then the key value does exist, and you cannot perform the insert.
Perform a Get Equal operation with a Get Key bias (55). If the MicroKernel returns status code 4, then the key value does not already exist, and you can perform the insert.
If your Insert operation stands alone and does not depend on logical currency in the file, executing a Get Equal prior to each Insert adds overhead. However, for a group of inserts, the Get Equal operation speeds up the subsequent Insert operations by fetching into memory the index pages that point to the key location.
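A sketch of the second approach follows, reusing the declarations from the surrounding examples; adding the Get Key bias (50) to Get Equal (5) yields operation 55, which probes the index without retrieving the record. The recordLen variable is a hypothetical placeholder for the length of the record your application has prepared in dataBuf.

BTI_LONG account = 263512477;   /* key value to test against key 1 */

*(BTI_LONG BTI_FAR *)&keyBuf[0] = account;
status = BTRV(B_GET_EQUAL + 50, posBlock, dataBuf, &dataBufLen, keyBuf, 1);

if (status == 4)
{
    /* status 4 (key value not found): safe to insert */
    dataBufLen = recordLen;
    status = BTRV(B_INSERT, posBlock, dataBuf, &dataBufLen, keyBuf, 1);
}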
Inserting and Updating Variable-Length Records
When designing a variable-length data file, you must decide the maximum size of the variable-length portion of the record your application will support. You should set up a record structure that accommodates the fixed-length portion plus the maximum size of the variable portion. Use this structure as the Data Buffer when reading, inserting, and updating your variable-length records.
When inserting or updating a variable-length record, you use the Data Buffer Length parameter to tell the MicroKernel how much data to write. Set this parameter to the size of the fixed-length portion plus the amount of real data in the variable portion. Do not set the Data Buffer Length to the fixed length plus the maximum size your application allows for the variable field; if you do, the MicroKernel will always write the maximum size.
For example, suppose you want to insert the following record.
Key 0: Owner        Key 1: Account      Balance (Not a Key)   Comments (Not a Key)
(30-byte ZSTRING)   (8-byte INTEGER)    (8 bytes)             (1000 bytes)

John Q. Smith       263512477           1024.38               Comments
Following are examples of Insert operations. Note that the Data Buffer Length is computed as the fixed-length portion plus the actual data in the Comments field (the 8-byte string “Comments” plus its null terminator), not the maximum size of the Comments field (1000 bytes).
Insert Operation
#define MAX_COMMENT 1000   /* largest variable comment size */

typedef struct
{
    char owner[30];
    int  number;
    int  balance;
} FixedData;

typedef struct
{
    FixedData fix;
    char variable[MAX_COMMENT];
} DataBuffer;

DataBuffer account;
BTI_ULONG  dataBufLen;
BTI_SINT   status;

strcpy(account.fix.owner, "John Q. Smith");
account.fix.number  = 263512477;
account.fix.balance = 102438;
strcpy(account.variable, "Comments");

dataBufLen = sizeof(FixedData) + strlen(account.variable) + 1;
/* the +1 accommodates the null character after the data */

status = BTRV(B_INSERT, posBlock, &account, &dataBufLen, keyBuffer, 0);
 
Reading and Updating Fixed-length Portions
It is possible to read only the fixed-length portion of a record by setting the Data Buffer Length to that fixed length. The MicroKernel returns only the fixed-length portion and status code 22. However, if you then use Update (3) and pass in only the fixed-length portion, the variable-length portion is lost. Instead, use Update Chunk (53), which updates any part of a record, based on a byte offset and length. Set the byte offset to 0 and the length to the length of the fixed-length portion. This operation updates the fixed-length portion and retains the variable-length portion.
Updating Non-Modifiable Keys
If you attempt to update a key value that is defined as not modifiable, the MicroKernel returns status code 10. If you want to update the key value anyway, you must first Delete (4) the record and then Insert (2) it.
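In outline, again with the declarations from the earlier examples, and assuming the current record has just been read into dataBuf (recordLen is a hypothetical placeholder for its length):

/* remove the record, change the key field, and reinsert it */
status = BTRV(B_DELETE, posBlock, dataBuf, &dataBufLen, keyBuf, 0);

/* ...modify the key field inside the record image in dataBuf... */

dataBufLen = recordLen;
status = BTRV(B_INSERT, posBlock, dataBuf, &dataBufLen, keyBuf, 0);

Note that between the Delete and the Insert the record does not exist in the file; wrapping the pair in a transaction, as described earlier in this chapter, protects against losing it.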
No-Currency-Change (NCC) Operations
You can perform a variation on the standard insert or update, called a no-currency-change (NCC) operation, by passing in a -1 (0xFF) for the Key Number parameter. NCC operations are useful when an application must save its original logical position in a file in order to perform another operation, such as Get Next (6).
To achieve the same effect without an NCC insert operation, you would have to execute these steps:
1 Get Position (22) – Obtain the 4-byte physical address for the logical current record. You would save this value for use in Step 3.
2 Insert (2) – Insert the new record. This operation establishes new logical and physical currencies.
3 Get Direct/Record (23) – Reestablish logical and physical currencies as they were in Step 1.
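In C, the three steps reduce to something like the following sketch; newRecord and recordLen are hypothetical placeholders for the record your application is inserting, and the other declarations come from the earlier examples.

BTI_ULONG savedPos;

/* 1. save the 4-byte physical address of the current record */
dataBufLen = sizeof(savedPos);
status = BTRV(B_GET_POSITION, posBlock, &savedPos, &dataBufLen, keyBuf, keyNum);

/* 2. insert the new record; this moves both currencies */
dataBufLen = recordLen;
status = BTRV(B_INSERT, posBlock, newRecord, &dataBufLen, keyBuf, keyNum);

/* 3. restore the currencies saved in step 1 */
memcpy(dataBuf, &savedPos, sizeof(savedPos));
dataBufLen = sizeof(dataBuf);
status = BTRV(B_GET_DIRECT, posBlock, dataBuf, &dataBufLen, keyBuf, keyNum);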
The NCC Insert operation has the same effect as a standard Insert in terms of logical currency, but can have a different effect in terms of physical currency. For example, executing a Get Next (6) after either procedure produces the same result, but executing a Step Next (24) might return different records.
To maintain original positioning without an NCC update operation, you would have to execute these steps:
1 Get Next (6) – Establish the next logical record.
2 Get Position (22) – Obtain the 4-byte physical address for the next logical record. You would save this value for use in Step 8.
3 Get Previous (7) – Reestablish the current logical record.
4 Get Previous (7) – Establish the previous logical record.
5 Get Position (22) – Obtain the 4-byte physical address for the previous logical record. You would save this value for use in Step 8.
6 Get Next (6) – Reestablish the current logical record.
7 Update (3) – Update the affected record. If this standard update operation changes the specified key’s value, it also establishes new logical currency.
8 Get Direct/Record (23) – Establish the currencies to the record that preceded or followed the record updated in Step 7. If your application is to continue searching forward, you would pass to the Get Direct/Record operation the address saved in Step 2. If the application is to continue searching backward, you would pass the address saved in Step 5.
Multi-Record Operations
PSQL provides a high-performance mechanism for filtering and returning multiple records or portions of multiple records. The mechanism is called “extended operations” and is supported by four specific operation codes:
Get Next Extended (36)
Get Prev Extended (37)
Step Next Extended (38)
Step Prev Extended (39)
For detailed information on how to code these operations, see Btrieve API Guide. This section explains how to optimize your use of these operations for best performance.
Terminology
The following words can have other meanings in other contexts. For the purposes of this section, definitions are provided below.
Descriptor
Also called the Extended Expression. The whole contents of the Data Buffer, which describes how the Btrieve extended operation should be accomplished.
Filter
A portion of the Extended Expression that describes the selection criteria to apply to the records being examined.
Condition
A portion of a filter that uses a single logical operator.
Connector
A Boolean operator that connects a condition to what follows it: either AND, OR, or NONE.
Extractor
A portion of the Extended Expression which defines what data to return.
Key
A whole index definition, which may have multiple segments. Get operations require the MicroKernel to move through the data file along a single key path.
Key Segment
Compound indices, also called multi-segmented keys, can have multiple segment definitions. Each segment defines an offset, length, data type, and so on.
Background
The filter evaluation mechanism for extended operations is designed to be very fast. It evaluates expressions in a straightforward manner without doing any extra processing. Because of this approach, you should be aware of some of the idiosyncrasies of the interface.
Extended operations do not establish initial positioning. They only move forward or in reverse from the current position. So in order to find all records where (lastname = 'Jones' AND firstname = 'Tom' AND city = 'LasVegas'), your application must perform a Get Equal operation before performing a Get Next Extended operation.
Filter evaluation is strictly left to right. For example, your application cannot perform a single extended operation that will get all records where (Age = 32 AND Gender = "M") OR (Age = 30 AND Gender = "F").
This kind of search must jump around the file somewhat in order to be most efficient, but extended operations do not jump around. They move along a logical key or record path one record at a time. A compound logical expression such as the one above would require an optimizer like the one in the Relational Engine that is part of PSQL. Instead, the MicroKernel evaluates expressions left to right to make the filter evaluation process as fast as possible.
To do the search above, the caller must make two Get Extended calls, each after first performing a Get Equal operation to position the cursor on the first record. If you use the four conditions from the example above in a single extended operation, they would be evaluated as (Age = 32 AND (Gender = "M" OR (Age = 30 AND Gender = "F"))). In other words, for each record, the MicroKernel evaluates the first condition, then looks at the Boolean operator to determine whether it must evaluate the next condition. If the first condition is false, the AND operator means the whole expression is false.
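The resulting call pattern looks like the sketch below. The buildDescriptor() helper is hypothetical and stands in for filling the Data Buffer with the extended expression (filter, reject count, and extractor; see Btrieve API Guide for the layout), and the B_GET_NEXT_EXTENDED constant name is assumed from the standard headers.

/* position to the first candidate record on the chosen key */
*(BTI_LONG BTI_FAR *)&keyBuf[0] = 32;   /* e.g., Age = 32 */
status = BTRV(B_GET_EQUAL, posBlock, dataBuf, &dataBufLen, keyBuf, 0);

while (status == 0)
{
    /* rebuild the descriptor before each call; the operation
       overwrites the Data Buffer with the returned records */
    dataBufLen = buildDescriptor(dataBuf, sizeof(dataBuf));
    status = BTRV(B_GET_NEXT_EXTENDED, posBlock, dataBuf, &dataBufLen, keyBuf, 0);

    /* ...unpack the returned records from dataBuf here... */
}
/* status 64 (optimization limit) or 9 (end of file) ends the scan */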
Validation
While an extended operation can return a status 62 (Invalid Descriptor) in many ways, the following list includes some of the most common:
The descriptor must be long enough. The required length depends on the number of filter conditions, the length of each condition value, whether a collating sequence is used, and the number of fields to extract.
The Data Buffer Length must be at least long enough to contain the full descriptor.
Each condition must have a valid data type; it must be one of the valid Btrieve key types.
Flags used on the comparison code must be valid (FILTER_NON_CASE_SENSITIVE, FILTER_BY_ACS, FILTER_BY_NAMED_ACS) and can be used only with string type fields (STRING_TYPE, LSTRING_TYPE, ZSTRING_TYPE, WSTRING_TYPE, WZSTRING_TYPE).
The comparison code must be valid (1-6).
The connector must be valid (0-2).
Any referenced ACS or ISR must be predefined.
The last filter condition must have a terminator (connector must be 0).
All other filter conditions cannot have a terminator (connector must be 1 or 2).
The extractor record count may not be zero.
Optimization
Optimizing an extended operation means that the MicroKernel can stop looking at records along the current key path because there is no possibility that records remaining on the key path can satisfy the filter. Starting with PSQL 2000i SP3, the MicroKernel can optimize on multiple conditions as long as they occur sequentially and match the segments of the current key.
As it evaluates each condition, the MicroKernel determines if the given segment can be optimized. In order to do that, all the following must be true:
The operation must be a Get Extended operation (36 or 37), not a Step Extended operation (38 or 39).
Optimization must not have been turned off for remaining segments because of a matching record found when evaluating a previous segment.
An OR connector makes the current condition and any following condition not optimizable, since the expression is evaluated left to right.
The condition must refer to the same offset in the record as the current key segment.
The condition must do one of the following:
Refer to the same field length in the record as the current key segment, or
Be a substring of the key if the data type is one of the string types.
The condition must refer to the same field type as the key.
The condition must not be comparing a field with another field in the record (FILTER_BY_FIELD).
The condition must have the same case sensitivity as the key.
The condition must have the same ACS or ISR specification as the key if the data type is one of the string types.
The logical operator must be EQ (=) unless the condition is being optimized to the first key segment.
For the first key segment, the logical operator can also be LT or LE (< or <=) if the direction is forward, and GT or GE (> or >=) if the direction is reverse. For these logical operators, only one filter condition will be optimized.
The actual direction depends not only on the direction indicated by the operation, but also on whether the current key is ascending or descending.
Table 29 Actual Direction of Extended Operation

                 Ascending Key Segment    Descending Key Segment
Get/Step Next    Ascending/Forward        Descending/Reverse
Get/Step Prev    Descending/Reverse       Ascending/Forward
Any one of the following can cause optimization to be turned off for all remaining segments:
If the key has no more segments to optimize to.
If the current condition has an OR connector.
If the current condition IS optimized, but is a substring of the associated key segment.
If the current condition IS optimized, but the logical operator is not EQ (=).
If the current condition failed to optimize and the previous condition was optimized. This means that if a bunch of conditions are ANDed together, the first optimizing condition, the one that matched the first key segment, does NOT have to be the first condition in the filter. But once a condition is found that CAN be optimized to the first key segment, the other optimizing conditions must occur sequentially and immediately following that first optimizing condition.
Note There is a defect in PSQL 2000i SP3 that results in a requirement for the optimizable conditions to occur first in the filter. Thus, the ability to put non-optimizable conditions before the optimizable conditions is available only after SP3. A limited form of this ability was available before SP3, but only one condition could be optimized.
Examples
The examples below refer to the following sample data:
Table 30 Sample Data for Multi-record Operations

Record   Field 1   Field 2   Field 3   Field 4
1        AAA       AAA       AAA       XXX
2        AAA       BBB       BBB       OOO
3        AAA       CCC       CCC       XXX
4        BBB       AAA       AAA       OOO
5        BBB       AAA       BBB       XXX
6        BBB       AAA       CCC       OOO
7        BBB       BBB       AAA       XXX
8        BBB       BBB       BBB       OOO
9        BBB       BBB       CCC       XXX
10       BBB       CCC       AAA       OOO
11       BBB       CCC       BBB       XXX
12       BBB       CCC       CCC       OOO
13       CCC       AAA       CCC       XXX
14       CCC       CCC       AAA       OOO
Consider the table above, which has a compound key on fields 1, 2, and 3, in that order. Assume that the application performs a Get First (12) operation on this file using this three-segment key, followed by Get Next Extended (36). Notice that these examples show the parentheses where they are implicitly assumed; this left-nested grouping is the only way parentheses can occur when the filter is evaluated from left to right.
Remember that in order to optimize against a key segment, the offset, length, type, case, and ACS identified in the filter condition all must be the same as the key definition.
(Field1 = AAA AND (Field2 = AAA AND (Field3 = AAA)))
The MicroKernel Engine retrieves record 1 and stops searching with status 64 (Optimization limit exceeded). The last record examined that satisfies the optimization criteria is record 1 which becomes the current record. Engines before Pervasive.SQL 2000 SP3 can optimize only one condition and thus leave the current record at record 3.
(Field1 = AAA OR (Field2 = AAA OR (Field3 = AAA)))
The MicroKernel Engine retrieves records 1, 2, 3, 4, 5, 6, 7, 10, 13, and 14 and returns with status 9 (End of file reached). No condition can be optimized since the first condition contains an OR connector. The current record becomes record 14.
(Field1 = BBB AND (Field2 = BBB OR (Field3 = BBB)))
The MicroKernel Engine retrieves records 5, 7, 8, 9, and 11 and returns with status 64. The first condition was optimized, but the second condition was not, since it contained an OR connector. The last record examined that satisfies the optimization criteria is record 12, which becomes the current record.
(Field4 = OOO AND (Field2 = BBB AND (Field3 = BBB)))
The MicroKernel Engine retrieves records 2 and 8 and returns with status 9. No condition can be optimized to the first key segment, so the following segments cannot be optimized. The current record becomes record 14.
(Field1 = BBB AND (first byte of Field2 = B AND (Field3 = BBB)))
The MicroKernel Engine retrieves record 8 and returns with status 64. The first two conditions can be optimized, but since the second condition is a substring, the third condition cannot be optimized. The last record examined that satisfies the optimization criteria is record 9. Engines before Pervasive.SQL 2000 SP3 can optimize only one condition and thus return with record 12 as the current record.
(Field1 = BBB AND (Field2 = Field3))
This is done by using the +64 bias on the comparison code, which indicates that the second operand is another field of the record, rather than a constant. The MicroKernel Engine retrieves records 4, 8, and 12 and returns with status 64. The first condition can be optimized, but since the second condition does not compare to a constant, it cannot be optimized. The last record examined that satisfies the optimization criteria is record 12.
(Field1 <= BBB AND (Field2 <= BBB AND (Field3 <= BBB)))
The MicroKernel Engine retrieves records 1, 2, 4, 5, 7, and 8 and returns with status 64. The first condition can be optimized, but since it does not have a logical operator of EQ, the following conditions cannot be optimized. The last record examined that satisfies the optimization criteria is record 12.
(Field1 = BBB AND (Field2 < BBB AND (Field3 < BBB)))
The MicroKernel Engine retrieves record 4 and returns with status 64. The first condition can be optimized, but since the second condition does not have a logical operator of EQ, it cannot be optimized. The last record examined that satisfies the optimization criteria is record 12.
(Field1 = BBB AND (Field2 = BBB AND (Field3 < BBB)))
The MicroKernel Engine retrieves record 7 and returns with status 64. The first two conditions can be optimized because they use EQ, but the third condition cannot. The last record examined that satisfies the optimization criteria is record 9. Engines prior to Pervasive.SQL 2000 SP3 can optimize on only one condition and so the current record would be 12.
(Field2 >= AAA AND (Field2 <= BBB AND (Field1 >= AAA AND (Field1 <= BBB))))
The MicroKernel Engine retrieves records 1, 2, 4, 5, 6, 7, 8, and 9 and returns with status 64. The first three conditions cannot be optimized to the first key segment, but since they are all ANDed together, the fourth condition can be used to optimize the search. The second condition would be optimizable if it occurred immediately after the fourth condition, but since it is out of position relative to the key segments, it cannot be optimized. Since only one key segment is optimized, the last record examined that satisfies the optimization criteria is record 12. Note that there is a defect in Pervasive.SQL 2000 SP3 that prevents optimization unless the optimizable condition occurs first, so the SP3 engine would retrieve the same records but would return status 9.
Performance Tips
This section provides some information on how to speed up your operations.
Connectors
Since extended operations evaluate logical expressions left to right, this feature is by no means a complete expression evaluator that can be used to extract whatever data you want in the most efficient manner. Extended operations are designed to be used in conjunction with a Get or Step operation to initially set the cursor in the file to the correct position. So the first suggestion is:
Do not try to mix AND and OR connectors in the same filter. If you do, put the AND conditions up front to match the segments of the key, so that at least the engine can optimize the search to a shorter portion of the key path.
That said, it may be appropriate to add some ORed conditions for fields not in the index after an optimizable condition for an indexed field. For example, say you are searching a nationwide phone book for everyone in Texas with the first name "William", "Bill", "Billy", or "Billybob". Using a key on the State field, you would use Get Equal to set the current record to the first person in Texas. Then call Get Next Extended with a filter like (State = "Texas" AND (FirstName = "William" OR (FirstName = "Bill" OR (FirstName = "Billy" OR (FirstName = "Billybob"))))). If your descriptor indicated a reject count of 10,000 and a return count of 100 records, the Get Next Extended would return after looking at about 10,000 records. But with 14 million people in Texas, you would need to keep issuing the same Get Next Extended operation over and over until you finally got to Utah and a status 64 (Optimization Limit exceeded). This process would still be much faster than having each record transferred to your application one at a time.
But what if a compound index existed on State and FirstName? The Get Next Extended above would still work, but it would be much faster to do a Get Equal and Get Next Extended on each of the four State and FirstName combinations, optimizing on both fields.
So you can see that having filters with OR connectors is only useful when no index is available. Priority should be placed on AND connectors for fields that match the key.
Reject Count
Another issue that you should understand is how to set the reject count. If your application is the only one being serviced by a MicroKernel engine, then using the maximum reject count is most efficient since it keeps that network traffic or interprocess communication to a minimum. However, if there are a lot of applications running in a highly concurrent environment, there can be serious consequences if you have a reject count that is too large.
The MicroKernel can handle many Btrieve requests concurrently, even while performing Btrieve operations atomically on a file. It allows any number of reader threads, but only one writer thread, to access the same file. Most Btrieve operations are read operations, and they do not take much time to accomplish. So if a write operation comes in, it waits until all readers are finished before it locks the file for the instant it takes to insert, update, or delete a record. This coordination works well until a read operation comes in that takes a long time to finish, which is exactly what an extended operation with a high reject count does if it does not find any records: it keeps reading and reading. Other read operations can still get done without a problem, but the write operations start to back up. After a write operation has tried to get write access to the file 100 times, it reaches what is called a frustration count. At that point, it puts a block on all new reader threads, and all Btrieve operations on the file hang until the extended operation is done.
For this reason, if your application is used in a highly concurrent environment, use a reject count that is somewhere between 100 and 1000. Also try to make your extended operations optimizable so that the MicroKernel does not have to read and reject records very often.
Even with a reject count of 100 to 1000, it is better to have the MicroKernel read and reject records than to have the records returned to your application for it to reject them.
Adding and Dropping Keys
Btrieve provides two operations for adding and dropping keys in your files: Create Index (31) and Drop Index (32). The Create Index operation is useful for adding keys to a file after the file has been created. The Drop Index operation is useful for removing keys whose index pages have become damaged. After dropping a key, you can re-add it, which causes the MicroKernel to rebuild the index.
When you drop a key, the MicroKernel renumbers all higher-numbered keys, unless you specify otherwise. The MicroKernel renumbers the keys by decrementing all higher-numbered keys by 1. For example, suppose you have a file with key numbers 1, 4, and 7. If you drop key 4, the MicroKernel renumbers the keys as 1 and 6.
If you do not want the MicroKernel to automatically renumber keys, add a bias of 0x80 to the value you supply for the Key Number parameter. This allows you to leave gaps in the key numbering; consequently, you can drop a damaged key and then rebuild it without affecting the numbering of other keys in the file. You rebuild the index using Create Index (31), which allows you to specify a key number.
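For example, the following sketch drops key 4 without renumbering and later rebuilds it under the same number. The B_DROP_INDEX and B_CREATE_INDEX constant names (operations 32 and 31) are assumed from the standard headers, and keySpec and keySpecLen are hypothetical placeholders for the key specification that Create Index expects in the Data Buffer (see Btrieve API Guide).

/* drop key 4; the 0x80 bias keeps the other key numbers intact */
status = BTRV(B_DROP_INDEX, posBlock, dataBuf, &dataBufLen, keyBuf, 4 | 0x80);

/* later: rebuild the index under the same key number */
status = BTRV(B_CREATE_INDEX, posBlock, keySpec, &keySpecLen, keyBuf, 4);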
Note If you dropped a key without renumbering and a user then cloned the affected file without assigning specific key numbers, the cloned file would have different key numbers than the original.