Key Attributes
The following sections describe the attributes you can assign when you define a key:
Segmentation
Keys can consist of one or more segments in each record. A segment can be any set of contiguous bytes in the record. The key type and sort order can be different for each segment in the key.
The number of index segments that you may use depends on the file’s page size.
1. “n/a” stands for “not applicable.”
2. “Rounded up” means that the page size is rounded up to the next size supported by the file version. For example, 512 is rounded up to 1,024, 2,560 is rounded up to 4,096, and so forth.
3. The maximum number of index segments that can be used with the relational interface is 119. For the transactional interface, the maximum number is 204 for a page size of 4,096, and 420 for page sizes 8,192 and 16,384.
See Status Code 26 (“The number of keys specified is invalid”) and Status Code 29 (“The key length is invalid”) for related information about index segments and the transactional interface.
The total length of a key is the sum of the length of the key segments, and the maximum length is 255 bytes. Different key segments can overlap each other in the record.
When a segmented key is a nonduplicatable key, the combination of the segments must form a unique value; however, individual segments may contain duplicates. When you are defining this type of segmented key, each segment has duplicates=no as a key-level attribute even though that particular segment may have duplicates. To ensure that a particular segment is always unique, define it as a separate nonduplicatable key in addition to the segmented key definition.
When issuing a call to the transactional interface, the format of the key buffer must be able to accommodate the key specified by the key number. For example, if the key number parameter is 0 and key 0 is a 4-byte integer, the key buffer parameter can be any of the following:
Basically, the transactional interface gets a pointer to a memory location to be used as a key buffer. The transactional interface expects that memory location to have a data value corresponding to the specified key number for certain operations, such as Get Equal. In addition, the transactional interface may write data out to that location, and the data written will be a key value corresponding to the specified key number. In this situation, the memory location must be allocated large enough to accommodate the entire key value.
To the transactional interface, a key is a single collection of data, even if it is made up of multiple segments. The segment feature allows you to combine non-contiguous bytes of data into a single key. It also allows you to apply different sorting rules (as dictated by the supported data types) to different portions of the key data. The data type associated with a key segment is typically used as a sorting rule: it tells the transactional interface how to compare two values to determine which one is larger. Data types are not used to validate data.
The transactional interface always deals with an entire key, not a key segment. To work with any key, set up a key buffer that is large enough to hold the entire key. Some applications define a generic 255 byte buffer to use on all calls to the transactional interface; the maximum size of a key is 255 bytes, which is a sufficient size. When data is returned in this key buffer, the application usually copies data out of the generic buffer into an application variable or structure declared as the same type(s) as the key segment(s). Alternatively, pass a key buffer parameter (simple variable or structure variable) that directly corresponds to the key.
For example, suppose you want to read a record and only know the value of the first segment of the key, but not all segments. You can still utilize that key to find the data. However, you still have to pass in an entire key buffer corresponding to all segments. Because you only know part of the key value, you cannot use the Get Equal call. You have to use the Get Greater Or Equal call. In this case, initialize the key buffer with as many key values as you know and then specify low or null values for the unknown key segments.
For example, given a key 1 definition of three segments corresponding to data values ulElement2, ulElement3, and ulElement5, if you know what value you want for ulElement2, you would initialize your key buffer as:
SampleKey1.ulElement2 = <search value>;
SampleKey1.ulElement3 = 0;
SampleKey1.ulElement5 = 0;
and then pass &SampleKey1 as the key buffer parameter on a Get Greater Or Equal call. When the transactional interface completes the call and a record is found, Status Code 0 is returned, the corresponding data record is returned, and the key buffer is set to have the key value including all three segments.
Duplicatability
Pervasive PSQL supports two methods for handling duplicate key values: linked (the default) and repeating. With linked-duplicatable keys, the transactional interface uses a pair of pointers on the index page to identify the chronologically first and last records with the same key value. Additionally, the transactional interface uses a pair of pointers in each record on the data page to identify the chronologically previous and next records with the same key value. The key value is stored once, and only on the index page.
With repeating-duplicatable keys, the transactional interface uses a single pointer on the index page to identify the corresponding record on the data page. The key value is stored on both the index page and the data page. For more information on duplicate keys, see Duplicatable Keys.
Modifiability
If you define a key as modifiable, the transactional interface enables you to change the value of a key even after the record is inserted. If one segment of a key is modifiable, all the segments must be modifiable.
Sort Order
By default, the transactional interface sorts key values in ascending order (lowest to highest). However, you can specify that the transactional interface order the key values in descending order (highest to lowest).
*Note: Use caution when using descending keys with the transactional interface Get operations (Get Greater (8), Get Greater or Equal (9), Get Less Than (10), and Get Less Than or Equal (11)). In this context, Greater (or Less) refers to the order with respect to the key; in the case of a descending key, this order is the opposite of the corresponding ascending key.
When you perform a Get Greater operation (8) on a descending key, the transactional interface returns the record corresponding to the first key value that is lower than the key value you specify in the key buffer. For example, consider a file that has 10 records and a descending key of type INTEGER. The actual values stored in the 10 records for the descending key are the integers 0, 1, 2, 3, 4, 5, 6, 7, 8, and 9. If the current record’s key value is 5 and you perform a Get Greater operation, the transactional interface returns the record containing the key value 4.
Similarly, when you perform a Get Less Than operation (10) using a descending key, the transactional interface returns the record with the next higher value than the one you specify in the key buffer. Using the preceding example, if the current record’s descending key has a value of 5 and you perform a Get Less Than operation, the transactional interface returns the record containing the key value 6.
Case Sensitivity
By default, the transactional interface is case sensitive when sorting string keys; that is, it sorts uppercase letters before lowercase letters. When you define a key to be case insensitive, the transactional interface sorts values without distinguishing case. Case sensitivity does not apply if the key has an alternate collating sequence (ACS).
Null Value
Pervasive PSQL v11 SP3 provides two methods of identifying a column of data as a Null value: the original type of Null value (referred to as a legacy null), which has been used in the transactional interface for years, and a newer type of Null identification referred to as a true null. This section briefly describes legacy nulls and then details the use of true nulls in the transactional interface.
Legacy Null
The original method of defining a nullable field is referred to as Pseudo-Null or Legacy-Null. It is based on the premise that if the entire field is filled with a particular byte value, typically ASCII zero, then the field is considered Null. The byte value is defined in the key definition supplied when creating the index. Using the transactional interface, the only thing that the transactional database engine can do with this knowledge is decide whether or not to include the field in an index. There are no special sorting rules for Legacy Nulls, since they are simply values that sort like all other values despite their special meaning.
If a key description contains the flag for "All Segments Null" (0x0008), then the key value is not put into the index if each and every segment in the key is considered to be Null by having the 'Null byte' in every byte of the field. Likewise, if a key description contains the flag for "Any Segment Null" (0x0200), then the key value is not put into the index if any one or more of the key segments is considered to be Null by the same rule.
The SQL Relational Database Engine (SRDE) never uses these flags in the indices that it defines. The SRDE does not use these flags because in order to make joins between tables, all records in the table must be accessible through the index.
True Null Indexes
Starting with Pervasive.SQL 2000, a new type of Null Indicator was introduced called true null.
True nulls are implemented in the transactional interface by preceding a nullable field with a one byte Null Indicator Segment (NIS). This is an extra byte of data outside the normal column width that indicates whether the column is Null or not. A value of zero in this byte indicates that the column associated with it is normal, or Not Null. Any other value in this byte indicates that the column value is Null.
With true nulls, unlike legacy nulls, you can tell the difference between an integer that is zero and one that is Null. This is true for any type of number field. You can even distinguish a string field that has a zero length string in it from one that should be identified as null, if there is a need for such a distinction.
The SRDE can identify and use true null columns whether or not there is an index defined on them, but a basic data file only identifies the fields that are included in keys.
You can define true null fields within transactional interface keys by adding a Null Indicator Segment (NIS) before the nullable field in the key definition of a Create (14) or Create Index (31) operation. See Rules for True Null Keys for the rules regarding true null keys.
The transactional interface does not enforce any restrictions on the offset of the NIS whereas the SRDE assumes that it is immediately preceding the nullable field. As such, Pervasive recommends that you structure the fields within your record to make room for the NIS in the byte preceding any field that will use an NIS. This preserves your ability to access these tables through SQL should you need to do so.
Rules for True Null Keys
The following rules must be followed when using this new key type:
1
2
3
4
NIS Values
Any non-zero value is considered an indicator that the following segment is null. By default, the MKDE makes no distinction between non-zero numbers. The Pervasive PSQL relational engine currently uses only a value of 1 in this field to indicate a null. It is possible, however, to make a distinction between different types of NULLs by using the Case Insensitive flag on the NIS. Since this key flag is normally applicable only to the various string and character fields, it is overloaded to have the special meaning of DISTINCT when used with an NIS: different NIS values are treated distinctly and sorted separately. Pervasive Software reserves the first 15 non-zero values for its own use. If you want to apply a special meaning to various types of nulls in your application, use NIS values greater than 16. For example, more specific Null definitions could be:
When you add the DISTINCT flag (Case Insensitive) to the NIS, these non-zero values will be sorted separately as distinct values.
Sorting of True Null Values
A true null field has a non-determinate value. In other words, its value cannot be known. According to this definition, no two Null values are equal to each other, nor are they equal to any other key value. Yet the transactional database engine must group these Null values together and you must be able to find key values that are equal to Null. To accomplish this, the transactional database engine interprets true null values as if they are equal to each other, depending on the purpose of the comparison. When sorting, and finding a place for the Null values in an index, true null values are grouped together as if they were equal to each other. But when trying to determine if a value already exists in a unique index, true nulls are not equal to each other.
Any non-zero value in the NIS means the following field is Null. The default behavior is to treat all non-zero values in the NIS as if they were the same value and interpret them to indicate that the nullable field is Null. As such, if you insert records that contain a variety of non-zero values in the NIS and a variety of values in the nullable field that follows, they will all be interpreted as the same value, and will be sorted as a collection of duplicates.
Linked Duplicate Keys and True Nulls
This section discusses the results of inserting several Null values into a Linked Duplicate key.
A Linked Duplicate key contains a single key entry for each unique value, with two record address pointers: one for the first duplicate record and one for the last record in the duplicate chain. Each record contains 8 bytes of overhead consisting of pointers to the previous and next records in the chain. Each new duplicate value is added at the end of the chain, thus ensuring that the duplicate records are linked in the order they were inserted. All true null values are considered duplicates for the purpose of adding them to an index, so they will all be linked into the same chain in the order they were inserted. Even if each record contained different byte values in the NIS and the associated nullable field, there will be only one key entry pointing to the first and last record in this chain. If the NIS key segment is defined as descending, this key entry will occur first in the index. Otherwise, it will occur last.
Repeating Duplicate Keys and True Nulls
Repeating Duplicate Keys contain an actual key entry for each record represented in the index. There is no overhead in the record itself and for each record, there is a key entry that points to it. Duplicate values in this kind of index are sorted by the physical record address to which they point. This means that the order of duplicates is unpredictable, especially in a highly concurrent environment where random records are being inserted and deleted by many clients.
True Null values are interpreted as if they are duplicates and are sorted not by the bytes found in the nullable field, but rather by the record address. So when using repeating duplicate keys, the records containing true null values are grouped together, but in a random fashion. If the NIS segment is descending, they will occur first in the index, otherwise, they will occur last.
Unique Keys and True Nulls
In the transactional interface, if you define an index without either duplicate flag, the index must contain only unique values. But since the value of a true null field is indeterminate, true null values should not be considered duplicates. For this reason, the transactional database engine allows multiple true null values to be entered into a unique key, on the assumption that once a value is assigned with an Update operation, the uniqueness of the key can be determined. For the purposes of sorting these values in the index, however, the transactional database engine groups them all together as if they were duplicates. So the section of the index containing the true null values resembles a Repeating Duplicate index: the nulls are sorted together according to the physical record address, the order of which cannot be predicted.
Non-Modifiable Keys and True Nulls
Once you put a value into a non-modifiable key, it cannot be changed. But because a true null value does not have an actual value, the transactional interface allows you to insert a record with a true null value in any or all fields defined in true null indexes, and then later change those field values from null to non-null with an Update operation. Once any field has become non-null, however, non-modifiability is enforced and the field cannot be changed again, even to set it back to null.
Get Operations and True Nulls
Even though true null values are indeterminate and are not considered equal to each other, it is possible to locate a record with a true null key segment.
The various Get operations can address true null keys by using this sequence:
1
2
3
The following list shows the expected behavior from the Get operations:
This is consistent with the behavior of the Get operations for normal duplicate values.
Distinct True Nulls
It is possible to distinguish between different values in the NIS byte. The default behavior, as indicated, is that all non-zero values in the NIS are considered to be the same thing, and whatever the NIS contains, if it is not zero, the nullable field is Null. The SRDE currently uses this default behavior on all true null index segments that it creates.
However, if you want to store different kinds of Null values in your table, then you can add the NOCASE flag (0x0400) to the key definition of the NIS segment. Hereafter, we will call this the DISTINCT flag. When you do this, the transactional database engine will treat different NIS values as different or distinct from each other.
Distinct True Null segments are sorted in groups by their NIS value. The same rules apply as discussed above when building the various types of indexes. A linked duplicate key will have a single entry for each distinct NIS value with a pointer to the first and last occurrence of that type of Null. Repeating Duplicates and Unique keys will also group the null records by their distinct NIS value. Descending Keys have the highest NIS values grouped first, sorted down to the zero, or non-null values. Ascending keys sort the non-null records first, followed by NIS values of 1, then 2, and so on. Get operations pay attention to the value of the NIS. If you do a GetEQ using a key buffer where the NIS is 20, and all the NIS values in a Distinct True Null index are 1, then the transactional database engine will not find any matching values.
Although neither the SRDE nor any other Pervasive PSQL access method currently uses the DISTINCT flag when creating true null indexes, they might in the future. For this reason, Pervasive reserves NIS values 2 through 16 for future use, in case specific meanings need to be assigned to these 'types' of nulls. So if you use distinct null values for records accessed through the transactional Btrieve API, use values greater than 16.
Multi-Segmented True Null Keys
Consider a multi-segmented True Null index containing two nullable string columns. The key would actually be defined as a four segment index. The first segment is an NIS, followed by the first nullable field, then the second NIS followed by the second nullable field. Now consider what would happen if the following records were put into the file.
"AAA", NULL    "BBB", NULL    "CCC", NULL    NULL, NULL
"AAA", "AAA"   "BBB", "AAA"   "CCC", "AAA"   NULL, "AAA"
"AAA", "BBB"   "BBB", "BBB"   "CCC", "BBB"   NULL, "BBB"
"AAA", "CCC"   "BBB", "CCC"   "CCC", "CCC"   NULL, "CCC"
plus two more records with the values "BBB", NULL
The SRDE always creates True Null index segments such that the NULL values will occur first. It does this by adding the Descending flag (0x0040) to each NIS segment. Let's assume that the descending flag is used on each NIS and on the second nullable field, but not the first nullable field. If so, these records would be sorted like this.
1 NULL, NULL
2 NULL, "CCC"
3 NULL, "BBB"
4 NULL, "AAA"
5 "AAA", NULL
6 "AAA", "CCC"
7 "AAA", "BBB"
8 "AAA", "AAA"
9 "BBB", NULL
10 "BBB", NULL
11 "BBB", NULL
12 "BBB", "CCC"
13 "BBB", "BBB"
14 "BBB", "AAA"
15 "CCC", NULL
16 "CCC", "CCC"
17 "CCC", "BBB"
18 "CCC", "AAA"
The nulls always occur before the non-nulls since both NIS segments are descending. But when the NIS is zero, i.e., the fields are non-null, the first field is sorted ascending and the second is sorted descending.
The following is what would be returned by various Get operations:
GetLT "BBB", NULL returns record 8 "AAA", "AAA"
GetLE "BBB", NULL returns record 11 "BBB", NULL
GetEQ "BBB", NULL returns record 9 "BBB", NULL
GetGE "BBB", NULL returns record 9 "BBB", NULL
GetGT "BBB", NULL returns record 12 "BBB", "CCC"
GetLE implies that you intend to traverse the file in the reverse direction, so it returns the first occurrence of a key value that "matches" in the reverse direction. GetEQ and GetGE imply that you are moving in the forward direction.
Excluding Records from an Index
As with legacy nulls, you can also apply the flag for "All Segments Null" (0x0008) or "Any Segment Null" (0x0200) to each segment of any index containing an NIS. When you insert a record, the transactional database engine determines whether the nullable field is Null by using the NIS. The same rules apply to determine whether the key entry will be put into the index.
*Note: Files created by the SRDE do not use these flags.
So you should not use these flags if you think that you might at some point want to access these files from SQL, where a goal might be to find any records "where column IS NULL". The SRDE would try to use the index to find the null records, but because those key values were excluded from the index, the records would not be accessible through it.
Use of Null Indicator Segment in Extended Operations
Extended operations allow your application to access fields in a table even if they do not have indexes created for them. You can apply a filter to the fields in your record, defining fields on the fly, using knowledge of the record from any source. Thus it is possible to define True Null fields in an extended operation and have the transactional database engine apply the same comparison rules that it would when sorting these fields into an index.
You must define the extended operation filter just as you would define a key. Include a filter segment for the NIS followed by the nullable field. You must include the nullable field in the filter even if you are searching for a null value, where the content of the nullable field does not matter. The MKDE needs both filter segments so that a GetNextExtended can be optimized against an index path, and it enforces this with Status Code 62, indicating that a filter expression for an NIS was not followed by a non-NIS segment.
The only comparison operators you can use for an NIS are EQ and NE. You will get Status Code 62 if you try to use any of the other comparison operators: GT, GE, LT, or LE.
A Status Code 62 resulting from a badly formed extended operation descriptor adds a system error to the Pervasive Event Log. These system errors are listed in Table 2 to help you identify the reason for the Status Code 62.
If you want to treat different NIS values distinctly, add 128 to the comparison operator on the NIS field. This is the same bias value that you would use to indicate case insensitivity. Just as when defining an index, the case insensitive flag has been overloaded for Null Indicator key types to indicate that the non-zero values should be compared distinctly, meaning that they should be distinguished from one another instead of all being treated the same.
If you are using the extended operations to get the best performance possible, you will be trying to search along a key path for specific limited ranges of key values. First, establish currency at the beginning of the range by using GetGE. Then follow that with GetNextExtended. Or, you can do a GetLE followed by GetPrevExtended. These extended operations can stop searching automatically when there is no more chance of finding any more values that match the filter. This is called extended operation optimization. If your filter can use optimization, it will be much more efficient because there may be a huge number of records that can be skipped and not read from the file. In order to create an optimized search, you need to be traversing the index in a direction where a limit exists. Also, your filter must exactly match the index, using AND instead of OR as segment connectors.
If you do a GetNextExtended on an ascending index, then an optimized filter can stop at the limit when the conditional operator is EQ, LT or LE. A search will have to look to the end of the file for values greater than a particular value going forward along an ascending index. Likewise, if it is a descending index, then it can stop at a limit when the conditional operator is EQ, GT or GE. This can get much more complicated when there are multiple fields in the search criteria. The simple way to think about it is that in order to optimize a filter, only the last segment can have any other conditional operator than EQ. This includes the NIS. If the conditional operator on an NIS is NE, the filter can only be optimized up to the previous filter segment.
Exactly matching the index means that each filter expression should follow the order of the segments in the index and have the same offset, length, key type, case sensitivity (or distinct flag), and ACS specification. Without these matching the index, extended operations cannot be optimized.
True Nulls and the SQL Engine
True nulls are implemented in the SRDE through the use of the Null Indicator key type and follow the rules described above. The transactional interface applications can also use this key type to identify the nullness of a nullable field regardless of its contents. This provides a way to identify null integers and other number data types, and fully manage these nullable fields.
True Nulls and Extended Operations
Status 62 occurring on an extended operation indicates that the descriptor is incorrect. Sometimes, it may be difficult to determine what exactly is wrong with the descriptor. The database engine adds a line in the Pervasive Event Log that can be used to determine the exact problem. The Event Log entry will look similar to the following:
12-12-2008 11:12:45 W3MKDE 0000053C W3dbsmgr.exe MY_COMPUTER E System Error: 301.36.0 File: D:\WORK\TEST.MKD
The first number after "System Error:" (301 through 318) identifies the problem as follows:
Alternate Collating Sequences
You can use an alternate collating sequence (ACS) to sort string keys (types STRING, LSTRING, and ZSTRING) differently from the standard ASCII collating sequence. By using one or more ACSs, you can sort keys as follows:
By an international sorting rule (ISR) that accommodates language-specific collations, including multi-byte collating elements (such as ll in Spanish), diacritics (such as ô in French), and character expansions and contractions (such as ß expanding to ss in German).
Files can have a different ACS for each key in the file, but only one ACS per key. Therefore, if the key is segmented, each segment must use either the key’s specified ACS or no ACS at all. For a file in which a key has an ACS designated for some segments but not for others, the transactional interface applies the ACS only to the segments that specify it.
User-Defined ACS
To create an ACS that sorts string values differently from the ASCII standard, use the format shown in the following table.
Because ACSs are created using a hex editor or defined when creating a transactional interface application, user-defined ACSs are useful to application developers and not typically created by end users.
Following are a 9-byte header and a 256-byte body that represent a collating sequence named UPPER. The header appears as follows:
AC 55 50 50 45 52 20 20 20
The 256-byte body appears as follows (with the exception of the offset values in the leftmost column):
00: 00 01 02 03 04 05 06 07 08 09 0A 0B 0C 0D 0E 0F
10: 10 11 12 13 14 15 16 17 18 19 1A 1B 1C 1D 1E 1F
20: 20 21 22 23 24 25 26 27 28 29 2A 2B 2C 2D 2E 2F
30: 30 31 32 33 34 35 36 37 38 39 3A 3B 3C 3D 3E 3F
40: 40 41 42 43 44 45 46 47 48 49 4A 4B 4C 4D 4E 4F
50: 50 51 52 53 54 55 56 57 58 59 5A 5B 5C 5D 5E 5F
60: 60 41 42 43 44 45 46 47 48 49 4A 4B 4C 4D 4E 4F
70: 50 51 52 53 54 55 56 57 58 59 5A 7B 7C 7D 7E 7F
80: 80 81 82 83 84 85 86 87 88 89 8A 8B 8C 8D 8E 8F
90: 90 91 92 93 94 95 96 97 98 99 9A 9B 9C 9D 9E 9F
A0: A0 A1 A2 A3 A4 A5 A6 A7 A8 A9 AA AB AC AD AE AF
B0: B0 B1 B2 B3 B4 B5 B6 B7 B8 B9 BA BB BC BD BE BF
C0: C0 C1 C2 C3 C4 C5 C6 C7 C8 C9 CA CB CC CD CE CF
D0: D0 D1 D2 D3 D4 D5 D6 D7 D8 D9 DA DB DC DD DE DF
E0: E0 E1 E2 E3 E4 E5 E6 E7 E8 E9 EA EB EC ED EE EF
F0: F0 F1 F2 F3 F4 F5 F6 F7 F8 F9 FA FB FC FD FE FF
The header and body forming this ACS are shipped with Pervasive PSQL as the file UPPER.ALT. UPPER.ALT provides a way to sort keys without regard to case. (You can define a key to be case-insensitive; even so, UPPER provides a good example when writing your own ACS.)
Offsets 0x61 through 0x7A in the example have been altered from the standard ASCII collating sequence. In the standard ASCII collating sequence, offset 0x61 contains a value of 0x61 (representing lowercase a). When a key is sorted with the UPPER ACS, the transactional interface sorts lowercase a (0x61) with the collation weight at offset 0x61: 0x41. Thus, the lowercase a is sorted as if it were uppercase A (0x41). Therefore, for sorting purposes UPPER converts all lowercase letters to their uppercase equivalents when sorting a key.
The following 256-byte body performs the same function as UPPER.ALT’s body except that ASCII characters preceding the ASCII space (0x20) are now sorted after all other ASCII characters:
00: E0 E1 E2 E3 E4 E5 E6 E7 E8 E9 EA EB EC ED EE EF
10: F0 F1 F2 F3 F4 F5 F6 F7 F8 F9 FA FB FC FD FE FF
20: 00 01 02 03 04 05 06 07 08 09 0A 0B 0C 0D 0E 0F
30: 10 11 12 13 14 15 16 17 18 19 1A 1B 1C 1D 1E 1F
40: 20 21 22 23 24 25 26 27 28 29 2A 2B 2C 2D 2E 2F
50: 30 31 32 33 34 35 36 37 38 39 3A 3B 3C 3D 3E 3F
60: 40 21 22 23 24 25 26 27 28 29 2A 2B 2C 2D 2E 2F
70: 30 31 32 33 34 35 36 37 38 39 3A 5B 5C 5D 5E 5F
80: 60 61 62 63 64 65 66 67 68 69 6A 6B 6C 6D 6E 6F
90: 70 71 72 73 74 75 76 77 78 79 7A 7B 7C 7D 7E 7F
A0: 80 81 82 83 84 85 86 87 88 89 8A 8B 8C 8D 8E 8F
B0: 90 91 92 93 94 95 96 97 98 99 9A 9B 9C 9D 9E 9F
C0: A0 A1 A2 A3 A4 A5 A6 A7 A8 A9 AA AB AC AD AE AF
D0: B0 B1 B2 B3 B4 B5 B6 B7 B8 B9 BA BB BC BD BE BF
E0: C0 C1 C2 C3 C4 C5 C6 C7 C8 C9 CA CB CC CD CE CF
F0: D0 D1 D2 D3 D4 D5 D6 D7 D8 D9 DA DB DC DD DE DF
In this body, different collating weights have been assigned so that a character’s weight no longer equals its ASCII value. For example, offset 0x20, representing the ASCII space character, has a collating weight of 0x00; offset 0x41, representing the ASCII uppercase A, has a collating weight of 0x21.
To sort keys without regard to case, offsets 0x61 through 0x7A in the last example have been altered. As in the body for UPPER.ALT, offset 0x61 has the same collating weight as offset 0x41: 0x21. By having the same collating weight, offset 0x41 (A) sorts the same as offset 0x61 (a).
International Sort Rules
To specify an ACS that sorts string values using an ISO-defined, language-specific collating sequence, you must specify an ISR table name, as follows:
The ISR tables are provided with Pervasive PSQL and are based on ISO-standard locale tables. ISR tables are stored in the COLLATE.CFG file, which is installed with the Pervasive PSQL database engine. Multiple data files can share a single ISR.
For sample collations, refer to Appendix A, Sample Collations Using International Sorting Rules.
Key Specification
When creating indexes using either Create (14) or Create Index (31), you provide a key specification structure (index segment descriptor) for each segment. Each key specification is 16 bytes long and contains the following information:
Table 6
The following flags should be "off": none
The following flags are ignored: BINARY (0x0004), ACS (0x0020).
Limitations and Side Effects
There are a few limitations with true null support: