Performance
Analyzing and Tuning Database Performance
The following topics cover database performance:
Analyzing Performance
Tuning Performance
Performance on Hypervisor Products
Analyzing Performance
PSQL Server for Windows provides performance counters for use with the Windows Performance Monitor. The performance counters measure state or activity of the database engine, which allows you to analyze performance of your application.
See Monitoring Performance Counters.
Tuning Performance
This section provides some general tips on how to maximize performance on the initial database connection and on runtime operations. While we can offer some general guidelines, the performance of any specific application depends on a great number of factors, including but not limited to the following:
Network bandwidth and use
Coding techniques in the application
Physical storage space available
Memory available
Processor speed
Application usage patterns (such as write-heavy, read-heavy, transactions used/not used, small record size, large record size, complex queries, simple queries, ODBC, Btrieve only, and so forth)
Unrelated applications competing for CPU cycles
Database engine configuration
As you can see, the engine configuration plays a relatively limited role in the overall performance of any given application. Further, the database engine dynamically manages a variety of resources based on usage patterns, tuning itself to your environment as needed. The following sections are offered only as helpful guidelines and are not a guarantee of any specific level of performance.
Spotting Performance Bottlenecks
You can use the Monitor tool to discover performance bottlenecks related to certain database engine configuration options. You can start Monitor from the operating system Start menu or Apps screen, or from the Tools menu in PSQL Control Center.
Monitor Displays and Configuration Properties
Two Monitor tabs display performance readings related to configuration options:
Resource Usage
MicroKernel Communications Statistics
The database engine dynamically manages several server configuration options, as shown in the following table.
Table 13 Dynamically Managed Settings Displayed in Monitor

Dynamically Managed Setting     Monitor Tab Where Value Is Displayed
Files                           Resource Usage
Handles                         Resource Usage
Clients                         Resource Usage
Worker Threads                  Resource Usage
Total Remote Sessions           MicroKernel Communications Statistics
Interpreting the Displays and Taking Action
Monitor displays three pieces of information about each type of resource it tracks. For example, the Total Remote Sessions display shows:
Current. The number of remote sessions currently in use.
Peak. The highest number of remote sessions in use at any time since the engine was started.
Maximum. The maximum number of remote sessions allowed.
If the Peak value for a resource is the same as the Maximum value, you may want to raise the configuration property that controls the Maximum for that resource, so that the database engine can allocate additional instances of that resource when it needs to.
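For example, suppose the Total Remote Sessions display reads Current 14, Peak 20, Maximum 20 (hypothetical values). Because Peak has reached Maximum, the engine has already hit its session ceiling at least once, and raising the maximum gives it room to accept more concurrent remote sessions.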
Before You Modify Configuration Properties
The following sections assume that:
1 PSQL Control Center (PCC) is already open.
If you need assistance with this task, see Starting PCC on Windows in PSQL User's Guide.
2 You have already registered (if applicable) the engine you wish to configure.
If you need assistance with this task, see To register a remote server engine in PSQL User's Guide.
3 You have appropriate operating system privileges to configure the given engine.
If you need assistance with this task, see Granting Administrative Rights for the Database Engine in PSQL User's Guide.
4 For some engine settings, you must restart the database engine after making configuration changes.
Minimizing Initial Connection Time
The theory underlying minimal connection time revolves around three requirements. These requirements are summarized next, and detailed procedures follow:
Known communications protocol. You do not want the client or server to spend additional time trying to connect via protocols that are not supported in your environment. By removing client-server support for unavailable protocols, you prevent the networking components from trying to use them.
Known location of database engine. You do not want the client to attempt to connect to an engine that does not exist. By specifying Workgroup-only or Server-only support, as appropriate, you prevent the client from timing out while attempting non-existent connections. In environments where you have both unshared and shared databases, you can use Gateway Durability to force the database engine to keep track of machines where no database engine is running, so that it never attempts to connect to an engine on these machines, but uses another method to access the data.
Database engine ready to execute. When a sleeping engine receives a new connection request, it takes time to reallocate resources and return to a runtime state. If connection speed is more important than resource usage on the server, you can prevent the server engine from sleeping when it is not in use.
Client Properties
You must be at the client machine to change the client properties. You must change the properties at each individual client.
To minimize client-side connection delays
1 In PSQL Explorer, expand the Local Client node in the tree (click the expand icon to the left of the node).
2 Right-click MicroKernel Router.
3 Click Properties.
4 Click Communication Protocols in the tree.
5 For Supported Protocols, ensure that the desired protocols are selected (check marked) and the protocols not being used are not check marked.
6 Click Apply.
The client is now prevented from attempting to communicate on protocols that are not being used.
7 Click Access in the Properties tree.
8 If you are using only a remote Server or remote Workgroup engine, ensure that Use Local MicroKernel Engine is not check marked. (That is, it is set to Off.)
If you are using only a local Workgroup engine, ensure that Use Remote MicroKernel Engine is not check marked. (That is, it is set to Off.)
If you sometimes use a Workgroup engine and you sometimes connect to a Server engine or a remote Workgroup engine, you must leave both settings On (check marked).
In such a mixed environment with shared and unshared data, you can set Gateway Durability to On at each client. This setting forces the client software to keep a list of the names of any machines on which it is unable to connect to a database engine. In order for the client software to determine no engine exists on a given computer, it waits for all of its network protocol requests to time out.
If your data is stored on a server that does not have a PSQL database engine on it, and you have set Use Remote MicroKernel Engine to Yes, the client must time out at least once to discover that there is no engine on that machine. Gateway Durability ensures that this time-out only happens the first time your application tries to access that data.
Note Using Gateway Durability fixes the network topology. If you later install a Server or Workgroup engine on the remote computer, you must turn off Gateway Durability on each client so that the list of computers without database engines is deleted (thus allowing you to connect to the new database engine). You may turn Gateway Durability back on immediately, but it starts with an empty list.
9 Click OK.
The client is now prevented from attempting to connect to any database engine types that are not in use.
Server Properties
To minimize server-side connection delays
1 In PSQL Explorer, expand the Engines node in the tree (click the expand icon to the left of the node).
2 Right-click the database engine for which you want to specify configuration settings.
3 Click Properties.
4 For Supported Protocols, ensure that the desired protocols are selected (check marked) and the protocols not being used are not check marked.
5 Click Apply.
The server is now prevented from attempting to communicate on protocols that are not being used.
Note Ensure that at least one protocol you have selected for the Server configuration is the same one as selected for the Client configuration. Your client and server cannot communicate if they are not using the same protocol.
6 Click Memory Usage in the Properties tree.
7 Select Allocate Resources at Startup to set the value to On (check box selected).
This option specifies that the database engine should allocate all necessary memory when it starts up, rather than when the first connection request comes in. Choosing this value requires more memory, but from the client perspective allows the engine to become operational faster.
8 Ensure that Back to Minimal State if Inactive is set to Off (check box cleared).
This option specifies that the database engine should not release resources back to the operating system if the engine is inactive. All resources remain allocated and ready for use at the next client connection.
9 Click OK.
10 Click Yes to restart the engine for these changes to take effect.
Maximizing Runtime Throughput
Maximum throughput depends on too many variables to list here. Several of the most significant factors are:
Adequate physical memory. If your host system has too little RAM, the system spends most of its time and CPU cycles swapping memory to disk and back, as different users and processes compete for limited resources.
Adequate CPU capacity. If your CPU cannot keep up with the inflow of data processing requests, application performance suffers.
Adequate network bandwidth. If your network is slow or suffers from a high collision rate, the database engine may sit idle between data access requests, while each client seems to display poor performance.
Minimal disk I/O. Disk I/O is significantly slower than memory I/O. You want to avoid accessing the disk as much as possible by having sufficient cache.
Adequate resource allocations. Even with massive physical memory available, database performance may suffer if database engine configuration properties are set artificially low.
In the end, optimal performance is a balancing act among network bottlenecks, disk I/O bottlenecks, memory bottlenecks, and CPU bottlenecks. This section provides some guidelines on how to reduce memory and disk I/O bottlenecks.
Fast Disk versus Fast CPU
If you want to maximize the effect of your hardware investment for performance gains, you must understand the existing constraints on your performance. If you have a database that is so large that you cannot reasonably buy and install enough memory to cache a significant part of the database, then performance is likely to be constrained by the disk I/O. Under these conditions, you may be better off to invest in a fast RAID disk array to maximize performance of the disk I/O.
In addition, if your application uses the Relational Engine and forces temporary files to be created frequently, you may want to ensure that the directory where these files are created is located on a fast disk drive. For more information about the location of this directory and the types of queries that generate temporary files, see Temporary Files in SQL Engine Reference.
If your database is small enough to be fully or near-fully cached in memory, then adding a fast disk array is unlikely to provide a significant performance boost. Under these conditions, upgrading the CPU or adding an additional CPU may provide the best performance improvement value.
Ensuring Adequate Physical Memory and Database Cache
Starting with Pervasive.SQL V8, the database engine offers Level 2 dynamic cache in addition to the Level 1 cache specified by the configuration setting, Cache Allocation Size. Assuming you do not turn off the Level 2 dynamic cache by setting Max MicroKernel Memory Usage to zero, the need to manually adjust the Level 1 cache size is much less critical than in previous releases. With that in mind, this section explains how to ensure that you have enough memory available for optimal performance of the database engine.
Ideally, your database engine should be able to allocate enough memory to cache full copies of every database it hosts, thus avoiding as much disk I/O as possible. Obviously, caching one or more entire databases is not practical in some situations, particularly when database size is very large. In addition, such measures as adding RAM to the machine only improve performance if the existing system resources are heavily loaded under normal usage.
The database engine dynamically selects a Level 1 cache size value when it starts the first time. However, this value is based on available memory and may not be the ideal amount of cache for your environment.
To calculate the ideal size of the database memory cache
1 Start by adding up the file sizes of all the data files serviced by the database engine.
Note If you have more than one database serviced by the engine, but they are never used at the same time, add up the file sizes of just the largest database.
For example, assume that your server has two databases with the following file sizes and that users access both databases at the same time:
Database A                    Database B
file1.mkd    223 MB           Afile.mkd    675 MB
file2.mkd     54 MB           Bfile.mkd     54 MB
file3.mkd     92 MB           Cfile.mkd    318 MB
file4.mkd     14 MB
The sum of all these files is 1430 MB.
The number you have now is the maximum amount of memory that the database engine would use if it cached all its hosted data. This number can be referred to as MaxCache.
You would never want to specify a value greater than this for Cache Allocation Size, because you would be allocating memory to the database engine that it would likely never use. A reasonable rule of thumb is to set Cache Allocation Size to about 20% to 70% of MaxCache. Lower values in this range are best for read-intensive applications, and higher values are best for write-intensive applications since all write/update operations take place in the Level 1 cache.
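Using the example above, with a MaxCache of 1430 MB, this guideline works out to a Cache Allocation Size of roughly 286 MB (20%) to 1001 MB (70%).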
Note File pages are only written to the database cache when they are accessed. Thus, for a database engine to use MaxCache amount of memory requires every page in the database to be accessed. This system of estimating assumes a long-term steady state for database usage. If you bring the database engine down nightly or weekly, it may be unlikely that the database engine would access every page in the database within the given period of uptime.

If this situation applies to you, you may wish to estimate the average number of distinct pages that your application accesses within the given uptime period, multiply that by the page size, and obtain a more realistic value of MaxCache for your particular uptime scenario.
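For example, if your application touches roughly 50,000 distinct pages between restarts and the files use a 4,096-byte page size (both figures hypothetical), a more realistic MaxCache for that uptime scenario is about 50,000 x 4,096 bytes, or roughly 200 MB.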

See also Counters for MicroKernel Cache.
On Windows 32-bit operating systems, all user processes are limited to 2 GB of memory. If you have calculated a value of MaxCache larger than 2 GB, and your database engine runs on a 32-bit Windows operating system, make sure that the MaxCache value you select never allows the engine to exceed the 2 GB boundary, or the engine may be unable to allocate memory and subsequently fail.
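For example, if you set Cache Allocation Size to 1,200 MB (a hypothetical value), the engine's internal allocations (roughly 25% of the L1 cache, as shown in the equation below) bring total usage to about 1.5 GB, leaving only about 500 MB of headroom for the L2 cache and other allocations before the 2 GB limit is reached.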
To determine how much total physical memory you need
Use the following equation to determine the approximate amount of total physical memory required by the database engine.
Maximum Database Memory Usage = L1_cache_size + internal_allocations + L2_cache_size
where:
L1_cache_size is the Cache Allocation Size
internal_allocations is approximately 25% of the size of the L1 cache
L2_cache_size is the amount of memory that expands and contracts based on memory load of the system
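As an illustration, if Cache Allocation Size is set to 1,000 MB (a hypothetical value), internal allocations add roughly 250 MB, so the engine uses about 1,250 MB before any L2 cache is counted; the L2 cache then grows and shrinks on top of that figure, up to the limit governed by Max MicroKernel Memory Usage.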
Note the following:
The L1 cache is a fixed size based on Cache Allocation Size. It does not expand or contract based on database operations. If your application performs numerous WRITE operations, increase the L1 cache as much as possible.
The greatest performance is obtained if all of the data can fit into the L1 cache. If all of the data will not fit into the L1 cache, adjust Cache Allocation Size and Max MicroKernel Memory Usage to use a reasonable amount of system memory.
The L2 cache expands and contracts based on memory load of the system. For example, if other applications require more memory, the L2 cache contracts. If ample memory is available, the L2 cache reaches a maximum based on the equation above. The expansion and contraction affects performance. Contraction causes data pages to be removed from the L2 cache, for example. Reading the pages back from disk storage takes longer than if the pages could have been retained in the cache.
The L2 cache contains compressed data pages (more pages in less space). Accessing pages from the L2 cache takes longer than from the L1 cache, but is faster than accessing the pages from disk storage. The L2 cache is helpful for applications that perform numerous READ operations but cannot fit all of the read data pages into the L1 cache.
Minimizing Disk I/O
Reading and writing data to/from disk is much slower than reading and writing to/from memory. Thus, one way to optimize performance is to minimize disk activity.
An important consideration in your attempts to minimize disk I/O is recoverability of data. Disk I/O is a direct trade-off against transaction durability and recoverability. The more data you keep in memory without pausing to log changes to disk, the faster the database performs; on the other hand, the more data you keep in memory without logging changes to disk, the more data you lose if the system experiences a failure.
To reduce disk I/O
1 As discussed in the topic Ensuring Adequate Physical Memory and Database Cache, one of the most important considerations is to ensure you have enough database memory cache to avoid frequently swapping data pages between disk and cache. See that section for details.
One of the best ways to reduce disk I/O is to make sure that the dynamic Level 2 cache is turned on. The Level 2 cache adjusts its size dynamically as application usage changes, storing as much data as possible in memory and thus avoiding disk I/O when cache demands exceed the capacity of the Level 1 fixed cache. By default, the Level 2 cache is turned on. To verify that your database engine is using Level 2 cache, check the properties configuration setting Max MicroKernel Memory Usage.
2 Next, consider how much logging you require and what quantity of database operations you are willing to lose in a system failure. The greater the quantity of changes you are willing to lose, the more you can risk in the pursuit of performance.
Archival Logging, Transaction Durability, and Transaction Logging all require log files. By default, archival logging is turned off. Turn it on only if you perform regular backups and you need the capability to restore data up to the moment of a system failure. When you specify the files to be logged, be sure to specify only the files for which you absolutely must have logging. See Chapter 9, Logging, Backup, and Restore, for more information.
By default, transaction logging is turned on. Turning off transaction logging should improve performance slightly, but does not guarantee multifile consistency and transaction atomicity. Before turning off transaction logging, check with your application vendor to be sure they allow the application to run without this feature.
Caution The consistency of any multifile database cannot be guaranteed if transaction logging is disabled.
By default, transaction durability is turned off. Turn on this feature only if your application requires completed transaction operations to be durable through a system crash. Transaction durability carries the highest performance penalty; in exchange, it provides the highest level of safety for your completed transactions.
3 If you have any logging features turned on, you can specify how much data the engine stores in memory before writing to disk. This feature is important because the changed data builds up over time. The more log data you allow to build up in memory, the less frequent the disk writes are.
The setting Log Buffer Size specifies the number of bytes of database operations that the engine stores in memory before writing them out to the log files. (Click Performance Tuning on the server Properties tree.)
If a system failure occurs, the data in the log buffer is lost.
4 If you have transaction durability turned on, you can specify the maximum size of the log segments on disk. Specifying a larger log segment size can improve performance slightly, because fewer log segments have to be created and closed over time.
The setting Transaction Log Size specifies the maximum number of bytes that can be stored in a log segment before closing it and opening a new segment. (Click Performance Tuning on the server Properties tree.)
5 If your database is highly used, consider configuring your system to maintain the logs on a separate physical volume from the volume where the data files are located. Under heavy load, performance is typically better when the writes to the log files and to the data file are split across different drives instead of competing for I/O bandwidth on a single drive. The overall disk I/O is not reduced, but the load is better distributed among the disk controllers.
6 If your application usage is weighted heavily in favor of database read operations, you can increase performance by turning on Index Balancing. (Click Performance Tuning on the server Properties tree.) Over time, index balancing increases the number of nodes on the average index page, allowing read operations to occur faster. However, for insert, update, and delete operations, additional time and disk I/O may be required because the engine balances the index nodes across adjacent pages.
7 Be sure that tracing is turned off, both in the MicroKernel and at the ODBC level. Tracing may cause a significant reduction in performance because it can introduce a large amount of disk I/O.
To ensure ODBC tracing is turned off, start ODBC Administrator from PSQL Control Center. In ODBC Administrator, click the Tracing tab. If tracing is off, you see a button labeled Start Tracing Now; click Cancel to close the dialog without making changes. If tracing is on, click Stop Tracing Now, then click OK.
To ensure MicroKernel tracing is turned off, set the properties configuration Trace Operation to Off (not check marked). (Click Debugging on the server Properties tree.)
Ensuring Adequate Resource Allocation
If your database server platform has adequate memory and CPU power, you should ensure that your database engine can take full advantage of the available hardware resources to service multiple clients and multiple data files most efficiently.
To configure multiple client and file handling
1 The setting Number of Input/Output Threads allows you to specify how many threads are available to handle file operations. (Click Performance Tuning on the server Properties tree.)
As a guideline, the value of this setting should be about 1/8 the number of files the application has open, on average. For example, if the application has 40 files open most of the time, I/O Threads should be set to 5.
Using Monitor, click MicroKernel > Resource Usage from the menu. In the window that appears, the Files: display shows the current and peak number of open files. You can generate an average reading by recording several Current values over time, then specify an appropriate setting for I/O Threads based on that average, as the sketch below illustrates.
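The following minimal Python sketch illustrates the guideline only; it assumes you have recorded several Current open-file readings from the Monitor display by hand, and the sample values are hypothetical. It simply applies the 1/8 rule described in this section.

# Current open-file readings recorded from the Monitor Resource Usage display
# (hypothetical sample values)
sampled_open_files = [38, 44, 41, 36, 43]
average_open_files = sum(sampled_open_files) / len(sampled_open_files)

# Guideline from this section: I/O Threads should be about 1/8 of the
# average number of open files, with a floor of one thread.
recommended_io_threads = max(1, round(average_open_files / 8))
print(recommended_io_threads)   # prints 5 for these sample readings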
Large System Cache
Some Windows operating systems provide a Large System Cache setting that allows the system to take advantage of free memory to cache recently accessed data. By default on certain server editions, this setting is on, which favors simple file sharing and can result in very aggressive system cache growth.
The aggressive use of memory for file caching can swap out PSQL, which can cause a substantial performance issue with the database. You may notice a performance decrease if Large System Cache is on and the PSQL setting System Cache is also set to On (either explicitly or because you have set Max MicroKernel Memory Usage to zero). One possible solution to increase database performance is to turn off Large System Cache.
To turn off the cache, access the system properties in the Control Panel or in the properties for My Computer. Click the Advanced tab, then the Settings button for Performance. Under Performance Options, click the Advanced tab.
Under Memory Usage, if the System Cache option is selected, Large System Cache is turned on. To turn it off, click the other option for Memory Usage: Programs. Click OK as required to close open dialogs, then restart the operating system for the setting to take effect.
Performance on Hypervisor Products
To achieve the best performance for PSQL, ensure the following:
The performance best practices recommended by the hypervisor vendor are followed.
The VM hosting PSQL has ample memory (RAM).
The hypervisor host has enough virtual CPUs to minimize virtual CPU contention among all of the VMs on the same machine. This prevents contention with the VM running PSQL. If you use affinity rules, ensure that all cores are running on the same socket.
The PSQL data files reside on very fast physical storage, with minimal spindle contention and controller contention for the physical storage device.