Logging, Backup, and Restore
Understanding Logs, Backups, and Data Restoration
PSQL provides several powerful features to ensure data integrity and to support online backups and disaster recovery.
Transaction Logging and Durability
Understanding Archival Logging and Continuous Operations
Using Archival Logging
Using Continuous Operations
Data Backup with Backup Agent and VSS Writer
Transaction Logging and Durability
PSQL offers two levels of data integrity assurance for database operations that involve transactions: Transaction Logging and Transaction Durability.
This section contains the following topics:
Using These Features
Feature Comparison
Which Feature Should I Use?
How Logging Works
See Also
Using These Features
Both of these features can be turned on or off in the database engine, either through the configuration settings in PSQL Control Center or programmatically through the Distributed Tuning Interface. See Transaction Durability and Transaction Logging.
The default value for Transaction Durability is Off, and the default value for Transaction Logging is On.
Feature Comparison
Both features offer multifile transaction atomicity, to ensure that the data files remain consistent as a set and that incomplete transactions are never written to any data files.
Atomicity means that, if any given data operation within a transaction cannot successfully complete, then none of the operations within the transaction are allowed to complete. An atomic change never leaves partial or ambiguous effects in the database. Changes to individual files are always atomic, whether Transaction Logging and Transaction Durability are on or off, but transactions make it possible to group changes to multiple files into one atomic unit. The atomicity of these multifile transactions is assured by the MicroKernel only when your application uses transactions and either Transaction Logging or Transaction Durability is turned on.
In addition to these benefits, Transaction Durability guarantees that, in the event of a system crash, the data files will contain the full results of any transaction that returned a successful completion status code to the application prior to the crash.
In the interest of higher performance, Transaction Logging does not offer this guarantee. Whereas Transaction Durability ensures that a completed transaction is fully written to the transaction log before the engine returns a successful status code, Transaction Logging returns a successful status code as soon as the logger thread has been signaled to flush the log buffer to disk.
Transaction Logging is a subset of Transaction Durability; that is, if Transaction Durability is turned on, then logging takes place and the Transaction Logging setting is ignored by the database engine.
The main differences between Transaction Logging and Transaction Durability are shown in the following tables:
Table 23 Transaction Logging vs. Transaction Durability: Benefits

Feature | Guaranteed data consistency and transaction atomicity across multiple files | Guaranteed commit for all completed transactions that have returned a successful status code
Transaction Logging | Yes | No
Transaction Durability | Yes | Yes
Table 24 Transaction Logging vs. Transaction Durability: Function

Feature | Timing of log buffer writes to disk
Transaction Logging | The log buffer is written to the log file when the buffer is full or the Initiation Time Limit is reached. A successful status code for each End Transaction operation is returned to the application as soon as the logger thread has been signaled to flush the buffer to disk.
Transaction Durability | The log buffer is written to the transaction log file with each End Transaction operation. A successful status code for each End Transaction operation is not returned to the application until the log disk write is successful. For insert or update operations that are not part of a transaction, the log buffer is written to the log file when the buffer is full or the Initiation Time Limit is reached.
Which Feature Should I Use?
For the fastest performance, you want to use the lowest level of logging that meets your transaction safety needs. The best way to determine your appropriate level of logging is to ask your application vendor. If you have multiple applications that use PSQL on the same computer, you must use the highest level of logging required by any of the applications.
If you only have one data file, or if none of your applications perform transactions involving multiple data files, you generally do not need to use Transaction Durability or Transaction Logging. Under these circumstances, PSQL guarantees the internal consistency of each data file, with or without logging.
Transaction Logging
Turn on Transaction Logging if at least one of your PSQL applications performs transactions across multiple data files. Without Transaction Logging, PSQL cannot guarantee multifile atomicity of transactions or multifile data integrity.
In the event of a system crash, this level of logging does not guarantee that every completed transaction has been written to the data files.
Transaction Durability
Turn on Transaction Durability if at least one of your PSQL applications requires that completed transactions across multiple data files be absolutely guaranteed to have been written to the data files under almost any circumstances.
In the event of a system crash, this level of logging guarantees that every transaction that has been successfully completed has been written to the data files.
How Logging Works
Note that these features ensure atomicity of transactions, not of operations. If you are using SQL, a transaction is defined as a set of operations that take place between a BEGIN statement or START TRANSACTION statement, and an END or COMMIT statement. If you are using Btrieve, a transaction is defined as a set of operations that take place between a Start Transaction operation and an End Transaction operation.
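For example, a minimal SQL transaction that updates two tables atomically might look like the following sketch; the table and column names are hypothetical, and the point is only that if either UPDATE fails, neither change is applied:

START TRANSACTION;
UPDATE Checking SET Balance = Balance - 100 WHERE AccountID = 1;
UPDATE Savings SET Balance = Balance + 100 WHERE AccountID = 1;
COMMIT;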
All data file inserts and updates are stored in the log buffer. When a transaction is completed (Transaction Durability) or when the buffer gets full or the Initiation Time Limit is reached (Transaction Durability or Transaction Logging), the buffer is flushed to the transaction log file.
In the case of Transaction Logging, when the engine receives the operation ending the transaction and successfully signals the logger thread to flush the log buffer to disk, the engine returns a successful status code to the application that initiated the transaction. In the case of Transaction Durability, the engine does not return the successful status code until the logger thread signals that it has successfully written the buffer to disk.
Transaction log file segments are stored in the location specified by the Transaction Log Directory setting. The log segments are named with an eight-digit hexadecimal prefix and a .LOG extension, from 00000001.LOG through FFFFFFFF.LOG.
Note All operations, regardless of whether they take place within a transaction, are written to the log file when Transaction Logging or Transaction Durability is in effect. However, only operations executed within a transaction are guaranteed to be atomic. In the case where a system crash has occurred and the transaction log is being rolled forward, only completed transactions are committed to the data files. All operations without an associated End Transaction operation are rejected, and are not committed to the data files.
Tip If your database is highly used, consider configuring your system to maintain the transaction logs on a separate physical volume from the volume where the data files are located. Under heavy load, performance is typically better when the writes to the log files and to the data file are split across different drives instead of competing for I/O bandwidth on a single drive. The overall disk I/O is not reduced, but the load is better distributed among the disk controllers.

You can specify the location of the transaction logs using the configuration setting Transaction Log Directory.
If a system failure occurs after the log file has been written but before the committed operations are flushed to the data files in a system transaction, the committed operations are not lost. To flush the committed operations, the affected files must be opened and operations performed on them after the system failure. Only when the files are opened and operations attempted is the data rolled forward into the files that were affected at the time of the system failure. Simply restarting the database engine does not invoke the roll-forward operation, nor does it make the data consistent.
Note Log files associated with the rolled forward files will not be automatically deleted, as they may be associated with more than one data file.
This feature allows individual client transactions to receive a successful status code as soon as possible while at the same time taking advantage of performance gains offered by grouping multiple client transactions together and writing them to the data files sequentially.
If your database server suffers a disk crash of the volume where the data files are stored, and you have to restore the data from an archival log, the engine does not roll forward the transaction log file. The archival log contains all the operations in the transaction log, so there is no need to roll forward the transaction log.
Tip After a system failure, open all data files and perform a stat or read operation on those files. Once you are certain that all data has been restored, old log files may then be stored in a safe location.
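For example, the stat command of the Btrieve Maintenance tool opens a file and reads its statistics, which is sufficient to trigger the roll-forward (the path is illustrative):

butil -stat file_path\PSQL\Demodata\course.mkd

Repeat this for each data file that was open at the time of the failure.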
See Also
For further information, see:
Transaction Durability
Transaction Logging
Transaction Log Directory
Understanding Archival Logging and Continuous Operations
The product offers two mutually exclusive features to support online backups and disaster recovery.
If your situation is like this... | ...use this feature:
You must keep your database applications running while performing backups. | Continuous Operations
You can shut down the database engine to perform backups. | Archival Logging
Archival Logging allows you to keep a log of database operations since your last backup. In case of a system failure, you can restore the data files from backup then roll forward the changes from the log file to return the system to the state it was in prior to the system failure.
Caution Archival logging does not guarantee that all your data files will be in a consistent state after restoring from an archival log. In the interest of speed, the database engine does not wait for a successful status code from the logging function before emptying the log buffer. Thus, in rare circumstances such as a full disk or a write error in the operating system, updates that were successful in the data files may not be recorded in the archival log. In addition, archival logging does not require you to log all of your files, so a transaction that updates more than one file may not be completely recorded in the archival log if you are only archival logging some of those files. As a result, one file may not be consistent with another. If you use transactions and require multifile transaction atomicity, see Transaction Logging and Durability.
Continuous Operations allows you to back up database files while the database engine is running and users are connected. After starting Continuous Operations, the database engine closes the active data files and stores all changes in temporary data files (called delta files). While Continuous Operations are in effect, you perform a backup of the data files. The delta files record any changes made to the data files while the backup is taking place.
When the backup is complete, you turn off Continuous Operations. The database engine then reads the delta file and applies all the changes to the original data files. The temporary delta file may surpass the size of the original data file if users make extensive changes to the file during continuous operation.
A file put into continuous operations locks the data file from deletion through the Relational Engine and the MicroKernel Engine. In addition, the file is locked from any attempts to change the file structure, such as modifying keys and so forth.
Note Archival Logging and Continuous Operations are mutually exclusive features and cannot be used at the same time.
Difference Between Archival Logging and Transaction Logging
Transaction Logging is another feature designed to protect the integrity of your data in the event of a system failure, but it is not directly related to Archival Logging. You can have Transaction Logging in effect at the same time as either Archival Logging or Continuous Operations. Transaction Logging uses a short-term log file to ensure that transactions are safely written to disk. The transaction log is reset frequently as completed client transactions are rolled into the physical data files by way of system transactions. In the event of a system failure, when the database engine starts up again, it reads the transaction log and flushes to the data files the transactions that were completed prior to the system failure.
The archival log is written to at the conclusion of each system transaction, so the archival log and the transaction log should remain properly synchronized unless a system failure occurs exactly during the system transaction.
For more information on Transaction Logging, see Transaction Logging and Durability.
What if a File Restore is Needed
In the event of a system crash that requires restoring data files from backup, Archival Logging allows you to restore from backup and then recover database activity up to the moment of the crash.
If you experience a similar crash without Archival Logging (for example if you use Continuous Operations to perform backups), then you will not be able to recover database activity that took place between the last backup and the system crash.
Table 25 Data Restore Limits After Crash

If Archival Logging is... | ...this much data will be unrecoverable after a crash:
On | Unfinished transactions at the moment of failure.
Off | All database operations that have occurred after the last backup of the data files.
The remainder of this chapter describes the options and procedures associated with Archival Logging and Continuous Operations.
Using Archival Logging
This section explains the procedures you must follow to set up Archival Logging, make backups, and restore data files. It is divided into the following topics:
General Procedures
Setting up Archival Logging
Roll Forward Command
General Procedures
For Archival Logging to work properly, you must follow a clearly defined setup procedure, and another procedure in the event that restoring from backup is needed.
Caution If any steps of the procedures are omitted or compromised, you may not be able to restore your data from backup.
To use Archival Logging properly
1 Turn on Archival Logging, if it is not already in effect. See Setting up Archival Logging for the detailed setup procedure.
2 Shut down the database engine.
3 Back up the data files.
4 After a successful backup, delete all existing archival logs.
Caution Delete the corresponding log files before you resume working with the data files. Synchronizing the backup data files and the corresponding log files is a critical factor of successful recovery.
5 Restart the database engine.
To restore data files from backup and apply changes from the archival logs
Note You cannot use this procedure to roll forward the archival logs if you experienced a hard disk crash and your archival logs and data files were both located on the lost hard disk.
1 When the computer restarts after the system failure, ensure that the database engine is not running, and ensure no other database engine is accessing the data files you wish to restore.
2 Restore the data files from backup.
3 Start the database engine, ensuring that no applications of any kind are connected to the engine.
Caution It is crucial that no database access occurs before the archival logs have been applied to the data files. Make sure no other database engine accesses the files. You must roll forward the archival logs using the same engine that encountered the system failure.
4 Issue the Roll Forward command as described in Roll Forward Command.
5 After the Roll Forward completes successfully, stop the database engine and make a new backup of the data files.
6 After you have successfully backed up the data files, delete the archival log files. You may now restart the database engine and allow applications to access the data files.
Setting up Archival Logging
Setting up Archival Logging requires two steps:
turning on the Archival Logging feature
specifying the files to archive and their respective log files
Note To perform these procedures, you must have full administrative permissions on the machine where the database engine is running or be a member of the Pervasive_Admin group on the machine where the database engine is running.
To turn on Archival Logging
1 Access Control Center from the operating system Start menu or Apps screen.
2 In PSQL Explorer, expand the Engines node in the tree (click the expand icon to the left of the node).
3 Right-click the database engine for which you want to specify archival logging.
4 Click Properties.
5 Click Data Integrity in the tree to display the settings for that category of options.
6 Click Archival Logging Selected Files.
7 Click OK.
A message informs you that the engines must be restarted for the setting to take effect.
8 Click Yes to restart the engine.
To specify files to archive
You specify the files for which you want the MicroKernel to perform Archival Logging by adding entries to an archival log configuration file you create on the volume that contains the files. To set up the configuration file, follow these steps:
1 Create the directory \BLOG in a real root directory of the physical drive that contains data files you want to log. (That is, do not use a mapped root directory.) If your files are on multiple volumes, create a \BLOG directory on each volume.
For example, if you have data files located on C:\ and D:\, and both drives are physical drives located on the same computer as the database engine, then you would create two BLOG directories, as follows:
C:\BLOG\
D:\BLOG\
Note On Linux, macOS, and Raspbian, the log directory must be named blog and must be created in the directory specified by the PVSW_ROOT environment variable (by default, /usr/local/psql).
2 In each \BLOG directory, create an empty BLOG.CFG file. You can use any text editor, such as Notepad, to create the BLOG.CFG file. On Linux, macOS, and Raspbian, the file must be named blog.cfg (lowercase).
3 In each BLOG.CFG file, create entries for the data files on that drive for which you want to perform Archival Logging. Use the following format to create each entry:
\path1\dataFile1[=\path2\logFile1]
path1
The path to the data file to be logged. The path cannot include a drive letter.
dataFile1
The name of the data file to be logged.
path2
The path to the log file. Because the log file and the data file can be on different drives, the path can include a drive letter.
logFile1
The name of the log file. If you do not specify a name, the default is a log file in the same directory and with the same file name as the data file, but with the extension replaced by .log. You may specify a different physical drive, so that the log file and the data file are not on the same drive. Each data file being logged requires a different log file.
A single entry cannot contain spaces and must fit completely on one line. Each line can contain up to 256 characters. If you have room, you can place multiple entries on the same line. Entries must be separated by white space.
Caution You must use a different log file for every data file that you wish to log. If you use the same log file for more than one data file, the MicroKernel cannot use that log file in the event that a roll-forward is needed.
If you do not provide a name for a log file, the MicroKernel assigns the original file name plus a .log extension to the log file when you first open it. For example, for the file b.btr, the MicroKernel assigns the name b.log to the log file.
Caution You are not required to log every file in your database. However, if your database has referential integrity (RI) rules defined, you must log all or none of the files involved in each RI relationship. If you log only a subset of the files involved in a given RI relationship, rolling the archival logs forward after a system crash may result in violations of your RI rules.
Examples
The following examples show three sample entries in the blog.cfg file on drive C. All three entries produce the same result: Activity in the file C:\data\b.bti is logged to the file C:\data\b.log.
\data\b.bti
\data\b.bti=\data\b.log
\data\b.bti=c:\data\b.log
The next example directs the engine to log activity in the file C:\data\b.bti to the log file D:\data\b.lgf. This example shows that archival log files do not have to reside on the same drive as the data file and do not require the .log extension. However, the .log extension is provided by default if no other extension is used.
\data\b.bti=d:\data\b.lgf
Tip Writing the log to a different physical drive on the same computer is recommended. If you experience a hard disk crash, having the log files on a different physical disk protects you from losing your log files and your data files at the same time.
The next example shows a blog.cfg file that makes the MicroKernel log multiple data files to a different drive (drive D), assuming this blog.cfg file is on drive C.
\data\file1.mkd=d:\backup\
\data\file2.mkd=d:\backup\file2.log
\data\file3.mkd=d:\backup\file3.log
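If you prefer to set up the directory and configuration file from a Windows command prompt, the following sketch creates the \BLOG directory on drive C and writes one of the example entries above into BLOG.CFG (adjust the drive letter and entry to match your files):

mkdir C:\BLOG
echo \data\b.bti=c:\data\b.log> C:\BLOG\BLOG.CFG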
Roll Forward Command
The Btrieve Maintenance tool (GUI or butil command line) provides a command allowing you to roll forward archival log files into the data files. See Performing Archival Logging.
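For reference, rolling a single file forward from the command line looks like the following sketch (the path is illustrative; see Performing Archival Logging for the complete syntax and options):

butil -rollfwd file_path\PSQL\Demodata\course.mkd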
Using Continuous Operations
Continuous operations provides the ability to back up data files while database applications are running and users are connected. However, in the event of a hard drive failure, if you use continuous operations to make backups, you will lose all changes to your data since the last backup. You cannot use Archival Logging and the Maintenance tool Roll Forward command to restore changes to your data files that occurred after the last backup.
PSQL provides backup capabilities in the butil command for continuous operations.
Note A file put into continuous operations locks the data file from deletion through the Relational Engine and the MicroKernel Engine. In addition, the file is locked from any attempts to change the file structure, such as modifying keys and so forth. Note also that PSQL provides its product Backup Agent to set and manage continuous operations. See Backup Agent Guide for details.
This section is divided into the following topics:
Starting and Ending Continuous Operations
Backing Up a Database with Butil
Restoring Data Files when Using Continuous Operations
Starting and Ending Continuous Operations
This topic covers details on the commands Startbu and Endbu.
Table 26 Commands to Start and Stop Continuous Operation

Command | Description
startbu | Starts continuous operation on files defined for backup (butil).
endbu | Ends continuous operation on data files defined for backup (butil).
Caution The temporary delta files created by continuous operation mode have the same name as the corresponding data files but use the extension “.^^^” instead. No two files can share the same file name and differ only in their file name extension if both files are in the same directory. For example, do not use a naming scheme such as invoice.hdr and invoice.det for your data files. If you do, the MicroKernel returns a status code and no files are put into continuous operations.
Continuous operation mode does not significantly affect MicroKernel performance. However, using a server to back up files can affect performance.
To protect against data loss using continuous operation
1 Use the startbu command to put your files in continuous operation. See Startbu for an explanation of the command syntax with butil.
2 Back up your data files.
3 Use the endbu command to take your files out of continuous operation. See Endbu for an explanation of the command syntax with butil.
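These three steps can be combined into a simple Windows batch sketch. Here, startlst.fil is assumed to list the files to protect, and backup_tool is a hypothetical placeholder for your actual backup command:

rem Put the files listed in startlst.fil into continuous operation
butil -startbu @startlst.fil
if errorlevel 1 goto end
rem Back up the data files while they remain available to applications
backup_tool file_path\PSQL\Demodata\*.mkd
rem Take the files out of continuous operation and roll in the delta files
butil -endbu @startlst.fil
:end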
Backing Up a Database with Butil
This topic provides detailed information on backing up a database using the butil commands Startbu and Endbu.
Startbu
The startbu command places a file or set of files into continuous operation for backup purposes.
Format
BUTIL -STARTBU <sourceFile | @listFile> [/UID<name> </PWD<word>> [/DB<name>]]
sourceFile
The fully qualified name of the data file, including the drive specification for Windows platforms, on which to begin continuous operation for backup.
The file must reside on the same machine as the one from which you are running butil. You cannot use mapped drives with startbu.
listFile
The name of a text file containing the fully qualified names of files on which to begin continuous operation. Separate these file names with a carriage return/line feed. The file names may contain blank characters.

If the Maintenance tool cannot put all of the specified files into continuous operation, then it does not put any of the files into continuous operation.
/UIDuname
The name of the user authorized to access a database with security enabled.
/PWDpword
The password for the user identified by uname. pword must be supplied if uname is used.
/DBdbname
The name of the database on which security is enabled. If omitted, the default database is assumed.
Note The startbu command begins continuous operation only on the files you specify. You cannot use wildcard characters with the startbu command.

On Linux, macOS, and Raspbian, all slash (/) parameters use the hyphen (-) instead of the slash. For example, the /DB parameter is -DB.
File Considerations
When selecting files for backup, we recommend that the temporary delta files created by Continuous Operations mode be excluded since they are open and in use during backup. If the delta files are included in the backup, they should be deleted before the database engine is started after the restore.
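For example, if part of your backup copies the data directory with robocopy, the /XF switch can exclude the delta files; the paths here are illustrative:

robocopy C:\data D:\backup /XF "*.^^^"

The quotes keep the caret characters in the wildcard from being interpreted by the command shell.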
Examples for Windows Server
Example A The first example starts continuous operation on the course.mkd file.
For Windows Server:
butil -startbu file_path\PSQL\Demodata\course.mkd
For default locations of PSQL files, see Where are the PSQL files installed? in Getting Started With PSQL.
Example B The following example starts continuous operation on all files listed in the startlst.fil file.
butil -startbu @startlst.fil
The startlst.fil file might consist of the following entries:
file_path\PSQL\Demodata\course.mkd
file_path\PSQL\Demodata\tuition.mkd
file_path\PSQL\Demodata\dept.mkd
Endbu
The endbu command ends continuous operation on a data file or set of data files previously defined for backup. Issue this command after using the startbu command to begin continuous operation and after performing your backup.
Format
BUTIL -ENDBU </A | sourceFile | @listFile> [/UID<name> </PWD<word>> [/DB<name>]]
/A
If you specify /A, the tool stops continuous operation on all data files initialized by startbu and currently running in continuous operation mode.
sourceFile
The fully qualified name of the data file (including the drive specification for Windows platforms) for which to end continuous operation.
The file must reside on the same machine as the one from which you are running butil. You cannot use mapped drives with the endbu command.
@listFile
The name of a text file containing a list of data files for which to end continuous operation. The text file must contain the fully qualified file name for each data file. Separate these file names with a carriage return/line feed. The file names may contain blank characters.

Typically, this list of data files is the same as the list used with the Startbu command.
/UIDuname
Specifies the name of the user authorized to access a database with security enabled.
/PWDpword
Specifies the password for the user identified by uname. pword must be supplied if uname is specified.
/DBdbname
Specifies the name of the database on which security is enabled. If omitted, the default database is assumed.
Note On Linux, macOS, and Raspbian, all slash (/) parameters use the hyphen (-) instead of the slash. For example, the /A parameter is -A, as in butil -endbu -A.
Example for Windows Server
The following example ends continuous operation on the course.mkd file.
butil -endbu file_path\PSQL\Demodata\course.mkd
However, you can also simply enter butil -endbu course.mkd instead of the full path if your current directory is the one that contains course.mkd.
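If you started continuous operation with a list file, you can end it with the same list:

butil -endbu @startlst.fil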
Restoring Data Files when Using Continuous Operations
If you are using Continuous Operations for your backup strategy, then you have no recovery log that can be used to recover changes since your last backup. All database changes since your last backup are lost, with the possible exception of any transactions stored in the transaction log. Any such transactions are automatically rolled forward by the database engine when it starts up.
To restore data and normal database operations
1 Resolve the failure.
Perform the maintenance required to make the failed computer operational again.
2 Restore the data files from backup, or restore the hard drive image from backup, as appropriate.
3 Reinstall PSQL if it was not restored as part of a disk image.
Caution If the delta files were included in the backup, they should be deleted before the database engine is started in the next step.
4 Restart the database engine.
Any database operations performed since the last backup must be performed over again.
Data Backup with Backup Agent and VSS Writer
In addition to the topics previously discussed in this chapter, both PSQL Server and PSQL Vx Server also provide the following solutions for data backup:
Backup Agent
PSQL VSS Writer
If your backup software is not aware of the Microsoft Volume Shadow Copy Service (VSS), you can use Backup Agent with your backup software. If your backup software is VSS aware, PSQL VSS Writer is automatically invoked during VSS backups. You do not need to use Backup Agent if your backup software is already VSS aware.
Backup Agent and PSQL VSS Writer can be used together, but there is no advantage in doing so. Your backup process will be more streamlined if you select one method or the other.
Backup Agent
Backup Agent is an optional product. By default, it is not installed. You must install it after you install PSQL Server.
Backup Agent provides a quick and simple method for you to set and manage Continuous Operations on your PSQL database files. Setting and managing Continuous Operations is a critical piece when backing up your PSQL databases without using Microsoft Volume Shadow Copy Service. Backup Agent handles setting and managing Continuous Operations on your open files so that your data is still available from your application during your backup. Once the backup procedure is complete, stopping Backup Agent automatically takes the files out of Continuous Operations and rolls in all the changes captured during the backup.
Backup Agent is compatible with many popular backup applications on the market. Note that the backup application must be able to issue commands to start and stop other applications, so that it can start and stop Backup Agent.
For details on Backup Agent, see Backup Agent Guide, which is available on the PSQL website.
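For illustration only, if your Backup Agent version provides the pvbackup command line utility described in Backup Agent Guide (confirm the utility name and its options for your release, as this is an assumption here), the pre- and post-backup commands issued by the backup application might look like this:

rem Put files into continuous operation via Backup Agent before the backup
pvbackup -on
rem ... the backup application runs its backup job here ...
rem Take files out of continuous operation and roll in captured changes
pvbackup -off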
PSQL VSS Writer
The Microsoft Volume Shadow Copy Service (VSS) consists of Writer, Provider, and Requestor components. PSQL supports VSS with only a Writer component, PSQL VSS Writer.
PSQL VSS Writer is a feature of the database engine and is enabled for PSQL Server. PSQL VSS Writer is available for use after that product is installed. PSQL VSS Writer is currently not available for use with PSQL Workgroup.
PSQL VSS Writer is available only on Windows operating systems. For more information on Volume Shadow Copy Service, see the Microsoft document, A Guide for SQL Server Backup Application Vendors.
Overview
During VSS snapshots, PSQL VSS Writer quiesces all disk I/O write activity to all PSQL data and transaction log files, regardless of the volume on which they reside. After the snapshot is taken, PSQL VSS Writer allows all disk I/O to resume, including any writes deferred during the quiesced period.
PSQL VSS Writer never quiesces disk I/O read activity, so normal database processing can continue during the quiesced period as long as no writes are required. PSQL VSS Writer operates normally during the backup phase, although performance will likely be reduced by the backup activity of the VSS service and VSS Requestor.
The Microsoft Volume Shadow Copy facility allows Backup and Restore products to create a shadow copy for backup in which the files are in one of the following states:
1 A well-defined and consistent state
2 A crash-consistent state (possibly not suitable for a clean restore).
Files in the VSS snapshot will be in the well-defined and consistent state if all of the following are true:
1 The file’s writer is VSS-aware.
2 The Backup and Restore product recognizes and notifies the VSS-aware writer to prepare for a snapshot.
3 The VSS-aware writer successfully prepares for the snapshot.
Otherwise the writer’s files are backed up in the crash-consistent state.
VSS Writer Details
The following items discuss specifics about PSQL VSS Writer.
Supported Operating Systems
The same Windows operating systems that support the PSQL server products also support PSQL VSS Writer. PSQL VSS Writer is functional on the same bitness as the machine’s operating system and the installed PSQL server product. PSQL VSS Writer 32-bit is supported only on 32-bit machines, and 64-bit is supported only on 64-bit machines. If the bitness does not match, PSQL functions properly, but VSS Writer is unavailable.
Supported Backup Types
PSQL VSS Writer supports manual or automatic backups of data volumes. PSQL VSS Writer is supported on Full and Copy Volume backups. Incremental, Differential, and Log backups are not supported. VSS recognizes PSQL VSS Writer as a component. However, PSQL VSS Writer does not support component backups. If the VSS Requestor does call PSQL VSS Writer in a component backup, the VSS Writer performs the same actions as in a Full or Copy Volume backup.
Virtualized Environment Support
PSQL VSS Writer supports VSS Requesters that trigger VSS backups in virtualized environments. Performing a VM snapshot does not invoke a VSS backup.
Multiple Volume PSQL Data Files
PSQL files and transaction logs can reside on multiple volumes. When backing up PSQL files, remember to back up the transaction logs and related files on other volumes at the same time. Files that are independent of one another may not need to be backed up at the same time as related PSQL files.
Backup Solution Compatibility
To determine whether a particular backup product recognizes PSQL VSS Writer and will notify the Writer to prepare for a snapshot, start a backup with the product. Once the backup is in progress, consult pvsw.log to determine whether PSQL VSS Writer logged the Frozen and Thawed states. If the backup and restore product did not notify PSQL VSS Writer to prepare for the backup, another solution must be used. For example, you could use Backup Agent to back up PSQL data files in the well-defined and consistent state.
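For example, on Windows you might search the log for those state entries; here, file_path is a placeholder for the directory containing pvsw.log on your system:

findstr /i "frozen thawed" file_path\pvsw.log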
PSQL VSS Writer and Restore Operations
Stop the PSQL services prior to performing a Restore operation with the Backup software. Failure to do so causes the VSS Writer to inform the VSS Requestor that it cannot participate in the Restore. Transaction logs will need to be restored along with the data files to guarantee the integrity of the data. If PSQL data and transaction log files are restored while PSQL is running, the results are unpredictable and could lead to data corruption.
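As a sketch, the services can be stopped from an elevated command prompt before the restore begins. The service names below are placeholders; confirm the actual names for your release in services.msc or with sc query:

rem Service names are placeholders -- confirm the actual PSQL service names first
net stop "PSQL Transactional Engine"
net stop "PSQL Relational Engine"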
PSQL VSS Writer and PSQL Continuous Operations
You may have an existing backup process that already uses PSQL Continuous Operations or Backup Agent. If you choose, you can continue to use that process with PSQL and PSQL VSS Writer. PSQL VSS Writer does not interfere with Continuous Operations or Backup Agent. However, there is no advantage to using both PSQL VSS Writer and Continuous Operations (or Backup Agent) together. Your backup process will be more streamlined if you select one method or the other.
When PSQL VSS Writer is called and files are in Continuous Operations, be aware that VSS Writer operates independently from any Continuous Operations. If files are in Continuous Operations when a VSS backup is in progress, view PVSW.LOG after the backup completes. Ensure that the Frozen and Thawed states completed successfully and that the data is in a well-defined and consistent state.
Also note that PSQL VSS Writer requires the Microsoft VSS framework. Backup Agent does not use the Microsoft VSS framework. Consequently, Backup Agent does not participate in the backup when the VSS framework calls PSQL VSS Writer and I/O operations are quiesced. Backup Agent must be added separately to the backup process. The backup process must also start and stop Backup Agent.