Creating Checkpoint Locations
Checkpoint locations are created using the 'with CLOUD_STORAGE=(<api>,<bucket>,<config>)' clause of the 'create location' SQL statement. The CLOUD_STORAGE values are:
api
The API for the cloud provider. Valid values:
s3 - Amazon S3 or S3-compatible providers such as MinIO (the s must be lowercase)
abs - Azure Blob Storage
bucket
The name of the bucket you are using in your cloud provider.
config
Any additional parameters needed for the cloud connection.
The area and usage clauses are also required in this statement; see Setting up a Cloud Checkpoint Location for the complete procedure.
The parameters in the <config> string must be separated by semicolons. The available parameters are listed below:
S3 parameters:
endpoint - The location where the Cloud storage is accessed, written in URL format
provider - s3 provider name
region - Region of the bucket being accessed (if necessary)
Examples:
CLOUD_STORAGE=
('s3', 'backup-bucket', 'provider=AWS;endpoint=http://s3.us-east-1.amazonaws.com;region=us-east-1')
CLOUD_STORAGE=('s3', 'backup-bucket', 'provider=Minio;endpoint=http://11.111.11.11:1111/')
CLOUD_STORAGE=('s3', 'backup-bucket', 'provider=GCS;endpoint=https://storage.googleapis.com;region=us-east1')
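The semicolon-separated <config> format shown above can be illustrated with a short sketch. The helper function below is hypothetical (not part of Ingres); it only demonstrates how the documented key=value pairs are laid out in the string:

```python
# Sketch: split a CLOUD_STORAGE <config> string into key/value pairs.
# The parameter names (provider, endpoint, region) come from the list above;
# the function itself is illustrative, not part of Ingres.
def parse_cloud_config(config: str) -> dict:
    """Split a semicolon-separated <config> string into a dict."""
    pairs = {}
    for item in config.split(";"):
        item = item.strip()
        if not item:
            continue  # tolerate a trailing semicolon
        key, _, value = item.partition("=")  # split at the first '=' only
        pairs[key] = value
    return pairs

cfg = parse_cloud_config(
    "provider=AWS;endpoint=http://s3.us-east-1.amazonaws.com;region=us-east-1"
)
print(cfg["provider"])  # AWS
```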
Azure Blob Storage parameters:
account - Azure Blob Storage account name
Example:
CLOUD_STORAGE=('abs','backup-bucket','account=azureaccountname')
bucket_namespace:
Acts as a suffix to your bucket name, which lets you separate items in the cloud provider. If bucket_namespace is set to "area1" for the location, all the objects for this location would be stored (assuming s3) in: s3ckp:<bucket name>/area1
You can add this as a parameter to both S3 and ABS <config> strings.
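The effect of bucket_namespace on the object path can be sketched as follows (assuming the s3 API, where the rclone remote is named s3ckp; the helper is illustrative, not part of Ingres):

```python
# Sketch: how bucket_namespace acts as a suffix on the bucket name.
# Assumes the s3 API, whose rclone remote is named s3ckp (per the text above).
def object_prefix(bucket: str, bucket_namespace: str = "") -> str:
    base = f"s3ckp:{bucket}"
    return f"{base}/{bucket_namespace}" if bucket_namespace else base

print(object_prefix("backup-bucket", "area1"))  # s3ckp:backup-bucket/area1
print(object_prefix("backup-bucket"))           # s3ckp:backup-bucket
```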
To read or write data into the cloud provider's bucket, rclone must have the storage credentials for the bucket. The "access_key" and "secret_access_key" authorization strings are used to access the bucket.
To avoid exposing these strings in process parameters or database utilities, they are set using the following environment variables:
S3:
RCLONE_CONFIG_S3CKP_ACCESS_KEY_ID
RCLONE_CONFIG_S3CKP_SECRET_ACCESS_KEY
Azure Blob Storage:
RCLONE_AZUREBLOB_KEY
Note:  The environment variables must be set in the local environment when you start Ingres, so that running Ingres processes have the values stored in memory.
Checkpoint and rollforwarddb run rclone to copy the data.
Note:  The environment variables must be set before running the checkpoint and rollforwarddb.
When rollforwarddb or ckpdb is run against a database, a new file, rclone.<database>.conf, is created in the II_SYSTEM/ingres/files directory. This file contains the CLOUD_STORAGE parameters for the database's cloud checkpoint location (except the bucket).
This rclone configuration file is used for all rclone operations for the database. You can use it for debugging purposes by passing it to rclone with the --config=<file> option.
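The shape of such a configuration file follows rclone's INI-style format. The sketch below assembles text of that kind from the documented parameters; the exact contents Ingres writes to rclone.<database>.conf may differ, so treat this only as an illustration:

```python
# Sketch: assemble rclone configuration text of the kind found in
# rclone.<database>.conf. Mirrors only the parameters documented above;
# not the actual Ingres implementation.
def rclone_config_text(remote: str, remote_type: str, params: dict) -> str:
    lines = [f"[{remote}]", f"type = {remote_type}"]
    lines += [f"{key} = {value}" for key, value in params.items()]
    return "\n".join(lines)

text = rclone_config_text(
    "s3ckp", "s3",
    {"provider": "AWS",
     "endpoint": "http://s3.us-east-1.amazonaws.com",
     "region": "us-east-1"},
)
print(text)
```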
Note:  The recovery process from cloud backups relies on streaming data through rclone. On GCP and AWS this is a multi-threaded operation, but on Azure this is a single-threaded operation. This makes a recovery from cloud backups on Azure more time consuming than on GCP and AWS. Rclone support is aware of this performance limitation.
Setting up a Cloud Checkpoint Location
Follow these steps for setting up a cloud checkpoint location:
1. Install rclone.
Unix:
rclone must exist in the path used by the Ingres server, and in the path used for running checkpoints or rollforwards.
Windows:
On Windows, rclone.exe must be in %II_SYSTEM%\ingres\bin.
Note:  Cloud checkpoints use rclone to copy files to the cloud location. It is recommended that you first run rclone with the intended configuration to make sure it runs without errors. Also, the version of rclone on the machine might need to be updated, because checkpoints require rclone to support the --s3-no-check-bucket flag.
2. Set the storage credential variables for your provider's bucket:
S3:
RCLONE_CONFIG_S3CKP_ACCESS_KEY_ID
RCLONE_CONFIG_S3CKP_SECRET_ACCESS_KEY

Azure Blob Storage:
RCLONE_AZUREBLOB_KEY
For example:
When using MinIO as an s3 server with the default passwords:
Unix:
export RCLONE_CONFIG_S3CKP_ACCESS_KEY_ID=minioadmin
export RCLONE_CONFIG_S3CKP_SECRET_ACCESS_KEY=minioadmin
 
Windows:
set RCLONE_CONFIG_S3CKP_ACCESS_KEY_ID=minioadmin
set RCLONE_CONFIG_S3CKP_SECRET_ACCESS_KEY=minioadmin
3. Set the RCLONE environment variables for your Windows service:
a. In the Command Prompt, type regedit and press Enter to open the Windows registry.
b. Open: HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services
c. Open the key for your Ingres installation. (The key is named Ingres_Database_XX, where XX is the installation code of your Ingres installation.)
d. In the right panel, right-click and select New, Multi-String Value. Name the value Environment, then double-click it.
For Value Data, enter the following values, using the appropriate access key and secret access key for your bucket:
RCLONE_CONFIG_S3CKP_ACCESS_KEY_ID=<value>
RCLONE_CONFIG_S3CKP_SECRET_ACCESS_KEY=<value>
4. Confirm that the following rclone settings allow access to the cloud bucket:
a. Create an rclone configuration file with appropriate rclone parameters.
Example configuration sets (note that endpoints and regions must be set appropriately):
MinIO:
[s3ckp]
type = s3
provider = Minio
endpoint = http://127.0.0.1:9999/
Google Cloud Services:
[s3ckp]
type = s3
provider = GCS
endpoint = https://storage.googleapis.com
region = us-east1
Amazon Web Services:
[s3ckp]
type = s3
provider = AWS
endpoint = http://s3.us-east-1.amazonaws.com
region = us-east-1
 
Azure Blob Storage:
[absckp]
type = azureblob
account = azureaccountname
b. Test rclone:
Assume the test configuration is named 'test.conf', and check that every parameter in 'test.conf' (except 'type=') appears in the <config> string of your cloud location. Run the following commands:
S3:
rclone ls s3ckp:<bucket> --config=test.conf
Azure Blob Storage:
rclone ls absckp:<bucket> --config=test.conf
where <bucket> is the name of the storage bucket in your cloud provider.
Example:
rclone ls s3ckp:backupbucket --config=test.conf
Verify your access keys for any authorization errors. For any other status errors, verify the status of your bucket with your cloud provider.
5. Start Ingres:
IMPORTANT!  To avoid any authorization errors in the Ingres processes while running rclone, the storage credential environment variables must be set correctly.
6. Create a Cloud checkpoint location by running the following query in iidbdb:
create location <location name> with area = 'II_CHECKPOINT', usage=(CHECKPOINT), CLOUD_STORAGE=(<api>, <bucket>, <config>)
Note:  The values for <api>, <bucket>, and <config> should be set to the appropriate values for your cloud provider.
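The statement in step 6 can be composed programmatically from the three CLOUD_STORAGE values. The sketch below is illustrative (the location name 'newckp' and bucket 'backup-bucket' are example values, not requirements):

```python
# Sketch: compose the create location statement from step 6.
# The location and bucket names are hypothetical example values.
def create_location_sql(name: str, api: str, bucket: str, config: str) -> str:
    return (
        f"create location {name} with area = 'II_CHECKPOINT', "
        f"usage=(CHECKPOINT), "
        f"CLOUD_STORAGE=('{api}', '{bucket}', '{config}')"
    )

print(create_location_sql(
    "newckp", "s3", "backup-bucket",
    "provider=AWS;endpoint=http://s3.us-east-1.amazonaws.com;region=us-east-1",
))
```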
7. Assign the checkpoint location to an existing database using relocatedb, or create a database that uses this checkpoint location:
createdb -cnewckp testdb
or, to enable journaling:
createdb -cnewckp testdb -no_x100
8. Checkpoint the database.
Note:  The RCLONE storage credential variables must be set in the environment.
When you checkpoint the database, the checkpoint uses the rclone rcat command to store a compressed tar file of a database location. Settings in the II_SYSTEM/ingres/files/cktmpl.def entries cause rclone to use a chunk size of 220 MB. Therefore, rclone can copy a compressed tar file of up to 2.1 TB. You can lower the chunk size, which reduces the amount of memory rclone uses, but it might cause rclone to fail if the tar file is too big. If you need to checkpoint a tar file larger than 2.1 TB, you can increase the chunk size, but this increases the amount of memory used.
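The 2.1 TB ceiling follows from multiplying the chunk size by S3's multipart-upload limit of 10,000 parts per object (the 10,000-part limit is a property of the S3 API, assumed here rather than stated above). A quick check of the arithmetic:

```python
# Sketch: why a 220 MB chunk size caps the tar file near 2.1 TB.
# Assumes S3's multipart-upload limit of 10,000 parts per object.
MAX_PARTS = 10_000   # S3 multipart-upload part limit
chunk_mib = 220      # chunk size from cktmpl.def, per the text above

max_tib = chunk_mib * MAX_PARTS / 1024**2  # MiB -> TiB
print(round(max_tib, 1))  # 2.1
```

Doubling the chunk size to 440 MB would roughly double this ceiling, at the cost of rclone buffering larger chunks in memory.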
Last modified date: 08/29/2024