High Availability Support
Using Zen in High Availability Environments
This chapter includes the following sections:
Overview of Technologies
Failover Clustering
Migration
Fault Tolerance
Disaster Recovery
Overview of Technologies
Zen is compatible with numerous solutions that maximize uptime in physical and virtual environments. Such solutions continually evolve but can be classified generally as high availability, fault tolerance, and disaster recovery.
High Availability
The definition of high availability can differ depending on the software vendor that provides high availability solutions. In general, it refers to a systems design approach that ensures a predictable baseline level of uptime despite hardware failure, software failure, or required maintenance.
A common approach to ensure high availability in a physical environment is failover clustering. A common approach in a virtual machine (VM) environment is migration.
Failover Clustering
Zen is designed to function as a resource in a failover cluster environment in which only one server node at a time accesses the shared storage subsystem. If the primary node fails, a failover (or switch) to a secondary node occurs. Failover clustering allows a system to remain available while you perform software upgrades or hardware maintenance.
Zen is compatible with Microsoft Failover Cluster Services and with Linux Heartbeat. Refer to the documentation from those vendors for the specific manner in which they define and implement failover clustering. Zen Enterprise Server and Cloud Server are the recommended editions for failover clustering.
See Failover Clustering.
Migration
In general terms, migration allows a running VM or application to be moved between different physical machines without disconnecting the client or application. The memory, storage, and network connectivity of the VM are typically migrated to the destination.
Zen is compatible with the migration capability offered by Microsoft Hyper-V, VMware vSphere, and Citrix XenServer. As long as host names remain the same after the VMs are moved, Zen continues to operate normally. See the documentation from those vendors for the specific manner in which they define and implement migration.
See Migration.
Fault Tolerance
While high availability aims for a predictable baseline level of uptime, fault tolerance is the uninterrupted operation of a system even after the failure of a component. Fault tolerance requires synchronized shared storage. In virtualized environments, the VM that fails must be on a different physical host from the VM that replaces it.
Fault tolerance can be achieved using only physical machines. However, virtual environments lend themselves so readily to maintaining virtual servers in lockstep with each other that exclusively physical environments are increasingly rare. Zen servers remain compatible with fault tolerance capabilities in an exclusively physical environment.
For virtual environments, Zen is compatible with the fault tolerance capability offered by VMware vSphere and Citrix XenServer. Refer to the documentation from those vendors for the specific manner in which they define and implement fault tolerance.
See Fault Tolerance.
Disaster Recovery
Disaster recovery involves duplicating computer operations after a catastrophe occurs and typically includes routine off-site data backup as well as a procedure for activating vital information systems in a new location.
Zen is compatible with major hypervisors that support disaster recovery technology that initializes backup physical or virtual machines. As long as all host names remain the same after the VMs are moved, Zen continues to operate normally. This enables rapid server replacement and reduces recovery time.
Refer to the documentation from the hypervisor vendors for the specific manner in which they define and implement disaster recovery.
See Disaster Recovery.
Hardware Requirements
For all of the technologies mentioned in this section, we recommend that you select servers, disk subsystems, and network components from the hardware compatibility list provided by the vendor. We follow this same practice when testing for compatibility with vendor products.
Failover Clustering
Failover clustering provides a group of independent nodes, each running a server, all of which have access to a shared system of data files or volumes. The failover services ensure that only one server at a time controls the data files or storage volumes. If the currently active server fails, control passes automatically to the next node in the cluster, and the server on that node takes over.
In Zen, only one database engine can service data. In a cluster, because a Zen engine is installed on every node, you must configure these engines so that they appear as a single engine, and not the multiple engines that they actually are.
Each Zen engine must be licensed separately for each cluster node, whether the node is a physical or a virtual machine. Each node needs an Enterprise Server - Active-Passive license.
This section covers the following topics:
Microsoft Failover Cluster for Windows Server
Linux Heartbeat
Managing Zen in a Cluster Environment
Microsoft Failover Cluster for Windows Server
This topic covers adding a Zen engine service to a Microsoft failover cluster and assumes the following:
You know how to install and configure a failover cluster and need only the information required to add and manage the Zen engine service.
You are familiar with using Zen and its utilities such as Zen Control Center (ZenCC).
You can set up DSNs using Zen ODBC Administrator.
General Steps
Here are the steps for creating a Windows failover cluster with Zen engines.
1 Set up and configure the nodes of the failover cluster.
You must set up the Windows servers in the cluster and confirm that failover succeeds before you add a Zen engine. For example, verify that a failover completes and that all resources remain available, then fail back to the original node and verify again. See the Microsoft documentation for setting up and verifying failover clustering using the Server Manager dashboard and Failover Cluster Manager.
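If you prefer the command line, the same failover exercise can be driven from an elevated PowerShell prompt on a cluster node. The group and node names below are illustrative, not taken from this guide:

```powershell
# Move the clustered role to the second node, check its state, then fail
# back. Replace 'ZenGroup', 'Node1', and 'Node2' with your own names.
Move-ClusterGroup -Name "ZenGroup" -Node "Node2"
Get-ClusterGroup -Name "ZenGroup"     # confirm the group is Online on Node2
Move-ClusterGroup -Name "ZenGroup" -Node "Node1"
```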
2 Set up a shared file server on a separate, dedicated system.
After you install the Zen engines, you will set them to use this common data location.
3 Install a Zen engine on each cluster node.
See the steps in the following table. Install all of the Zen engines before moving to the next step.
4 Return to Failover Cluster Manager and configure the Zen installations in the cluster to appear as a single database server.
Continue with the steps in the following table.
Installing and Configuring Zen in a Windows Failover Cluster
The following table gives the recommended steps to add Zen to a failover cluster in Windows Server.
Action
Notes
Install Zen on cluster nodes.
Install a Zen server on each node and choose identical settings for each installation.
Do not install Zen on the dedicated shared storage system where the Zen data files reside.
By default the Zen engine service starts automatically with Windows. After installing on each node, in the Actian Zen Enterprise Server service properties, change the startup type to manual and stop the service. Failover Cluster Manager will start it when needed.
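The startup-type change can also be scripted from an elevated PowerShell prompt on each node. The service name below is an assumption; confirm the exact name in the Services console before running these commands:

```powershell
# Set the Zen engine service to manual startup and stop it so that
# Failover Cluster Manager controls it. The service name is an assumption.
Set-Service -Name "Actian Zen Enterprise Server" -StartupType Manual
Stop-Service -Name "Actian Zen Enterprise Server"
```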
Add a role and select the Zen server as a generic service.
Select the storage to be used for the application data files.
Add a resource and select the Zen server as a generic service.
Do these steps in Failover Cluster Manager.
Registry replication is not supported when running Zen engine as a service, so skip that option. You must manually configure the database properties for each Zen engine on each node, as explained later in this table.
Set the file share as a dependency for the Zen engine service. Select the option Use network name for computer name.
Confirm that shared storage has the necessary files and directories.
The Zen engine service typically runs under the LocalSystem account. Confirm that the LocalSystem account has permissions to read and write to the shared location.
On the active node where you installed Zen, copy the dbnames.cfg file from the ProgramData directory to your chosen directory in shared storage.
On the active node, copy the following directories from ProgramData to the same directory in shared storage. Optionally, you can copy them to the same directory as dbnames.cfg.
defaultdb
Demodata
tempdb
Transaction Logs
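The copy steps above can be scripted in PowerShell on the active node. Both paths are assumptions; substitute the actual Zen ProgramData directory and your shared-storage location:

```powershell
# Copy dbnames.cfg and the data directories to shared storage.
# $src and $dst are illustrative paths, not documented defaults.
$src = "$env:ProgramData\Actian\Zen"
$dst = "Z:\ZenData"
Copy-Item "$src\dbnames.cfg" -Destination $dst
foreach ($dir in "defaultdb", "Demodata", "tempdb", "Transaction Logs") {
    Copy-Item "$src\$dir" -Destination $dst -Recurse
}
```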
Configure database engine properties with ZenCC.
In ZenCC, configure the engine on the current active node in your cluster, then apply the same settings on each of the other nodes.
Set these engine properties for Directories. When prompted to restart the engine, click No.
For Transaction Log Directory, enter the location on the shared disk where you copied the Transaction Logs directory.
For DBNames Configuration Location, enter the location on the shared disk where you copied the dbnames.cfg file.
In Failover Cluster Manager, take Zen resources offline and back online to apply changes.
In ZenCC, under the Databases node for your server, set these properties:
In the DEFAULTDB properties for Directories, set Dictionary Location and Data Directories to the location on the shared disk where you copied the defaultdb directory.
In the DEMODATA properties for Directories, set Dictionary Location and Data Directories to the location on the shared disk where you copied the Demodata directory.
In the TEMPDB properties for Directories, set Dictionary Location and Data Directories to the location on the shared disk where you copied the tempdb directory.
The failover cluster is now configured. Remember that when failover occurs, client connections may also fail, requiring client applications to be restarted.
Note To apply a patch to Zen servers in a cluster environment, see this knowledge base article.
Linux Heartbeat
The Heartbeat program is one of the core components of the Linux-HA (High-Availability Linux) project. Heartbeat runs on all Linux platforms and performs death-of-node detection, communications, and cluster management in one process.
This topic discusses adding the Zen engine service to Linux Heartbeat and assumes the following:
You know how to install and configure the Heartbeat program and need only the information required to add Zen to a Cluster Service group.
You are familiar with using Zen and its primary utilities such as ZenCC.
Preliminary Requirements
It is essential that Linux Heartbeat be functioning correctly before you add Zen to the cluster. See the documentation from the High-Availability Linux Project (www.linux-ha.org) for how to install Heartbeat, verify that it is working correctly, and perform tasks with it.
Just as you would for any application, set up the essential clustering components before you add Zen.
Installing and Configuring Zen in a Linux Heartbeat Failover Cluster
The following table gives the recommended steps to add Zen to Linux Heartbeat.
Action
Discussion
Install Zen on the Cluster Nodes
Install a Zen server on each cluster node and choose identical options for each installation. Do not install Zen on the cluster shared storage where the Zen data files reside.
After installation, the database engine is set to start automatically when the operating system starts. With clustering, however, Linux Heartbeat controls starting and stopping the database engine. The controlling node in the cluster starts the engine; the other nodes do not.
After you install a Zen server, ensure that the Group IDs for zen-data and zen-adm and the UID for zen-svc match on all nodes. If required, change the IDs to ensure they are the same.
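A quick way to compare the IDs across nodes, sketched as a shell session (the numeric IDs are examples only):

```shell
# Run on each node and compare the results. getent prints group entries
# in name:passwd:GID:members form; id -u prints the numeric UID.
getent group zen-data zen-adm
id -u zen-svc
# If a node differs, align it with the first node's values (as root),
# for example:
#   groupmod -g 1001 zen-data
#   groupmod -g 1002 zen-adm
#   usermod  -u 1003 zen-svc
```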
Configure the Shared Storage
The shared storage is where the Zen data files reside. Shared storage for Heartbeat can be implemented in many different ways, all of which are beyond the scope of this document. This documentation assumes the use of an NFS mount.
Create (or at least identify) a location on shared storage where you want the database to reside. The location is your choice. Confirm that user zen-svc has read, write, and execute authority for the location.
Create two groups and a user on the shared storage so that each cluster node can access the database files.
Groups zen-data and zen-adm must match zen-data Group ID and zen-adm Group ID, respectively, on the cluster nodes.
User zen-svc must match zen-svc UID on the cluster nodes.
Create the Directory for the Shared Storage Mount
On each cluster node, log in as user zen-svc, then create a directory that will be mounted to the shared storage. User zen-svc has no password and can only be accessed through the root account with the su command. The name of the directory is your choice.
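For example, assuming the illustrative mount path used later in this topic, you could run the following as root on each node:

```shell
# Switch to zen-svc (it has no password, so use su from root) and create
# the mount-point directory. The path is an example; choose your own.
su - zen-svc -s /bin/bash -c 'mkdir -p /usr/local/actianzen/shared'
```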
Configure Heartbeat Server
Configure the Heartbeat server on each of the nodes that will control the Zen database engine. Configure the following:
Nodes. Add all nodes that you want in the cluster.
Authentication. Specify the type of authentication to use for the network communication between the nodes.
Media. Specify the method Heartbeat uses for internal communication between nodes.
Startup. Specify the setting for when the Heartbeat Server starts. Set this to on, which means that the “server starts now and when booting.”
Assign Password for Heartbeat User
Linux Heartbeat provides a default user named hacluster for logging in to the Heartbeat Management Client. Assign a password to user hacluster on each of the nodes from which you want to run Heartbeat Management Client.
Add a Resource Group for Zen
Log in as root and start the Heartbeat Management Client on one of the cluster nodes. Log in as user hacluster and add a new group. For ID, specify a name for the Zen group. Set Ordered and Collocated to true.
Add the Resources to the Group
Add three resources to the Zen group:
IPaddr
Filesystem
Zen (OCF resource agent)
IPaddr
In the Heartbeat Management Client, add a new native item. For Belong to group, select the group you added for Zen. For Type, select IPaddr.
On the resource you just added, specify the IP address of the cluster for the IP Value. Use the IP address assigned to the cluster (not the node) when Linux Heartbeat was installed and configured.
Filesystem
Add another new native item. For Belong to group, select the group you added for Zen.
For Type, select Filesystem and delete the parameter fstype, which is not required. Add a new parameter and select “device” for Name. For Value, specify the device name of the shared storage, a colon, and the share mount location.
Add another new parameter and select “directory” for Name. For Value, specify the directory to use with the NFS mount.
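The Filesystem resource effectively performs an NFS mount for you. Its device and directory parameters correspond to the pieces of a manual mount command such as this sketch, in which the server, export, and mount point are illustrative:

```shell
# device    = fileserver:/export/zendata   (shared storage host:export)
# directory = /usr/local/actianzen/shared  (local mount point)
mount -t nfs fileserver:/export/zendata /usr/local/actianzen/shared
```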
Zen (OCF resource agent)
Add another new native item. For Belong to group, select the group you added for Zen. For Type, click zen-svc with a Description of Zen OCF Resource Agent. No additional settings are required.
Create the Subdirectories on the Mounted Shared Storage
Now that you have added the Filesystem resource, the mount exists between the cluster server and the shared storage. On one of the cluster nodes, log in as user zen-svc. Under the shared storage mount, create a directory named “log” and another named “etc.”
For example, if the mount directory is /usr/local/actianzen/shared, you would add directories /usr/local/actianzen/shared/log and /usr/local/actianzen/shared/etc.
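Sketched as shell commands, run as user zen-svc on one node (using the same illustrative mount path):

```shell
# Create the log and etc subdirectories under the shared storage mount.
BASE=/usr/local/actianzen/shared
mkdir -p "$BASE/log" "$BASE/etc"
ls "$BASE"   # should now list etc and log
```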
Configure the Cluster Server in ZenCC
On each of the cluster nodes, you need to configure the cluster server with ZenCC.
Place all cluster nodes into standby mode except for the one from which you will run ZenCC. As user zen-svc, start ZenCC on the one active node or from a client that can access the active node.
In Zen Explorer, add a new server and specify the name (or IP address) of the cluster.
Access the properties for the server you just added. If prompted to log in, log in as user admin. Leave the password blank. Access the Directories Properties. For Transaction Log Directory, specify the directory that you created for the “log” location. For DBNames Configuration Location, specify the directory that you created for the “etc” location. See Create the Subdirectories on the Mounted Shared Storage.
Use ZenCC to add a new server and set its properties from each of the other cluster nodes. Place all nodes into standby mode except for the one from which you run ZenCC.
Create the Database on the Shared Storage
From the operating system on one of the cluster nodes, log on as user zen-svc and create the directory under the file system share where you want the database to reside. (If you create the directory as user root, ensure that user zen-svc has read, write, and execute authority on the directory.)
Place all cluster nodes into standby mode except for the one from which you will run ZenCC.
As user zen-svc, start ZenCC on the one active node or from a client that can access the active node. Create a new database for the server you added in Configure the Cluster Server in ZenCC. For Location, specify the directory you created where you want the database to reside. Specify the other database options as desired.
For the new database, create tables as desired.
Verify Access to the Database from each Node
Each cluster node must be able to access the Zen database on the shared storage. Place the cluster node from which you created the database into standby mode. This is the node running the zen-svc resource (the database engine).
Fail over to the next node in the cluster. Verify that the next node receives control of running the zen-svc resource. Repeat the standby, failover, and verification process for each node in the cluster until you return to the node from which you began.
Managing Zen in a Cluster Environment
After you install Zen in a failover cluster environment, you can manage it as a resource. The following items discuss common management topics:
Zen Licensing and Node Maintenance
Zen Failure Behavior
Stopping or Restarting the Actian Zen Engine Service
Zen Configuration Changes
Software Patches
Zen Licensing and Node Maintenance
The normal Zen licensing and machine maintenance procedures also apply to the nodes in a failover cluster environment. Deauthorize the Zen key before you modify the configuration of the physical or virtual machine where the database engine is installed. Reauthorize the key after changes are complete.
See To Deauthorize a Key and To Authorize a Key in Zen User’s Guide.
Zen Failure Behavior
If a cluster node fails, a Zen client does not automatically reconnect to the Zen engine on the surviving node. Your application must reconnect the client to the Zen database or you must restart the application. This applies even if Enable Auto Reconnect is turned on for the database engine.
If transaction durability is turned off and a failure occurs before a transaction completes, the data is automatically rolled back to its state before the transaction began, that is, to the last completed checkpoint. The rollback occurs when the active server requests access to the data file.
If transaction durability is turned on, completed changes that occurred between the last checkpoint and the cluster node failure can be recovered. Transaction durability must be configured the same way on all nodes, and the transaction log must be located on the shared storage. Transactions that had not completed at the time of the cluster failure, however, are lost even if transaction durability was in effect.
Stopping or Restarting the Actian Zen Engine Service
A cluster failover occurs from the active node if you manually stop the Zen database engine service through the operating system. If you are performing node maintenance and want to avoid such a failover, stop the service through the cluster utilities instead.
Zen Configuration Changes
Some configuration changes require that you restart the database engine. See Configuration Reference.
To stop and restart the Zen database engine service to apply configuration changes
Use the following steps in the Windows Cluster Administrator in the order listed:
1 Right-click the Actian Zen engine service and select Take this resource offline.
2 Right-click the Actian Zen engine service and select Bring this resource online.
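The same two steps can be performed with the FailoverClusters PowerShell cmdlets. The resource name below is illustrative; use the name your Zen generic service has in Failover Cluster Manager:

```powershell
# Take the Zen resource offline and bring it back online to apply
# configuration changes. The resource name is an assumption.
Stop-ClusterResource -Name "Actian Zen Enterprise Server"
Start-ClusterResource -Name "Actian Zen Enterprise Server"
```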
Software Patches
At some point, you may need to patch Zen or the failover cluster software. To help you do so, Actian Technical Support provides a knowledge base article on this topic.
Migration
Migration moves a VM running Zen from one physical host to another. The memory, storage, and network connectivity of the VM are typically migrated to the destination. Depending on the hypervisor, migration is sometimes referred to as “live” migration or “hot” migration.
With a “live” or “hot” migration, client connections to Zen remain intact, which allows hardware changes or resource rebalancing without interrupting service. With a “cold” migration, network connectivity is interrupted because the VM must boot, so client connections to Zen must be reestablished.
A migration environment has only one instance of Zen running, which makes the environment somewhat vulnerable if the host machine crashes or must be taken offline quickly. Also, if the shared storage fails, the database engine cannot process reads from or writes to physical storage. Some hypervisors offer a migration solution that does not use shared storage.
As long as host names remain the same after the VM migrates, Zen continues to operate normally. The product key remains in the active state.
No special steps are required to install or configure Zen in a migration environment. Refer to the hypervisor documentation.
Fault Tolerance
A fault tolerant environment is similar to a migration environment but includes additional features to ensure uninterrupted operation even after the failure of a component. A fault tolerant environment ensures network connections, continuous service, and data access through synchronized shared storage. If a component switch occurs, client machines and applications continue to function normally with no database engine interruption.
No special steps are required to install or configure Zen in a fault tolerant environment. Refer to the hypervisor documentation.
Disaster Recovery
Disaster recovery includes data recovery and site recovery. Data recovery is how you protect and restore your data. Site recovery is how you protect and restore your entire site, including your data.
Data recovery is facilitated with the hypervisor shared storage and Zen transaction logging and transaction durability. See Transaction Logging and Durability. You can use transaction logging and transaction durability with Zen Enterprise Server and Cloud Server.
Site recovery can be accomplished with both physical machines and virtual machines. Zen operates normally provided that host names remain the same in the recovered site. This is typically the case for virtual machines. If you are recovering physical machines and the host name at the recovery site is different, the Zen product keys will change to the failed validation state when Zen starts. Zen will continue to operate normally in the failed validation state for several days, during which you can either repair the key or move back to the original site.
No special steps are required to install or configure Zen in a disaster recovery environment. Refer to the hypervisor documentation.