User Guide : Map Connectors : Source and Target Map Connectors : Actian Avalanche
 
Actian Avalanche
Actian Avalanche is a fully managed cloud data warehouse delivered as a service. It is a database application that you can connect to natively through ODBC 3.5. For more information, see Actian Avalanche documentation available at docs.actian.com.
The integration platform uses the Actian Avalanche connector for reading and writing data to Avalanche data tables.
Prerequisites
Install Actian Avalanche client runtime. Go to esd.actian.com and download the required version of Actian Avalanche.
For the installation details, see Actian Avalanche client documentation available at docs.actian.com.
Set up an ODBC data source for Avalanche by performing the following steps:
a. From the Start menu, go to Administrative Tools.
b. Double-click ODBC Data Sources (64-bit).
The ODBC Data Source Administrator window is displayed.
c. Click Add.
The Create New Data Source window appears.
d. Select the Ingres CR driver and click Finish.
The ODBC Administrator window appears for the selected driver.
e. Specify the following:
Data Source: Name of data source. For example, Avalanche.
Description: Description of the data source.
Host Name: Host IP address of the server where Actian Avalanche is installed.
Note:  It is recommended to specify the host IP address instead of the host name, because a long host name may not be accepted by the driver.
Listen Address: Port to be used to connect to Actian Avalanche.
Database: Name of the Avalanche database to connect.
f. Click Apply and then click OK.
The created data source is displayed on the System DSN tab. The name of the driver is also displayed.
g. Click OK to close the ODBC Administrator window.
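Once the DSN is created, client applications can connect through it. The following is a minimal sketch of building an ODBC connection string for that DSN; the values Avalanche, myuser, mypassword, and mydb are placeholders, and pyodbc is mentioned only as one common Python ODBC bridge, not as part of the integration platform.

```python
# Sketch: building a connection string for the DSN created above.
# All credential and database values here are placeholders.
def build_conn_string(dsn, uid, pwd, database):
    # ODBC connection strings are semicolon-separated key=value pairs.
    return f"DSN={dsn};UID={uid};PWD={pwd};DATABASE={database}"

conn_str = build_conn_string("Avalanche", "myuser", "mypassword", "mydb")
print(conn_str)

# To actually connect (requires the driver above and, for example, pyodbc):
# import pyodbc
# conn = pyodbc.connect(conn_str)
```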
Connector Properties
You can set the following source (S) and target (T) properties.
 
Property
Source/Target
Description
Encoding
S/T
Type of character encoding to use with source and target files. The default value is OEM.
Shift-JIS encoding is used only in Japanese operating systems.
UCS-2 is no longer considered a valid encoding name, but you may use UCS2. In the data file, change UCS-2 to UCS2.
Note:  This property is not the encoding of the database that you connect to; it is the encoding in which the connector expects to receive the SQL query statements that are sent to the database.
WhereStmt
S
Pass-through mechanism for SQL connectors that lets you construct the Where clause of the SQL query. It is also an alternative to writing lengthy query statements in the Query Statement text box. You can use the Where statement to instruct the SQL database server to filter the data based on a condition before it is sent to the integration platform. Do not include the WHERE keyword when you enter the clause.
When the source connection is a Select statement, do not apply WhereStmt. Instead, include the Where clause in the Select statement.
Note:  This property enables data filtering when a table is selected.
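To illustrate how a WhereStmt value combines with the generated query, here is a small sketch; the table and column names are hypothetical, and note that the property value itself omits the WHERE keyword, as described above.

```python
# Hypothetical table and filter condition.
table = "Accounts"
where_stmt = "Region = 'West' AND Balance > 1000"   # no leading WHERE keyword

# The connector appends the clause to the query it generates;
# conceptually the resulting statement looks like this:
query = f"SELECT * FROM {table} WHERE {where_stmt}"
print(query)
```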
SystemTables
S/T
If set to True, allows you to see all tables created by the DBA in the database. The system table names appear in the table list. The default value is False.
Note:  This property is applicable only if you have logged in to the database as the Database Administrator (DBA). Only the DBA has access to system tables.
Views
S/T
If set to True (default), allows you to see views. View names appear in the table list along with table names.
Synonyms
S/T
If set to True, allows you to see synonyms. The alias names appear in the table list along with the tables. The default value is False.
DSNType
S/T
Data sources or drivers to connect to in the Data Source Name (DSN) list. DSNs are listed in Control Panel > Administrative Tools > ODBC Data Source Administrator window.
CursorType
S
Type of cursor to use for retrieving records from the source table. The options available are:
Dynamic
Static
Forward Only
The default value is Forward Only.
DriverCompletion
S/T
Controls whether the driver prompts you for information. The options available are:
Prompt: Prompts you for every piece of connection information.
Complete: Prompts you for information you forgot to enter.
Complete Required: Prompts you for information that is essential to complete the connection.
No Prompt: Does not prompt you for any information.
The default value is Complete.
IdentifierQuotes
S/T
Quoted identifiers are used to parse the SQL statement and distinguish between columns and character data in SQL statements. All databases have quoted identifiers.
In a SQL statement, you must enclose identifiers that contain special characters or that match keywords in identifier quote characters (such identifiers are known as delimited identifiers in SQL-92). For example, the Accounts Receivable identifier is quoted in the following SELECT statement:
SELECT * FROM "Accounts Receivable"
If you do not use identifier quotes, the parser assumes there are two tables, Accounts and Receivable, and returns a syntax error because they are not separated by a comma.
IdentifierQuotes has three options:
Default
None
"
ModifyDriverOptions
S/T
If set to True, stores the ODBC connection information. If set to False (default), you are prompted for the connection information each time a transformation that accesses multiple tables runs.
DriverOptions
S/T
Valid ODBC connect string options.
MaxDataLength
S/T
Maximum number of characters to write to a field. It is the maximum data length for long data types. The default value is 1 MB. You can change this value based on the available memory and target requirements.
When this connector requests the column field size for these data types, it checks for a returned value greater than the MaxDataLength value. If the value is greater, then the MaxDataLength value is used.
Some ODBC drivers have maximum data length limitations. If you choose an ODBC source or target connector and the default setting is not 1 MB, the integration platform sets the value for that particular ODBC driver. In this case, do not set the MaxDataLength property to a higher value.
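The clamping rule described above can be stated as a one-line helper; this is only a restatement of the rule, not connector code, and the 1 MB default is expressed in bytes.

```python
# MaxDataLength clamping: use the smaller of the driver-reported
# column field size and the configured MaxDataLength (default 1 MB).
def effective_length(column_field_size, max_data_length=1_048_576):
    return min(column_field_size, max_data_length)

print(effective_length(5_000_000))  # clamped to the 1 MB default
print(effective_length(100))        # smaller sizes pass through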
TransactionIsolation
S/T
Isolation level when reading from or writing to a database table with ODBC. The ANSI SQL 2 standard defines three specific ways in which the serializability of a transaction may be violated: P1 (Dirty Read), P2 (Nonrepeatable Read), and P3 (Phantoms). The isolation levels are:
Read uncommitted – Permits P1, P2, and P3.
Read committed – Permits P2 and P3. Does not permit P1.
Repeatable read – Permits P3. Does not permit P1 and P2.
Serializable – Does not permit P1, P2, or P3.
The default value is Serializable.
For further details about TransactionIsolation levels, see the IBM DB2 Universal Database ODBC documentation.
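The level-to-phenomenon mapping above can be expressed as a small lookup table; this is a plain restatement in Python for reference, not part of the connector API.

```python
# True means the phenomenon can occur at that isolation level.
PHENOMENA = ("P1 dirty read", "P2 nonrepeatable read", "P3 phantom")
ISOLATION = {
    "read uncommitted": (True,  True,  True),
    "read committed":   (False, True,  True),
    "repeatable read":  (False, False, True),
    "serializable":     (False, False, False),   # the default level
}

# Example lookup: can a dirty read occur under read committed?
print(ISOLATION["read committed"][0])
```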
ConstraintDDL
T
Additional SQL data definition language statements that must be executed after the target table is created. This is similar to the support provided for SQL pass-through in the SQL import connectors. Each line must be a valid ODBC DDL statement.
For example, you can have the following statements:
CREATE UNIQUE INDEX index1 ON mytable (Field1 ASC)
CREATE INDEX index2 ON mytable (Field2, Field3)
These statements create two indexes on the table mytable. The first statement does not allow duplicates, and the index values are stored in ascending order. The second index is a compound index on the Field2 and Field3 fields.
The ConstraintDDL is run only if the output mode is Replace for the target. If there are any errors, they are written to the error and event log file. An error during transformation displays the Transformation Error dialog box. You can ignore the DDL errors and continue the transformation.
ConstraintDDL also supports an escaping mechanism that allows you to specify DDL in the native SQL of the DBMS. Any statement preceded by an @ sign is sent directly to the DBMS.
The following is a DDL statement for creating a primary key for the table mytable:
@CREATE INDEX pk_mytable ON mytable (Field1, Field2) WITH PRIMARY
Some ODBC drivers do not support the SQL extensions required to create a primary key with the ODBC variant of the SQL CREATE statement. In these cases, to create primary keys, use native SQL.
CommitFrequency
T
Controls how often data is committed to the database. The default value is zero, which means the data is committed at the end of the transformation, allowing rollback on error. This is the slowest setting. When performing large transformations, committing only at the end is not practical because it may produce too many transaction log entries.
Specifying a nonzero value commits the data to the database after the specified number of records is inserted or updated. This keeps the transaction log from growing too large but limits the restartability of the transformation.
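The effect of a nonzero CommitFrequency can be sketched as follows, using Python's sqlite3 purely as a stand-in database; the connector handles this batching for you, so this is only an illustration of the behavior.

```python
import sqlite3

COMMIT_FREQUENCY = 3  # commit after every 3 records (0 = commit at end)

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (id INTEGER)")
for i in range(10):
    conn.execute("INSERT INTO t VALUES (?)", (i,))
    # Periodic commit keeps the transaction log small, but records
    # committed so far cannot be rolled back on a later error.
    if COMMIT_FREQUENCY and (i + 1) % COMMIT_FREQUENCY == 0:
        conn.commit()
conn.commit()  # final commit covers the remaining records
count = conn.execute("SELECT COUNT(*) FROM t").fetchone()[0]
print(count)
```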
AutoCommit
T
Automatically commits changes as they are made by each SQL statement, instead of waiting until the end of the transaction. If this property is set to True, you cannot roll back changes after they are made. The default value is False.
BulkOperations
T
Determines whether an insert statement is run for each record or a bulk add is executed for each record. The default value is False, the slower setting. To maximize speed and instruct the integration platform to use a bulk add, set this property to True. Use bulk operations for faster inserts.
PrimaryKey
T
List of field names that are used to create the primary key. Field names are delimited by commas.
If the PrimaryKey property contains one or more field names, these names are included in the SQL CREATE statement when the connector is in Replace mode.
To use the PrimaryKey property, the ODBC driver must support Integrity Enhancement Facility (IEF). Only advanced ODBC drivers support this.
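As an illustration of how the comma-delimited PrimaryKey field list might surface in the generated CREATE statement in Replace mode, here is a hypothetical sketch; the table and column names are invented, and the exact statement the connector generates depends on the driver.

```python
# Hypothetical comma-delimited PrimaryKey property value.
primary_key = "AccountNo, BranchNo"

# Conceptually, the field list is folded into the CREATE statement:
ddl = (
    "CREATE TABLE accounts (AccountNo INTEGER, BranchNo INTEGER, "
    f"Balance FLOAT, PRIMARY KEY ({primary_key}))"
)
print(ddl)
```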
UseCursors
T
If set to True and the specified ODBC driver does not support cursor inserts, the integration platform falls back on the SQL INSERT mode of adding records. The default value is False.
For exports, cursor support is meant to enhance the performance of inserting records. This is applicable for desktop databases.
For database servers, there is no noticeable change in insert speed. Another complication of cursor inserts is that some drivers require that the target table be indexed; otherwise, positioned updates (cursors) are not allowed. The PrimaryKey and ConstraintDDL properties in the ODBC export connector address this issue.
ArraySize
T
Determines the number of rows sent to the server at a time. The default value is 1, which means each row is sent to the server individually. Larger values buffer multiple rows and send them all at once. While this improves speed, it affects error reporting (a server error is not detected or reported until the next batch of records is sent to the server).
The maximum value allowed for this property is 100000. While the connector allows a high value to be set, many drivers have lower limits. The connector will log a message indicating if the driver is forcing a lower value for the array size. In addition, the connector does not support arrays when there is a LOB-type field in the table, or when the (maximum) length of a character-type field is longer than 32767 characters. In these cases, a message will be logged indicating the array size has been reduced to 1.
Due to the way the connector supports older drivers, array support requires that BulkOperations and UseCursors are either both set to True or both set to False. If BulkOperations is False and UseCursors is True, the array size is reset to 1 and a message is logged indicating that this condition occurred.
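The row-buffering behavior that ArraySize describes can be sketched with batched inserts, again using sqlite3 only as a stand-in; the connector performs this batching internally, and this example simply shows the speed-versus-error-reporting trade-off of sending several rows per call.

```python
import sqlite3

ARRAY_SIZE = 4                      # stand-in for the ArraySize property
rows = [(i,) for i in range(10)]    # ten records to load

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (id INTEGER)")
for start in range(0, len(rows), ARRAY_SIZE):
    batch = rows[start:start + ARRAY_SIZE]   # up to ARRAY_SIZE rows
    # One call per batch: faster, but an error surfaces only per batch,
    # not per row.
    conn.executemany("INSERT INTO t VALUES (?)", batch)
conn.commit()
total = conn.execute("SELECT COUNT(*) FROM t").fetchone()[0]
print(total)
```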
UsePartition
T
If set to True, the WITH PARTITION=(...) clause is added to the SQL statement when creating a table. The (...) is the partitioning scheme specified in the Partition property.
If set to False, the WITH NOPARTITION clause is added to the SQL statement when creating a table.
The default value is False.
Partition
T
Partitioning scheme. A table partition definition format is:
PARTITION = (dimension)
The syntax for each partition dimension is:
dimension = rule ON column {, column} partitionspec {, partitionspec}| rule partitionspec {, partitionspec}
where:
rule: Defines the type of distribution scheme for assigning rows to partitions. The only valid value is HASH, which distributes rows evenly among the partitions according to a hash value.
ON column {, column}: Specifies the columns to partition the table on.
partitionspec: When rule is HASH, defines the number of partitions and optionally their names:
partitionspec = DEFAULT PARTITIONS | [nn] PARTITION[S] [(name {, name})] [with_clause]
where:
DEFAULT PARTITIONS: Uses the number of partitions specified in the DBMS configuration parameter default_npartitions. The statement returns an error if the default partition value is not set. If DEFAULT PARTITIONS is specified, neither explicit partition names nor a with_clause can be specified.
nn: Number of partitions, which defaults to 1 if omitted.
name: Identifies the partition. When the number of partitions is two or more, a comma-separated list of names can be provided to override the default value. The default value is iipartNN.
with_clause: Specifies WITH clause options for partitioning. The with_clause for partitioning has the format WITH with-option, where with-option = LOCATION = (location). When specifying a LOCATION for the table, this location will only be used for PARTITIONS that are lacking a WITH LOCATION clause.
For more information about partitioning schemes, see Partitioned Tables section in Actian Vector documentation available at docs.actian.com.
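Putting the UsePartition and Partition properties together, the following sketch shows how the generated CREATE statement might look for a hash-partitioned table; the table and column names are illustrative, and the partition dimension follows the grammar above (rule ON column partitionspec).

```python
# Hypothetical Partition property value: hash on one column, 4 partitions.
partition = "HASH ON region 4 PARTITIONS"
use_partition = True   # stand-in for the UsePartition property

# UsePartition=True adds WITH PARTITION=(...); False adds WITH NOPARTITION.
with_clause = (
    f"WITH PARTITION = ({partition})" if use_partition else "WITH NOPARTITION"
)
ddl = f"CREATE TABLE sales (region CHAR(2), amount INTEGER) {with_clause}"
print(ddl)
```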
Supported Output Modes
Actian Avalanche connector supports the Replace, Append, and Delete and Append output modes. For information about output modes, see Target Output Modes.
Data Types
Actian Avalanche connector supports the following data types:
Binary
Bit
Byte
Char
Counter
Currency
Datetime
Double
Guid
Integer
LongBinary
LongChar
Real
SmallInt
Varbinary
VarChar
If you are appending data to an existing table in the target, then by default each source field uses the data type of the corresponding field in the selected table.