IBM DB2 Universal Database 9.5 Multimode
DB2 can be a source or target connection either directly or through an ODBC driver. You still need to set up an ODBC data source to make the connection. For more information, see the procedure under ODBC 3.5. Also see IBM DB2 Universal Database 9.5.
The integration platform allows for concurrent writes to multiple target tables. Multimode connections allow you to perform table drop and table insert operations directly on your target database.
Connector-Specific Notes
In multimode targets, modifications to column names, data types, and sizes are not permitted.
Connector Parts
Connector parts are the fields you configure to connect with a data source or target.
The settings that are available depend on the connector you select.
For a list of all parts for target connectors, see Specifying Connector, Parts, and Properties.
Property Options
You can set the following target properties.
Property
Description
AutoCommit
If set to True, changes are committed automatically as each SQL statement is executed, instead of waiting until the end of the transaction.
When this option is set to True, you cannot roll back changes after they are made. It also overrides the CommitFrequency value, which means that changes are committed with each SQL statement regardless of the value set for CommitFrequency.
The default value is False.
CommitFrequency
Allows you to control how often data is committed to the database when the AutoCommit property is set to False.
The default value is zero, which means the data is committed at the end of the transformation, allowing rollback on error. This is the slowest setting. For large transformations, this is not practical because it may produce too many transaction log entries.
Specifying a nonzero value commits data to the database after the specified number of records is inserted or updated.
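For example, with AutoCommit set to False and CommitFrequency set to 1000, the effective statement flow against a hypothetical table mytable would resemble the following sketch (the table name and values are illustrative only):
INSERT INTO mytable (Field1) VALUES (1)
-- ... inserts continue for records 2 through 1000 ...
COMMIT
-- ... inserts continue for records 1001 through 2000 ...
COMMIT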
ConstraintDDL
Specify additional SQL data definition language statements to be executed after the target table is created. This is similar to the SQL pass-through support provided in the SQL import connectors. Each line must be a valid ODBC DDL statement. This property has no default. For an example, see ConstraintDDL Example.
DriverCompletion
Allows you to control whether the driver prompts you for connection information. The options are Prompt, Complete (default), Complete Required, and No Prompt.
Prompt option: Asks the user for all information.
Complete option: Asks the user for any information they did not enter.
Complete Required option: Asks the user only for information required to complete the connection.
No Prompt option: Does not ask the user for information.
DriverOptions
Enter any valid ODBC connect string options here. There is no default.
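For example, a connect string of the following form passes a user ID and password to the driver (the values shown are placeholders, and the exact keywords accepted depend on your ODBC driver):
UID=db2admin;PWD=mypassword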
IdentifierQuotes
All databases have quoted identifiers. Quoted identifiers make a SQL statement parseable and distinguish columns from character data in SQL statements. For example, Oracle uses double quotes for column and table names in SQL statements and single quotes for character data. In a SQL statement, enclose identifiers that contain special characters or that match keywords in identifier quote characters (such identifiers are also known as delimited identifiers in SQL-92).
For example, the Accounts Receivable identifier is quoted in the following SELECT statement:
SELECT * FROM "Accounts Receivable"
If you do not use identifier quotes, the parser assumes there are two tables, Accounts and Receivable, and returns a syntax error indicating that they are not separated by a comma.
IdentifierQuotes has four options:
Default
None
"
'
Maximum Array Size
Maximum number of records fetched or inserted with each cursor operation. The default is 1.
MaxDataLength
Specifies the maximum data length for long data types, that is, the maximum number of characters to write to a field. The default is 1 MB. Reset this value based on your available memory and target requirements.
Some ODBC drivers have maximum data length limitations. If you choose an ODBC source or target connector and the default setting is not 1 MB, the integration platform sets the value for that particular ODBC driver. Under those conditions, do not set the MaxDataLength property to a higher value.
ModifyDriverOptions
Allows you to store the ODBC connection information. The default is true. If you select false, you are prompted for connection information each time you run the transformation.
Encoding
Allows you to select the type of encoding used with your source and target files.
PrimaryKey
Specify a list of field names used to create a primary key. Field names are delimited by commas. If this property contains one or more field names, these names are included in the SQL CREATE statement when the connector is in replace mode. This property has no default.
To use this property, your ODBC driver must support the integrity enhancement facility (IEF). Only the more advanced ODBC drivers support this.
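For example, if PrimaryKey is set to Field1,Field2, the CREATE statement generated in replace mode might resemble the following sketch (the table name and column definitions are illustrative only):
CREATE TABLE mytable (Field1 INTEGER NOT NULL, Field2 INTEGER NOT NULL, Field3 VARCHAR(50), PRIMARY KEY (Field1, Field2))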
SQL Output
Allows you to specify bound or unbound mode and whether or not to write SQL statements to a SQL log. Keep in mind that bound mode is faster, as bind variables are used.
Select from the following:
Target Only (default) - Use bound mode, which uses bind variables. SQL statements are sent to the target only.
Target Only (Unbound Mode) - Use unbound mode. Does not use bind variables and sends the literal SQL statement to the database engine. SQL statements are sent to the target and not to the SQL log file specified in the SQL Log property.
Target and SQL Log - Sends SQL statements to the target and the SQL log file specified in the SQL Log property.
SQL Log Only - Sends SQL statements only to the SQL log file specified in the SQL Log property.
SQL Log
The default is sql.log in the default installation directory. To use a different log, browse to the file, or enter the path and file name.
Note:  SQL statements are sent to the SQL Log file only if the SQL Output property is set to either Target and SQL Log or SQL Log Only.
SystemTables
If set to true, this property allows you to see all tables created by the DBA in the database. The system table names appear in the table list. Default is false.
Note:  This property is applicable only if the user is logged onto the database as the database administrator. Only the DBA has access to system tables.
TransactionIsolation
Allows you to specify the transaction isolation level to use when reading from or writing to a database table with ODBC. The default is Serializable.
The ANSI SQL 2 standard defines three specific ways in which serializability of a transaction may be violated: P1 (Dirty Read), P2 (Nonrepeatable Read), and P3 (Phantoms).
The four isolation levels are as follows:
READ_UNCOMMITTED – Permits P1, P2, and P3.
READ_COMMITTED – Permits P2 and P3. Does not permit P1.
REPEATABLE_READ – Permits P3. Does not permit P1 and P2.
SERIALIZABLE – Does not permit P1, P2 or P3.
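As an illustration of P1 (Dirty Read), consider two sessions working against a hypothetical accounts table. Only READ_UNCOMMITTED permits the second session to see the uncommitted change:
-- Session 1 updates a row but does not commit yet
UPDATE accounts SET balance = balance - 100 WHERE id = 1
-- Session 2, running at READ_UNCOMMITTED, can already read the uncommitted value
SELECT balance FROM accounts WHERE id = 1
-- If Session 1 now rolls back, Session 2 has read a value that was never committed
ROLLBACK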
For further details about TransactionIsolation levels, refer to Microsoft ODBC SDK documentation.
UpdateNullFields
If set to true (the default), null values are sent to the database when records are inserted or updated. If set to false, null values are not sent to the database when you insert or update records. When set to false, this property also forces the connector to operate in unbound mode, which may cause slower performance.
Best Practice: If fields in the target record are not mapped, null values are passed to the target. If you do not want to write to these fields, set UpdateNullFields to false.
Views
If set to True, this property allows you to see the view names in the table list along with the table names. Default is True.
Note:  This property supports only Append and DeleteAndAppend output modes and does not support the Replace output mode.
BulkOperations
Use bulk operations for faster inserts. This property determines whether an insert statement is executed for each record or a bulk add is performed. The default is false, the slower setting. To maximize speed and instruct the integration platform to use a bulk add, set this property to true.
UseCursors
The UseCursors property allows you to turn cursor support on or off. The default is false. If set to true and the specified ODBC driver does not support cursor inserts, the integration platform uses the SQL INSERT mode of adding records.
For exports, cursor support is intended to enhance the performance of inserting records. This appears to be the case for desktop databases. For database servers, there is no noticeable change in insert speed; they execute prepared queries about as quickly as they handle cursor inserts.
Another complication of cursor inserts is that some drivers require the target table to be indexed; otherwise, positioned updates (cursors) are not allowed. Two additional properties in the ODBC export connector address this issue: PrimaryKey and ConstraintDDL.
ArraySize
Determines the number of rows to be sent to the server at one time. The default value is 1, meaning each row is sent to the server individually. Larger values buffer multiple rows and send them all at once. While this improves speed, it affects error reporting (a server error is not detected or reported until the next batch of records is sent to the server).
The maximum value allowed for this property is 100000. While the connector allows the value to be set that high, many drivers have lower limits. The connector will log a message indicating if the driver is forcing a lower value for the array size. In addition, the connector does not support arrays when there is a LOB-type field in the table, or when the (maximum) length of a character-type field is longer than 32767 characters. In these cases, a message will be logged indicating the array size has been reduced to 1.
Due to the way the connector supports older drivers, array support requires that BulkOperations and UseCursors are either both set to True or both set to False. If BulkOperations is False and UseCursors is True, the array size is ignored and a message is logged indicating this condition.
ConstraintDDL Example
These example statements create two indexes on the table mytable. The first index does not allow duplicates, and the index values are stored in ascending order. The second index is a compound index on fields Field2 and Field3.
CREATE UNIQUE INDEX index1 ON mytable (Field1 ASC)
CREATE INDEX index2 ON mytable (Field2, Field3)
ConstraintDDL is executed only if replace mode is used for the target. If there are errors, the errors are written to the error and event log file. An error during transformation opens the transformation error dialog box. You can ignore the DDL errors and continue the transformation.
ConstraintDDL also supports an escaping mechanism to specify DDL in the native SQL of the DBMS. Any statement preceded by an "@" is sent to the DBMS.
The following is a DDL statement for creating a primary key for the table mytable.
@CREATE INDEX pk_mytable ON mytable (Field1, Field2) WITH PRIMARY
Some ODBC drivers do not support the SQL extensions needed to create a primary key with the ODBC variant of the SQL CREATE statement. To create a primary key in these cases, use native SQL.
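For example, assuming the key columns are defined as NOT NULL, a native DB2 statement such as the following illustrative one could be passed through with the "@" prefix instead:
@ALTER TABLE mytable ADD CONSTRAINT pk_mytable PRIMARY KEY (Field1, Field2)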
Supported Data Types
The following data types are supported for data fields:
bigint
blob
char
character
char () for bit data
date
datetime
decimal
float
integer
long raw
long varchar
number
numeric
raw
real
rowid
smallint
time
timestamp
varchar
varchar () for bit data
varchar2
Note:  Column sizes for text-type fields (char, varchar, etc.) should be specified in bytes, not characters. This is important when sending Unicode that contains characters outside the regular ASCII range. The strings are encoded using UTF-8, and the column width needs to be specified as the (max) number of bytes that will occur in the UTF-8 string.
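For example, UTF-8 encodes most non-ASCII characters in 2 or 3 bytes (up to 4 for supplementary characters), so a field that must hold 50 such characters needs a column of up to 200 bytes rather than 50. An illustrative definition sized for that worst case:
CREATE TABLE mytable (Description VARCHAR(200))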
Length
These are field lengths in your data. If you need to change field lengths, reset them in the schema.