Property | S/T | Description |
SchemaFile | ST | Allows you to choose a document schema file (.ds.xml) as a source or target. Used for parsing data and generating a new schema during design time. Select a schema file that contains the schema you want to use. |
CodeSetFile | ST | A CodeSet file (.dsc.xml) is automatically generated when you save a document schema in Document Schema Designer. CodeSet files are stored in the same location as document schemas. CodeSets contain CodeTable values, any date-time masks used in the schema, and segment qualifiers. |
Validation | ST | None is the default setting. The following validation methods are available: • Composite/Element – Validates maxuse, requirement, min/max length, and default value. • Loop/Segment – Validates maxuse and requirement. • Composite/Element+Loop/Segment – Validates both composite/element and loop/segment. |
QualifiedFieldNames | ST | The qualified name is the name path of the segment's parents in the Schema tree. If QualifiedFieldNames is set to true, the qualified name is put before each field name. The default is false. |
Skip | S | When the Skip property is set to true, segments that do not match those in the applied schema file are not read. If you are reading large HIPAA files that have many mismatches, parsing time increases. To improve performance, set this property to false. |
SegmentTerminator | T | A character that terminates a segment. Select a Segment Terminator from the list. The default is CR-LF. For your options, see ElementSeparators. |
ElementSeparator | T | Select an element separator from the list. The default is *(002A). For your options, see ElementSeparators. |
SubElementSeparator | T | The SubElementSeparator supersedes any values mapped in the ISA segment. Allows overrides of values from the command line or through the API. The default is :(003A). For your options, see ElementSeparators. |
RepetitionSeparator | T | Separates multiple occurrences of a field. The default is ^(005E). For your options, see ElementSeparators. |
LineWrap | T | Sets the type of line wrap in your source file. The default is None. Some files may consist of a constant stream of data with no line separators. LineWrap "forces" a temporary line-wrapping behavior. If you attempt to connect to your file and receive parse error messages, change the setting to CR-LF, CR, LF, or LF-CR. This property is available when the SegmentTerminator property is not set to CR or LF. |
WrapLength | T | Sets the length of line wrap in your target file. The default is 80. This property is available when the LineWrap property is not set to None. |
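The separator properties above determine how an X12/EDI stream is broken apart. The following minimal sketch shows that decomposition: segment terminator, then element separator, then sub-element separator. The function name, sample data, and the `~` terminator default are illustrative only (the connector's default terminator is CR-LF), not part of any product API.

```python
# Sketch: split an X12 stream into segments, elements, and sub-elements
# using the three separator levels described above. All names and the
# sample data are illustrative assumptions.

def parse_x12(data, segment_term="~", element_sep="*", subelement_sep=":"):
    """Return a list of segments; each segment is a list of elements,
    and each element is a list of sub-elements."""
    segments = []
    for raw_seg in data.strip().split(segment_term):
        if not raw_seg:
            continue  # skip the empty trailing piece after the final terminator
        elements = [elem.split(subelement_sep) for elem in raw_seg.split(element_sep)]
        segments.append(elements)
    return segments

sample = "ST*837*0001~NM1*85*2*PROVIDER:CLINIC~SE*2*0001~"
for seg in parse_x12(sample):
    print(seg[0][0], len(seg) - 1, "elements")
```

Changing `segment_term`, `element_sep`, or `subelement_sep` mirrors changing the SegmentTerminator, ElementSeparator, and SubElementSeparator properties.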
HIPAA form Title | Form Number |
Health Claims or Equivalent Encounter Information (Dental, Professional, Institutional) | 837 |
Enrollment and Disenrollment in a Health Plan | 834 |
Eligibility for a Health Plan | 270, 271 |
Health Care Payment and Remittance Advice | 835 |
Health Plan Premium Payments | 820 |
Health Claim Status | 276, 277 |
Referral Certification and Authorization | 278 |
Coordination of Benefits | 837 |
Property | S/T | Description |
Encoding | ST | To specify encoding for reading source and writing target data, select a code page translation table. The default value is OEM. To use ANSI, select ISO8859-1. For Windows CodePage 1252, select CP1252. |
HIPAASchemaFile | ST | Used for parsing data and generating a new schema during design time. Select a schema file that contains the schema you want to use. |
Skip | S | If set to true, segments that do not match those in the specified schema file are not read. If you are reading large HIPAA files and have many mismatches, the parsing time increases. To improve performance, set this option to False. |
SegmentTerminator | T | A character that terminates a segment. Select a Segment Terminator from the list. The default is CR-LF. For the options, see Element Separators. |
ElementSeparator | T | Select an element separator from the list. The default is *(002A). For the options, see Element Separators. |
SubElementSeparator | T | Supersedes any values mapped in the ISA segment. Allows overrides of values from the command line or through the API. The default is :(003A). For the options, see Element Separators. |
RepetitionSeparator | T | Separates multiple occurrences of a field. The default is ^(005E). For the options, see Element Separators. |
LineWrap | T | Sets the type of line wrap in your source file. The default is None. Some files may consist of a constant stream of data with no line separators. LineWrap forces a temporary line-wrapping behavior. If you attempt to connect to your file and receive parse error messages, change the setting to CR-LF, CR, LF, or LF-CR. Note: This property is available when the SegmentTerminator property is not set to CR or LF. |
WrapLength | T | Sets the length of line wrap in the target file. The default is 80. Note: This property is available when the LineWrap property is not set to None. |
Property | S/T | Description |
SchemaFile | ST | Used for parsing data and generating a new schema during design time. Select a schema file that contains the schema you want to use. |
SegLibFile | ST | Segment library (.dsl.xml) file. This .dsl.xml file is created in Document Schema Designer. The segment library file is used during design time, not run time. |
FldLibFile | ST | Field library (.dfl.xml) file. This .dfl.xml file is created in Document Schema Designer. The field library file is used during design time, not run time. |
SegmentTerminator | ST | Character that terminates a segment. Select a Segment Terminator from the list. The default is CR(000D). For your options, see Segment Terminators. |
Validation | ST | If set to true, the adapter validates: • Field requirement • Field default value (if this value is defined) • CheckSum The default is false. Validation works differently for source and target files. Source-side validation occurs when you click Apply in Source Properties; target-side validation occurs at run time. Successful validation is indicated in the log file, and any error messages are also logged there. On the source side, the first error is recorded in the log file and the transformation then aborts before any further errors are recorded. On the target side, all errors are recorded in the log file, because validation occurs at run time. |
Unicode | ST | This property is important if your HL7 file is Unicode encoded. The default is false, so if your source or target HL7 file is Unicode encoded, change this setting to true. |
StuffedLength | ST | Specifies the carriage return length. This property is not required by the HL7 standard, but is used by Lower Layer Protocols. By default, StuffedLength is zero. HL7 encoding rules place no limit on the length of a segment, but many languages and operating systems have terminal-oriented input disciplines that limit the number of characters that can be received before a carriage return character. To work around this limit, use this property: when the specified number of characters has been transmitted without a carriage return, the sending system inserts a carriage return character into the output stream. The receiving system also counts characters from the last carriage return seen; when the limit is reached, it knows that the next character is a stuffed carriage return that must be removed from the data. |
FieldSeparator | T | Separates two adjacent data fields within a segment. The integration platform presumes that an HL7 file has a pipe | (007C) between each field. To specify a different field separator, click the FieldSeparator arrow to display the list of options. The default is | (007C). For your options, see Segment Terminators. |
ComponentSeparator | T | Separates adjacent components of data fields. The default is ^(005E). For your options, see Segment Terminators. |
RepetitionSeparator | T | Separates multiple occurrences of a field. The default is ~(007E). For your options, see Segment Terminators. |
EscapeCharacter | T | Escape character for use with any field represented by an ST, TX, or FT data type, or for use with the data (fourth) component of the ED data type. For your options, see Segment Terminators. |
SubcomponentSeparator | T | Separates adjacent subcomponents of data fields. The default is &(0026). For your options, see Segment Terminators. |
StuffedLength | T | This property is not required by the HL7 standard, but is used by Lower Layer Protocols. By default, StuffedLength=0. HL7 encoding rules place no limit on the length of a segment, but many languages and operating systems have terminal-oriented input disciplines that limit the number of characters that can be received before a carriage return character. To work around this limit, use the StuffedLength property: when the specified number of characters has been transmitted without a carriage return, the sending system inserts a carriage return character into the output stream. The receiving system also counts characters from the last carriage return seen; when the limit is reached, it knows that the next character is a stuffed carriage return that must be removed from the data. |
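The carriage-return stuffing that StuffedLength describes can be sketched as a pair of sender/receiver transforms. This is a minimal illustration of the counting behavior only; the function names are assumptions, not part of the connector or the HL7 Lower Layer Protocol APIs.

```python
# Sketch of StuffedLength: the sender inserts a CR after every `limit`
# characters transmitted without a natural CR; the receiver, counting
# characters since the last CR it saw, drops a CR that arrives exactly
# at the limit. Names are illustrative assumptions.

def stuff(data, limit):
    """Insert a CR after every `limit` characters sent without a natural CR."""
    out, count = [], 0
    for ch in data:
        if count == limit:
            out.append("\r")   # stuffed carriage return
            count = 0
        out.append(ch)
        count = 0 if ch == "\r" else count + 1
    return "".join(out)

def unstuff(data, limit):
    """Remove a CR that appears exactly `limit` characters after the last CR."""
    out, count = [], 0
    for ch in data:
        if ch == "\r" and count == limit:
            count = 0          # stuffed CR: drop it and restart the count
            continue
        out.append(ch)
        count = 0 if ch == "\r" else count + 1
    return "".join(out)

msg = "A" * 25
wire = stuff(msg, 10)
assert unstuff(wire, 10) == msg   # round-trips
```

With StuffedLength=0 (the default), neither transform applies and data passes through unchanged.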
Property | S/T | Description |
Encoding | ST | To specify encoding for reading source and writing target data, select a code page translation table. Default is OEM. To use ANSI, select ISO8859-1. For Windows CodePage 1252, select CP1252. |
SchemaFile | ST | Used for parsing data and generating a new schema during design time. Select a schema file that contains the schema that you want to use. |
SegmentTerminator | ST | Character that terminates a segment. Select a Segment Terminator from the list. The default is CR(000D). For other options, see Segment Terminators. |
StuffedLength | ST | Specifies the carriage return length. The default value is zero. HL7 encoding rules place no limit on segment length, but many languages and operating systems have terminal-oriented input disciplines that set a limit on the number of characters that can be received before a carriage return character. To work around this limit, use this property: when the specified number of characters has been transmitted without a carriage return, the sending system inserts a carriage return character into the output stream. The receiving system also counts characters from the last carriage return seen; when the limit is reached, it knows that the next character is a stuffed carriage return that must be removed from the data. Note: This property is not required by the HL7 standard, but is used by Lower Layer Protocols. |
FieldSeparator | T | Separates two adjacent data fields within a segment. The integration platform presumes that an HL7 (eDoc) file has a pipe | (007C) between each field. The default is | (007C). For the options, see Segment Terminators. |
ComponentSeparator | T | Separates adjacent components of data fields. The default is ^(005E). For the options, see Segment Terminators. |
RepetitionSeparator | T | Separates multiple occurrences of a field. The default is ~(007E). For the options, see Segment Terminators. |
EscapeCharacter | T | Escape character to use with any field represented by an ST, TX, or FT data type, or to use with the data (fourth) component of the ED data type. For the options, see Segment Terminators. |
SubcomponentSeparator | T | Separates adjacent subcomponents of data fields. The default is &(0026). For the options, see Segment Terminators. |
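The HL7 separator hierarchy above (field, component, subcomponent) can be sketched as a nested split. The sample PID segment and the helper name are illustrative assumptions, not connector output.

```python
# Sketch: split one HL7 segment into fields, components, and
# subcomponents using the default separators described above
# (field |, component ^, subcomponent &). Names are illustrative.

def parse_hl7_segment(segment, field_sep="|", comp_sep="^", sub_sep="&"):
    """Return a list of fields; each field is a list of components,
    and each component is a list of subcomponents."""
    fields = []
    for field in segment.split(field_sep):
        fields.append([comp.split(sub_sep) for comp in field.split(comp_sep)])
    return fields

pid = "PID|1||12345^^^HOSP&1.2.3&ISO||DOE^JOHN"
fields = parse_hl7_segment(pid)
print(fields[0][0][0])                      # segment name: PID
print(fields[5][0][0], fields[5][1][0])     # DOE JOHN
```

Changing the keyword arguments mirrors changing the FieldSeparator, ComponentSeparator, and SubcomponentSeparator properties.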
Property | S/T | Description |
Index | T | Setting this option to true (the default) causes the integration platform to create a list of hypertext-linked keys at the beginning of the HTML document you are writing. The keys are the data in the first field of your target schema. Currently, a limit of 16,000 key entries is allowed. After the HTML document is created, you can open the document in your browser and click the key entries to jump down to the appropriate block of information in the body of the document. |
Mode | T | The mode options allow you to select the format in which the integration platform writes out the HTML file. The following options are available: • List (default) – Writes out the information in a list format. Each field is written to a new line in the document (long text wraps). • Label – Writes out the field name (in bold) and the data for each record. A colon is inserted between the field name and the data. Each field is written to a new line in the document (long text wraps). • Table – Writes out the data in a 2-dimensional matrix format. This format is supported in HTML 3. This format is particularly appropriate for data publishing projects involving such items as part numbers and price lists. |
TableBorder | T | Turns the table border on or off. Off (false) is the default. If set to true, a border is generated around the table (the border attribute of the table element is set to 1). This property works only if the Mode property is set to Table (see Mode above). |
CodePage | T | This translation table determines which encoding to use for reading and writing data. The default is ANSI, the standard in the US. |
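Table mode with TableBorder=true produces output along the lines of the following sketch. The helper function and sample records are illustrative assumptions, not the connector's actual output routine.

```python
# Sketch of Table mode output: records written as a two-dimensional
# HTML table, with border="1" when TableBorder is true. The function
# name and sample data are illustrative.

def records_to_html_table(field_names, records, border=True):
    attrs = ' border="1"' if border else ""
    lines = [f"<table{attrs}>"]
    # header row: one <th> per field name
    lines.append("<tr>" + "".join(f"<th>{n}</th>" for n in field_names) + "</tr>")
    # one row per record, one <td> per field value
    for rec in records:
        lines.append("<tr>" + "".join(f"<td>{v}</td>" for v in rec) + "</tr>")
    lines.append("</table>")
    return "\n".join(lines)

html = records_to_html_table(["Part", "Price"], [("A-100", "9.95"), ("A-200", "14.50")])
print(html.splitlines()[0])   # <table border="1">
```

This matrix layout is why Table mode suits data-publishing projects such as part-number and price lists.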
Property | S/T | Description |
AutoCommit | T | Allows you to automatically commit changes as they are made by each SQL statement, instead of waiting until the end of the transaction. If AutoCommit is set to true, you cannot roll back changes once they have been made. Default is false. |
BulkOperations | T | Specifies whether the integration platform uses bulk operations for faster inserts. If set to true, a bulk add is executed instead of an insert statement for each record, which maximizes speed. The default is false, the slower setting. |
CommitFrequency | T | Controls how often data is committed to the database. The default is zero, which means that data is committed at the end of the transformation, allowing rollback on error. This is the slowest setting. When doing large transformations, this is not practical as it may produce a significant number of transaction log entries. Setting the CommitFrequency to some nonzero value tells the connector to do a database commit after inserting or updating the specified number of records. This keeps the transaction log from getting too large but limits the restartability of the transformation. |
ConstraintDDL | T | Specify additional SQL data definition language statements to be executed after their target table is created. This is similar to the support provided for SQL pass-through in the SQL import connectors. Each line must be a valid ODBC DDL statement. This property has no default. For an example, see ConstraintDDL Example. |
CursorType | S | Type of cursor to use for retrieving records from the source table. Available options are forward only, static, and dynamic. The default setting is forward only. For more details, see Cursors. |
DriverCompletion | ST | Controls whether the driver prompts the user for connection information. The available options: • prompt – prompts the user for all connection information. • complete (default) – prompts the user for any connection information that was not provided. • complete required – prompts the user only for information required to complete the connection. • no prompt – does not prompt the user for any information. |
DriverOptions | ST | Specify valid ODBC connection string options. There is no default. |
Encoding | ST | Type of encoding to use with source and target files. |
IdentifierQuotes | ST | All databases have quoted identifiers. You use quoted identifiers to make a SQL statement parseable and to distinguish between columns and character data in SQL statements. For example, Oracle uses double quotes for column and table names in SQL statements and single quotes for character data. In a SQL statement, enclose identifiers that contain special characters or that match keywords in identifier quote characters (also known as delimited identifiers in SQL-92). For example, the Accounts Receivable identifier is quoted in the following SELECT statement: SELECT * FROM "Accounts Receivable" If you do not use identifier quotes, the parser assumes there are two tables, Accounts and Receivable, and returns a syntax error that they are not separated by a comma. The integration platform provides four options for IdentifierQuotes: Default, None, double quote ("), and single quote ('). |
MaxDataLength | ST | Specifies the maximum number of characters to write to a field. The maximum data length for long data types. Default is 1 MB. Reset this number based on your available memory and target requirements. When this connector requests the column field size for these data types, it checks for a returned value greater than the MaxDataLength value. If the value is greater, the MaxDataLength value is used. Some ODBC drivers have maximum data length limitations. If you choose an ODBC source or target connector and the default setting is not 1 MB, the integration platform sets the value for that particular ODBC driver. Under those conditions, do not set the MaxDataLength property to a higher value. |
ModifyDriverOptions | ST | Allows you to store the ODBC connection information returned from the connector. The default is true. If you select false, you are prompted for connection information each time you run the transformation. |
PrimaryKey | T | Specify a list of field names used to create a primary key. Field names are delimited by commas. If this property contains one or more field names, these names are included in the SQL CREATE statement when the connector is in Replace mode. This property has no default. To use this property, your ODBC driver must support integrity enhancement facility (IEF). Only the more advanced ODBC drivers support this. |
SystemTables | ST | If set to true, this property allows you to see all tables in the database that were created by the DBA. The system table names appear in the table list. Default is false. Note: This property is applicable only if the user is logged onto the database as the database administrator. Only the DBA has access to system tables. |
TransactionIsolation | ST | Allows you to specify an isolation level when reading from or writing to a database table with ODBC. The default is serializable. The ANSI SQL 2 standard defines three specific ways in which serializability of a transaction may be violated: P1 (Dirty Read), P2 (Nonrepeatable Read), and P3 (Phantoms). The isolation levels: • read_uncommitted – Permits P1, P2, and P3. • read_committed – Permits P2 and P3. Does not permit P1. • repeatable_read – Permits P3. Does not permit P1 and P2. • serializable – Does not permit P1, P2 or P3. For further details about TransactionIsolation levels, refer to IBM DB2 database ODBC documentation. |
Views | ST | If set to true (default), allows you to see table views. View names appear in the table list along with table names. |
WhereStmt | S | Provides a pass-through mechanism that lets advanced users construct the Where clause of the SQL query. It can be used as an alternative to writing lengthy query statements in the Query Statement box. Use this property to instruct the SQL database server to filter data based on a condition before the data is sent to the integration platform. Omit the word Where when you enter the clause. This property has no default. Note: When the source connection is a Select statement, do not apply the WhereStmt. Instead, include the Where clause in your Select statement. This property enables data filtering when you select a table. |
UseCursors | T | Allows you to turn cursor support on or off. The default is false. If set to true and the specified ODBC driver does not support cursor inserts, the integration platform uses the SQL INSERT mode of adding records. For exports, cursor support is supposed to enhance the performance of inserting records. This appears to be the case for desktop databases. For database servers, there is no noticeable change in insert speed; they seem to execute prepared queries about as quickly as they handle cursor inserts. Another complication of cursor inserts is that some drivers require the target table to be indexed; otherwise, positioned updates (cursors) are not allowed. Two additional properties in the ODBC export connector address this issue: PrimaryKey and ConstraintDDL. |
ArraySize | T | Determines the number of rows sent to the server at one time. The default value is 1, meaning each row is sent individually. Larger values buffer multiple rows and send them all at once. While this improves speed, it affects error reporting: a server error is not detected or reported until the next batch of records is sent to the server. The maximum value allowed for this property is 100000. While the connector allows the value to be set that high, many drivers have lower limits, and the connector logs a message if the driver forces a lower value for the array size. In addition, the connector does not support arrays when the table contains a LOB-type field, or when the maximum length of a character-type field is longer than 32767 characters; in these cases, a message is logged indicating that the array size has been reduced to 1. Due to the way the connector supports older drivers, array support requires that either BulkOperations be true (which means UseCursors must also be true) or that both be false. If BulkOperations is false and UseCursors is true, the array size is reset to 1 and a message is logged indicating that this condition occurred. |
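The CommitFrequency trade-off described above can be sketched with the standard-library sqlite3 module: a nonzero frequency commits every N records, while zero commits once at the end. The table, data, and function name are illustrative assumptions, not the connector's implementation.

```python
# Sketch of CommitFrequency semantics: commit after every N inserted
# records (nonzero) or once at the end of the transformation (zero).
# Uses sqlite3 only to illustrate; names are assumptions.

import sqlite3

def insert_with_commit_frequency(conn, rows, commit_frequency=0):
    cur = conn.cursor()
    for i, row in enumerate(rows, start=1):
        cur.execute("INSERT INTO items (name) VALUES (?)", (row,))
        # Nonzero CommitFrequency: periodic commits keep the transaction
        # log small, at the cost of full-rollback restartability.
        if commit_frequency and i % commit_frequency == 0:
            conn.commit()
    conn.commit()  # CommitFrequency=0 behavior: single commit at the end

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE items (name TEXT)")
insert_with_commit_frequency(conn, ["a", "b", "c", "d", "e"], commit_frequency=2)
print(conn.execute("SELECT COUNT(*) FROM items").fetchone()[0])   # 5
```

With `commit_frequency=0`, an error midway would leave the whole batch rolled back; with a nonzero value, records committed before the error remain in the table.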
Property | Use |
AutoCommit | Allows you to automatically commit changes as they are made by each SQL statement, instead of waiting until the end of the transaction. If AutoCommit is set to true, you cannot roll back changes once they have been made. Default is false. |
DriverCompletion | Controls whether the driver prompts the user for connection information. The available options: • prompt – prompts the user for all connection information. • complete (default) – prompts the user for any connection information that was not provided. • complete required – prompts the user only for information required to complete the connection. • no prompt – does not prompt the user for any information. |
DriverOptions | Specify valid ODBC connection string options. There is no default. |
Encoding | Allows you to select the type of encoding used with your target file. |
IdentifierQuotes | All databases have quoted identifiers. You use quoted identifiers to make a SQL statement parseable and to distinguish between columns and character data in SQL statements. For example, Oracle uses double quotes for column and table names in SQL statements and single quotes for character data. In a SQL statement, enclose identifiers that contain special characters or that match keywords in identifier quote characters (also known as delimited identifiers in SQL-92). For example, the Accounts Receivable identifier is quoted in the following SELECT statement: SELECT * FROM "Accounts Receivable" If you do not use identifier quotes, the parser assumes there are two tables, Accounts and Receivable, and returns a syntax error that they are not separated by a comma. The integration platform provides four options for IdentifierQuotes: Default, None, double quote ("), and single quote ('). |
MaxDataLength | Specifies the maximum number of characters to write to a field. The maximum data length for long data types. Default is 1 MB. Reset this number based upon your available memory and target requirements. Some ODBC drivers have maximum data length limitations. If you choose an ODBC source or target connector and the default setting is not 1 MB, the integration platform sets the value for that particular ODBC driver. Under those conditions, do not set the MaxDataLength property to a higher value. |
ModifyDriverOptions | Allows you to store the ODBC connection. Default is true. If you select false, you are prompted for connection information each time you run the transformation. |
SQL Log | SQL log file. The default is sql.log in the default installation directory. To use a different log, browse to the file, or enter the path and file name. |
SQL Output | Allows you to specify bound or unbound mode and whether or not to write SQL statements to a SQL log. Keep in mind that bound mode is faster, as bind variables are used. Select from the following: • Target Only (default) - Use bound mode, which uses bind variables. SQL statements are sent to the target only. • Target Only (Unbound Mode) - Use unbound mode. Does not use bind variables and sends the literal SQL statement to the database engine. SQL statements are sent to the target and not to the SQL log file specified in the SQL Log property. • Target and SQL Log - Sends SQL statements to the target and the SQL log file specified in the SQL Log property. • SQL Log Only - Sends SQL statements only to the SQL log file specified in the SQL Log property. |
SystemTables | If set to true, this property allows you to see all tables in the database that were created by the DBA. The system table names appear in the table list. Default is false. Note: This property is applicable only if the user is logged onto the database as the database administrator. Only the DBA has access to system tables. |
TransactionIsolation | Allows you to specify an isolation level to use when reading from or writing to a database table with ODBC. The default is serializable. The ANSI SQL 2 standard defines three specific ways in which serializability of a transaction may be violated: P1 (Dirty Read), P2 (Nonrepeatable Read), and P3 (Phantoms). The isolation levels: • read_uncommitted – Permits P1, P2, and P3. • read_committed – Permits P2 and P3. Does not permit P1. • repeatable_read – Permits P3. Does not permit P1 and P2. • serializable – Does not permit P1, P2, or P3. For further details about TransactionIsolation levels, refer to Microsoft ODBC SDK documentation. |
UpdateNullFields | Null values are sent to the database when inserting or updating records. The default is true. If you select false, null values are not sent to the database when you insert or update records. This forces the connector to operate in unbound mode, which may cause slower performance. |
Views | If set to true (default), allows you to see table views. View names appear in the table list along with table names. |
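The IdentifierQuotes behavior described above — wrapping an identifier so the parser treats "Accounts Receivable" as one name rather than two tables — can be sketched with a small quoting helper. The function name is an illustrative assumption, not a platform API.

```python
# Sketch of identifier quoting: wrap an identifier in the configured
# quote character, doubling any embedded quote characters, so the SQL
# parser reads it as a single delimited identifier (SQL-92).

def quote_identifier(name, quote='"'):
    """Wrap `name` in identifier quotes, doubling embedded quotes."""
    return quote + name.replace(quote, quote * 2) + quote

stmt = "SELECT * FROM " + quote_identifier("Accounts Receivable")
print(stmt)   # SELECT * FROM "Accounts Receivable"
```

Passing `quote="'"` mirrors the single-quote option; without quoting, the space in the name would make the statement unparseable.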
Property | S/T | Description |
AutoCommit | T | Allows you to automatically commit changes as they are made by each SQL statement, instead of waiting until the end of the transaction. If AutoCommit is set to true, you cannot roll back changes once they have been made. Default is false. |
BulkOperations | T | Specifies whether the integration platform uses bulk operations for faster inserts. If set to true, a bulk add is executed instead of an insert statement for each record, which maximizes speed. The default is false, the slower setting. |
CommitFrequency | T | Controls how often data is committed to the database. The default is zero, which means that data is committed at the end of the transformation, allowing rollback on error. This is the slowest setting. When doing large transformations, this is not practical as it may produce a significant number of transaction log entries. Setting the CommitFrequency to some nonzero value tells the connector to do a database commit after inserting or updating the specified number of records. This keeps the transaction log from getting too large but limits the restartability of the transformation. |
ConstraintDDL | T | Specify additional SQL data definition language statements to be executed after their target table is created. This is similar to the support provided for SQL pass-through in the SQL import connectors. Each line must be a valid ODBC DDL statement. This property has no default. For an example, see ConstraintDDL Example. |
CursorType | S | Type of cursor to use for retrieving records from the source table. Available options are forward only, static, and dynamic. The default setting is forward only. For more details, see Cursors. |
DriverCompletion | ST | Controls whether the driver prompts the user for connection information. The available options: • prompt – prompts the user for all connection information. • complete (default) – prompts the user for any connection information that was not provided. • complete required – prompts the user only for information required to complete the connection. • no prompt – does not prompt the user for any information. |
DriverOptions | ST | Specify valid ODBC connection string options. There is no default. |
Encoding | ST | Type of encoding to use with source and target files. |
IdentifierQuotes | ST | All databases have quoted identifiers. You use quoted identifiers to make a SQL statement parseable and to distinguish between columns and character data in SQL statements. For example, Oracle uses double quotes for column and table names in SQL statements and single quotes for character data. In a SQL statement, enclose identifiers that contain special characters or that match keywords in identifier quote characters (also known as delimited identifiers in SQL-92). For example, the Accounts Receivable identifier is quoted in the following SELECT statement: SELECT * FROM "Accounts Receivable" If you do not use identifier quotes, the parser assumes there are two tables, Accounts and Receivable, and returns a syntax error that they are not separated by a comma. The integration platform provides four options for IdentifierQuotes: Default, None, double quote ("), and single quote ('). |
MaxDataLength | ST | Specifies the maximum number of characters to write to a field. The maximum data length for long data types. Default is 1 MB. Reset this number based on your available memory and target requirements. When this connector requests the column field size for these data types, it checks for a returned value greater than the MaxDataLength value. If the value is greater, the MaxDataLength value is used. Some ODBC drivers have maximum data length limitations. If you choose an ODBC source or target connector and the default setting is not 1 MB, the integration platform sets the value for that particular ODBC driver. Under those conditions, do not set the MaxDataLength property to a higher value. |
ModifyDriverOptions | ST | Allows you to store the ODBC connection. Default is true. If you select false, you are prompted for connection information each time you run the transformation. |
PrimaryKey | T | Specify a list of field names used to create a primary key. Field names are delimited by commas. If this property contains one or more field names, these names are included in the SQL CREATE statement when the connector is in replace mode. This property has no default. To use this property, your ODBC driver must support integrity enhancement facility (IEF). Only the more advanced ODBC drivers support this. |
SystemTables | ST | If set to true, this property allows you to see all tables in the database that were created by the DBA. The system table names appear in the table list. Default is false. Note: This property is applicable only if the user is logged onto the database as the database administrator. Only the DBA has access to system tables. |
TransactionIsolation | ST | Allows you to specify an isolation level to use when reading from or writing to a database table with ODBC. The default is serializable. The ANSI SQL 2 standard defines three specific ways in which serializability of a transaction may be violated: P1 (Dirty Read), P2 (Nonrepeatable Read), and P3 (Phantoms). The isolation levels: • read_uncommitted – Permits P1, P2, and P3. • read_committed – Permits P2 and P3. Does not permit P1. • repeatable_read – Permits P3. Does not permit P1 and P2. • serializable – Does not permit P1, P2 or P3. For further details about TransactionIsolation levels, refer to IBM DB2 Universal Database ODBC documentation. |
Views | ST | If set to true (default), allows you to see table views. View names appear in the table list along with table names. |
WhereStmt | S | Provides a pass-through mechanism for SQL connectors where advanced users can construct the Where clause of the SQL query. It is also used as an alternative to writing lengthy query statements in the Query Statement box. Consider using this statement to instruct the SQL database server to filter data based upon a condition before it is sent to the integration platform. Omit Where when you enter the clause. This property has no default. Note: When the source connection is a Select statement, do not apply the WhereStmt. Instead, include the Where clause in your Select statements. This property enables data filtering when you select a table. |
ArraySize | T | Determines the number of rows to be sent to the server at one time. The default value is 1, meaning each row is individually sent to the server. Larger values will buffer multiple rows and send them all at once. While this improves the speed, it affects error reporting (a server error won't be detected/reported until the next batch of records is sent to the server). The maximum value allowed for this property is 100000. While the connector allows the value to be set that high, many drivers have lower limits. The connector will log a message indicating if the driver is forcing a lower value for the array size. In addition, the connector does not support arrays when there is a LOB-type field in the table, or when the (maximum) length of a character-type field is longer than 32767 characters. In these cases, a message will be logged indicating the array size has been reduced to 1. Due to the way the connector attempts to support older drivers, the array support requires either BulkOperations be true (which means UseCursors must also be true), or both must be false. The condition where BulkOperations is false and UseCursors is true will cause the array size to be reset to 1, with a message logged indicating this condition occurred. |
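The trade-off that ArraySize describes (one round trip per row versus buffered batches) can be sketched with Python's standard sqlite3 module. This is an illustration only, not how the connector is configured; sqlite3's executemany stands in for the driver-level array binding:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (id INTEGER, name TEXT)")
rows = [(i, "name%d" % i) for i in range(10)]

# ArraySize = 1 (the default): each row is sent individually, so a
# server error would surface immediately for the offending row.
for row in rows:
    conn.execute("INSERT INTO t VALUES (?, ?)", row)

# ArraySize > 1: rows are buffered and sent as one batch. Faster, but an
# error is not reported until the whole batch reaches the server.
conn.executemany("INSERT INTO t VALUES (?, ?)", rows)

count = conn.execute("SELECT COUNT(*) FROM t").fetchone()[0]
print(count)  # 20
```

The same reasoning explains why the connector logs a message and falls back to an array size of 1 for LOB fields: a buffered batch cannot stream large values row by row.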
Property | Use |
AutoCommit | Allows you to automatically commit changes as they are made by each SQL statement, instead of waiting until the end of the transaction. If AutoCommit is set to true, you cannot roll back changes once they have been made. Default is false. |
DriverCompletion | Allows you to control the driver prompt for information. The options are Prompt, Complete (default), Complete Required, and No Prompt. • Prompt option: Asks the user for all information. • Complete option: Asks the user for any information they forgot to enter. • Complete Required option: Asks the user only for information required to complete the connection. • No Prompt option: Does not ask the user for any information. |
DriverOptions | Specify valid ODBC connect string options. There is no default. |
Encoding | Allows you to select the type of encoding used with your target file. |
IdentifierQuotes | All databases have what are called quoted identifiers. You use quoted identifiers to make the SQL statement parseable and to distinguish between columns and character data in SQL statements. For example, Oracle uses double quotes for column and table names in SQL statements and uses single quotes for character data. In a SQL statement, you should enclose identifiers that contain special characters or that match keywords in identifier quote characters (also known as delimited identifiers in SQL-92). For example, the Accounts Receivable identifier is quoted in the following SELECT statement: SELECT * FROM "Accounts Receivable" If you do not use identifier quotes, the parser assumes there are two tables, Accounts and Receivable, and returns a syntax error that they are not separated by a comma. The integration platform provides four options for IdentifierQuotes: Default, None, ", and '. |
MaxDataLength | Specifies the maximum number of characters to write to a field. The maximum data length for long data types. Default is 1 MB. Reset this number based upon your available memory and target requirements. Some ODBC drivers have maximum data length limitations. If you choose an ODBC source or target connector and the default setting is not 1 MB, the integration platform sets the value for that particular ODBC driver. Under those conditions, do not set the MaxDataLength property to a higher value. |
ModifyDriverOptions | Allows you to store the ODBC connection. Default is true. If you select false, you are prompted for connection information each time you run the transformation. |
SQL Log | The default is sql.log in the default installation directory. To use a different log, browse to the file, or enter the path and file name. |
SQL Output | Allows you to specify bound or unbound mode and whether or not to write SQL statements to a SQL log. Keep in mind that bound mode is faster, as bind variables are used. Select from the following: • Target Only (default) - Use bound mode, which uses bind variables. SQL statements are sent to the target only. • Target Only (Unbound Mode) - Use unbound mode. Does not use bind variables and sends the literal SQL statement to the database engine. SQL statements are sent to the target and not to the SQL log file specified in the SQL Log property. • Target and SQL Log - Sends SQL statements to the target and the SQL log file specified in the SQL Log property. • SQL Log Only - Sends SQL statements only to the SQL log file specified in the SQL Log property. |
SystemTables | If set to true, this property allows you to see all tables created by the DBA in the database. The system table names appear in the table list. Default is false. Note: This property is applicable only if the user is logged onto the database as the database administrator. Only the DBA has access to system tables. |
TransactionIsolation | Allows you to specify an isolation level to use when reading from or writing to a database table with ODBC. The default is serializable. The ANSI SQL 2 standard defines three specific ways in which serializability of a transaction may be violated: P1 (Dirty Read), P2 (Nonrepeatable Read), and P3 (Phantoms). The isolation levels: • read_uncommitted – Permits P1, P2, and P3. • read_committed – Permits P2 and P3. Does not permit P1. • repeatable_read – Permits P3. Does not permit P1 and P2. • serializable – Does not permit P1, P2, or P3. For further details about TransactionIsolation levels, refer to the Microsoft ODBC SDK documentation. |
UpdateNullFields | Null values are sent to the database when inserting or updating records. The default is true. If you select false, null values are not sent to the database when you insert or update records. This forces the connector to operate in unbound mode, which may cause slower performance. |
Views | If set to true (default), allows you to see table views. View names appear in the table list along with table names. |
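The AutoCommit distinction above can be sketched with Python's standard sqlite3 module (an illustration under assumed semantics, not the connector's implementation): with manual commits, changes can still be rolled back; in autocommit mode, each statement is final as soon as it executes.

```python
import sqlite3

# Manual-commit mode (AutoCommit = false): changes can be rolled back.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (x INTEGER)")
conn.commit()
conn.execute("INSERT INTO t VALUES (1)")
conn.rollback()                      # the insert is undone
remaining = conn.execute("SELECT COUNT(*) FROM t").fetchone()[0]

# Autocommit mode (AutoCommit = true): each statement commits
# immediately, so a later rollback has nothing to undo.
auto = sqlite3.connect(":memory:", isolation_level=None)
auto.execute("CREATE TABLE t (x INTEGER)")
auto.execute("INSERT INTO t VALUES (1)")
auto.rollback()                      # no-op: the insert is already committed
kept = auto.execute("SELECT COUNT(*) FROM t").fetchone()[0]

print(remaining, kept)  # 0 1
```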
Property | ST | Description |
MessageFile | T | You can designate a message file to write errors to when loading a database. To select the message file, navigate to the drive, directory path, and file. Then click OK. |
Null Indicators | T | Generate Null indicators. The default is true. |
OtherOptions | T | Use the OtherOptions property to enter additional text options. Enter the options in the text box and click OK. |
SaveCount | T | Set the SaveCount property to prevent total loss of data from a failed load operation. SaveCount creates a consistency point after the specified number of rows is loaded; if the LOAD operation then fails, the rows saved before the failure are not rolled back. The benefit of specifying this parameter is realized only in a recovery situation, where you can restart the load from a consistency point. The default for SaveCount is zero, indicating no consistency points. |
System Type | T | Type of DB2 system. The default is "other", which includes Linux, DOS, and Windows. The other option is MVS, for MVS (EBCDIC) DB2. |
WarnCount | T | Use WarnCount to set the number of warnings the utility can return before a load operation is forced to terminate. The load operation stops after the WarnCount number is reached. By using WarnCount, you can fix problems that occur early in the process without waiting for the entire load to complete. The default is zero. |
Control File | ST | File containing load commands. There is no default. |
RecordSeparator | ST | A DB2 Loader target file is presumed to have a carriage return-line feed (CR-LF) between records (default setting). To specify other characters for a record separator, click RecordSeparator and select the desired record separator in the list. The choices are carriage return-line feed (CR-LF), line feed (LF), and no record separator (None). Note: When the unload file is from a mainframe system, the data is in EBCDIC and there is no record separator. In these cases, the record separator is saved and loaded as None. When the unload file is not from a mainframe system, the record separator property can be set, and it saves and loads correctly. |
CodePage | ST | This translation table determines which encoding to use for reading and writing data. The default is ANSI, the standard in the US. |
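A rough sketch of the RecordSeparator options in Python (the file contents and the 4-byte record length here are invented for illustration): CR-LF and LF split non-mainframe unload files into records, while mainframe EBCDIC unloads have no separator at all and must be sliced by fixed record length.

```python
# Hypothetical unload data for each RecordSeparator setting.
crlf_data = b"REC1\r\nREC2\r\nREC3\r\n"
fixed_data = b"REC1REC2REC3"   # None: fixed-length records, no separator

def split_records(data, separator, reclen=4):
    """Split raw load-file bytes on a separator, or slice fixed-length
    records when RecordSeparator is None (mainframe unloads)."""
    if separator is None:
        return [data[i:i + reclen] for i in range(0, len(data), reclen)]
    return [rec for rec in data.split(separator) if rec]

print(split_records(crlf_data, b"\r\n"))  # [b'REC1', b'REC2', b'REC3']
print(split_records(fixed_data, None))    # [b'REC1', b'REC2', b'REC3']
```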
Property | ST | Version | Use |
AutoCommit | T | 9.5 | Allows you to automatically commit changes as they are made by each SQL statement, instead of waiting until the end of the transaction. If AutoCommit is set to true, you cannot roll back changes once they have been made. Default is false. |
BulkOperations | T | 9.5 | Use bulk operations for faster insert. This property determines if an insert statement is executed for each record or a bulk add is executed for each record. The default is false, the slower setting. If you want to maximize speed and instruct the integration platform to use a bulk add, change this setting to true. |
CommitFrequency | T | 9.5 | Controls how often data is committed to the database. Default is zero, meaning that data is committed at the end of the transformation, allowing rollback on error. This is the slowest setting. When doing large transformations, this is not practical as it may produce too many transaction log entries. Setting the CommitFrequency to some nonzero value tells the connector to do a database commit after inserting or updating the specified number of records. This keeps the transaction log from getting too large but limits the restartability of the transformation. |
ConstraintDDL | ST | 9.5 | Specify additional SQL data definition language statements to be executed after their target table is created. This is similar to support provided for SQL pass-through in the SQL import connectors. Each line must be a valid ODBC DDL statement. This property has no default. For an example, see ConstraintDDL Example. |
CursorType | S | 9.5 | Type of cursor to use for retrieving records from the source table. Available options are Forward Only, Static, and Dynamic. The default setting is Forward Only. For more details, see CursorType. |
DriverCompletion | ST | 9.5 | Allows you to control the driver prompt for information. The options available are Prompt, Complete (default), Complete Required, and No Prompt. The Prompt option asks the user for all information. The Complete option asks the user for any information they forgot to enter. The Complete Required option asks the user only for information required to complete the connection. The No Prompt option does not ask the user for any information. |
DriverOptions | ST | 9.5 | Enter any valid ODBC connect string options here. There is no default. |
Encoding | ST | 9.5 | Type of encoding to use with source and target files. |
IdentifierQuotes | ST | 9.5 | All databases have what are called quoted identifiers. You use quoted identifiers to make the SQL statement parseable and to distinguish between columns and character data in SQL statements. For example, Oracle uses double quotes for column and table names in SQL statements and uses single quotes for character data. In a SQL statement, you should enclose identifiers that contain special characters or that match keywords in identifier quote characters (also known as delimited identifiers in SQL-92). For example, the Accounts Receivable identifier is quoted in the following SELECT statement: SELECT * FROM "Accounts Receivable" If you do not use identifier quotes, the parser assumes there are two tables, Accounts and Receivable, and returns a syntax error that they are not separated by a comma. The integration platform provides four options for IdentifierQuotes: Default, None, ", and '. |
MaxDataLength | ST | 9.5 | Specifies the maximum number of characters to write to a field. The maximum data length for long data types. Default is 1 MB. Reset this number based upon your available memory and target requirements. When this connector requests the column field size for these data types, it checks for a returned value greater than the MaxDataLength value. If the value is greater, the MaxDataLength value is used. Some ODBC drivers have maximum data length limitations. If you choose an ODBC source or target connector and the default setting is not 1 MB, the integration platform sets the value for that particular ODBC driver. Under those conditions, do not set the MaxDataLength property to a higher value. |
Maximum Array Size | T | 9.5 | Maximum number of records fetched or inserted with each cursor operation. The default is 1. |
ModifyDriverOptions | ST | 9.5 | Allows you to store the ODBC connection. Default is true. If you select false, you are prompted for connection information each time you run the transformation. |
PrimaryKey | T | 9.5 | Specify a list of field names used to create a primary key. Field names are delimited by commas. If this property contains one or more field names, these names are included in the SQL CREATE statement when the connector is in replace mode. This property has no default. To use this property, your ODBC driver must support integrity enhancement facility (IEF). Only the more advanced ODBC drivers support this. |
SystemTables | ST | 9.5 | If set to true, this property allows you to see all tables created by the DBA in the database. The system table names appear in the table list. Default is false. Note: This property is applicable only if the user is logged onto the database as the database administrator. Only the DBA has access to system tables. |
TransactionIsolation | ST | 9.5 | Allows you to specify any one of four isolation levels when reading from or writing to a database table with ODBC. The default is Serializable. The ANSI SQL 2 standard defines three specific ways in which serializability of a transaction may be violated: P1 (Dirty Read), P2 (Nonrepeatable Read), and P3 (Phantoms). The four isolation levels are as follows: • READ_UNCOMMITTED – Permits P1, P2, and P3. • READ_COMMITTED – Permits P2 and P3. Does not permit P1. • REPEATABLE_READ – Permits P3. Does not permit P1 and P2. • SERIALIZABLE – Does not permit P1, P2, or P3. For further details about TransactionIsolation levels, refer to the IBM DB2 Universal Database ODBC documentation. |
Views | ST | 9.5 | If set to true (default), allows you to see views. View names appear in the table list along with table names. |
WhereStmt | S | 9.5 | Provides a pass-through mechanism for SQL connectors where advanced users can construct the Where clause of the SQL query. It is also used as an alternative to writing lengthy query statements in the Query Statement box. Consider using this statement to instruct the SQL database server to filter data based upon a condition before it is sent to the integration platform. Omit Where when you enter the clause. This property has no default. Note: When the source connection is a Select statement, do not apply the WhereStmt. Instead, include the Where clause in your Select statements. This property enables data filtering when you select a table. |
UseCursors | T | 9.5 | The UseCursors property allows you to turn cursor support on or off. The default is false. If set to true and the specified ODBC driver does not support cursor inserts, the integration platform uses the SQL INSERT mode of adding records. For exports, cursor support is supposed to enhance the performance of inserting records. This appears to be the case for desktop databases. For database servers, there is no noticeable change in insert speed; they seem to execute prepared queries about as quickly as they handle cursor inserts. Another complication of cursor inserts is that some drivers require that the target table be indexed; otherwise, positioned updates (cursors) are not allowed. Two additional properties in the ODBC export connector address this issue: PrimaryKey and ConstraintDDL. |
ArraySize | T | 9.5 | Determines the number of rows to be sent to the server at one time. The default value is 1, meaning each row is individually sent to the server. Larger values will buffer multiple rows and send them all at once. While this improves the speed, it affects error reporting (a server error won't be detected/reported until the next batch of records is sent to the server). The maximum value allowed for this property is 100000. While the connector allows the value to be set that high, many drivers have lower limits. The connector will log a message indicating if the driver is forcing a lower value for the array size. In addition, the connector does not support arrays when there is a LOB-type field in the table, or when the (maximum) length of a character-type field is longer than 32767 characters. In these cases, a message will be logged indicating the array size has been reduced to 1. Due to the way the connector attempts to support older drivers, the array support requires either BulkOperations be true (which means UseCursors must also be true), or both must be false. The condition where BulkOperations is false and UseCursors is true will cause the array size to be reset to 1, with a message logged indicating this condition occurred. |
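The WhereStmt behavior in the table above can be sketched as follows, with sqlite3 standing in for the SQL server (the table and column names are invented): the property holds only the body of the clause, the connector supplies the WHERE keyword when it builds the query, and the server filters rows before they reach the integration platform.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER, amount INTEGER)")
conn.executemany("INSERT INTO orders VALUES (?, ?)",
                 [(1, 50), (2, 150), (3, 300)])

# WhereStmt holds the clause body only; the WHERE keyword is omitted.
where_stmt = "amount > 100"

# The connector effectively appends the clause to its generated query,
# so filtering happens server-side before rows are transferred.
query = "SELECT * FROM orders WHERE " + where_stmt
rows = conn.execute(query).fetchall()
print(rows)  # [(2, 150), (3, 300)]
```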
Property | S/T | Description |
AutoCommit | T | Allows you to automatically commit changes as they are made by each SQL statement, instead of waiting until the end of the transaction. If AutoCommit is set to true, you cannot roll back changes once they have been made. Default is false. |
CommitFrequency | T | Controls how often data is committed to the database. By default, CommitFrequency is zero, which means that data is committed at the end of the transformation. When doing large transformations, this is not practical as it may fill up the transaction logs. Setting the CommitFrequency to some nonzero value tells the connector to do a database commit after the specified number of records have been inserted or updated in the table. This keeps the transaction log from getting too full, but limits the ability to restart the transformation. |
ConstraintDDL | T | Specify additional SQL data definition language statements to be executed after their target table is created. This is similar to support provided for SQL pass-through in the SQL import connectors. Each line must be a valid ODBC DDL statement. This property has no default. For an example, see ConstraintDDL Example. |
DriverCompletion | T | Allows you to control the driver prompt for information. The options are Prompt, Complete (default), Complete Required, and No Prompt. • Prompt option: Asks the user for all information. • Complete option: Asks the user for any information they forgot to enter. • Complete Required option: Asks the user only for information required to complete the connection. • No Prompt option: Does not ask the user for any information. |
DriverOptions | T | Enter any valid ODBC connect string options here. There is no default. |
IdentifierQuotes | T | All databases have what are called quoted identifiers. You use quoted identifiers to make the SQL statement parseable and to distinguish between columns and character data in SQL statements. For example, Oracle uses double quotes for column and table names in SQL statements and uses single quotes for character data. In a SQL statement, you should enclose identifiers that contain special characters or that match keywords in identifier quote characters (also known as delimited identifiers in SQL-92). For example, the Accounts Receivable identifier is quoted in the following SELECT statement: SELECT * FROM "Accounts Receivable" If you do not use identifier quotes, the parser assumes there are two tables, Accounts and Receivable, and returns a syntax error that they are not separated by a comma. The integration platform provides four options for IdentifierQuotes: Default, None, ", and '. |
Maximum Array Size | T | Maximum number of records fetched or inserted with each cursor operation. The default is 1. |
MaxDataLength | T | Specifies the maximum number of characters to write to a field. The maximum data length for long data types. Default is 1 MB. Reset this number based upon your available memory and target requirements. Some ODBC drivers have maximum data length limitations. If you choose an ODBC source or target connector and the default setting is not 1 MB, the integration platform sets the value for that particular ODBC driver. Under those conditions, do not set the MaxDataLength property to a higher value. |
ModifyDriverOptions | T | Allows you to store the ODBC connection. Default is true. If you select false, you are prompted for connection information each time you run the transformation. |
Encoding | T | Allows you to select the type of encoding used with your source and target files. |
PrimaryKey | T | Specify a list of field names used to create a primary key. Field names are delimited by commas. If this property contains one or more field names, these names are included in the SQL CREATE statement when the connector is in replace mode. This property has no default. To use this property, your ODBC driver must support integrity enhancement facility (IEF). Only the more advanced ODBC drivers support this. |
SQL Log | T | The default is sql.log in the default installation directory. To use a different log, browse to the file, or enter the path and file name. |
SystemTables | T | If set to true, this property allows you to see all tables created by the DBA in the database. The system table names appear in the table list. Default is false. Note: This property is applicable only if the user is logged onto the database as the database administrator. Only the DBA has access to system tables. |
TransactionIsolation | T | Allows you to specify any one of four isolation levels when reading from or writing to a database table with ODBC. The default is Serializable. The ANSI SQL 2 standard defines three specific ways in which serializability of a transaction may be violated: P1 (Dirty Read), P2 (Nonrepeatable Read), and P3 (Phantoms). The four isolation levels are as follows: • READ_UNCOMMITTED – Permits P1, P2, and P3. • READ_COMMITTED – Permits P2 and P3. Does not permit P1. • REPEATABLE_READ – Permits P3. Does not permit P1 and P2. • SERIALIZABLE – Does not permit P1, P2, or P3. For further details about TransactionIsolation levels, refer to the Microsoft ODBC SDK documentation. |
UpdateNullFields | T | Null values are sent to the database when inserting or updating records. The default is true. If you select False, null values are not sent to the database when you insert or update records. When set to false, this property forces the connector to operate in unbound mode, which may cause slower performance. |
Views | T | If set to true (default), allows you to see views. View names appear in the table list along with table names. |
BulkOperations | T | Use bulk operations for faster insert. This property determines if an insert statement is executed for each record or a bulk add is executed for each record. The default is false, the slower setting. To maximize speed and instruct the integration platform to use a bulk add, change this setting to true. |
UseCursors | T | The UseCursors property allows you to turn cursor support on or off. The default is false. If set to true and the specified ODBC driver does not support cursor inserts, the integration platform uses the SQL INSERT mode of adding records. For exports, cursor support is supposed to enhance the performance of inserting records. This appears to be the case for desktop databases. For database servers, there is no noticeable change in insert speed; they seem to execute prepared queries about as quickly as they handle cursor inserts. Another complication of cursor inserts is that some drivers require that the target table be indexed; otherwise, positioned updates (cursors) are not allowed. Two additional properties in the ODBC export connector address this issue: PrimaryKey and ConstraintDDL. |
ArraySize | T | Determines the number of rows to be sent to the server at one time. The default value is 1, meaning each row is individually sent to the server. Larger values will buffer multiple rows and send them all at once. While this improves the speed, it affects error reporting (a server error won't be detected/reported until the next batch of records is sent to the server). The maximum value allowed for this property is 100000. While the connector allows the value to be set that high, many drivers have lower limits. The connector will log a message indicating if the driver is forcing a lower value for the array size. In addition, the connector does not support arrays when there is a LOB-type field in the table, or when the (maximum) length of a character-type field is longer than 32767 characters. In these cases, a message will be logged indicating the array size has been reduced to 1. Due to the way the connector attempts to support older drivers, the array support requires either BulkOperations be true (which means UseCursors must also be true), or both must be false. The condition where BulkOperations is false and UseCursors is true will cause the array size to be reset to 1, with a message logged indicating this condition occurred. |
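The IdentifierQuotes distinction above (double quotes for identifiers, single quotes for character data) can be demonstrated with sqlite3, which follows the same SQL-92 convention:

```python
import sqlite3

conn = sqlite3.connect(":memory:")

# The quoted identifier keeps "Accounts Receivable" a single table name;
# without the quotes the parser would see two identifiers and fail.
conn.execute('CREATE TABLE "Accounts Receivable" (invoice TEXT, amount INTEGER)')
conn.execute('INSERT INTO "Accounts Receivable" VALUES (?, ?)', ("A-1", 250))

rows = conn.execute('SELECT * FROM "Accounts Receivable"').fetchall()
print(rows)  # [('A-1', 250)]

# Single quotes delimit character data, not identifiers:
match = conn.execute(
    'SELECT invoice FROM "Accounts Receivable" WHERE invoice = \'A-1\''
).fetchone()
print(match)  # ('A-1',)
```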
Property | ST | Description |
CodePage | ST | Translation table that determines which encoding to use for reading and writing data. The default is ANSI, the standard in the US. |
CommitFrequency | T | Controls how often data is committed to the database. By default, CommitFrequency is zero, which means that data is committed at the end of the transformation. When doing large transformations, this is not practical as it may fill up the transaction logs. Setting the CommitFrequency to some nonzero value tells the connector to do a database commit after the specified number of records have been inserted or updated in the table. This keeps the transaction log from getting too full, but limits the ability to restart the transformation. |
ConstraintDDL | T | Allows you to specify additional SQL data definition language statements to be executed after the target table is created. This is similar to the support provided for SQL pass-through in the SQL import connectors. Each line must be a valid ODBC DDL statement. For example, the following statements create two indices on the table mytable. The first index does not allow duplicates, and the index values are stored in ascending order. The second is a compound index on Field2 and Field3. CREATE UNIQUE INDEX index1 ON mytable (Field1 ASC) CREATE INDEX index2 ON mytable (Field2, Field3) ConstraintDDL is executed only if the replace mode is used for the target. If there are errors, they are written to the error and event log file. During transformation, the Transformation Error dialog box appears. You can ignore the DDL errors and continue the transformation. ConstraintDDL also supports an escape mechanism that allows you to specify DDL in the native SQL of the database management system. Any statement that is preceded by an at sign (@) is sent straight to the DBMS. The following statement is a DDL statement for creating a primary key for the table mytable: @CREATE INDEX pk_mytable ON mytable (Field1, Field2) WITH PRIMARY Some ODBC drivers do not support the SQL extensions needed to create a primary key with the ODBC variant of the SQL CREATE statement. To create a primary key, use native SQL. Note: This option works only in replace mode. |
CursorType | S | The type of cursor to use for retrieving records from the source table. The choices are Forward Only, Static, and Dynamic. The default setting is Forward Only. For more information, see Cursors. |
DataTypes | S | Data types of the fields in the target table. |
MaxDataLength | ST | The maximum data length for long data types. The default is one megabyte, or 1,048,576 bytes. You can reset this number as needed, based on the available memory and target requirements. Some ODBC drivers can handle only limited data lengths. If you choose an application (ODBC) source or target connector and the default setting is not 1 MB, then the integration platform has preset the default for that particular ODBC driver. Do not set the MaxDataLength property any higher when this happens. |
PrimaryKey | ST | Allows you to specify a list of field names that are used to make the primary key. The field names are delimited by commas. If the PrimaryKey property contains one or more field names, these names are included in the SQL CREATE statement when the connector is in replace mode. When using the PrimaryKey property, the ODBC driver must support integrity enhancement facility (IEF). Only advanced ODBC drivers support this. |
Synonyms | ST | If set to true, allows you to see synonyms. The alias names appear in the table list along with the tables. Default is false. |
SystemTables | ST | If set to true, this property allows you to see all tables created by the DBA in the database. The system table names appear in the table list. Default is false. Note: This property is applicable only if the user is logged onto the database as the database administrator. Only the DBA has access to system tables. |
TransactionIsolation | ST | Allows you to specify any one of five different isolation levels when reading from or writing to a database table with ODBC. The default is Serializable. The ANSI SQL 2 standard defines three ways in which serializability of a transaction may be violated: P1 (Dirty Read), P2 (Nonrepeatable Read), and P3 (Phantoms). The four isolation levels are as follows: • READ_UNCOMMITTED - Permits P1, P2, and P3. • READ_COMMITTED - Permits P2 and P3. Does not permit P1. • REPEATABLE_READ - Permits P3. Does not permit P1 and P2. • SERIALIZABLE - Does not permit P1, P2, or P3. For details about transaction isolation levels, see the Microsoft ODBC SDK documentation. |
WhereStmt | S | Provides a pass-through mechanism for SQL connectors so that advanced users can construct the WHERE clause of the SQL query themselves and avoid writing a lengthy query statement. Use this property to instruct the SQL database server to filter the data based on a condition before it is sent to the integration platform. Omit WHERE when you enter the clause. No default exists for this property. Note: When the source connection is a SELECT statement, do not apply the WhereStmt property. Instead, include the WHERE clause in your SELECT statements. This property enables data filtering when you select a table. |
UseCursors | T | Allows you to turn cursor support on or off. The default is False. When UseCursor is true and the specified ODBC driver does not support cursor inserts, the SQL INSERT mode of adding records is used instead. For exports, cursor support is supposed to enhance the performance of record insertion. This appears to be the case for desktop databases but not for database servers, which execute prepared queries about as quickly as they handle cursor inserts. Some drivers require that the target table be indexed to enable positioned updates (cursors). The PrimaryKey and ConstraintDDL properties address this issue for the ODBC export (see details above). |
Views | ST | If set to true (default), allows you to see views. View names appear in the table list along with table names. |
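The relationship between the four ODBC isolation levels and the ANSI SQL 2 phenomena described above can be summarized in a small lookup table. This is an illustrative sketch only, not part of the connector API:

```python
# Which ANSI SQL 2 phenomena each ODBC isolation level permits.
# P1 = Dirty Read, P2 = Nonrepeatable Read, P3 = Phantoms.
PERMITTED_PHENOMENA = {
    "READ_UNCOMMITTED": {"P1", "P2", "P3"},
    "READ_COMMITTED":   {"P2", "P3"},
    "REPEATABLE_READ":  {"P3"},
    "SERIALIZABLE":     set(),
}

def permits(level, phenomenon):
    """Return True if the given isolation level permits the phenomenon."""
    return phenomenon in PERMITTED_PHENOMENA[level]
```

Note how each level is strictly stronger than the one before it: SERIALIZABLE, the default, permits none of the three phenomena.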
Property | ST | Description |
Data Types | T | The only available data type is Text. |
Length | T | Specifies field lengths for data. If you need to change field lengths, reset them in the schema. |
MaxDataLength | T | The maximum data length for long data types. Default is one megabyte, or 1,048,576 bytes. You can reset this number as needed based on the amount of memory available and the target requirements. |
SQL Log | T | The default is sql.log, which is in the default installation directory. To use a different log, either browse to select the file or enter the path and file name. |
SQL Output | T | Allows you to select bound or unbound mode and whether to write SQL statements to a SQL log. SQL Output has these options: • Target Only (the default) - Uses bound mode, which uses bind variables and is faster. SQL statements are sent to the target and not to the SQL log specified by the SQL Log property. • Target Only (Unbound Mode) - Uses unbound mode, which does not use bind variables and sends the literal SQL statement to the database engine. SQL statements are sent to the target and not to the SQL log. • Target and SQL Log - Sends SQL statements to the target and to the SQL log file. • SQL Log Only - Sends SQL statements to the SQL log file only. |
Synonyms | T | If set to true, allows you to see synonyms. The alias names appear in the table list along with the tables. Default is false. |
SystemTables | T | If set to true, this property allows you to see all tables created by the DBA in the database. The system table names appear in the table list. Default is false. Note: This property is applicable only if the user is logged onto the database as the database administrator. Only the DBA has access to system tables. |
UpdateNullFields | T | When set to true (the default), null values are sent to the database during insertion or updating of records. When set to false, null values are not sent to the database, and UpdateNullFields forces the connector to operate in unbound mode, which may result in slower performance. |
Views | T | When set to true (the default), you can see views. The view names appear in the table list along with the tables. |
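The four SQL Output modes route statements to two possible destinations: the target and the SQL log. A minimal sketch of that routing, with the mode labels paraphrased for clarity (the `route` helper is hypothetical, not a connector function):

```python
# Destinations for each SQL Output mode: (send_to_target, send_to_sql_log).
SQL_OUTPUT_MODES = {
    "Target Only (Bound)":   (True,  False),
    "Target Only (Unbound)": (True,  False),
    "Target and SQL Log":    (True,  True),
    "SQL Log Only":          (False, True),
}

def route(mode):
    """Return which destinations receive SQL statements for a given mode."""
    to_target, to_log = SQL_OUTPUT_MODES[mode]
    return {"target": to_target, "sql_log": to_log}
```

The bound/unbound distinction affects only how statements are sent (bind variables versus literal SQL), not where they are sent.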
Keyword | Short Version |
driver | driver |
dsn | dsn |
filedsn | filedsn |
uid | uid |
database | db |
host | host |
pwd | pwd |
server | srvr |
service | serv |
protocol | pro |
client_locale | cloc |
db_locale | dloc |
translationdll | tdll |
translationoption | topt |
connectdatabase | condb |
exclusive | xcl |
cursorbehavior | curb |
savefile | savefile |
options | opt |
description | desc |
enablescrollablecursors | scur |
enableinsertcursors | icur |
optimizeautocommit | oac |
optofc | optofc |
needodbctypesonly | odtyp |
reportkeysetcursors | rkc |
fetchbuffersize | fbc |
describedecimalfloatpoint | ddfp |
donotuselvarchar | dnl |
reportcharcolaswidecharcol | rcwc |
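A connection string written with the short keywords above can be normalized back to the long forms. An illustrative sketch using a subset of the table (the `expand_keywords` helper is hypothetical, not part of the product):

```python
# Map of short connection-string keywords to their long forms
# (subset of the keyword table above).
SHORT_TO_LONG = {
    "db": "database", "srvr": "server", "serv": "service",
    "pro": "protocol", "cloc": "client_locale", "dloc": "db_locale",
    "xcl": "exclusive", "opt": "options", "desc": "description",
}

def expand_keywords(conn_str):
    """Rewrite short keywords in a key=value;key=value string to long forms.
    Keywords not in the map (e.g. 'host', 'dsn') pass through unchanged."""
    parts = []
    for pair in conn_str.split(";"):
        key, _, value = pair.partition("=")
        key = key.strip().lower()
        parts.append(f"{SHORT_TO_LONG.get(key, key)}={value}")
    return ";".join(parts)
```

For example, `db=sysutils;srvr=ol_qadb1` expands to `database=sysutils;server=ol_qadb1`.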
Property | S/T | Description |
CursorType | S | The CursorType option specifies the type of cursor to use for fetching records from the source table. The choices are Forward Only, Static, and Dynamic. The default setting is Forward Only. For more information, see Cursors. |
WhereStmt | S | Provides a pass-through mechanism for SQL connectors where advanced users can construct the Where clause of the SQL query themselves. It can be used as an alternative to writing a lengthy query statement in the Query Statement text box. This might be used to get the SQL database server to filter the data based on a condition before it is sent to Map Designer. Omit "WHERE" when you enter the clause. No default exists for this property. Note: Do not apply the WhereStmt when the source connection is a Select statement. If you are using a Select statement, you should include the Where clause in that statement. The intent of this property is to enable data filtering when selecting a table. |
CommitFrequency | T | This allows you to control how often data is committed to the database. By default, CommitFrequency is zero. This means that data is committed at the end of the transformation, allowing rollback on error. This is the slowest setting. If you are doing large transformations, this is not practical, since it may fill up your transaction logs. Setting the CommitFrequency to some nonzero value tells the connector to do a database commit after the specified number of records have been inserted or updated in the table. This keeps the transaction log from getting too full but limits the restartability of the transformation. |
ConstraintDDL | T | Allows you to specify additional SQL data definition language statements to be executed after the target table is created. This is similar to the support provided for SQL pass-through in the SQL import connectors. Each line must be a valid DDL statement. For example, the statements CREATE UNIQUE INDEX index1 ON mytable (Field1 ASC) and CREATE INDEX index2 ON mytable (Field2, Field3) would create two indexes on the table mytable. The first does not allow duplicates, and the index values are stored in ascending order. The second is a compound index on fields Field2 and Field3. The ConstraintDDL is executed only if replace mode is used for the target. Any errors are written to the error and event log file, and the transformation error dialog box appears during the transformation. If you want to ignore the DDL errors, you can continue the transformation. ConstraintDDL also supports an escaping mechanism that allows users to specify DDL in the native SQL of the DBMS. Any statement that is preceded by an @ is sent straight to the DBMS. For example, the statement @CREATE INDEX pk_mytable ON mytable (Field1, Field2) WITH PRIMARY is a DDL statement for creating a primary key for the table mytable. Note: This option works only in REPLACE mode. |
UseCursors | T | The UseCursors property allows you to turn cursor support on and off. The default is false. If you set the property to true and the specified driver does not support cursor inserts, the SQL INSERT mode of adding records is used. For exports, cursor support is supposed to enhance the performance of inserting records. For database servers, there is no noticeable change in insert speed. They seem to execute prepared queries about as quickly as they handle cursor inserts. |
DriverOptions | ST | Enter Informix client connection information, such as database, server, and protocol, in the format shown in this example: Database=sysutils;Server=ol_qadb1;Protocol=onsoctcp |
DriverCompletion | ST | Allows you to control whether or not the driver prompts you for connection information. The available options are Prompt, Complete, Complete Required, and No Prompt. Prompt asks the user for every individual piece of information. Complete, the default, prompts the user only for information they did not enter. Complete Required prompts the user only for information that is essential to complete the connection. No Prompt does not prompt the user for any information. |
MaxDataLength | ST | The maximum data length for long data types. Default is 1 MB (1,048,576 bytes). You can reset this number as needed based on available memory and target requirements. Some drivers limit the maximum data length they can handle. If you choose a source or target connector and the default setting is not 1 MB, Map Designer has preset the default to match the capabilities of that particular driver. In that case, setting the MaxDataLength property any higher is not recommended. |
PrimaryKey | ST | The PrimaryKey property allows you to specify a list of field names that are used to make the primary key. The field names are delimited by commas. If the PrimaryKey property contains one or more field names, these names are included in the SQL CREATE statement when the connector is in replace mode. There is one additional requirement for using the PrimaryKey property. The driver must support integrity enhancement facility (IEF). Only the more advanced drivers support this. |
Synonyms | ST | If set to true, this property allows you to see synonyms. The alias names appear in the table list along with the tables. Default is false. |
SystemTables | ST | The SystemTables property is applicable only if the user is logged onto the database as the database administrator. Only the DBA has access to SystemTables. If set to true, this property allows you to see all the tables created by the DBA. The system table names appear in the table list. The default is false. |
TransactionIsolation | ST | The TransactionIsolation option allows you to specify any one of five isolation levels when reading from or writing to a database table. The default for version 3.x, 3.5, and IBM DB2 Universal Database is Serializable. The ANSI SQL 2 standard defines three specific ways in which serializability of a transaction may be violated: P1 (Dirty Read), P2 (Nonrepeatable Read), and P3 (Phantoms). The five isolation levels are: • READ_UNCOMMITTED – Permits P1, P2, and P3. • READ_COMMITTED – Permits P2 and P3. Does not permit P1. • REPEATABLE_READ – Permits P3. Does not permit P1 and P2. • SERIALIZABLE – Does not permit any of P1, P2, and P3. • VERSIONING – Provides SERIALIZABLE transactions, but does so without a significant impact on concurrency. |
Views | ST | If set to true, this property allows you to see views. The view names appear in the table list along with the tables. Default is false. |
CodePage | ST | The code page translation table tells Map Designer which encoding to use for reading and writing data. The default is ANSI, which is the standard in the US. The following code pages are available: • ANSI • OEM • 0037 US (EBCDIC) • 0273 Germany (EBCDIC) • 0277 Norway (EBCDIC) • 0278 Sweden (EBCDIC) • 0280 Italy (EBCDIC) • 0284 Spain (EBCDIC) • 0285 UK (EBCDIC) • 0297 France (EBCDIC) • 0437 MSDOS United States • 0500 Belgium (EBCDIC) • 0850 MSDOS Multilingual (Latin 1) • 0860 MSDOS Portuguese • 0861 MSDOS Icelandic • 0863 MSDOS Canadian French • 0865 MSDOS Nordic • 1051 Roman-8 |
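The ConstraintDDL escaping rule described above — statements prefixed with @ are sent straight to the DBMS in its native SQL — can be sketched as follows. This is a hypothetical helper assuming one statement per line, not the connector's internal code:

```python
def split_constraint_ddl(ddl_text):
    """Classify ConstraintDDL lines: '@'-prefixed lines are passed through
    to the DBMS verbatim (native SQL); all others are ordinary DDL."""
    ordinary, native = [], []
    for line in ddl_text.splitlines():
        line = line.strip()
        if not line:
            continue  # skip blank lines
        if line.startswith("@"):
            native.append(line[1:])  # strip the '@' escape marker
        else:
            ordinary.append(line)
    return ordinary, native
```

Ordinary statements are validated as generic DDL, while native statements bypass that layer, which is what allows DBMS-specific syntax such as WITH PRIMARY.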
Keyword | Short Version |
driver | driver |
dsn | dsn |
filedsn | filedsn |
uid | uid |
database | db |
host | host |
pwd | pwd |
server | srvr |
service | serv |
protocol | pro |
client_locale | cloc |
db_locale | dloc |
translationdll | tdll |
translationoption | topt |
connectdatabase | condb |
exclusive | xcl |
cursorbehavior | curb |
savefile | savefile |
options | opt |
description | desc |
enablescrollablecursors | scur |
enableinsertcursors | icur |
optimizeautocommit | oac |
optofc | optofc |
needodbctypesonly | odtyp |
reportkeysetcursors | rkc |
fetchbuffersize | fbc |
describedecimalfloatpoint | ddfp |
donotuselvarchar | dnl |
reportcharcolaswidecharcol | rcwc |
Property | S/T | Description |
MaxDataLength | T | The maximum data length for long data types. Default is 1 MB: 1,048,576. You can reset this number as you choose based on your available memory capabilities and Target requirements. |
SQL Log | T | The default is sql.log in the default installation directory. If you want to use a different log, click once in the value cell, then select the ellipsis to browse to the location of the file, or enter the path and file name. |
UpdateNullFields | T | Null values are sent to the database when inserting or updating records. The default is true. If you select False, null values are not sent to the database when you insert or update records. When set to false, this property forces the connector to operate in unbound mode, which may cause slower performance. |
SQL Output | T | Allows you to select bound or unbound mode and whether you want to write SQL statements to a SQL log or not. Keep in mind that bound mode is faster, since bind variables are used. Target Only is the default output: • Target Only – Uses bound mode, which uses bind variables. SQL statements are sent to the Target and not to the SQL log specified in the SQL Log property. • Target Only (Unbound Mode) – Uses unbound mode, which does not use bind variables and sends the literal SQL statement to the database engine. SQL statements are sent to the Target and not to the SQL log specified in the SQL Log property. • Target and SQL Log – Sends SQL statements to the Target and to the SQL log specified in the SQL Log property. • SQL Log Only – Sends SQL statements only to the SQL log file specified in the SQL Log property. |
Synonyms | T | If set to true, this property allows you to see synonyms. The alias names appear in the table list along with the tables. Default is false. |
SystemTables | T | The SystemTables property is applicable only if you are logged onto the database as the database administrator. Only the DBA has access to SystemTables. If set to true, this property allows you to see all the tables created by the DBA. The system table names appear in the table list. The default is false. |
Views | T | If set to true, this property allows you to see views. The view names appear in the table list along with the tables. Default is true. |
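The effect of UpdateNullFields can be pictured as filtering the field set before an update is built. A sketch under that simplification (the real connector works at the ODBC binding level, switching to unbound mode when the property is false):

```python
def fields_to_update(record, update_null_fields=True):
    """Return the fields sent to the database for an insert or update.
    With UpdateNullFields=false, null (None) values are omitted."""
    if update_null_fields:
        return dict(record)
    return {k: v for k, v in record.items() if v is not None}
```

With the default of true, every field including nulls is sent; setting the property to false drops the null fields at the cost of unbound-mode performance.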
Property | S/T | Description |
ByteOrder | S | Allows you to specify the byte order of Unicode (wide) characters. The default is Auto, which is determined by the architecture of your computer. The list box options are Auto (default), Little Endian, and Big Endian. Little Endian byte order, generally used by Intel machines and DEC Alphas, stores the least significant byte of a value at the lowest memory address. Big Endian byte order, used by IBM 370 computers, Motorola microprocessors, and most RISC-based systems, stores the bytes in the same order as the binary representation. |
ProgramVariables | S | Type the program variable. |
ReportReadingScriptFile | S | The script file used for the source structure. The default file is djwinlog.djp. |
Encoding | S | Select the type of encoding used with source and target files. The default encoding is OEM. Shift-JIS encoding is meaningful only in Japanese operating systems. |
StartOffset | S | If your source data file starts with characters that need to be excluded from the transformation, set the StartOffset option to specify the byte of the file at which to begin. The default value is zero. The correct value may be determined by using the Hex Browser. Note: This property is set in number of bytes, not characters. |
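The byte-order distinction above is easy to demonstrate with Python's struct module, where `<` packs little endian (least significant byte first) and `>` packs big endian. Illustrative only:

```python
import struct

value = 0x1234  # a 16-bit value with two distinct bytes

little = struct.pack("<H", value)  # least significant byte (0x34) first
big    = struct.pack(">H", value)  # most significant byte (0x12) first

# Round-trip: unpacking with the matching byte order recovers the value.
recovered = struct.unpack("<H", little)[0]
```

The same two bytes appear in opposite orders, which is why a Unicode file written on one architecture may need the ByteOrder property set explicitly when read on another.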
Property | S/T | Description |
Batch Size | S | Number of source records the connector caches before processing them in a map. Default is 100. |
Read Nulls | S | If set to True, the connector reads null values in the source file. If set to False, null values in the source file are skipped. The null field is added to the schema, but it is skipped when the map is run. Default is True. |
Read Empty Tokens | S | If set to True, the connector reads empty array ([]) and empty object ({}) values in the source file. If set to False, empty array ([]) and empty object ({}) values in the source file are skipped. The schema is updated with a reference to a new record and the JSON INPUT field holds the {} and [] characters, respectively. The characters in the JSON INPUT field are not mapped to the target. Default is True. |
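The Read Nulls and Read Empty Tokens behavior can be approximated by filtering a parsed JSON record. This is a simplified sketch; the actual connector updates the schema and skips the values at map time rather than dropping keys:

```python
import json

def filter_record(record, read_nulls=True, read_empty_tokens=True):
    """Drop null and/or empty-array/empty-object values from one JSON
    record, mimicking Read Nulls=False and Read Empty Tokens=False."""
    out = {}
    for key, value in record.items():
        if value is None and not read_nulls:
            continue  # skip nulls when Read Nulls is false
        if value in ([], {}) and not read_empty_tokens:
            continue  # skip [] and {} when Read Empty Tokens is false
        out[key] = value
    return out

record = json.loads('{"id": 1, "name": null, "tags": [], "meta": {}}')
```

With both properties at their default of True, the record passes through unchanged.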
Property | Use |
Auto Type Text Fields | Automatic styling changes the way Lotus 123 data is read or written. For example, when Delimited ASCII is the source connector, all data is "Text" unless you change the Delimited ASCII source property AutomaticStyling to true, or change the "Type" from Text to a numeric data type in the Delimited ASCII source schema grid. (For details on these options, see the ASCII Delimited section.) When that data is transformed to a spreadsheet, the user may need the fields that contain numbers to be formatted as Numeric (to perform calculations in the spreadsheet). The default for this option is true. To transform the number fields as Text in the target Lotus 123 file, change the setting to false. |
Header Record Row | For Lotus 123 source files, this is the row number of any column headings. For Lotus 123 target files, this sets the row number where the integration platform writes column headings. If there are no column headings in the data, leave the value at the default setting of 0. If the column heading is the first record (row 1), change the value to 1. |
Worksheet | For versions of Lotus 123 that support multiple worksheets within one file (often referred to as 3D spreadsheets), this option allows you to specify which of those worksheets that the integration platform uses for the transformation. If you want the integration platform to read the first worksheet, leave the default value of "0". To read the second sheet, change the setting to "1"; for the third sheet, change the setting to "2" and so on. |
CodePage | This translation table determines which encoding to use for reading and writing data. The default is ANSI, the standard in the US. |
Property | S/T | Description |
CodePage | ST | This translation table determines which encoding to use for reading and writing data. The default is ANSI, the standard in the US. |
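The CodePage choice determines how bytes map to characters. Python's cp037 codec corresponds to the 0037 US (EBCDIC) table, and cp1252 is the Windows ANSI code page, so the difference is easy to see (illustrative only):

```python
text = "ABC"

ansi_bytes   = text.encode("cp1252")  # ANSI (Windows-1252): 0x41 0x42 0x43
ebcdic_bytes = text.encode("cp037")   # 0037 US EBCDIC:      0xC1 0xC2 0xC3

# Decoding with the matching code page recovers the original text.
round_trip = ebcdic_bytes.decode("cp037")
```

Reading EBCDIC bytes with the ANSI table (or vice versa) produces garbage, which is why the CodePage property must match the data's actual encoding.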
Property | ST | Version | Description |
BatchResponse | T | 4 | Optional. Creates a Batch Response file, which is a reporting mechanism for the connector. The file provides detailed results for each object in a batch where the batch size is greater than 1. Obtaining detailed results is useful in the following cases: • You need to capture system-generated object IDs for use in future updates. • You need to correlate an error with its object and have enough information about the error for exception handling and error diagnosis. For more information, see "Batch Response File". |
BatchSize | S | 4 | Optional. Number of source records the connector caches before processing them in a map. Default is zero. |
Flush Frequency | T | 4 | This setting cannot be changed; leave it at the default. |
Page Size | S | 4 | The number of records stored in memory. |
DestinationEndpoint | ST | 4 | The receiver of the message. |
SourceEndpoint | ST | 4 | The sender of the message. |
SourceEndpointUser | ST | 4 | The user who sends the message. The endpoint user is defined by the Windows account of the endpoint user in a Domain\User format. |
ShowChildren | ST | 4 | Determines whether to add the child record types to the list of available record types. • False – List only the parent-level record types. • True – Include child record types. |
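BatchSize caching — accumulating source records and processing them in groups — can be sketched with a small generator. A hypothetical helper, not the connector's internal code:

```python
def batched(records, batch_size):
    """Yield records in groups of batch_size. A batch_size of zero
    disables caching (each record is its own batch), mirroring the
    BatchSize default of zero."""
    if batch_size <= 0:
        for record in records:
            yield [record]
        return
    batch = []
    for record in records:
        batch.append(record)
        if len(batch) == batch_size:
            yield batch
            batch = []
    if batch:
        yield batch  # flush the final partial batch
```

Larger batches reduce per-record overhead at the cost of memory, which is the usual trade-off when tuning this property.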
Entity Name | Create | Read | Update | Delete |
ASN | No | Yes | No | No |
Address | Yes | Yes | Yes | Yes |
CashDisc | No | Yes | No | No |
ChartofAccounts | Yes | Yes | No | No |
ConfigTable | No | Yes | No | No |
ContactPersons | Yes | Yes | No | Yes |
Customer | Yes | Yes | Yes | Yes |
CustomerGroups | No | Yes | No | No |
DeliveryModes | No | Yes | No | No |
Dimensions | Yes | Yes | No | No |
ExchangeRates | No | Yes | No | No |
InventColor | No | Yes | No | No |
InventDimCombination | No | Yes | No | No |
InventItemGroup | No | Yes | No | No |
InventLocation | No | Yes | No | No |
InventSize | No | Yes | No | No |
InventoryOnhand | No | Yes | No | No |
InventoryTransactions | No | Yes | No | No |
Item | Yes | Yes | No | No |
PaymentTerms | No | Yes | No | No |
PurchaseRequisition | No | Yes | No | No |
SalesInvoice | No | Yes | No | No |
SalesOrder | Yes | Yes | No | No |
Unit | No | Yes | No | No |
UnitConvert | No | Yes | No | No |
Property | S/T | Description |
Authentication | ST | The authentication protocol for your source or target data: • Default - Uses Windows cached credentials. If necessary, add or edit your user name and password information in the Manage Passwords section of the Windows Control Panel. • Basic • Digest • NTLM • Anonymous |
ServiceType | ST | Select the service type for connecting Microsoft Dynamics CRM options: • On-premise (default) • Online |
Update Nulls | T | Select this option to allow null values from source rows to overwrite existing data in the target rows during an update operation. The available options are Always (the default), Null only, and Never nulls. |
Batch Response | T | In delete operations, one batch response entry exists for every delete action. In insert and update operations, one batch response entry exists for every insert or update action. The return code for each operation shows the result of its execution: a return code of 0 indicates success. For failed operations, the error Description tag contains the detailed error message, including any messages returned from Microsoft Dynamics CRM 4.0 as well as errors generated by DataConnect. When the return code is 0 (success), the tag returns the unique ID of the entity for which the operation succeeded. |
MaxBatch | S | Controls how many records the server is asked to send at one time (read operations only). If you use a large number in this setting, you may see faster transformation run times. However, larger numbers may increase the need for memory. If you experience memory problems, lower the number. The default is 500. To change this property, overtype the default with a new value. The maximum setting is 2000. The recommendation for inserts is 500 maximum and for queries, the maximum can be 2000. |
WhereStmt | S | Provides a pass-through mechanism where advanced users can construct the Where clause of the query themselves. It can be used as an alternative to writing a lengthy query statement in the Query Statement text box. This might be used to get the CRM server to filter the data based on a condition before it is sent to Map Designer. Omit WHERE when you enter the clause. There is no default for this setting. Do not apply the WhereStmt when the source connection is a Select statement. If you are using a Select statement, you should include the Where clause in that statement. The intent of this property is to enable data filtering when selecting a table. |
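The MaxBatch property caps how many records the server returns per round trip, so reading a large result set becomes a paging loop. A sketch against an in-memory stand-in for the server (the `read_all` helper is hypothetical):

```python
def read_all(server_records, max_batch=500):
    """Read all records in pages of at most max_batch, counting round
    trips. Mirrors the documented ceiling of 2000 and default of 500."""
    max_batch = min(max_batch, 2000)  # documented maximum setting
    out, offset, pages = [], 0, 0
    while offset < len(server_records):
        page = server_records[offset:offset + max_batch]  # one round trip
        out.extend(page)
        offset += len(page)
        pages += 1
    return out, pages
```

A larger MaxBatch means fewer round trips (faster runs) but more memory per page, which matches the tuning advice above.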
Property | S/T | Version | Description |
BatchResponse | T | 9, 10 | Optional. Creates a BatchResponse file, which serves as a reporting mechanism to the connector. The file provides detailed results for each object in a batch where the batch size is greater than 1. Obtaining detailed results is useful in the following cases: • You need to capture system-generated object IDs for use in future updates. • You need to correlate an error with its object and have enough information about the error for exception handling and error diagnosis. For more details, see Batch Response File. |
BatchSize | S | 9, 10 | Optional. Number of source records cached before processing them. Default is zero. |
CultureName | ST | 9, 10 | Optional. Specifies the culture (locale) of the user making the web service call. Enter a culture name. |
CurrencyType | ST | 9, 10 | Optional. Specifies how currency information is used for the web service call. This property is used only for documents that support multicurrency. If multicurrency is not supported, Local is used. • Transactional (default) – The originating currency amount is used. • Local – The local currency amount is used. |
Domain | ST | 9, 10 | Required. Enter the domain name to use for NTLM authentication. |
FlushFrequency | T | 9, 10 | Optional. Number of operations the connector caches before writing them to the target. Default is zero. |
IgnoreRecordErrors | S | 10 | Returns an error when retrieving any record if set to false (default). If true, records that cause errors are ignored. |
OrganizationKey | ST | 9, 10 | Required. Specifies the unique identifier of the company where the operation should be executed. The web service determines the appropriate company database to use. |
PreValidate | T | 9, 10 | Optional. Analyzes the data before sending it to the server, which can be time-consuming because this validation is not optimized. Default is false. |
RoleKey | ST | 9, 10 | Optional. Permissions that allow access to specific areas of Microsoft Dynamics GP. If this property is supplied, the role is used to choose the policy instance and appropriate behavior options for the web service call. If you do not supply a role key, the web service attempts to find a role for the user and company specified in the context object. If only one role is found, that role is used. If more than one role is found, or no roles are found, the default role is used. For more information, see the Microsoft Dynamics GP Web Services Reference. You can download this document from the Learn tab of the Microsoft Dynamics GP Developer Center home page. |
ShowChildren | ST | 9, 10 | Optional. Determines whether to add the child record types to the list of available record types. • False (default in target) – List only the parent-level record types. • True (default in source) – Include child record types, allowing you to write data to them. Note: On the target, if you want to manipulate any child record types, set this option to true. |
Property | S/T | Description |
AlternateFieldSeparator | S | Most data files have only one field separator between all the fields; however, it is possible to have more than one field separator. If your source file has one field separator between some of the fields and a different separator between other fields, you can specify the second field separator here. Otherwise, you should leave the setting at None (the default). The alternate field separators available from the list are none (default), comma, tab, space, carriage return-line feed, line feed, carriage return, line feed-carriage return, ctrl-R, and pipe (|). To select a separator from the list, click AlternateFieldSeparator. If you have an alternate field separator other than one from the list, you can type it here. If the alternate field separator is not one of the choices in the list and is a printable character, highlight the CR-LF and then type the correct character. For example, if the separator is an asterisk, type an asterisk from the keyboard. If the field separator is not a printable character, replace CR-LF with a backslash, an X, and the hexadecimal value for the separator. For example, if the separator is a check mark, then enter \XFB. For a list of the 256 standard and extended ASCII characters, search for "hex values" in the documentation. |
AutomaticStyling | S | Automatic styling changes the way Microsoft IIS Extended Logfile data is read or written. By default, AutomaticStyling is set to false, causing all data to be read or written as Text. When set to true, the integration platform automatically detects and formats particular data types, such as numeric and date fields. During the transformation process, autostyling ensures that a date field in the source file is formatted as a date field in the target file, as opposed to character or text data. If your source file contains zip code data, you may want to leave AutomaticStyling as false, so leading zeros in some zip codes in the eastern United States are not deleted. |
FieldEndDelimiter | S | The default delimiter is None (to read from or write to a file with no delimiters). If your source file contains separators or delimiters, you must specify them by clicking the Source Properties icon in the toolbar and choosing the correct separators and delimiters for your source data file. If you do not know what the separators or delimiters are in your source file, search for "hex browser" in the documentation. |
FieldSeparator | S | The integration platform assumes that Microsoft IIS Extended Logfile Format files use a space between each field. To specify some other field separator, click FieldSeparator to display the list of options. The options are comma (default), tab, space, carriage return-line feed, line feed, carriage return, line feed-carriage return, Ctrl+R, pipe (|), and no field separator. If you need a field separator other than one from the list, you can type it here. If the field separator is not one of the choices in the list and is a printable character, highlight the CR-LF and then type the correct character. For example, if the separator is an asterisk (*), type an asterisk from the keyboard. If the field separator is not a printable character, replace CR-LF with a backslash, an X, and the hexadecimal value for the separator. For example, if the separator is a check mark, enter \XFB. For a list of the 256 standard and extended ASCII characters, search for "hex values" in the documentation. |
FieldStartDelimiter | S | The default delimiter is None (to read from or write to a file with no delimiters). If your source file contains separators or delimiters, you must specify them by clicking the Source Properties icon in the toolbar and choosing the correct separators and delimiters for your source data file. If you do not know what the separators or delimiters are in your source file, search for "hex browser" in the documentation. |
Header | S | In some files, the first record is a header record. For source data, you can remove it from the input data and cause the header titles to be used automatically as field names. For target data, you can cause the field names in your source data to automatically create a header record in your target file. To identify a header record, set Header to true. The default is false. |
RecordFieldCount | S | If your Microsoft IIS Extended Logfile Format data file has field separators, but no record separator, or if it has the same separator for both the fields and the records, you should specify the RecordSeparator (most likely a blank line), leave the AlternateFieldSeparator option blank and enter the exact number of fields per record in this box. The default value is zero. |
RecordSeparator | S | A Microsoft IIS Logfile Extended Format file is presumed to have a carriage return-line feed (CR-LF) between records. To use a different character, select from the choices in the list. To use a separator other than one in the list, enter it here. If the record separator is not one of the choices from the list and is a printable character, highlight the CR-LF and then type the correct character. For example, if the separator is a pipe ( | ), type a pipe from the keyboard. If the record separator is not a printable character, replace CR-LF with a backslash, an X, and the hexadecimal value for the separator. For example, if the separator is a check mark, then enter \XFB. For a list of the 256 standard and extended ASCII characters, search for "hex values" in the documentation. |
StartOffset | S | If your source data file starts with characters that need to be excluded from the transformation, set the StartOffset option to specify at which byte of the file to begin. The default value is 106. The correct value may be determined by using the Hex Browser. For a list of the 256 standard and extended ASCII characters, search for "hex values" in the documentation. Note: This property is set in number of bytes, not characters. |
StripLeadingBlanks | S | Allows you to determine if leading blanks are stripped from all data fields. The default is false; leading blanks are not stripped from Microsoft IIS Extended Logfile Format data. To remove them, set StripLeadingBlanks to true. |
StripTrailingBlanks | S | Determines if trailing blanks are stripped from the data fields. By default, the integration platform does not strip trailing blanks in Microsoft IIS Extended Logfile Format data. To remove them, set StripTrailingBlanks to true. |
StyleSampleSize | S | Set the number of records (starting with record 1) that are analyzed to set a default width for each source field. The default is 5000. You can change the value to any number between 1 and the total number of records in your source file. As the number gets larger, more time is required to analyze the file, and it may be necessary to analyze every record to ensure no data is truncated. To change the value, highlight the default value, and enter a new one. |
CodePage | S | This translation table determines which encoding to use for reading and writing data. The default is ANSI, the standard in the US. |
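The StartOffset property above counts bytes, not characters, which matters for multi-byte encodings. As an illustration only (this is not the connector's implementation, just a sketch of the same idea), skipping a byte offset before reading looks like this in Python:

```python
def read_from_offset(path, start_offset):
    """Return file contents starting at a byte offset.

    The offset is counted in bytes, matching the StartOffset property,
    so open the file in binary mode and seek before reading.
    """
    with open(path, "rb") as f:
        f.seek(start_offset)  # seek() on a binary file moves by bytes
        return f.read()
```

With encodings such as UTF-8, a byte offset can land in the middle of a character, which is why determining the correct value with the Hex Browser is safer than counting characters.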
Property | S/T | Description |
Autocommit | T | Allows automatic commitment of changes as they are made by each SQL statement, instead of waiting until the end of the transaction. If set to true, you cannot roll back changes. Default is false. |
BulkOperations | T | Determines if an insert statement is executed for each record or a bulk add is executed for each record. Default is false, the slower setting. To maximize speed and instruct the integration platform to use a bulk add, set to true. |
CommitFrequency | T | Controls how often data is committed to the database. Default is zero, meaning that data is committed at the end of the transformation, allowing rollback on error. This is the slowest setting. When doing large transformations, this is not practical as it may produce too many transaction log entries. Setting the CommitFrequency to some nonzero value tells the connector to do a database commit after inserting or updating the specified number of records. This keeps the transaction log from getting too large but limits the restartability of the transformation. |
ConstraintDDL | T | Specifies additional SQL data definition language statements to be executed after the target table is created. This is similar to the support provided for SQL passthrough in the SQL import connectors. Each line must be a valid SQL DDL statement. No default exists for this property. This option works only in Replace mode. |
CursorType | S | Type of cursor to use for retrieving records from source table. Choices are Forward Only, Static, and Dynamic. Default is Forward Only. For more information, see Cursors. |
Encoding | ST | Character encoding used with XML source and target files. Note: Shift-JIS encoding is used only in Japanese operating systems. UCS-2 is no longer a valid encoding name, but you may use UCS2 by changing the string value in the XML file. Default is OEM. |
IdentifierQuotes | ST | Quoted identifiers are used to make the SQL statement parsable and to distinguish between columns and character data in SQL statements. For example, Oracle uses double quotes for column and table names in SQL statements and uses single quotes for character data. In a SQL statement, enclose identifiers that contain special characters or that match keywords in identifier quote characters (such identifiers are known as delimited identifiers in SQL-92). For example, the Accounts Receivable identifier is quoted in the following Select statement: SELECT * FROM "Accounts Receivable" If you do not use identifier quotes, the parser assumes two tables, Accounts and Receivable, and returns a syntax error that they are not separated by a comma. The integration platform provides four options for IdentifierQuotes: Default, None, " and '. Default is the default. |
MaxDataLength | ST | Maximum data length for long data types. Default is 1 MB. You can reset this number based on available memory and target requirements. Some ODBC drivers have limitations concerning the maximum data length they can handle. If you choose an application (ODBC) source or target connector and the default setting is not 1 MB, the integration platform presets the default in respect for the capabilities of that particular ODBC driver, and it is not recommended to set MaxDataLength any higher. |
PrimaryKey | T | Sets a list of field names used to make the primary key. Field names are delimited by commas. If the PrimaryKey property contains one or more field names, these names are included in the SQL Create statement when the connector is in replace mode. No default exists for this property. This property has one additional requirement for use. The driver must support integrity enhancement facility (IEF). |
SystemTables | ST | If set to true, this property allows you to see all tables created by the DBA in the database. The system table names appear in the table list. Default is false. Note: This property is applicable only if the user is logged onto the database as the database administrator. Only the DBA has access to system tables. |
TransactionIsolation | ST | Allows setting of the transaction isolation level for reading or writing tables. The ANSI SQL 2 standard defines three specific ways in which serializability of a transaction may be violated: P1 (Dirty Read), P2 (Nonrepeatable Read), and P3 (Phantoms). The following isolation levels are supported. Default is Serializable. • Read_Uncommitted – Permits P1, P2, and P3. • Read_Committed – Permits P2 and P3. Does not permit P1. • Repeatable_Read – Permits P3. Does not permit P1 and P2. • Serializable (default) – Does not permit any of P1, P2, and P3. For more information, see the Microsoft ODBC SDK documentation. |
UseCursors | T | Turns cursor support on and off. Default is false. If set to true and the specified driver does not support cursor inserts, the integration platform uses the SQL INSERT mode of adding records. For exports, cursor support is supposed to enhance the performance of inserting records. This appears to be the case for desktop databases. For database servers, insert speed shows no noticeable change. These servers execute prepared queries as quickly as they handle cursor inserts. Some drivers require that the target table be indexed, and if not, then positioned updates (cursors) are not allowed. Two additional properties in the ODBC export connector address this issue: PrimaryKey and ConstraintDDL. |
Views | ST | If set to true (default), allows you to see views. View names appear in the table list along with table names. |
WhereStmt | S | Provides a passthrough mechanism for SQL connectors where advanced users can construct the Where clause of the SQL query. Omit WHERE when you enter the clause. No default exists for this property. Note: When the source connection is a Select statement, do not apply the WhereStmt. Instead, include the Where clause in your Select statements. This property enables data filtering when you select a table. |
ArraySize | T | Determines the number of rows to be sent to the server at one time. The default value is 1, meaning each row is individually sent to the server. Larger values will buffer multiple rows and send them all at once. While this improves the speed, it affects error reporting (a server error won't be detected/reported until the next batch of records is sent to the server). The maximum value allowed for this property is 100000. While the connector allows the value to be set that high, many drivers have lower limits. The connector will log a message indicating if the driver is forcing a lower value for the array size. In addition, the connector does not support arrays when there is a LOB-type field in the table, or when the (maximum) length of a character-type field is longer than 32767 characters. In these cases, a message will be logged indicating the array size has been reduced to 1. Due to the way the connector attempts to support older drivers, the array support requires either BulkOperations be true (which means UseCursors must also be true), or both must be false. The condition where BulkOperations is false and UseCursors is true will cause the array size to be reset to 1, with a message logged indicating this condition occurred. |
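The ArraySize behavior described above (buffered rows, delayed error reporting, and the BulkOperations/UseCursors constraint) can be sketched in plain Python. Both helpers below are hypothetical illustrations of the documented rules, not the connector's actual code:

```python
import math


def round_trips(total_rows, array_size):
    """Number of server round trips when rows are buffered in batches.

    With array_size == 1 (the default) every row is sent individually;
    larger values send whole batches, which is faster but means a server
    error is not reported until the next batch is sent.
    """
    return math.ceil(total_rows / max(array_size, 1))


def effective_array_size(requested, bulk_operations, use_cursors):
    """Apply the documented constraint on array support.

    Array support requires BulkOperations true (which implies UseCursors
    true) or both false; BulkOperations false with UseCursors true causes
    the connector to reset the array size to 1 and log a message.
    """
    if not bulk_operations and use_cursors:
        return 1
    return requested
```

For example, loading 1000 rows with an array size of 250 takes 4 round trips instead of 1000, but a bad row in the first batch may not surface until the second batch is sent.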
Property | S/T | Description |
Autocommit | T | Allows automatic commitment of changes as they are made by each SQL statement, instead of waiting until the end of the transaction. If set to true, you cannot roll back changes. Default is false. |
Encoding | T | Character encoding used with XML source and target files. Note: Shift-JIS encoding is used only in Japanese operating systems. UCS-2 is no longer a valid encoding name, but you may use UCS2 by changing the string value in the XML file. Default is OEM. |
MaxDataLength | T | Maximum data length for long data types. Default is 1 MB (1048576 bytes). You can reset this number based on available memory and target requirements. Some ODBC drivers have limitations concerning the maximum data length they can handle. If you choose an application (ODBC) source or target connector and the default setting is not 1 MB, the integration platform presets the default in respect for the capabilities of that particular ODBC driver, and it is not recommended to set MaxDataLength any higher. |
SQL Log | T | The default is sql.log in the default installation directory. To use a different log, browse to the file or enter the path and file name. |
SQL Output | T | Allows you to select bound or unbound mode and whether you want to write SQL statements to a SQL log or not. Keep in mind that bound mode is faster, since bind variables are used. Target Only is the default output: • Target Only - Uses bound mode, which uses bind variables. SQL statements are sent to the Target and not to the SQL log specified in the SQL Log property. • Target Only (Unbound mode) - Uses unbound mode, which does not use bind variables and sends the literal SQL statement to the database engine. SQL statements are sent to the Target and not to the SQL log specified in the SQL Log property. • Target and SQL Log - Sends SQL statements to the Target and to the SQL log specified in the SQL Log property. • SQL Log Only - Sends SQL statements only to the SQL log file specified in the SQL Log property. |
SystemTables | T | If set to true, this property allows you to see all tables created by the DBA in the database. The system table names appear in the table list. Default is false. Note: This property is applicable only if the user is logged onto the database as the database administrator. Only the DBA has access to system tables. |
TransactionIsolation | T | Allows setting of the transaction isolation level for reading or writing tables. The ANSI SQL 2 standard defines three specific ways in which serializability of a transaction may be violated: P1 (Dirty Read), P2 (Nonrepeatable Read), and P3 (Phantoms). The following isolation levels are supported. Default is Serializable. • Read_Uncommitted – Permits P1, P2, and P3. • Read_Committed – Permits P2 and P3. Does not permit P1. • Repeatable_Read – Permits P3. Does not permit P1 and P2. • Serializable (default) – Does not permit any of P1, P2, and P3. For more information, see the Microsoft ODBC SDK documentation. |
UpdateNullFields | T | Null values are sent to the database when inserting or updating records. The default is true. If you select False, null values are not sent to the database when you insert or update records. When set to false, this property forces the connector to operate in unbound mode, which may cause slower performance. |
Views | T | If set to true (default), allows you to see views. View names appear in the table list along with table names. |
IdentifierQuotes | T | All databases have what are called quoted identifiers. You use quoted identifiers to make the SQL statement parsable and to distinguish between columns and character data in SQL statements. For example, Oracle uses double quotes for column and table names in SQL statements and uses single quotes for character data. In a SQL statement, enclose identifiers that contain special characters or that match keywords in identifier quote characters (also known as delimited identifiers in SQL-92). For example, the Accounts Receivable identifier is quoted in the following SELECT statement: SELECT * FROM "Accounts Receivable" If you do not use identifier quotes, the parser assumes there are two tables, Accounts and Receivable, and returns a syntax error that they are not separated by a comma. Options for IdentifierQuotes are Default, None, " and '. |
Property | S/T | Description |
AutoCommit | T | Allows automatic commitment of changes as they are made by each SQL statement, instead of waiting until the end of the transaction. If set to true, you cannot roll back changes. Default is false. |
BulkOperations | T | Determines if an insert statement is executed for each record or a bulk add is executed for each record. Default is false, the slower setting. If you want to maximize speed and instruct the integration platform to use a bulk add, set to true. |
CommitFrequency | T | Controls how often data is committed to the database. Default is zero, meaning that data is committed at the end of the transformation, allowing rollback on error. This is the slowest setting. When doing large transformations, this is not practical as it may produce too many transaction log entries. Setting the CommitFrequency to some nonzero value tells the connector to do a database commit after inserting or updating the specified number of records. This keeps the transaction log from getting too large but limits the restartability of the transformation. |
ConstraintDDL | T | Specifies additional SQL data definition language statements to be executed after the target table is created. This is similar to the support provided for SQL passthrough in the SQL import connectors. Each line must be a valid SQL DDL statement. No default exists for this property. This option works only in Replace mode. |
CursorType | S | Type of cursor to use for retrieving records from source table. Choices are Forward Only, Static, and Dynamic. Default is Forward Only. For more information, see Cursors. |
DriverCompletion | ST | Controls whether the driver prompts the user for information. • Prompt – Prompts for every individual bit of information. • Complete (default) – Prompts only for missing information. • Complete Required – Prompts only for information needed to complete a connection. • No Prompt – Does not prompt for any information. |
DriverOptions | ST | Enter any valid ODBC connect string options. No default. |
DSNType | ST | The integration platform allows you to specify data sources or drivers to connect to in the Data Source Name list. DSNs are listed in the ODBC Data Source Administrator of Windows Administrative Tools in the Control Panel. The options are User & System, System, User, Driver, and File. Default is User & System. |
Encoding | ST | Character encoding used with XML source and target files. Note: Shift-JIS encoding is used only in Japanese operating systems. UCS-2 is no longer a valid encoding name, but you may use UCS2 by changing the string value in the XML file. |
IdentifierQuotes | ST | Quoted identifiers are used to make the SQL statement parsable and to distinguish between columns and character data in SQL statements. For example, Oracle uses double quotes for column and table names in SQL statements and uses single quotes for character data. In a SQL statement, enclose identifiers that contain special characters or that match keywords in identifier quote characters (such identifiers are known as delimited identifiers in SQL-92). For example, the Accounts Receivable identifier is quoted in the following Select statement: SELECT * FROM "Accounts Receivable" If you do not use identifier quotes, the parser assumes two tables, Accounts and Receivable, and returns a syntax error that they are not separated by a comma. The integration platform provides four options for IdentifierQuotes: Default, None, " and '. |
MaxDataLength | ST | Maximum data length for long data types. Default is 1 MB. You can reset this number based on available memory and target requirements. Some ODBC drivers have limitations concerning the maximum data length they can handle. If you choose an application (ODBC) source or target connector and the default setting is not 1 MB, the integration platform presets the default in respect for the capabilities of that particular ODBC driver, and it is not recommended to set MaxDataLength any higher. |
ModifyDriverOptions | ST | Allows storing of the ODBC connection. Default is true. If set to false, prompts for connection information each time the transformation runs. |
PrimaryKey | ST | Sets a list of field names used to make the primary key. Field names are delimited by commas. If the PrimaryKey property contains one or more field names, these names are included in the SQL Create statement when the connector is in replace mode. No default exists for this property. This property appears only in the target GUI, not in the source GUI. This property has one additional requirement for use. The driver must support integrity enhancement facility (IEF). |
Synonyms | ST | If set to true, allows you to see synonyms. The alias names appear in the table list along with the tables. Default is false. |
SystemTables | ST | If set to true, this property allows you to see all tables created by the DBA in the database. The system table names appear in the table list. Default is false. Note: This property is applicable only if the user is logged onto the database as the database administrator. Only the DBA has access to system tables. |
TransactionIsolation | ST | Allows setting of the transaction isolation level for reading or writing tables. The ANSI SQL 2 standard defines three specific ways in which serializability of a transaction may be violated: P1 (Dirty Read), P2 (Nonrepeatable Read), and P3 (Phantoms). The following isolation levels are supported. Default is Serializable. • Read_Uncommitted – Permits P1, P2, and P3. • Read_Committed – Permits P2 and P3. Does not permit P1. • Repeatable_Read – Permits P3. Does not permit P1 and P2. • Serializable – Does not permit any of P1, P2, and P3. • Versioning – Provides Serializable transactions but does so without significant impact on concurrency. For more information, see the Microsoft ODBC SDK documentation. |
UseCursors | T | Turns cursor support on and off. Default is false. If set to true and the specified driver does not support cursor inserts, the integration platform uses the SQL INSERT mode of adding records. For exports, cursor support is supposed to enhance the performance of inserting records. This appears to be the case for desktop databases. For database servers, insert speed shows no noticeable change. These servers execute prepared queries as quickly as they handle cursor inserts. Some drivers require that the target table be indexed, and if not, then positioned updates (cursors) are not allowed. Two additional properties in the ODBC export connector address this issue: PrimaryKey and ConstraintDDL. |
Views | ST | If set to true (default), allows you to see views. View names appear in the table list along with table names. |
WhereStmt | S | Provides a passthrough mechanism for SQL connectors where advanced users can construct the Where clause of the SQL query. Omit WHERE when you enter the clause. No default exists for this property. Note: When the source connection is a Select statement, do not apply the WhereStmt. Instead, include the Where clause in your Select statements. This property enables data filtering when you select a table. |
ArraySize | T | Determines the number of rows to be sent to the server at one time. The default value is 1, meaning each row is individually sent to the server. Larger values will buffer multiple rows and send them all at once. While this improves the speed, it affects error reporting (a server error won't be detected/reported until the next batch of records is sent to the server). The maximum value allowed for this property is 100000. While the connector allows the value to be set that high, many drivers have lower limits. The connector will log a message indicating if the driver is forcing a lower value for the array size. In addition, the connector does not support arrays when there is a LOB-type field in the table, or when the (maximum) length of a character-type field is longer than 32767 characters. In these cases, a message will be logged indicating the array size has been reduced to 1. Due to the way the connector attempts to support older drivers, the array support requires either BulkOperations be true (which means UseCursors must also be true), or both must be false. The condition where BulkOperations is false and UseCursors is true will cause the array size to be reset to 1, with a message logged indicating this condition occurred. |
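The CommitFrequency trade-off described above (rollback safety versus transaction-log growth) can be sketched in Python. The `commit_points` helper below is a hypothetical illustration of the documented behavior, not part of the integration platform:

```python
def commit_points(total_records, commit_frequency):
    """Yield the record counts at which a database commit occurs.

    commit_frequency == 0 mimics the default: a single commit at the
    end of the transformation, so the whole run can be rolled back on
    error, at the cost of a potentially large transaction log.
    """
    if commit_frequency <= 0:
        yield total_records  # one commit at the very end
        return
    done = 0
    while done < total_records:
        done = min(done + commit_frequency, total_records)
        yield done  # commit after each batch of commit_frequency records
```

A nonzero setting keeps the transaction log small but, as the table notes, limits restartability: records committed before a failure cannot be rolled back.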
Property | S/T | Description |
BatchResponse | T | Path of the batch response file, which provides detailed results for each object in a batch. These results are useful for the following: • Capturing system-generated object IDs for use in future updates. • Associating errors with objects for exception handling and error diagnosis. A batch response entry is generated for each top-level record. For example, when a Customer record is inserted with an Addressbooklist, only one entry appears in the batch response. For more information, see Batch Response File. |
CustomizationLevel | ST | Specify the allowed level of NetSuite customizations: • Full (default) - Use this option if using custom record types. Also, provides access to custom fields defined for the standard record types. • CustomFieldsOnly - Use this option if using custom fields defined for the standard record types. Custom fields are requested based on your requirements. For example, the connector does not retrieve custom fields for sales orders if retrieving only customer records. • None - Use this option when custom record types and custom fields are not defined for the standard record types. It improves performance since this option does not require generating API calls at startup. Note: This property is applicable only when the DataType property is set to Record. |
DataType | S | Specify how the connector obtains allowable values for the Table list: • Record (default) – Queries the normal record types such as Customer, Contact, SalesOrder, and your CustomRecordType. • Dynamic – Queries using NetSuite saved searches. • CustomList – Queries the NetSuite custom, user-defined lists of ID and value pairs. |
DisableMandatoryCustomFieldValidation | T | Enable or disable validation for the required custom fields: • Default (default) – Connector uses default settings in NetSuite. • True – Data for a custom field (required for NetSuite UI submissions) is not required when submitting a web services request. • False - Connector returns an error when data for a required custom field is not provided. |
DisableSystemNotesCustomFields | T | Enable or disable system-generated notes when importing data from custom fields: • Default - Connector uses the default setting in NetSuite • True - Enable system-generated notes when importing data from custom fields • False - Disable system-generated notes when importing data from custom fields Note: When importing large records, you may consider disabling system notes to improve performance. For more information, see the disableSystemNotesForCustomFields property in the NetSuite documentation. |
FlushFrequency | T | Number of operations the connector caches before writing to the target. Default is zero. |
IgnoreReadOnlyFields | T | Specify whether read-only fields are ignored: • Default - Connector uses the default setting in NetSuite • True - Connector ignores read-only fields. • False - Connector does not ignore read-only fields and enables error messages as required. |
NestedChildren | ST | Set the connector to retrieve records (parents) and their associated lists (children): • True – Connector retrieves records (parents) and their associated lists (children). • False (default) - Connector retrieves records (parents) only. This can significantly improve performance. Note: This property is applicable only when the DataType property is set to Record. |
PageSize | S | Indicates the number of records returned per page in each web service call. It is applicable only when DataType is set to Record. Ideally, the PageSize property would not be displayed when DataType is set to Dynamic or CustomList, but the connector currently displays it in those cases. The default value is 50. Note: This property is intended for advanced users who have to fine-tune performance. For more information, see the NetSuite documentation. |
Role | ST | Specify the NetSuite role ID. A role specifies the permissions required to access different parts of NetSuite. If role ID is not provided, the connector uses the user's default role. For more information, see the NetSuite documentation. |
SavedSearchID | ST | The ID of a NetSuite saved search. The SavedSearchId and the Table selection determine the selected saved search. |
TreatWarningAsError | T | Controls whether warning messages generated by NetSuite are treated the same as errors, which causes a request to be rejected. The options are: • Default - Connector uses the default setting in NetSuite. • True - Warning messages generated by NetSuite are treated the same as errors. • False - Warning messages generated by NetSuite are not treated the same as errors. |
UpdateNulls | T | The NetSuite target may contain empty or null values. The UpdateNulls property controls whether these values are sent to NetSuite: • True - Connector sends null values and empty strings to the NetSuite server. • False (the default) - Connector does not send null values and empty strings to the NetSuite server. Note: The setting for this property affects only parent-level record types, since child record types ignore null values and empty strings. |
UseUILabel | ST | When creating a custom field, you must specify a name (label). The system also automatically generates an ID. The UseUILabel option allows you to display either the label or the custom ID for a custom field. The available options are: • False (default) - Displays labels for custom fields. • True - Displays custom IDs for custom fields. |
Custom Field Type | NetSuite Display Name |
Boolean | Check box |
Date | Date, Time of Day |
Double | Decimal Number, Currency, Percent |
Long | Integer Number |
MultiSelect | Multiple Select |
Select | List/Record, Image |
String | Free-Form Text, Text Area, Phone Number, E-mail Address, HyperLink, Rich Text, Help, Long Text, Password |
Example SQL | SQL Statement |
Simple query with 'where' clause and field list | Select ExternalId, Title, Status, Message, CUSTEVENT4 from PhoneCall where |
Search With Numeric Filter | Select Probability, LeadSource_InternalId, Status, ProjectedTotal from Opportunity where ProjectedTotal notGreaterThanOrEqualTo 1396.70 |
Search with String Filter | Select Title, Status, Message from PhoneCall where Title doesNotStartWith 'TC24889' |
Search with Date Filter | Select Title, StartDate, EndDate, Status, NoEndDate from CalendarEvent where StartDate notOnOrBefore '2006-07-25' |
Simple Subquery | Select InternalId, (select (select Addr1 from 'Addressbook') from 'AddressbookList') from Customer |
Multiple 'and' Clauses | Select EntityId, FirstName, LastName, Email from Contact where EntityId contains 'John' and Email startsWith 'John' and LastName is 'Boy' |
Address-Contact Primary Joined Search | Select CompanyName, Email from Vendor where Email notEmpty '' and ContactPrimaryJoin(Title is 'Quality') |
Customer-SubCustomer Joined Search | Select EntityId, Email from Customer where Email notEmpty '' and SubCustomerJoin(EntityId contains 'subcust') |
Query Using a Single Quote in a Field Value | Select EntityId from Contact where Email is JohnO'Doe@company.com |
Query Using Double Quotes in a Field Value | Select EntityId from Contact where NickName is `John "the bomb"` |
Query Using a Backtick in a Field Value | Select EntityId from Contact where Height is "5`3" |
Operator Category | Operator |
Date-Only Operators | after, before, on, onOrAfter, onOrBefore, notAfter, notBefore, notOn, notOnOrAfter, notOnOrBefore, notWithin, within |
String-Only Operators | contains, doesNotContain, doesNotStartWith, is, isNot, startsWith |
Numeric-Only Operators | between, equalTo, greaterThan, greaterThanOrEqualTo, lessThan, lessThanOrEqualTo, notBetween, notEqualTo, notGreaterThan, notGreaterThanOrEqualTo, notLessThan, notLessThanOrEqualTo |
ID/Select Field-Only Operators | anyof, noneof |
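The operator categories above can be encoded as a small lookup table, for example to validate a filter operator against its field type before sending a query. This helper is purely illustrative and is not part of the connector:

```python
# Operator names exactly as listed in the category table above.
OPERATORS = {
    "date": {"after", "before", "on", "onOrAfter", "onOrBefore", "notAfter",
             "notBefore", "notOn", "notOnOrAfter", "notOnOrBefore",
             "notWithin", "within"},
    "string": {"contains", "doesNotContain", "doesNotStartWith", "is",
               "isNot", "startsWith"},
    "numeric": {"between", "equalTo", "greaterThan", "greaterThanOrEqualTo",
                "lessThan", "lessThanOrEqualTo", "notBetween", "notEqualTo",
                "notGreaterThan", "notGreaterThanOrEqualTo", "notLessThan",
                "notLessThanOrEqualTo"},
    "id_select": {"anyof", "noneof"},
}


def operator_categories(op):
    """Return the field-type categories that accept the given operator."""
    return [kind for kind, ops in OPERATORS.items() if op in ops]
```

For instance, `notGreaterThanOrEqualTo` (used in the numeric-filter example above) is valid only for numeric fields, while `anyof` applies only to ID/select fields.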
Grammar | Query Statements |
query | select field_list from table (where query_terms)? |
field_list | ('(' child_query ')' ',')* ('*' (',' '(' child_query ')')*) | (field (',' (field | '(' child_query ')'))*) |
child_query | select field_list from table |
Field | Label |
Table | label | string_literal | string_literal_2 | string_literal_3 |
query_terms | (simple_term | joined_search) (and (simple_term | joined_search))* |
simple_term | label (operation | label)? value (',' value)* |
operation | '=' | '>' | '<' | '<>' | '>=' | '<=' |
Value | string_literal | decimal | macrodef |
joined_search | label '(' simple_term (and simple_term)* ')' |
Label | character (character | digit | "_")* |
character | ['A' - 'Z'] | ['a' - 'z'] |
Digit | ['0' - '9'] |
Select | 'SELECT' | 'Select' | 'select' |
From | 'FROM' | 'From' | 'from' |
And | 'AND' | 'And' | 'and' |
where | 'WHERE' | 'Where' | 'where' |
string_literal | "'" ( (~["'","\\","\n","\r"]) | ("\\" ( ["n","t","b","r","f","'","\\","\""] | ["0"-"7"] ( ["0"-"7"] )? | ["0"-"3"] ["0"-"7"] ["0"-"7"] ) ) )* "'" |
string_literal_2 | "`" ( (~["`","\\","\n","\r"]) | ("\\" ( ["n","t","b","r","f","`","\\","\'","\""] | ["0"-"7"] ( ["0"-"7"] )? | ["0"-"3"] ["0"-"7"] ["0"-"7"] ) ) )* "`" |
string_literal_3 | "\"" ( (~["\"","\\","\n","\r"]) | ("\\" ( ["n","t","b","r","f","`","\\","\'","\""] | ["0"-"7"] ( ["0"-"7"] )? | ["0"-"3"] ["0"-"7"] ["0"-"7"] ) ) )* "\"" |
decimal | number ('.' number)? |
number | digit+ |
macrodef | '$(' ~[')']+ ')' |
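As a sketch of how the grammar above drives parsing, the following minimal Python parser handles the top-level `select field_list from table (where query_terms)?` production, accepting the three keyword casings the grammar lists. It is an illustration only, not the connector's parser: it does not handle nested child queries, joined searches, or commas inside string literals:

```python
import re

# Keyword alternatives as enumerated in the grammar rows above.
_SELECT = r"(?:SELECT|Select|select)"
_FROM = r"(?:FROM|From|from)"
_WHERE = r"(?:WHERE|Where|where)"

_QUERY_RE = re.compile(
    rf"\s*{_SELECT}\s+(?P<fields>.+?)\s+{_FROM}\s+(?P<table>\S+)"
    rf"(?:\s+{_WHERE}\s+(?P<terms>.+?))?\s*$"
)


def parse_query(text):
    """Split a query into (field_list, table, query_terms or None)."""
    m = _QUERY_RE.match(text)
    if m is None:
        raise ValueError("not a valid query: " + text)
    # Naive field split; a real parser must respect parentheses of a
    # child_query and commas inside string literals.
    fields = [f.strip() for f in m.group("fields").split(",")]
    return fields, m.group("table"), m.group("terms")
```

Running it against one of the example statements from the table above splits the query into its three grammar parts:

```python
fields, table, terms = parse_query(
    "Select Title, Status, Message from PhoneCall "
    "where Title doesNotStartWith 'TC24889'"
)
# fields -> ['Title', 'Status', 'Message'], table -> 'PhoneCall'
```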
Property | S/T | Description |
AutoCommit | T | Allows you to automatically commit changes as they are made by each SQL statement, instead of waiting until the end of the transaction. If AutoCommit is set to true, there is no way to rollback changes once they have been made. The default is false. |
BulkOperations | T | Use bulk operations for faster insert. This property determines if an insert statement is executed for each record or a bulk add is executed for each record. The default is false, the slower setting. To maximize speed and instruct the integration platform to use a bulk add, change this setting to true. |
CommitFrequency | T | Controls how often data is committed to the database. Default is zero, meaning that data is committed at the end of the transformation, allowing rollback on error. This is the slowest setting. When doing large transformations, this is not practical as it may produce too many transaction log entries. Setting the CommitFrequency to some nonzero value tells the connector to do a database commit after inserting or updating the specified number of records. This keeps the transaction log from getting too large but limits the restartability of the transformation. |
ConstraintDDL | T | This is a pass-through mechanism that allows you to specify any additional data definition language (DDL) statements that need to be executed when a new table is created. DDL is used to define keys, indexes, constraints and integrity rules for SQL databases. There is no default. |
DriverCompletion | T | Allows you to control whether or not the driver prompts you for information. The options are Prompt, Complete (default), Complete Required, and No Prompt. The Prompt option prompts the user for every individual bit of information. Complete prompts the user for any information they forgot to enter. The Complete Required option prompts the user only for information that is essential to complete the connection. The No Prompt option does not prompt the user for any information. |
DriverOptions | T | Enter any valid ODBC connect string options here. There is no default. |
DSNType | T | You can specify data sources or drivers to connect to in the Data Source Name list. DSNs are listed in the ODBC Data Source Administrator of Windows Administrative Tools in the Control Panel. The options are User & System, System, User, Driver, and File. Default is User & System. |
IdentifierQuotes | T | All databases have what are called quoted identifiers. You use quoted identifiers to make the SQL statement parsable and to distinguish between columns and character data in SQL statements. For example, Oracle uses double quotes for column and table names in SQL statements and uses single quotes for character data. In a SQL statement, enclose identifiers that contain special characters or that match keywords in identifier quote characters (also known as delimited identifiers in SQL-92). For example, the Accounts Receivable identifier is quoted in the following SELECT statement: SELECT * FROM "Accounts Receivable" If you do not use identifier quotes, the parser assumes there are two tables, Accounts and Receivable, and returns a syntax error that they are not separated by a comma. The integration platform provides four options for IdentifierQuotes: Default, None, " and '. |
MaxDataLength | T | This is the maximum data length for Long data types. The default is 1 MB. You can reset this number as you choose based on your available memory capabilities and target requirements. Some ODBC drivers have limitations concerning the maximum data length they can handle. If you choose an Application (ODBC) Source or Target connector and the default setting is not 1 MB, the integration platform presets the default for the capabilities of that particular ODBC driver. It is not recommended that the MaxDataLength property be set any higher under those conditions. |
ModifyDriverOptions | T | Allows you to store the ODBC connection. The default is true. If you set this to false, you are prompted for your connection information each time you run the Transformation. |
PrimaryKey | T | The PrimaryKey property allows the user to specify a list of field names that are used to make the primary key. Field names are delimited by commas. If the PrimaryKey property contains one or more field names, these names are included in the SQL CREATE statement when the connector is in replace mode. No default exists for this property. There is one additional requirement for using the PrimaryKey property. The ODBC driver must support integrity enhancement facility (IEF). Only the more advanced ODBC drivers support this. |
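As a sketch of how a comma-delimited PrimaryKey value could be folded into the CREATE statement generated in replace mode — the builder function and field list below are hypothetical, and sqlite3 stands in for the ODBC target:

```python
import sqlite3

def build_create(table, fields, primary_key=""):
    # Fold the comma-delimited PrimaryKey property into the CREATE statement.
    cols = ", ".join(f"{name} {sqltype}" for name, sqltype in fields)
    if primary_key:
        cols += f", PRIMARY KEY ({primary_key})"
    return f"CREATE TABLE {table} ({cols})"

ddl = build_create("mytable", [("Field1", "INTEGER"), ("Field2", "TEXT")],
                   primary_key="Field1, Field2")

conn = sqlite3.connect(":memory:")
conn.execute(ddl)
conn.execute("INSERT INTO mytable VALUES (1, 'a')")
try:
    conn.execute("INSERT INTO mytable VALUES (1, 'a')")  # duplicate key
    dup_allowed = True
except sqlite3.IntegrityError:
    dup_allowed = False
```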
Synonyms | T | If set to true, this property allows you to see synonyms. The alias names appear in the table drop-down list along with the tables. Default is false. |
SystemTables | T | The SystemTables property is only applicable if the user is logged onto the database as the database administrator. Only the DBA has access to SystemTables. If set to true, this property allows you to see all the tables created by the DBA. The system table names appear in the table drop down list. The default is false. |
TransactionIsolation | T | The TransactionIsolation option allows you to specify any one of five isolation levels when reading from or writing to a database table with ODBC. The default for version 2.x is Versioning; otherwise, the default is Serializable. The ANSI SQL 2 standard defines three specific ways in which serializability of a transaction may be violated: P1 (Dirty Read), P2 (Nonrepeatable Read) and P3 (Phantoms). The following lists the five supported isolation levels. • READ_UNCOMMITTED - Permits P1, P2, and P3. • READ_COMMITTED - Permits P2 and P3. Does not permit P1. • REPEATABLE_READ - Permits P3. Does not permit P1 and P2. • SERIALIZABLE - Does not permit any of P1, P2, and P3. • VERSIONING - Provides SERIALIZABLE transactions, but does so without a significant impact on concurrency. For further details about TransactionIsolation levels, see the Microsoft ODBC SDK documentation. |
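The permitted-phenomena rules in the list above can be restated as a small lookup table, which makes the relationships between the levels easy to check:

```python
# P1 = dirty read, P2 = nonrepeatable read, P3 = phantom.
PERMITS = {
    "READ_UNCOMMITTED": {"P1", "P2", "P3"},
    "READ_COMMITTED":   {"P2", "P3"},
    "REPEATABLE_READ":  {"P3"},
    "SERIALIZABLE":     set(),
    "VERSIONING":       set(),  # serializable behavior with better concurrency
}

def allows(level: str, phenomenon: str) -> bool:
    """Return True if the isolation level permits the given phenomenon."""
    return phenomenon in PERMITS[level]
```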
UseCursors | T | The UseCursors property allows you to turn cursor support on and off. The default is false. If set to true and the specified ODBC driver does not support cursor inserts, the integration platform falls back on the SQL INSERT mode of adding records. For exports, cursor support is supposed to enhance the performance of inserting records. This appears to be the case for desktop databases. For database servers, there is no noticeable change in insert speed; they seem to execute prepared queries about as quickly as they handle cursor inserts. Another complication of cursor inserts is that some drivers require that the target table be indexed; otherwise, positioned updates (cursors) are not allowed. Two additional properties in the ODBC export connector address this issue: PrimaryKey and ConstraintDDL. |
Views | T | If set to true, this property allows you to see views. The view names appear in the table drop-down list along with the tables. Default is true. |
Encoding | T | Sets the character encoding for NonStop SQL MX (ODBC) source and target files. Note: Shift-JIS encoding is used only in Japanese operating systems. |
ArraySize | T | Determines the number of rows to be sent to the server at one time. The default value is 1, meaning each row is individually sent to the server. Larger values buffer multiple rows and send them all at once. While this improves speed, it affects error reporting (a server error is not detected or reported until the next batch of records is sent to the server). The maximum value allowed for this property is 100000. While the connector allows the value to be set that high, many drivers have lower limits, and the connector logs a message if the driver forces a lower value for the array size. In addition, the connector does not support arrays when there is a LOB-type field in the table, or when the (maximum) length of a character-type field is longer than 32767 characters. In these cases, a message is logged indicating the array size has been reduced to 1. Due to the way the connector attempts to support older drivers, array support requires that either BulkOperations be true (which means UseCursors must also be true) or that both be false. If BulkOperations is false and UseCursors is true, the array size is reset to 1 and a message is logged indicating this condition occurred. |
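The row-buffering behavior that ArraySize controls can be sketched as follows — sqlite3 stands in for the ODBC driver, and executemany() plays the role of one batched round trip to the server:

```python
import sqlite3

ARRAY_SIZE = 3  # rows buffered per round trip; the property's default is 1

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (id INTEGER)")

rows = [(i,) for i in range(10)]
buffer, batches_sent = [], 0
for row in rows:
    buffer.append(row)
    if len(buffer) == ARRAY_SIZE:          # buffer full: send the whole batch
        conn.executemany("INSERT INTO t VALUES (?)", buffer)
        batches_sent += 1
        buffer.clear()
if buffer:                                 # flush the final partial batch
    conn.executemany("INSERT INTO t VALUES (?)", buffer)
    batches_sent += 1

count = conn.execute("SELECT COUNT(*) FROM t").fetchone()[0]
```

An error in any buffered row would surface only when its batch is sent, which is the error-reporting trade-off the description mentions.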
Property | S/T | Description |
Record Count | S | The number of records supplied or accepted. Default is 100. |
Property | S/T | Description |
Batch Response | T | Creates a batch response file, which serves as a reporting mechanism to the connector. The file provides detailed results for each object in a batch where the batch size is greater than one. This is useful in the following cases: • You need to capture system-generated object IDs for use in future updates • You need to correlate an error with its object and have enough information about the error for exception handling and error diagnosis For more information, see Batch Response File. |
Batch Size | S | The maximum number of source records to retrieve from the connector at one time. The default is zero, which means to read all. |
FlushFrequency | T | Number of operations the connector caches before writing them to the target. Default is zero. |
HTTP Method | T | HTTP Method to be executed. Methods supported: • POST - Create the entity. Default. • PUT - Replace the entity with the one specified. • MERGE - Update the entity fields that are specified (ignores null fields). • DELETE - Delete the entity. See the HTTP 1.1 specification for additional details. |
Truststore File | ST | The full path to the trust store certificate from the default keystore location. |
Truststore Password | ST | The password for the specified trust store. |
Property | S/T | Description |
CursorType | S | The type of cursor to use for retrieving records from the source table. The choices available are Forward Only, Static and Dynamic. The default setting is Forward Only. For more information, see Cursors. |
WhereStmt | S | Provides a pass-through mechanism for SQL connectors where advanced users can construct the Where clause of the SQL query themselves. It can be used as an alternative to writing a lengthy query statement. You might use this to instruct the SQL database server to filter the data based on a particular condition before sending it to the integration platform. Omit Where when you enter the clause. No default exists for this property. Note: When the source connection is a Select statement, do not apply the WhereStmt. Instead, include the Where clause in your Select statements. This property enables data filtering when you select a table. |
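A minimal sketch of the pass-through: the value of WhereStmt (without the Where keyword) is appended to the generated query, so the database filters rows before they reach the integration platform. sqlite3 stands in for the SQL server here:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER, amount REAL)")
conn.executemany("INSERT INTO orders VALUES (?, ?)",
                 [(1, 50.0), (2, 500.0), (3, 75.0)])

where_stmt = "amount > 100"   # note: the WHERE keyword itself is omitted
query = f"SELECT id FROM orders WHERE {where_stmt}"
filtered = [r[0] for r in conn.execute(query)]
```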
AutoCommit | T | Allows you to automatically commit changes as they are made by each SQL statement, instead of waiting until the end of the transaction. If AutoCommit is set to true, there is no way to roll back changes once they have been made. The default is false. |
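The difference is easy to see with sqlite3 standing in for the ODBC connection — with autocommit off, changes wait in an open transaction that can still be rolled back, while an autocommit connection makes every change permanent immediately:

```python
import sqlite3

# AutoCommit=false: changes wait in a transaction and can be rolled back.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (id INTEGER)")
conn.commit()
conn.execute("INSERT INTO t VALUES (1)")
conn.rollback()                 # nothing was committed, so this undoes the insert
after_rollback = conn.execute("SELECT COUNT(*) FROM t").fetchone()[0]

# AutoCommit=true: each statement commits immediately; rollback has nothing to undo.
auto = sqlite3.connect(":memory:", isolation_level=None)
auto.execute("CREATE TABLE t (id INTEGER)")
auto.execute("INSERT INTO t VALUES (1)")
auto.rollback()
after_autocommit = auto.execute("SELECT COUNT(*) FROM t").fetchone()[0]
```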
BulkOperations | T | Use bulk operations for faster insert. This property determines if an insert statement is executed for each record or a bulk add is executed for each record. The default is false, the slower setting. To maximize speed and instruct the integration platform to use a bulk add, change this setting to true. |
CommitFrequency | T | Controls how often data is committed to the database. Default is zero, meaning that data is committed at the end of the transformation, allowing rollback on error. This is the slowest setting. When doing large transformations, this is not practical as it may produce too many transaction log entries. Setting the CommitFrequency to some nonzero value tells the connector to do a database commit after inserting or updating the specified number of records. This keeps the transaction log from getting too large but limits the restartability of the transformation. |
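The commit-every-N behavior can be sketched like this — sqlite3 stands in for the target database, and the counter logic is an illustration, not the connector's actual implementation:

```python
import sqlite3

COMMIT_FREQUENCY = 4   # commit after this many records; 0 would mean commit only at the end

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (id INTEGER)")
conn.commit()

commits = 0
for i in range(10):
    conn.execute("INSERT INTO t VALUES (?)", (i,))
    if COMMIT_FREQUENCY and (i + 1) % COMMIT_FREQUENCY == 0:
        conn.commit()          # keeps the transaction log small...
        commits += 1           # ...but committed records can no longer roll back

conn.commit()                  # final commit for the remaining records
commits += 1
count = conn.execute("SELECT COUNT(*) FROM t").fetchone()[0]
```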
ConstraintDDL | T | Allows you to specify additional SQL data definition language statements to be executed after their target table is created. This is similar to the support we provide for SQL pass-through in the SQL import connectors. Each line must be a valid ODBC DDL statement. No default exists for this property. For example, you could have the statements: CREATE UNIQUE INDEX index1 ON mytable (Field1 ASC) CREATE INDEX index2 ON mytable (Field2, Field3) These statements would create two indices on the table "mytable". The first one does not allow duplicates and the index values are stored in ascending order. The second index is a compound index on fields Field2 and Field3. The ConstraintDDL is executed only if the replace mode is used for the target. If there are any errors, the errors are written to the error and event log file. An error during transformation brings up the transformation error dialog box. If you want to ignore the DDL errors, you may continue the transformation. ConstraintDDL also supports an escaping mechanism that allows users to specify DDL in the native SQL of the DBMS. Any statement that is preceded by an "@" is sent straight to the DBMS. The statement @CREATE INDEX pk_mytable ON mytable (Field1, Field2) WITH PRIMARY is a DDL statement for creating a primary key for the table mytable. Some ODBC drivers do not support the SQL extensions needed to create a primary key with the ODBC variant of the SQL CREATE statement. So to create primary keys in these cases, use native SQL. Note: This option works only in REPLACE mode. |
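The two example statements from the description can be run as a sketch — sqlite3 stands in for the ODBC target and accepts both index statements; the @-prefixed native-SQL escape is noted only in a comment, since its syntax is driver-specific:

```python
import sqlite3

# The ConstraintDDL value: one valid DDL statement per line, executed
# after the target table is created in replace mode.
constraint_ddl = [
    "CREATE UNIQUE INDEX index1 ON mytable (Field1 ASC)",
    "CREATE INDEX index2 ON mytable (Field2, Field3)",
]

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE mytable (Field1, Field2, Field3)")  # the replace-mode CREATE

for stmt in constraint_ddl:
    # A leading '@' would mark the statement as native SQL to pass through untouched.
    conn.execute(stmt.lstrip("@"))

index_names = sorted(row[1] for row in conn.execute("PRAGMA index_list('mytable')"))
```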
UseCursors | T | The UseCursors property allows you to turn cursor support on or off. The default is false. If set to true and the specified ODBC driver does not support cursor inserts, the integration platform uses the SQL INSERT mode of adding records. For exports, cursor support is supposed to enhance the performance of inserting records. This appears to be the case for desktop databases. For database servers, there is no noticeable change in insert speed; they seem to execute prepared queries about as quickly as they handle cursor inserts. Another complication of cursor inserts is that some drivers require that the target table be indexed; otherwise, positioned updates (cursors) are not allowed. Two additional properties in the ODBC export connector address this issue: PrimaryKey and ConstraintDDL. |
ArraySize | T | Determines the number of rows to be sent to the server at one time. The default value is 1, meaning each row is individually sent to the server. Larger values buffer multiple rows and send them all at once. While this improves speed, it affects error reporting (a server error is not detected or reported until the next batch of records is sent to the server). The maximum value allowed for this property is 100000. While the connector allows the value to be set that high, many drivers have lower limits, and the connector logs a message if the driver forces a lower value for the array size. In addition, the connector does not support arrays when there is a LOB-type field in the table, or when the (maximum) length of a character-type field is longer than 32767 characters. In these cases, a message is logged indicating the array size has been reduced to 1. Due to the way the connector attempts to support older drivers, array support requires that either BulkOperations be true (which means UseCursors must also be true) or that both be false. If BulkOperations is false and UseCursors is true, the array size is reset to 1 and a message is logged indicating this condition occurred. |
DriverCompletion | ST | Allows you to control whether or not the driver prompts you for information. The options available are Prompt, Complete (default), Complete Required and No Prompt. The Prompt option prompts you for every individual piece of information. Complete prompts you for any information you omitted. The Complete Required option prompts you only for information that is essential to complete the connection. The No Prompt option does not prompt you for any information. |
DriverOptions | ST | Default is empty for a new map. This property is automatically populated with the ODBC connect string upon successful connection. For ODBC connectors, the map saves database information in the DriverOptions property with real values (not macros) during the last successful connection. If you want to run the map with a different database server, delete the value for DriverOptions and use the correct macros for the connection information. The map repopulates the value of DriverOptions when it successfully connects to the new database server. |
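A sketch of the macro idea — the macro syntax and connect-string keys below are hypothetical stand-ins, since the description does not spell them out:

```python
# DriverOptions after clearing the saved real values and substituting macros
# (hypothetical $(NAME) macro syntax).
driver_options = "DSN=$(DSN);UID=$(UID);PWD=$(PWD)"

# Values supplied for the new database server at run time.
macros = {"$(DSN)": "TestWarehouse", "$(UID)": "tester", "$(PWD)": "changeme"}

connect_string = driver_options
for macro, value in macros.items():
    connect_string = connect_string.replace(macro, value)
```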
DSNType | ST | You can specify data sources or drivers to connect to in the Data Source Name list. DSNs are listed in the ODBC Data Source Administrator of Windows Administrative Tools in the Control Panel. The options are User & System, System, User, Driver, and File. Default is User & System. |
IdentifierQuotes | ST | All databases have what are called quoted identifiers. You use quoted identifiers to make the SQL statement parseable and to distinguish between columns and character data in SQL statements. For example, Oracle uses double quotes for column and table names in SQL statements and single quotes for character data. In a SQL statement, you should enclose identifiers that contain special characters or match keywords in identifier quote characters (also known as delimited identifiers in SQL-92). For example, the Accounts Receivable identifier is quoted in the following SELECT statement: SELECT * FROM "Accounts Receivable" If you do not use identifier quotes, the parser assumes that there are two tables, Accounts and Receivable, and returns a syntax error indicating that they are not separated by a comma. The options for IdentifierQuotes are Default, None, " and '. |
MaxDataLength | ST | The maximum data length for long data types. The default is 1 MB. You can reset this number as you choose based on your available memory capabilities and Target requirements. Some ODBC drivers have limitations concerning the maximum data length they can handle. If you choose an Application (ODBC) source or target connector and the default setting is not 1 MB, the integration platform presets the default for the capabilities of that particular ODBC driver. It is not recommended that the MaxDataLength property be set any higher under those conditions. |
ModifyDriverOptions | ST | Allows you to store the ODBC connection. The default is true. If you set this to false, you are prompted for your connection information each time you run the transformation. |
PrimaryKey | ST | The PrimaryKey property allows you to specify a list of field names that are used to make the primary key. Field names are delimited by commas. If the PrimaryKey property contains one or more field names, these names are included in the SQL CREATE statement when the connector is in replace mode. No default exists for this property. There is one additional requirement for using the PrimaryKey property. The ODBC driver must support integrity enhancement facility (IEF). Only the more advanced ODBC drivers support this. |
Synonyms | ST | If set to true, this property allows you to see synonyms. The alias names appear in the table list along with the tables. Default is false. |
SystemTables | ST | The SystemTables property is only applicable if you are logged onto the database as the database administrator. Only the DBA has access to SystemTables. If set to true, this property allows you to see all the tables created by the DBA. The system table names appear in the table list. The default is false. |
TransactionIsolation (v3.x and v3.5 only) | ST | The TransactionIsolation option allows you to specify any one of four isolation levels when reading from or writing to a database table with ODBC. The default is Serializable. The ANSI SQL 2 standard defines three specific ways in which serializability of a transaction may be violated: P1 (Dirty Read), P2 (Nonrepeatable Read) and P3 (Phantoms). The following lists the four supported isolation levels. • READ_UNCOMMITTED – Permits P1, P2, and P3. • READ_COMMITTED – Permits P2 and P3. Does not permit P1. • REPEATABLE_READ – Permits P3. Does not permit P1 and P2. • SERIALIZABLE – Does not permit any of P1, P2, and P3. For further details about TransactionIsolation levels, see the Microsoft ODBC SDK documentation. |
Views | ST | If set to true, this property allows you to see views. The view names appear in the table list along with the tables. Default is true. |
Encoding | ST | Sets the character encoding for ODBC 3.x source and target files. Note: Shift-JIS encoding is used only in Japanese operating systems. UCS-2 is no longer a valid encoding name, but you may use UCS2 by changing the string value in the XML file. |
Property | S/T | Description |
Encoding | T | Sets the character encoding for ODBC 3.x Mass Insert source and target files. Note: Shift-JIS encoding is used only in Japanese operating systems. UCS-2 is no longer a valid encoding name, but you may use UCS2 by changing the string value in the XML file. |
ConstraintDDL | T | Allows you to specify additional SQL data definition language statements to be executed after the target table is created. This is similar to the support we provide for SQL pass-through in the SQL import connectors. Each line must be a valid ODBC DDL statement. For example, you could have the statements: CREATE UNIQUE INDEX index1 ON mytable (Field1 ASC) CREATE INDEX index2 ON mytable (Field2, Field3) These statements would create two indices on the table "mytable". The first one does not allow duplicates and the index values are stored in ascending order. The second index is a compound index on fields Field2 and Field3. The ConstraintDDL is executed only if the replace mode is used for the target. If there are any errors, the errors are written to the error and event log file. An error during transformation brings up the transformation error dialog box. If you want to ignore the DDL errors, you can continue the transformation. ConstraintDDL also supports an escaping mechanism that allows users to specify DDL in the native SQL of the DBMS. Any statement that is preceded by an '@' is sent straight to the DBMS. The statement: @CREATE INDEX pk_mytable ON mytable (Field1, Field2) WITH PRIMARY is a DDL statement for creating a primary key for the table mytable. Some ODBC drivers do not support the SQL extensions needed to create a primary key with the ODBC variant of the SQL CREATE statement. So to create primary keys in these cases, use native SQL. Note: This option works only in REPLACE mode. |
CommitFrequency | T | Controls how often data is committed to the database. Default is zero, meaning that data is committed at the end of the transformation, allowing rollback on error. This is the slowest setting. When doing large transformations, this is not practical as it may produce too many transaction log entries. Setting the CommitFrequency to some nonzero value tells the connector to do a database commit after inserting or updating the specified number of records. This keeps the transaction log from getting too large but limits the restartability of the transformation. |
SystemTables | T | The SystemTables property is only applicable if you are logged onto the database as the database administrator. Only the DBA has access to SystemTables. If set to true, this property allows you to see all the tables created by the DBA. The system table names appear in the table list. The default is false. |
Views | T | If set to true, this property allows you to see views. The view names appear in the table list along with the tables. Default is true. |
Synonyms | T | If set to true, this property allows you to see synonyms. The alias names appear in the table list along with the tables. Default is false. |
DSNType | T | You can specify data sources or drivers to connect to in the Data Source Name list. DSNs are listed in the ODBC Data Source Administrator of Windows Administrative Tools in the Control Panel. The options are User & System, System, User, Driver, and File. Default is User & System. |
AutoCommit | T | Allows you to automatically commit changes as they are made by each SQL statement, instead of waiting until the end of the transaction. If AutoCommit is set to true, there is no way to roll back changes once they have been made. The default is false. |
BulkOperations | T | Use bulk operations for faster insert. This property determines if an insert statement is executed for each record or a bulk add is executed for each record. The default is true. To cause an insert statement to be executed for each record, this should be set to false, the slower setting. |
DriverCompletion | T | Allows you to control whether or not the driver prompts you for information. The options are Prompt, Complete (default), Complete Required, and No Prompt. The Prompt option prompts you for every individual piece of information. Complete prompts you for any information you omitted. The Complete Required option prompts you only for information that is essential to complete the connection. The No Prompt option does not prompt you for any information. |
IdentifierQuotes | T | All databases have what are called quoted identifiers. You use quoted identifiers to make the SQL statement parseable and to distinguish between columns and character data in SQL statements. For example, Oracle uses double quotes for column and table names in SQL statements and single quotes for character data. In a SQL statement, you should enclose identifiers that contain special characters or match keywords in identifier quote characters (also known as delimited identifiers in SQL-92). For example, the Accounts Receivable identifier is quoted in the following SELECT statement: SELECT * FROM "Accounts Receivable" If you do not use identifier quotes, the parser assumes there are two tables, Accounts and Receivable, and returns a syntax error indicating that they are not separated by a comma. Options for IdentifierQuotes are Default, None, " and '. |
ModifyDriverOptions | T | Allows you to store the ODBC connection. The default is true. If you set this to false, you are prompted for your connection information each time that you run your transformation. |
DriverOptions | T | Enter any valid ODBC connect string options here. |
MaxDataLength | T | The maximum data length for long data types. Default is 1 MB. You can reset this number as you choose based on your available memory capabilities and target requirements. Some ODBC drivers have limitations concerning the maximum data length they can handle. If you choose an Application (ODBC) Source or target connector and the default setting is not 1 MB, then the integration platform sets the default for the capabilities of that particular ODBC driver. It is not recommended that the MaxDataLength property be set any higher under those conditions. |
PrimaryKey | T | The PrimaryKey property allows you to specify a list of field names that are used to make the primary key. The field names are delimited by commas. If the PrimaryKey property contains one or more field names, these names are included in the SQL CREATE statement when the connector is in replace mode. There is one additional requirement for using the PrimaryKey property. The ODBC driver must support integrity enhancement facility (IEF). Only the more advanced ODBC drivers support this. |
TransactionIsolation | T | The TransactionIsolation option allows you to specify any one of four isolation levels when reading from or writing to a database table with ODBC. The default is Serializable. The ANSI SQL 2 standard defines three specific ways in which serializability of a transaction may be violated: P1 (Dirty Read), P2 (Nonrepeatable Read), and P3 (Phantoms). The following lists the four supported isolation levels. • READ_UNCOMMITTED - Permits P1, P2, and P3. • READ_COMMITTED - Permits P2 and P3. Does not permit P1. • REPEATABLE_READ - Permits P3. Does not permit P1 and P2. • SERIALIZABLE - Does not permit any of P1, P2, and P3. For more information, see the Microsoft ODBC SDK documentation. |
UseCursors | T | The UseCursors property allows you to turn cursor support on and off. The default is false. If you set the property to true and the specified ODBC driver does not support cursor inserts, the integration platform uses the SQL INSERT mode of adding records. For exports, cursor support is supposed to enhance the performance of inserting records. This appears to be the case for desktop databases. For database servers, there is no noticeable change in insert speed; they seem to execute prepared queries about as quickly as they handle cursor inserts. Another complication of cursor inserts is that some drivers require that the target table be indexed; otherwise, positioned updates (cursors) are not allowed. Two additional properties in the ODBC export connector address this issue: PrimaryKey and ConstraintDDL (see details above). |
ArraySize | T | Determines the number of rows to be sent to the server at one time. The default value is 1, meaning each row is individually sent to the server. Larger values buffer multiple rows and send them all at once. While this improves speed, it affects error reporting (a server error is not detected or reported until the next batch of records is sent to the server). The maximum value allowed for this property is 100000. While the connector allows the value to be set that high, many drivers have lower limits, and the connector logs a message if the driver forces a lower value for the array size. In addition, the connector does not support arrays when there is a LOB-type field in the table, or when the (maximum) length of a character-type field is longer than 32767 characters. In these cases, a message is logged indicating the array size has been reduced to 1. Due to the way the connector attempts to support older drivers, array support requires that either BulkOperations be true (which means UseCursors must also be true) or that both be false. If BulkOperations is false and UseCursors is true, the array size is reset to 1 and a message is logged indicating this condition occurred. |
Property | S/T | Description |
AutoCommit | T | Allows you to automatically commit changes as they are made by each SQL statement, instead of waiting until the end of the transaction. If AutoCommit is set to true, there is no way to roll back changes once they have been made. The default is false. |
DriverCompletion | T | Allows you to control whether or not the driver prompts you for information. The options available are Prompt, Complete, Complete Required, No Prompt. The Prompt option prompts the user for every individual bit of information. Complete prompts the user for any information they forgot to enter. The Complete Required option prompts the user only for information that is essential to complete the connection. The No Prompt option does not prompt the user for any information. The default is Complete. |
DriverOptions | T | Enter any valid ODBC connect string options here. |
DSNType | T | You can specify data sources or drivers to connect to in the Data Source Name list. DSNs are listed in the ODBC Data Source Administrator of Windows Administrative Tools in the Control Panel. The options are User & System, System, User, Driver, and File. Default is User & System. |
IdentifierQuotes | T | All databases have what are called quoted identifiers. You use quoted identifiers to make the SQL statement parseable and to distinguish between columns and character data in SQL statements. For example, Oracle uses double quotes for column and table names in SQL statements and single quotes for character data. In a SQL statement, you should enclose identifiers that contain special characters or match keywords in identifier quote characters (also known as delimited identifiers in SQL-92). For example, the Accounts Receivable identifier is quoted in the following SELECT statement: SELECT * FROM "Accounts Receivable" If you do not use identifier quotes, the parser assumes there are two tables, Accounts and Receivable, and returns a syntax error indicating that they are not separated by a comma. IdentifierQuotes has four options: Default, None, ", and '. |
MaxDataLength | T | The maximum data length for long data types. The default is 1 MB. You can reset this number as you choose based on your available memory capabilities and Target requirements. Some ODBC drivers have limitations concerning the maximum data length they can handle. If you choose an Application (ODBC) source or target connector and the default setting is not 1 MB, then the integration platform sets the default for the capabilities of that particular ODBC driver. It is not recommended that the MaxDataLength property be set any higher under those conditions. |
ModifyDriverOptions | T | Allows you to store the ODBC connection. The default is true. If you set this to false, you are prompted for your connection information each time you run your transformation; if you access multiple tables in the same transformation, you may be prompted more than once each time that transformation runs. |
SQL Log | T | The default is sql.log in the default installation directory. To use a different log, browse to the file, or enter the path and file name. |
UpdateNullFields | T | Null values are sent to the database when inserting or updating records. The default is true. If set to false, null values are not sent to the database when you insert or update records. When set to false, this property forces the connector to operate in unbound mode, which may cause slower performance. |
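A sketch of the UpdateNullFields=false behavior — null fields are stripped from the record before the UPDATE is built, so existing values survive. sqlite3 stands in for the target, and the record and table are hypothetical:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE c (id INTEGER PRIMARY KEY, name TEXT, phone TEXT)")
conn.execute("INSERT INTO c VALUES (1, 'Ada', '555-0100')")

record = {"name": "Ada Lovelace", "phone": None}

# Drop null fields so they are never sent to the database.
fields = {k: v for k, v in record.items() if v is not None}
assignments = ", ".join(f"{k} = ?" for k in fields)
conn.execute(f"UPDATE c SET {assignments} WHERE id = ?", (*fields.values(), 1))

row = conn.execute("SELECT name, phone FROM c WHERE id = 1").fetchone()
```

The phone column keeps its stored value because the null field never reached the UPDATE statement.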
SQL Output | T | Allows you to select bound or unbound mode and whether you want to write SQL statements to a SQL log or not. Keep in mind that bound mode is faster, since bind variables are used. Target Only is the default output: • Target Only - Uses bound mode, which uses bind variables. SQL statements are sent to the Target and not to the SQL log specified in the SQL Log property. • Target Only (Unbound Mode) - Uses unbound mode, which does not use bind variables and sends the literal SQL statement to the database engine. SQL statements are sent to the Target and not to the SQL log specified in the SQL Log property. • Target and SQL Log - Sends SQL statements to the Target and to the SQL log specified in the SQL Log property. • SQL Log Only - Sends SQL statements only to the SQL log file specified in the SQL Log property. |
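The bound-versus-unbound distinction can be sketched as follows — in bound mode the statement keeps its bind variables and the values travel separately, while in unbound mode the literal SQL (which is also what a SQL log would record) must carry the escaped values inline. sqlite3 stands in for the database engine:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (name TEXT)")

# Bound mode: bind variables; the value is passed separately.
bound_sql = "INSERT INTO t VALUES (?)"
conn.execute(bound_sql, ("O'Hara",))

# Unbound mode: the literal statement reaches the engine, so the value
# must be escaped inline (doubling the embedded quote here).
value = "O'Hara".replace("'", "''")
unbound_sql = f"INSERT INTO t VALUES ('{value}')"
conn.execute(unbound_sql)

names = [r[0] for r in conn.execute("SELECT name FROM t")]
```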
Synonyms | T | If set to true, this property allows you to see synonyms. The alias names appear in the table list along with the tables. Default is false. |
SystemTables | T | The SystemTables property is only applicable if you are logged onto the database as the database administrator. Only the DBA has access to SystemTables. If set to true, this property allows you to see all the tables created by the DBA. The system table names appear in the table list. The default is false. |
TransactionIsolation | T | The TransactionIsolation option allows you to specify any one of four isolation levels when reading from or writing to a database table with ODBC. The default is Serializable. The ANSI SQL 2 standard defines three specific ways in which serializability of a transaction may be violated: P1 (Dirty Read), P2 (Nonrepeatable Read) and P3 (Phantoms). The following lists the four supported isolation levels. • READ_UNCOMMITTED - Permits P1, P2, and P3. • READ_COMMITTED - Permits P2 and P3. Does not permit P1. • REPEATABLE_READ - Permits P3. Does not permit P1 and P2. • SERIALIZABLE - Does not permit any of P1, P2, and P3. For further details about TransactionIsolation levels, see the Microsoft ODBC SDK documentation. |
Views | T | If set to true, this property allows you to see views. The view names appear in the table list along with the tables. Default is true. |
Encoding | T | Sets the character encoding for ODBC 3.5 source and target files. Note: Shift-JIS encoding is used only in Japanese operating systems. UCS-2 is no longer a valid encoding name, but you may use UCS2 by changing the string value in the XML file. |
Property | S/T | Description |
CursorType | S | The type of cursor to use for retrieving records from the source table. The choices available are Forward Only, Static and Dynamic. The default setting is Forward Only. For more information, see Cursors.
WhereStmt | S | Provides a pass-through mechanism for SQL connectors where advanced users can construct the Where clause of the SQL query themselves. It can be used as an alternative to writing a lengthy query statement. You might use this to instruct the SQL database server to filter the data based on a particular condition before sending it to the integration platform. Omit the WHERE keyword when you enter the clause. No default exists for this property. Note: When the source connection is a Select statement, do not apply the WhereStmt. Instead, include the Where clause in your Select statement. This property enables data filtering when you select a table.
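The "omit WHERE" convention can be sketched as a small Python helper. This is an illustrative model only — the function name `build_query` is hypothetical, not part of the connector:

```python
def build_query(table, where_stmt=""):
    """Combine a table name with a WhereStmt value.

    The WhereStmt is entered WITHOUT the WHERE keyword; the
    connector (modeled here) prepends it when the clause is set.
    """
    sql = "SELECT * FROM " + table
    if where_stmt:
        sql += " WHERE " + where_stmt
    return sql

# The clause "Amount > 1000" becomes "... WHERE Amount > 1000"
print(build_query("Orders", "Amount > 1000"))
```

With an empty WhereStmt, the plain `SELECT * FROM table` is sent and all rows are returned.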
AutoCommit | T | Allows you to automatically commit changes as they are made by each SQL statement, instead of waiting until the end of the transaction. If AutoCommit is set to true, there is no way to roll back changes once they have been made. The default is false. |
BulkOperations | T | Use bulk operations for faster insert. This property determines if an insert statement is executed for each record or a bulk add is executed for each record. The default is false, the slower setting. To maximize speed and instruct the integration platform to use a bulk add, change this setting to true. |
CommitFrequency | T | Controls how often data is committed to the database. Default is zero, meaning that data is committed at the end of the transformation, allowing rollback on error. This is the slowest setting. When doing large transformations, this is not practical as it may produce too many transaction log entries. Setting the CommitFrequency to some nonzero value tells the connector to do a database commit after inserting or updating the specified number of records. This keeps the transaction log from getting too large but limits the restartability of the transformation. |
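The commit behavior described above can be modeled in a few lines of Python. This is a sketch of the documented semantics, not connector code; `commit_points` is a hypothetical name:

```python
def commit_points(record_count, commit_frequency=0):
    """Return the record counts at which a commit would occur.

    commit_frequency == 0: a single commit at the end of the
    transformation (slowest, but fully rollback-able).
    commit_frequency == N: a commit after every N records, plus a
    final commit for any remainder.
    """
    commits = []
    if commit_frequency:
        commits = list(range(commit_frequency, record_count + 1,
                             commit_frequency))
    # Always commit whatever remains at the end of the transformation.
    if not commits or commits[-1] != record_count:
        commits.append(record_count)
    return commits

print(commit_points(10, 4))  # commits after records 4, 8, and 10
```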
ConstraintDDL | T | Allows you to specify additional SQL data definition language statements to be executed after their target table is created. This is similar to the support we provide for SQL pass-through in the SQL import connectors. Each line must be a valid ODBC DDL statement. No default exists for this property. For example, you could have the statements: CREATE UNIQUE INDEX index1 ON mytable (Field1 ASC) CREATE INDEX index2 ON mytable (Field2, Field3) These statements would create two indices on the table "mytable". The first one does not allow duplicates and the index values are stored in ascending order. The second index is a compound index on fields Field2 and Field3. The ConstraintDDL is executed only if the replace mode is used for the target. If there are any errors, the errors are written to the error and event log file. An error during transformation brings up the transformation error dialog box. If you want to ignore the DDL errors, you may continue the transformation. ConstraintDDL also supports an escaping mechanism that allows users to specify DDL in the native SQL of the DBMS. Any statement that is preceded by an "@" is sent straight to the DBMS. The statement @CREATE INDEX pk_mytable ON mytable (Field1, Field2) WITH PRIMARY is a DDL statement for creating a primary key for the table mytable. Some ODBC drivers do not support the SQL extensions needed to create a primary key with the ODBC variant of the SQL CREATE statement. So to create primary keys in these cases, use native SQL. Note: This option works only in REPLACE mode. |
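The "@" escaping mechanism splits ConstraintDDL into two kinds of statements. A minimal sketch of that routing logic (the helper name `classify_ddl` is illustrative):

```python
def classify_ddl(constraint_ddl):
    """Split ConstraintDDL lines into (native, odbc) statement lists.

    Lines prefixed with '@' bypass ODBC translation and are sent
    straight to the DBMS in its native SQL; all other lines must be
    valid ODBC DDL statements.
    """
    native, odbc = [], []
    for line in constraint_ddl.splitlines():
        line = line.strip()
        if not line:
            continue
        if line.startswith("@"):
            native.append(line[1:])  # strip the '@' marker
        else:
            odbc.append(line)
    return native, odbc
```

For example, the primary-key statement from the description above would land in the native list, while the two CREATE INDEX statements would go through the ODBC path.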
UseCursors | T | The UseCursors property allows you to turn cursor support on or off. The default is false. If set to true and the specified ODBC driver does not support cursor inserts, the integration platform uses the SQL INSERT mode of adding records. For exports, cursor support is supposed to enhance the performance of inserting records. This appears to be the case for desktop databases. For database servers, there is no noticeable change in insert speed; they seem to execute prepared queries about as quickly as they handle cursor inserts. Another complication of cursor inserts is that some drivers require that the target table be indexed; otherwise, positioned updates (cursors) are not allowed. Two additional properties in the ODBC export connector address this issue: PrimaryKey and ConstraintDDL.
ArraySize | T | Determines the number of rows to be sent to the server at one time. The default value is 1, meaning each row is individually sent to the server. Larger values will buffer multiple rows and send them all at once. While this improves the speed, it affects error reporting (a server error won't be detected/reported until the next batch of records is sent to the server). The maximum value allowed for this property is 100000. While the connector allows the value to be set that high, many drivers have lower limits. The connector will log a message indicating if the driver is forcing a lower value for the array size. In addition, the connector does not support arrays when there is a LOB-type field in the table, or when the (maximum) length of a character-type field is longer than 32767 characters. In these cases, a message will be logged indicating the array size has been reduced to 1. Due to the way the connector attempts to support older drivers, the array support requires either BulkOperations be true (which means UseCursors must also be true), or both must be false. The condition where BulkOperations is false and UseCursors is true will cause the array size to be reset to 1, with a message logged indicating this condition occurred. |
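The constraints on ArraySize described above can be condensed into one decision function. This is a paraphrase of the documented rules, not the connector's actual code; `effective_array_size` is a hypothetical name:

```python
def effective_array_size(array_size, bulk_operations, use_cursors,
                         has_lob=False, max_char_len=0):
    """Compute the array size the connector would actually use.

    Per the documentation: arrays are disabled (size forced to 1)
    when the table has a LOB-type field, when a character field is
    longer than 32767, or when BulkOperations is false while
    UseCursors is true. The hard cap is 100000 (drivers may impose
    lower limits of their own).
    """
    if has_lob or max_char_len > 32767:
        return 1
    if not bulk_operations and use_cursors:
        return 1
    return min(array_size, 100000)
```

So a setting of 500 survives only when BulkOperations/UseCursors are both true or both false and no LOB or oversized character fields are present.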
DriverCompletion | ST | Allows you to control whether or not the driver prompts you for information. The options available are Prompt, Complete (default), Complete Required and No Prompt. The Prompt option prompts you for every individual bit of information. Complete prompts you for any information you forgot to enter. The Complete Required option prompts you only for information that is essential to complete the connection. The No Prompt option does not prompt you for any information.
DriverOptions | ST | Default is empty for a new map. This property is automatically populated with the ODBC connect string upon successful connection. For ODBC connectors, the map saves the database information in the DriverOptions property with real values (not macros) from the last successful connection. If you want to run the map against a different database server, delete the value of DriverOptions and use the correct macros for the connection information. The map repopulates the value of DriverOptions when it successfully connects to the new database server.
DSNType | ST | You can specify data sources or drivers to connect to in the Data Source Name list. DSNs are listed in the ODBC Data Source Administrator of Windows Administrative Tools in the Control Panel. The options are User & System, System, User, Driver, and File. Default is User & System. |
IdentifierQuotes | ST | All databases have what are called quoted identifiers. You use quoted identifiers to make the SQL statement parseable and distinguish between columns and character data in SQL statements. For example, Oracle uses double quotes for column and table names in SQL statements and uses single quotes for character data. In a SQL statement, you should enclose identifiers that contain special characters or match keywords in identifier quote characters (these are also known as delimited identifiers in SQL-92). For example, the Accounts Receivable identifier is quoted in the following SELECT statement: SELECT * FROM "Accounts Receivable" If you do not use identifier quotes, the parser assumes that there are two tables, Accounts and Receivable, and returns a syntax error indicating that they are not separated by a comma. The options for IdentifierQuotes are Default, None, " and '.
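The quoting rule can be illustrated with a small helper. This is a generic sketch (the name `quote_identifier` and the doubling-based escape are assumptions; actual escaping rules vary by database):

```python
def quote_identifier(name, quote_char='"'):
    """Wrap an identifier in the configured quote character so that
    names containing spaces or matching keywords parse as a single
    delimited identifier. An embedded quote character is escaped by
    doubling it, as in SQL-92."""
    if not quote_char:
        return name  # IdentifierQuotes = None
    escaped = name.replace(quote_char, quote_char * 2)
    return quote_char + escaped + quote_char

print("SELECT * FROM " + quote_identifier("Accounts Receivable"))
```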
MaxDataLength | ST | The maximum data length for long data types. The default is 1 MB. You can reset this number as you choose based on your available memory capabilities and Target requirements. Some ODBC drivers have limitations concerning the maximum data length they can handle. If you choose an Application (ODBC) source or target connector and the default setting is not 1 MB, the integration platform presets the default for the capabilities of that particular ODBC driver. It is not recommended that the MaxDataLength property be set any higher under those conditions. |
ModifyDriverOptions | ST | Allows you to store the ODBC connection. The default is true. If you set this to false, you are prompted for your connection information each time you run the transformation. |
PrimaryKey | ST | The PrimaryKey property allows you to specify a list of field names that are used to make the primary key. Field names are delimited by commas. If the PrimaryKey property contains one or more field names, these names are included in the SQL CREATE statement when the connector is in replace mode. No default exists for this property. There is one additional requirement for using the PrimaryKey property. The ODBC driver must support integrity enhancement facility (IEF). Only the more advanced ODBC drivers support this. |
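How the comma-delimited PrimaryKey value folds into the generated CREATE statement can be sketched as follows. The helper name `create_table_ddl` and the exact clause layout are illustrative, assuming a driver that supports IEF:

```python
def create_table_ddl(table, columns, primary_key=""):
    """Build a CREATE TABLE statement in replace mode.

    columns:      list of (name, type) pairs
    primary_key:  the PrimaryKey property value — field names
                  delimited by commas; empty means no key clause.
    """
    cols = ", ".join(name + " " + dtype for name, dtype in columns)
    pk_fields = [f.strip() for f in primary_key.split(",") if f.strip()]
    if pk_fields:
        cols += ", PRIMARY KEY (" + ", ".join(pk_fields) + ")"
    return "CREATE TABLE " + table + " (" + cols + ")"
```

With `primary_key=""` the statement is a plain CREATE TABLE; with `"Field1, Field2"` a compound PRIMARY KEY clause is appended.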
Synonyms | ST | If set to true this property allows you to see synonyms. The alias names appear in the table list along with the tables. Default is false. |
SystemTables | ST | The SystemTables property is only applicable if you are logged onto the database as the database administrator. Only the DBA has access to SystemTables. If set to true, this property allows you to see all the tables created by the DBA. The system table names appear in the table list. The default is false. |
TransactionIsolation (v3.x and v3.5 only) | ST | The TransactionIsolation option allows you to specify one of four isolation levels when reading from or writing to a database table with ODBC. The default is Serializable. The ANSI SQL 2 standard defines three specific ways in which serializability of a transaction may be violated: P1 (Dirty Read), P2 (Nonrepeatable Read) and P3 (Phantoms). The four supported isolation levels are listed below. • READ_UNCOMMITTED – Permits P1, P2, and P3. • READ_COMMITTED – Permits P2 and P3. Does not permit P1. • REPEATABLE_READ – Permits P3. Does not permit P1 and P2. • SERIALIZABLE – Does not permit any of P1, P2, and P3. For further details about TransactionIsolation levels, see the Microsoft ODBC SDK documentation.
Views | ST | If set to true this property allows you to see views. The view names appear in the table list along with the tables. Default is true. |
Encoding | ST | Sets the character encoding for ODBC 3.x source and target files. Note: Shift-JIS encoding is used only in Japanese operating systems. UCS-2 is no longer a valid encoding name, but you may use UCS2 by changing the string value in the XML file. |
Property | S/T | Description |
AutoCommit | T | Allows you to automatically commit changes as they are made by each SQL statement, instead of waiting until the end of the transaction. If AutoCommit is set to true, there is no way to roll back changes once they have been made. The default is false. |
BulkOperations | T | Use bulk operations for faster insert. This property determines if an insert statement is executed for each record or a bulk add is executed for each record. The default is true. To cause an insert statement to be executed for each record, this should be set to false, the slower setting. |
CommitFrequency | T | Controls how often data is committed to the database. Default is zero, meaning that data is committed at the end of the transformation, allowing rollback on error. This is the slowest setting. When doing large transformations, this is not practical as it may produce too many transaction log entries. Setting the CommitFrequency to some nonzero value tells the connector to do a database commit after inserting or updating the specified number of records. This keeps the transaction log from getting too large but limits the restartability of the transformation. |
ConstraintDDL | T | Allows you to specify additional SQL data definition language statements to be executed after their target table is created. This is similar to the support we provide for SQL pass-through in the SQL import connectors. Each line must be a valid ODBC DDL statement. For example, you could have the statements: CREATE UNIQUE INDEX index1 ON mytable (Field1 ASC) CREATE INDEX index2 ON mytable (Field2, Field3) These statements would create two indices on the table "mytable". The first one does not allow duplicates and the index values are stored in ascending order. The second index is a compound index on fields Field2 and Field3. The ConstraintDDL is executed only if the replace mode is used for the target. If there are any errors, the errors are written to the error and event log file. If errors occur during the transformation, the transformation error dialog box appears. If you want to ignore the DDL errors, you can continue the transformation. ConstraintDDL also supports an escaping mechanism that allows users to specify DDL in the native SQL of the DBMS. Any statement that is preceded by an '@' is sent straight to the DBMS. The statement: @CREATE INDEX pk_mytable ON mytable (Field1, Field2) WITH PRIMARY is a DDL statement for creating a primary key for the table mytable. Some ODBC drivers do not support the SQL extensions needed to create a primary key with the ODBC variant of the SQL CREATE statement. So to create primary keys in these cases, use native SQL. Note: This option works only in REPLACE mode.
DriverCompletion | T | Allows you to control whether or not the driver prompts you for information. The options are Prompt, Complete (default), Complete Required, and No Prompt. The Prompt option prompts you for every individual bit of information. Complete prompts you for any information you forgot to enter. The Complete Required option prompts you only for information that is essential to complete the connection. The No Prompt option does not prompt you for any information.
DriverOptions | T | Enter any valid ODBC connect string options here. |
DSNType | T | You can specify data sources or drivers to connect to in the Data Source Name list. DSNs are listed in the ODBC Data Source Administrator of Windows Administrative Tools in the Control Panel. The options are User & System, System, User, Driver, and File. Default is User & System. |
IdentifierQuotes | T | All databases have what are called quoted identifiers. You use quoted identifiers to make the SQL statement parseable and distinguish between columns and character data in SQL statements. For example, Oracle uses double quotes for column and table names in SQL statements and uses single quotes for character data. In a SQL statement, you should enclose identifiers that contain special characters or match keywords in identifier quote characters (these are also known as delimited identifiers in SQL-92). For example, the Accounts Receivable identifier is quoted in the following SELECT statement: SELECT * FROM "Accounts Receivable" If you do not use identifier quotes, the parser assumes that there are two tables, Accounts and Receivable, and returns a syntax error indicating that they are not separated by a comma. Options for IdentifierQuotes are Default, None, " and '.
MaxDataLength | T | The maximum data length for long data types. Default is 1 MB. You can reset this number as you choose based on your available memory capabilities and target requirements. Some ODBC drivers have limitations concerning the maximum data length they can handle. If you choose an Application (ODBC) Source or target connector and the default setting is not 1 MB, then the integration platform sets the default for the capabilities of that particular ODBC driver. It is not recommended that the MaxDataLength property be set any higher under those conditions. |
ModifyDriverOptions | T | Allows you to store the ODBC connection. The default is true. If you set this to false, you are prompted for your connection information each time that you run your transformation. |
PrimaryKey | T | The PrimaryKey property allows you to specify a list of field names that are used to make the primary key. The field names are delimited by commas. If the PrimaryKey property contains one or more field names, these names are included in the SQL CREATE statement when the connector is in replace mode. There is one additional requirement for using the PrimaryKey property. The ODBC driver must support integrity enhancement facility (IEF). Only the more advanced ODBC drivers support this. |
Synonyms | T | If set to true, this property allows you to see synonyms. The alias names appear in the table list along with the tables. Default is false. |
SystemTables | T | The SystemTables property is only applicable if you are logged onto the database as the database administrator. Only the DBA has access to SystemTables. If set to true, this property allows you to see all the tables created by the DBA. The system table names appear in the table list. The default is false. |
TransactionIsolation | T | The TransactionIsolation option allows you to specify one of four isolation levels when reading from or writing to a database table with ODBC. The default is Serializable. The ANSI SQL 2 standard defines three specific ways in which serializability of a transaction may be violated: P1 (Dirty Read), P2 (Nonrepeatable Read), and P3 (Phantoms). The four supported isolation levels are listed below. • READ_UNCOMMITTED - Permits P1, P2, and P3. • READ_COMMITTED - Permits P2 and P3. Does not permit P1. • REPEATABLE_READ - Permits P3. Does not permit P1 and P2. • SERIALIZABLE - Does not permit any of P1, P2, and P3. For more information, see the Microsoft ODBC SDK documentation.
UseCursors | T | The UseCursors property allows you to turn cursor support on and off. The default is false. If you set the property to true and the specified ODBC driver does not support cursor inserts, the integration platform uses the SQL INSERT mode of adding records. For exports, cursor support is supposed to enhance the performance of inserting records. This appears to be the case for desktop databases. For database servers, there is no noticeable change in insert speed; they seem to execute prepared queries about as quickly as they handle cursor inserts. Another complication of cursor inserts is that some drivers require that the target table be indexed; otherwise, positioned updates (cursors) are not allowed. Two additional properties in the ODBC export connector address this issue: PrimaryKey and ConstraintDDL (see details above).
ArraySize | T | Determines the number of rows to be sent to the server at one time. The default value is 1, meaning each row is individually sent to the server. Larger values will buffer multiple rows and send them all at once. While this improves the speed, it affects error reporting (a server error won't be detected/reported until the next batch of records is sent to the server). The maximum value allowed for this property is 100000. While the connector allows the value to be set that high, many drivers have lower limits. The connector will log a message indicating if the driver is forcing a lower value for the array size. In addition, the connector does not support arrays when there is a LOB-type field in the table, or when the (maximum) length of a character-type field is longer than 32767 characters. In these cases, a message will be logged indicating the array size has been reduced to 1. Due to the way the connector attempts to support older drivers, the array support requires either BulkOperations be true (which means UseCursors must also be true), or both must be false. The condition where BulkOperations is false and UseCursors is true will cause the array size to be reset to 1, with a message logged indicating this condition occurred. |
Views | T | If set to true, this property allows you to see views. The view names appear in the table list along with the tables. Default is true. |
Encoding | T | Sets the character encoding for ODBC 3.x Mass Insert source and target files. Note: Shift-JIS encoding is used only in Japanese operating systems. UCS-2 is no longer a valid encoding name, but you may use UCS2 by changing the string value in the XML file. |
Property | S/T | Description |
AutoCommit | T | Allows you to automatically commit changes as they are made by each SQL statement, instead of waiting until the end of the transaction. If AutoCommit is set to true, there is no way to roll back changes once they have been made. The default is false. |
DriverCompletion | T | Allows you to control whether or not the driver prompts you for information. The options available are Prompt, Complete, Complete Required, No Prompt. The Prompt option prompts you for every individual bit of information. Complete prompts you for any information you forgot to enter. The Complete Required option prompts you only for information that is essential to complete the connection. The No Prompt option does not prompt you for any information. The default is Complete.
DriverOptions | T | Enter any valid ODBC connect string options here. |
DSNType | T | You can specify data sources or drivers to connect to in the Data Source Name list. DSNs are listed in the ODBC Data Source Administrator of Windows Administrative Tools in the Control Panel. The options are User & System, System, User, Driver, and File. Default is User & System. |
IdentifierQuotes | T | All databases have what are called quoted identifiers. You use quoted identifiers to make the SQL statement parseable and distinguish between columns and character data in SQL statements. For example, Oracle uses double quotes for column and table names in SQL statements and uses single quotes for character data. In a SQL statement, you should enclose identifiers that contain special characters or match keywords in identifier quote characters (these are also known as delimited identifiers in SQL-92). For example, the Accounts Receivable identifier is quoted in the following SELECT statement: SELECT * FROM "Accounts Receivable" If you do not use identifier quotes, the parser assumes that there are two tables, Accounts and Receivable, and returns a syntax error indicating that they are not separated by a comma. IdentifierQuotes has these options: Default, None, " and '.
MaxDataLength | T | The maximum data length for long data types. Default is 1 MB. You can reset this number as you choose based on your available memory capabilities and Target requirements. Some ODBC drivers have limitations concerning the maximum data length they can handle. If you choose an Application (ODBC) Source or target connector and the default setting is not 1 MB, then the integration platform sets the default for the capabilities of that particular ODBC driver. It is not recommended that the MaxDataLength property be set any higher under those conditions. |
ModifyDriverOptions | T | Allows you to store the ODBC connection. The default is true. If you set this to false, you are prompted for your connection information each time that you run your transformation. |
SQL Output | T | Allows you to select bound or unbound mode and whether you want to write SQL statements to a SQL log or not. Keep in mind that bound mode is faster, since bind variables are used. Target Only is the default output: • Target Only - Uses bound mode, which uses bind variables. SQL statements are sent to the Target and not to the SQL log specified in the SQL Log property. • Target Only (Unbound Mode) - Uses unbound mode, which does not use bind variables and sends the literal SQL statement to the database engine. SQL statements are sent to the Target and not to the SQL log specified in the SQL Log property. • Target and SQL Log - Sends SQL statements to the Target and to the SQL log specified in the SQL Log property. • SQL Log Only - Sends SQL statements only to the SQL log file specified in the SQL Log property. |
SQL Log | T | The default is sql.log in the default installation directory. To use a different log, browse to the file, or enter the path and file name. |
UpdateNullFields | T | Null values are sent to the database when inserting or updating records. The default is true. If set to false, null values are not sent to the database when you insert or update records, and the connector is forced to operate in unbound mode, which may cause slower performance.
Synonyms | T | If set to true, this property allows you to see synonyms. The alias names appear in the table list along with the tables. Default is false. |
SystemTables | T | The SystemTables property is only applicable if you are logged onto the database as the database administrator. Only the DBA has access to SystemTables. If set to true, this property allows you to see all the tables created by the DBA. The system table names appear in the table list. The default is false. |
TransactionIsolation | T | The TransactionIsolation option allows you to specify one of four isolation levels when reading from or writing to a database table with ODBC. The default is Serializable. The ANSI SQL 2 standard defines three specific ways in which serializability of a transaction may be violated: P1 (Dirty Read), P2 (Nonrepeatable Read) and P3 (Phantoms). The four supported isolation levels are listed below. • READ_UNCOMMITTED - Permits P1, P2, and P3. • READ_COMMITTED - Permits P2 and P3. Does not permit P1. • REPEATABLE_READ - Permits P3. Does not permit P1 and P2. • SERIALIZABLE - Does not permit any of P1, P2, and P3. For further details about TransactionIsolation levels, see the Microsoft ODBC SDK documentation.
Views | T | If set to true, this property allows you to see views. The view names appear in the table list along with the tables. Default is true. |
Encoding | T | Sets the character encoding for ODBC 3.x source and target files. Note: Shift-JIS encoding is used only in Japanese operating systems. UCS-2 is no longer a valid encoding name, but you may use UCS2 by changing the string value in the XML file. |
Property | ST | Version | Use |
AllowDistributedTransactions | ST | 10g, 11g, 12c | If set to true (default), distributed transaction processing is allowed. |
AppendHint | T | 10g, 11g, 12c | Applies only to insert operations. The default value is false. If this property is set to true, the hint is used in the Insert statement. |
ArraySize | T | 10g, 11g, 12c | Number of records fetched or inserted with each cursor operation. Default is zero. Maximum allowed is 100. |
Code Page | ST | 10g, 11g | The code page translation table tells the integration platform which encoding to use for reading and writing data. The default is ANSI, which is the standard in the US. For the supported list, see CodePage.
CommitFrequency | T | 10g, 11g, 12c | Controls how often data is committed to the database. Default is zero, meaning that data is committed at the end of the transformation, allowing rollback on error. This is the slowest setting. When doing large transformations, this is not practical as it may produce too many transaction log entries. Setting the CommitFrequency to some nonzero value tells the connector to do a database commit after inserting or updating the specified number of records. This keeps the transaction log from getting too large but limits the restartability of the transformation. |
ConstraintDDL | T | 10g, 11g, 12c | Pass-through mechanism that allows you to specify any additional data definition language (DDL) statements that need to be executed when a new table is created. DDL is used to define keys, indexes, constraints and integrity rules for SQL databases. There is no default. |
Directory Alias | T | 10g, 11g, 12c | Allows you to specify the name of the Directory Alias. No default exists for this property. |
Encoding | ST | 11g, 12c | Determines how to store characters in Oracle 11g and 12c source and target files. Default is OEM. For choices, see Encoding Reference.
ParallelHint | ST | 10g, 11g, 12c | Use this property to include Parallel Hints in statements. Supported ParallelHint values are listed below • 0 (default) – No hint is included. • 1 – The hint is included in statements as: "parallel(table, DEFAULT)". This sets the degree of parallelism to the settings of the initialization parameters. • > 1(greater than 1) – The value is used as a degree of parallelism, with the hint included in statements as: "parallel(table, PROPVAL)". |
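The mapping from the ParallelHint property value to the generated hint text can be sketched directly. The helper name `parallel_hint` is illustrative; the hint strings follow the description above:

```python
def parallel_hint(table, prop_val):
    """Translate the ParallelHint property into hint text.

    0  -> no hint
    1  -> degree taken from the initialization parameters (DEFAULT)
    >1 -> the value itself is used as the degree of parallelism
    """
    if prop_val <= 0:
        return ""
    if prop_val == 1:
        return "parallel(" + table + ", DEFAULT)"
    return "parallel(" + table + ", " + str(prop_val) + ")"

print(parallel_hint("mytable", 4))
```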
QueryRewrite | S | 12c | Allows the connection to enable query rewriting and to set integrity levels. Property behavior and integrity levels are listed below: • None – No altered session is issued and query rewriting is disabled. • Enforced – The session is altered to allow query rewriting. A materialized view is used, provided that it guarantees the same result as using the detail tables. QueryRewrite can use no stale materialized views or trust relationships. This is the default value. • Trusted – The session is altered to allow query rewriting. A materialized view is used, provided that it is not stale. However, QueryRewrite may use trusted relationships, such as those declared by dimension objects or constraints that have not been validated. • Stale Tolerated – The session is altered to allow query rewriting. The materialized view is used even when data in detail tables has changed. Note: This property must be set before connecting because the alter sessions occur during connection.
Synonyms | ST | 10g, 11g, 12c | If set to true, allows you to see synonyms. The alias names appear in the table list along with the tables. Default is false. |
SystemTables | ST | 10g, 11g, 12c | If set to true, this property allows you to see all tables created by the DBA in the database. The system table names appear in the table list. Default is false. |
UseArrays | T | 10g, 11g, 12c | To use arrays, select true. The default is false (no arrays). UseArrays allows this connector to send batches of records (multiple rows/columns contained in one block as a blob array) to an Oracle Target at one time. This reduces some of the function call and network overhead that would occur if each record had to be sent to Oracle one at a time. Performance improvements of about four times occur with array inserts. Limitation: Array inserts make error handling difficult. If one of the records has invalid data, such as nulls or a duplicate key, you get one error for the entire batch of records. Because of this, using reject files is not suggested when using this property. |
Views | ST | 10g, 11g, 12c | If set to true (default), allows you to see views. View names appear in the table list along with table names. |
WhereStmt | S | 10g, 11g, 12c | Provides a pass-through mechanism where advanced users can construct the Where clause of the query themselves. It can be used as an alternative to writing a lengthy query statement. This might be used to get the Oracle server to filter the data based on a condition before it is sent to the integration platform. Omit WHERE when you enter the clause. No default exists for this property. |
Name | Syntax | Notes |
VARCHAR2 | (size[BYTE|CHAR]) | Variable-length character string having a maximum length of size bytes or characters. Maximum size is 4000 bytes or characters and minimum is 1 byte or 1 character. You must specify size for VARCHAR2. BYTE indicates the column has byte length semantics; CHAR indicates the column has character length semantics.
NVARCHAR2 | (size) | Variable-length character string having maximum length of size characters. Maximum size is determined by the national character set definition. You must specify size for NVARCHAR2. |
NUMBER | (p,s) | Number having precision p and scale s. The precision p can range from 1 to 38. The scale s can range from -84 to 127. |
LONG | N/A | Character data of variable length up to 64000 bytes. |
DATE | N/A | Valid date range from January 1, 4712 BC to December 31, 9999 AD. |
BINARY_FLOAT | N/A | 32-bit floating point number. This data type requires 5 bytes, including the length byte. |
BINARY_DOUBLE | N/A | 64-bit floating point number. This data type requires 9 bytes, including the length byte. |
TIMESTAMP | (fractional_seconds_precision) | Year, month and day values of date, as well as hour, minute and second values of time, where fractional_seconds_precision is the number of digits in the fractional part of the SECOND datetime field. Accepted values of fractional_seconds_precision are 0 to 9. The default is 6. |
TIMESTAMP WITH TIME ZONE | (fractional_seconds_precision) | All values of TIMESTAMP, as well as a time zone displacement value, where fractional_seconds_precision is the number of digits in the fractional part of the SECOND datetime field. Accepted values are 0 to 9. The default is 6. |
TIMESTAMP WITH LOCAL TIME ZONE | (fractional_seconds_precision) | All values of TIMESTAMP WITH TIME ZONE, with the following exceptions: • Data is normalized to the database time zone when it is stored in the database. • When the data is retrieved, users see the data in the session time zone. |
INTERVAL YEAR TO MONTH | (year_precision) | Stores a period of time in years and months, where year_precision is the number of digits in the YEAR datetime field. Accepted values are 0 to 9. The default is 2. |
INTERVAL DAY TO SECOND | (day_precision) (fractional_seconds_precision) | Stores a period of time in days, hours, minutes and seconds, where • day_precision is the maximum number of digits in the DAY datetime field. Accepted values are 0 to 9. The default is 2. • fractional_seconds_precision is the number of digits in the fractional part of the SECOND field. Accepted values are 0 to 9. The default is 6. |
RAW | (size) | Raw binary data length of size bytes. Maximum size is 2000 bytes. You must specify size for a RAW value. |
LONG RAW | N/A | Raw binary data of variable length up to 2 GB. |
ROWID | N/A | Base 64 string representing the unique address of a row in its table. This data type is primarily for values returned by the ROWID pseudocolumn. |
UROWID | [(size)] | Base 64 string representing the logical address of a row of an index-organized table. The optional size is the size of a column of type UROWID. The maximum size and default is 4000 bytes. |
CHAR | (size [BYTE|CHAR]) | Fixed-length character data of length size bytes or characters. Maximum size is 2000 bytes or characters. Default and minimum size is 1 byte. BYTE and CHAR have the same semantics as for VARCHAR2. |
NCHAR | (size) | Fixed-length character data of length of size characters. Maximum size is determined by the national character set definition, with an upper limit of 2000 bytes. Default and minimum size is 1 character. |
CLOB | N/A | A character large object containing single- or multibyte characters. Both fixed and variable-width character sets are supported, both using the database character set. Maximum size is (4 GB - 1 byte) * (database block size). |
NCLOB | N/A | A character large object containing Unicode characters. Both fixed-width and variable width character sets are supported, both using the database national character set. Maximum size is (4 GB - 1 byte) * (database block size). Stores national character set data. |
BLOB | N/A | A binary large object. Maximum size is (4 GB - 1 byte) * (database block size). |
BFILE | N/A | Contains a locator to a large binary file stored outside the database. Enables byte stream I/O access to external LOBs residing on the database server. Maximum size is 4 GB. |
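To make the syntax column above concrete, the following is a sketch of a CREATE TABLE statement exercising several of these types. The table and column names are illustrative only, not taken from this document.

```sql
-- Illustrative only: names are hypothetical.
CREATE TABLE order_events (
  event_id     NUMBER(10,0),                 -- precision 10, scale 0
  description  VARCHAR2(200 CHAR),           -- CHAR length semantics
  created_at   TIMESTAMP(6) WITH TIME ZONE,  -- 6 fractional-second digits
  retention    INTERVAL YEAR(2) TO MONTH,
  checksum     RAW(32),                      -- size is required for RAW
  payload      BLOB
);
```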
Property | Version | Use |
AllowDistributedTransactions | 10g, 11g, 12c | If set to true (default), distributed transaction processing is allowed. |
Code Page | 10g | The code page translation table tells the integration platform which encoding to use for reading and writing data. The default is ANSI, which is the standard in the US. For the supported list, see CodePage. |
Directory Alias | 10g, 11g, 12c | Allows you to specify the name of the Directory Alias. There is no default. |
Encoding | 11g, 12c | Determines how to store characters in Oracle 11g and 12c target files. Default is OEM. For encoding choices, see Encoding Reference. |
SQL Log | 10g, 11g, 12c | The default is sql.log in the default installation directory. To use a different log, browse to the file, or enter the path and file name. |
SQL Output | 10g, 11g, 12c | Allows you to select bound or unbound mode and whether you want to write SQL statements to a SQL log or not. Keep in mind that bound mode is faster, since bind variables are used. • Target Only – Uses bound mode, which uses bind variables. SQL statements are sent to the Target and not to the SQL log specified in the SQL Log property. Default. • Target Only (Unbound Mode) – Uses unbound mode, which does not use bind variables and sends the literal SQL statement to the database engine. SQL statements are sent to the Target and not to the SQL log specified in the SQL Log property. • Target and SQL Log – Sends SQL statements to the Target and to the SQL log specified in the SQL Log property. • SQL Log Only – Sends SQL statements only to the SQL log file specified in the SQL Log property. |
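The difference between bound and unbound mode can be sketched as follows (the table and column names are hypothetical, not from this document):

```sql
-- Bound mode (Target Only): the statement is prepared once with bind
-- variables, and values are supplied separately for each record:
INSERT INTO emp (empno, ename) VALUES (:1, :2);

-- Unbound mode (Target Only (Unbound Mode)): literal values are
-- embedded in each statement sent to the database engine:
INSERT INTO emp (empno, ename) VALUES (7839, 'KING');
```

Preparing the statement once and reusing it with bind variables is what makes bound mode the faster option.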
Synonyms | 10g, 11g, 12c | If set to true, allows you to see synonyms. The alias names appear in the table list along with the tables. Default is false. |
SystemTables | 10g, 11g, 12c | If set to true, this property allows you to see all tables created by the DBA in the database. The system table names appear in the table list. Default is false. |
UpdateNullFields | 10g, 11g, 12c | If set to true (default), null values are sent to the database when inserting or updating records. If set to false, null values are not sent to the database when you insert or update records, and the connector is forced to operate in unbound mode, which may cause slower performance. |
Views | 10g, 11g, 12c | If set to true (default), allows you to see views. View names appear in the table list along with table names. |
Name | Syntax | Notes |
VARCHAR2 | (size[BYTE|CHAR]) | Variable-length character string with a maximum length of size bytes. Maximum size is 4000 bytes and the minimum is 1 byte. You must specify size for VARCHAR2. BYTE indicates that the column has byte length semantics; CHAR indicates that the column has character semantics. |
NVARCHAR2 | (size) | Variable-length character string with a maximum length of size characters. Maximum size is determined by the national character set definition. You must specify size for NVARCHAR2. |
NUMBER | (p,s) | Number having precision p and scale s. The precision p can range from 1 to 38. The scale s can range from -84 to 127. |
LONG | N/A | Character data of variable length up to 64000 bytes. |
DATE | N/A | Valid date range from January 1, 4712 BC to December 31, 9999 AD. |
BINARY_FLOAT | N/A | 32-bit floating point number. This data type requires 5 bytes, including the length byte. |
BINARY_DOUBLE | N/A | 64-bit floating point number. This data type requires 9 bytes, including the length byte. |
TIMESTAMP | (fractional_seconds_precision) | Year, month and day values of date, as well as hour, minute and second values of time, where fractional_seconds_precision is the number of digits in the fractional part of the SECOND datetime field. Accepted values of fractional_seconds_precision are 0 to 9. The default is 6. |
TIMESTAMP WITH TIME ZONE | (fractional_seconds_precision) | All values of TIMESTAMP, as well as a time zone displacement value, where fractional_seconds_precision is the number of digits in the fractional part of the SECOND datetime field. Accepted values are 0 to 9. The default is 6. |
TIMESTAMP WITH LOCAL TIME ZONE | (fractional_seconds_precision) | All values of TIMESTAMP WITH TIME ZONE, with the following exceptions: • Data is normalized to the database time zone when it is stored in the database. • When the data is retrieved, users see the data in the session time zone. |
INTERVAL YEAR TO MONTH | (year_precision) | Stores a period of time in years and months, where year_precision is the number of digits in the YEAR datetime field. Accepted values are 0 to 9. The default is 2. |
INTERVAL DAY TO SECOND | (day_precision) (fractional_seconds_precision) | Stores a period of time in days, hours, minutes and seconds, where • day_precision is the maximum number of digits in the DAY datetime field. Accepted values are 0 to 9. The default is 2. • fractional_seconds_precision is the number of digits in the fractional part of the SECOND field. Accepted values are 0 to 9. The default is 6. |
RAW | (size) | Raw binary data of length of size bytes. Maximum size is 2000 bytes. You must specify size for a RAW value. |
LONG RAW | N/A | Raw binary data of variable length up to 2 GB. |
ROWID | N/A | Base 64 string representing the unique address of a row in its table. This data type is primarily for values returned by the ROWID pseudocolumn. |
UROWID | [(size)] | Base 64 string representing the logical address of a row of an index-organized table. The optional size is the size of a column of type UROWID. The maximum size and default is 4000 bytes. |
CHAR | (size [BYTE|CHAR]) | Fixed-length character data of length size bytes or characters. Maximum size is 2000 bytes or characters. Default and minimum size is 1 byte. BYTE and CHAR have the same semantics as for VARCHAR2. |
NCHAR | (size) | Fixed-length character data of length of size characters. Maximum size is determined by the national character set definition, with an upper limit of 2000 bytes. Default and minimum size is 1 character. |
CLOB | N/A | A character large object containing single- or multibyte characters. Both fixed and variable-width character sets are supported, both using the database character set. Maximum size is (4 GB - 1 byte) * (database block size). |
NCLOB | N/A | A character large object containing Unicode characters. Both fixed-width and variable width character sets are supported, both using the database national character set. Maximum size is (4 GB - 1 byte) * (database block size). Stores national character set data. |
BLOB | N/A | A binary large object. Maximum size is (4 GB - 1 byte) * (database block size). |
BFILE | N/A | Contains a locator to a large binary file stored outside the database. Enables byte stream I/O access to external LOBs residing on the database server. Maximum size is 4 GB. |
Property | S/T | Description |
Query Source | S | Table/View (default option): This option lists all the objects available in Oracle CRM On Demand. After selecting this option, click the arrow next to Source Table/View to choose the table or view that you want to access from the list. You must choose at least one table or view to be able to run a transformation with this connector. Query Statement: This option allows you to extract data from Oracle CRM On Demand using a query written in XML format. After selecting this option, click the ellipsis button to open the Query Statement box and enter the query XML. For details on writing query XML statements, see the Web Services On Demand API Calls > Query Page topic in the Oracle CRM On Demand documentation. Note that the XML formats for Web Service 1.0 and Web Service 2.0 API calls differ from those provided in the CRMOD documentation. You can identify which version (1.0 or 2.0) to use for a given object by referring to the "Entity Names" section of this document. Query File: Use this option to extract data using a query stored in a text file. After selecting this option, click the ellipsis button to browse to and select your query file. This query file should contain the XML required for the Query Page API call, as shown in the example below. Note: The XML should start with a <ListOfxxxx> element, where xxxx is the parent entity's name, as seen in the following example: <ListOfContact> <Contact> <ContactFirstName>='John'</ContactFirstName> <ContactLastName/> <ListOfAddress> <Address> <Country>='Canada'</Country> </Address> </ListOfAddress> </Contact> </ListOfContact> |
Table | T | Select the Table (Oracle CRM On Demand Object). This should be the transformation target object where the data will be written. |
Batch Response | T | Select the file name to which the target operation responses are logged. Notes: • In Delete operations, there is one batch response entry for every delete action. • In Insert, Upsert, and Update operations, there is one batch response entry for every insert or update action. • The returnCode for a single operation shows the result of the operation execution. If the operation executes successfully, the returnCode is 0. • For failed operations, the errorDescription tag shows a detailed error message (see below). This includes messages from both Oracle CRM On Demand and PDI. Note: When the returnCode is 0 (execution was successful), the description tag returns the entity's unique ID for which the operation was successful. For example: <BatchResponse:batchResult xmlns:BatchResponse="http://www.pervasive.com/BatchResponse"> <BatchResponse:results> <BatchResponse:result> <BatchResponse:returnCode>34</BatchResponse:returnCode> <BatchResponse:errorDescription>No rows retrieved corresponding to the business component 'Campaign'(SBL-EAI-04378)</BatchResponse:errorDescription> <BatchResponse:objectContainerName>Campaign</BatchResponse:objectContainerName> <BatchResponse:updateOperation>DELETE</BatchResponse:updateOperation> </BatchResponse:result> </BatchResponse:results> <BatchResponse:componentInfo> <BatchResponse:returnCode>0</BatchResponse:returnCode> <BatchResponse:componentName>Oracle CRM On Demand</BatchResponse:componentName> </BatchResponse:componentInfo> </BatchResponse:batchResult> |
Batch Size | S | The number of records buffered by PDI before processing them in a map and transferring data to the target. |
Page Size | ST | The number of records fetched for each Query Page Web Service call. Max Page Size = 100 (this limit is imposed by Oracle CRM On Demand). Default value is 50. |
Use Field Display Names | ST | When set to true, the display names of the fields are used as field names in the transformation. Special characters are stripped from display names. If a name is found to be a duplicate for an object, an integer value is appended to the duplicate. When this property is set to true, the Field Description column shows the actual field name. The default value is false. |
UseChildAnd | S | Sets the UseChildAnd property of the Query Page web service call. Refer to the Oracle CRM On Demand documentation for details. The default value is True. |
Default Access Profile Name | ST | The default access profile name for the role assigned to the currently logged-in user. To get this value, do the following: 1. Log in to CRM OnDemand. 2. Select Admin > UserManagementAndAccessControls > AccessProfiles > RoleManagement > [Edit] [roleName] > Step3-AccessProfiles > DefaultAccessProfile |
RecordsPerReq | T | Allows the user to set the number of records inserted or updated during each request, between 1 and 20. For example, if the batch size is set to 20 and the 10th record fails, an attempt is made to run the operation again one record at a time until all of the records in that batch are processed. As a result, the batch response has individual record-level success/failure entries. Default value is 20. |
Entity Name | Create | Read | Update | Delete |
Account (WebService 1.0 API) | Yes | Yes | Yes | Yes |
Activity (WebService 1.0 API) | Yes | Yes | Yes | Yes |
Asset (WebService 1.0 API) | No | Yes | No | No |
Campaign (WebService 1.0 API) | Yes | Yes | Yes | Yes |
Contact (WebService 1.0 API) | Yes | Yes | Yes | Yes |
CurrentUser (WebService 1.0 API) | No | Yes | No | No |
CustomObject1 (WebService 1.0 API) | Yes | Yes | Yes | Yes |
CustomObject2 (WebService 1.0 API) | Yes | Yes | Yes | Yes |
CustomObject3 (WebService 1.0 API) | Yes | Yes | Yes | Yes |
Lead (WebService 1.0 API) | Yes | Yes | Yes | Yes |
Opportunity (WebService 1.0 API) | Yes | Yes | Yes | Yes |
Product (WebService 1.0 API) | Yes | Yes | Yes | No |
ProductCategory (WebService 1.0 API) | Yes | Yes | Yes | Yes |
ServiceRequest (WebService 1.0 API) | Yes | Yes | Yes | Yes |
Solution (WebService 1.0 API) | Yes | Yes | Yes | Yes |
Territory (WebService 1.0 API) | Yes | Yes | Yes | Yes |
User (WebService 1.0 API) | No | Yes | No | No |
UserGroup (WebService 1.0 API) | Yes | Yes | Yes | Yes |
Application (WebService 2.0 API) | Yes | Yes | Yes | Yes |
BusinessPlan (WebService 2.0 API) | Yes | Yes | Yes | Yes |
Claim (WebService 2.0 API) | Yes | Yes | Yes | Yes |
ContactBestTimes (WebService 2.0 API) | Yes | Yes | Yes | Yes |
Coverage (WebService 2.0 API) | Yes | Yes | Yes | Yes |
CustomObject4 (WebService 2.0 API) | Yes | Yes | Yes | Yes |
CustomObject5 (WebService 2.0 API) | Yes | Yes | Yes | Yes |
CustomObject6 (WebService 2.0 API) | Yes | Yes | Yes | Yes |
CustomObject7 (WebService 2.0 API) | Yes | Yes | Yes | Yes |
CustomObject8 (WebService 2.0 API) | Yes | Yes | Yes | Yes |
CustomObject9 (WebService 2.0 API) | Yes | Yes | Yes | Yes |
CustomObject10 (WebService 2.0 API) | Yes | Yes | Yes | Yes |
CustomObject11 (WebService 2.0 API) | Yes | Yes | Yes | Yes |
CustomObject12 (WebService 2.0 API) | Yes | Yes | Yes | Yes |
CustomObject13 (WebService 2.0 API) | Yes | Yes | Yes | Yes |
CustomObject14 (WebService 2.0 API) | Yes | Yes | Yes | Yes |
CustomObject15 (WebService 2.0 API) | Yes | Yes | Yes | Yes |
Damage (WebService 2.0 API) | Yes | Yes | Yes | Yes |
DealRegistration (WebService 2.0 API) | Yes | Yes | Yes | Yes |
FinancialAccount (WebService 2.0 API) | Yes | Yes | Yes | Yes |
FinancialAccountHolder (WebService 2.0 API) | Yes | Yes | Yes | Yes |
FinancialAccountHolding (WebService 2.0 API) | Yes | Yes | Yes | Yes |
FinancialPlan (WebService 2.0 API) | Yes | Yes | Yes | Yes |
FinancialProduct (WebService 2.0 API) | Yes | Yes | Yes | Yes |
FinancialTransaction (WebService 2.0 API) | Yes | Yes | Yes | Yes |
Fund (WebService 2.0 API) | Yes | Yes | Yes | Yes |
Household (WebService 2.0 API) | Yes | Yes | Yes | Yes |
InsuranceProperty (WebService 2.0 API) | Yes | Yes | Yes | Yes |
InvolvedParty (WebService 2.0 API) | Yes | Yes | Yes | Yes |
MDFRequest (WebService 2.0 API) | Yes | Yes | Yes | Yes |
MedEd (WebService 1.0 API) | Yes | Yes | Yes | Yes |
Objectives (WebService 2.0 API) | Yes | Yes | Yes | Yes |
Partner (WebService 2.0 API) | Yes | No | Yes | Yes |
PlanAccount (WebService 2.0 API) | Yes | Yes | Yes | Yes |
PlanContact (WebService 2.0 API) | Yes | Yes | Yes | Yes |
PlanOpportunity (WebService 2.0 API) | Yes | Yes | Yes | Yes |
Policy (WebService 2.0 API) | Yes | Yes | Yes | Yes |
PolicyHolder (WebService 2.0 API) | Yes | Yes | Yes | Yes |
Portfolio (WebService 2.0 API) | Yes | Yes | Yes | Yes |
PriceList (WebService 2.0 API) | Yes | Yes | Yes | Yes |
PriceListLineItem (WebService 2.0 API) | Yes | Yes | Yes | Yes |
SPRequest (WebService 2.0 API) | Yes | Yes | Yes | Yes |
SPRequestLineItem (WebService 2.0 API) | Yes | Yes | Yes | Yes |
Vehicle (WebService 2.0 API) | Yes | Yes | Yes | Yes |
Property | ST | Version | Use |
CodePage | T | 10g | The code page translation table tells the integration platform which encoding to use for reading and writing data. The default is ANSI, which is the standard in the US. For the supported list, see CodePage. |
CommitFrequency | T | 10g, 11g, 12c | Controls how often data is committed to the database. Default is zero, meaning that data is committed at the end of the transformation, allowing rollback on error. This is the slowest setting. When doing large transformations, this is not practical as it may produce too many transaction log entries. Setting the CommitFrequency to some nonzero value tells the connector to do a database commit after inserting or updating the specified number of records. This keeps the transaction log from getting too large but limits the restartability of the transformation. |
ConstraintDDL | T | 10g, 11g, 12c | Pass-through mechanism that allows you to specify any additional data definition language (DDL) statements that need to be executed when a new table is created. DDL is used to define keys, indexes, constraints and integrity rules for SQL databases. There is no default. |
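A ConstraintDDL value might look like the following sketch. The table, column, and index names are hypothetical, chosen only to show the kind of DDL the property passes through after table creation.

```sql
-- Executed after the new table (assumed here to be customer) is created:
ALTER TABLE customer ADD CONSTRAINT pk_customer PRIMARY KEY (cust_id);
CREATE INDEX idx_customer_name ON customer (last_name);
```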
Directory Alias | T | 10g, 11g, 12c | Allows you to specify the name of the Directory Alias. No default exists for this property. |
DPParallel | T | 10g, 11g, 12c | Sets the direct path parallel attribute to allow parallel loading. The default is false. If you set this property to true, parallel loading is allowed, but it may also slow performance. |
Encoding | T | 11g, 12c | Type of encoding to use with Oracle 11g and 12c target files. Default is OEM. For choices, see Encoding Reference. |
Synonyms | T | 10g, 11g, 12c | If set to true, allows you to see synonyms. The synonym names appear in the table list along with the tables. Default is false. |
SystemTables | T | 10g, 11g, 12c | If set to true, this property allows you to see all tables created by the DBA in the database. The system table names appear in the table list. Default is false. Note: This property is applicable only if the user is logged onto the database as the database administrator. Only the DBA has access to system tables. |
Views | T | 10g, 11g, 12c | If set to true (default), allows you to see views. View names appear in the table list along with table names. |
Property | S/T | Description |
URL | ST | URL for the web services. For example: http://<rightnowinstance>.rightnow.com/cgi-bin/<rightnowinstance>.cfg/services/soap |
Username | ST | Username for the Oracle Service Cloud account you are trying to access. |
Password | ST | Password for the Oracle Service Cloud account you are trying to access. |
Batch Size | S | Number of source records the connector caches before processing them in a map. Default is zero. |
Show Children | S | A Boolean variable that controls whether sub-objects are shown in metadata and in data. • True – Shows all sub-objects. • False (default) – Does not show any sub-objects except ID and Named ID type sub-objects. |
Flush Frequency | T | Number of operations the connector caches before writing them to the target. Default is zero. |
Batch Response | T | Sets the path name for a batch response file, which is a reporting mechanism for the connector. The file provides detailed results for each object in a batch where the batch size is greater than one. Obtaining detailed results is useful for doing the following: • Capturing system-generated object IDs for use in future updates. • Correlating an error with its object and having enough information about the error for exception handling and error diagnosis. A batch response entry is generated only for each top-level record. |
Property | ST | Version | Description |
BatchResponse | T | 12, 14 | This property creates a batch response file, which serves as a reporting mechanism for the connector. The file provides detailed results for each object in a batch where the batch size is greater than 1. Obtaining detailed results is useful in the following cases: • Capturing system-generated object IDs for use in future updates. • Correlating an error with its object and having enough information about the error for exception handling and error diagnosis. For more information, see Batch Response File. |
MaxBatch | T | 12, 14 | The MaxBatch property controls how many records are sent in one group for each individual insert, update, or delete operation. A larger number may produce faster transformation run times, but it may also increase memory requirements. If you experience memory problems, lower the number. Note: Oracle Siebel CRM On Demand enforces a maximum batch value limit of 20. Any higher setting returns the following error: Connection Property: MaxBatch's value is invalid. It is greater than the maximum value allowed (20). Resetting to previous value. |
PageSize | S | 12, 14 | When data records are arranged in blocks and the last record in each block is padded to the end of the block, you must set the page size to remove pad characters from the file during the data transfer. To set page size, click PageSize to highlight the default value and type the correct page size value for your data. Note: The maximum number of records returned by the PageSize property is 100, a limitation of the Siebel server. If you enter a number greater than 100, you receive error 25504, "unable to read record". |
UpdateNulls | T | 12, 14 | Allows you to decide whether nulls or empty strings are sent to the server. The following options are available: • Always (default) – Nulls and empty strings are sent to the server to set fields to null (update null and empty string values). • NullOnly – Null values are sent to the server, but empty strings are not. This option updates only null values. • Never – Null values and empty strings are not sent to the server. This option does not update null values and empty strings. |
UseChildAnd | S | 12, 14 | Allows you to perform a query on parent and child components. If set to true, it returns both parent and child components. If set to false (default), it returns either the parent or the child component. For use cases related to using this property, see Manipulating Parent and Child Relationships. |
Use Case | Task | How to Accomplish the Task |
1 | Retrieve a list of recipients where the campaign name is FreeStuff. | Run the following SQL statements in the source: Select campaignname from Campaign where campaignname='FreeStuff'; Select * from Campaign_Recipient |
2 | Insert a campaign name called FreeStuff that contains three recipients. | On the target, create an Upsert action for Campaign and set the campaignname to FreeStuff. Then create three Upsert actions for Campaign_Recipient and set the contactid (this assumes that you already know the value in contactid). |
3 | Update deliverystatus (one recipient) to Received for the campaign FreeStuff. | On the target, create an Upsert action for Campaign and set campaignid. Create another Upsert action for Campaign_Recipient and set the contactid and deliverystatus to Received. |
4 | Delete one recipient in the campaign FreeStuff. | On the target, create an Update action for Campaign and set campaignid. Create two Update actions for Campaign_Recipient and set the contactid. Note: Because child record deletion uses the Update action, you cannot delete all child records in the child entity. One record must remain. |
Action | Parameters |
ClearMap | {Target},{Account},{},{} |
Delete Record | {Target},{Account},{Account.child},{},{} |
ClearMap | {Target},{Account_Contact},{},{} |
Delete Record | {Target},{Account_Contact},{Account_Contact.child},{},{} |
Property | S/T | Description |
BadFile | T | Name of the file to which rejected records are written when running the Oracle Loader utility. If you leave the value blank, the default file name is tabname.BAD. To change the default name, type a new one in the Value field. |
CodePage | ST | Encoding to use for reading and writing data. The default is ANSI, the standard in the US. |
CommitRows | T | Number of records the Oracle Loader reads before doing a Commit. Default is 50. To change the default, type a new number in the Value field. |
DelimitAllFields | T | Default is false. If set to true, the integration platform inserts delimiters around all text fields. |
Direct | T | Specifies whether or not to write out the control file that includes the Direct option. Default is false. |
DiscFile | T | Name of the file to which discarded records are written when running the Oracle Loader utility. Default file name is tabname.DSC. To change the default, type a different name in the Value field. |
DiscMax | T | Maximum number of discarded records. The default is 50. |
DoublBackSlash | T | Whether or not to write double backslashes (\\) to the Oracle SQL Loader file name. Default is false. |
Errors | T | Maximum number of rejected records. Default is 50. |
Load | T | Maximum number of records to load into the table. Default is zero, which means to load all records. To change the default, type a different number in the Value field. |
LoadOpt | T | Type of load operation to use after the SQL Loader file has been created. Options are insert, append, and replace. Default is insert. |
NullCols | T | Whether or not to write out trailing null columns. The default is true. |
Skip | T | Whether or not the Oracle Bulk Loader utility skips a number of records before starting the load. Default is zero, which means that loading starts with the first record. To change the default, type a different number in the Value field. |
StripLeadingBlanks | ST | Whether or not to strip leading blanks from data fields. • When Oracle SQL Loader is the source connector, leading blanks are not stripped from the data. To remove them, set StripLeadingBlanks to true. • When Oracle SQL Loader is the target connector, leading blanks are stripped when the data is written. To retain them, set StripLeadingBlanks to false. |
StripTrailingBlanks | ST | Whether or not to strip trailing blanks from data fields. • When Oracle SQL Loader is the source connector, trailing blanks are not stripped from the data. To remove them, set StripTrailingBlanks to true. • When Oracle SQL Loader is the target connector, trailing blanks are stripped when the data is written. To retain them, set StripTrailingBlanks to false. |
TableName | T | Name of the Oracle database table to which data is written. No default. |
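Several of the properties above correspond to clauses in the SQL*Loader control file the connector generates. The following is a hedged sketch of such a file; the file, table, and column names are hypothetical, and the exact file the connector emits may differ.

```sql
OPTIONS (SKIP=0, ERRORS=50)        -- Skip, Errors
LOAD DATA
INFILE 'orders.dat'
BADFILE 'orders.BAD'               -- BadFile: rejected records
DISCARDFILE 'orders.DSC'           -- DiscFile: discarded records
DISCARDMAX 50                      -- DiscMax
INSERT                             -- LoadOpt: insert | append | replace
INTO TABLE orders                  -- TableName
FIELDS TERMINATED BY ','
TRAILING NULLCOLS                  -- NullCols
(order_id, order_date DATE "YYYY-MM-DD", amount)
```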