Source and Target Connectors H-O
This section provides information about source and target connectors from H to O.
HIPAA
The Health Insurance Portability and Accountability Act of 1996 (HIPAA) mandates that health care organizations must adopt transaction and information security standards to support the electronic exchange of administrative and financial health care data for administrative simplification.
This connector sets field width in bytes. What actually varies is the number of characters that fit into a given field. For more details, see Determining Field Width in Characters or Bytes.
HIPAA Structure
The HIPAA (Health Insurance Portability and Accountability Act of 1996) requires the use of certain standard formats for documentation of medical claims that are related to services and other transactions associated with medical record keeping. Following is the list of HIPAA form titles and the corresponding form number:
Health Claims or Equivalent Encounter Information (Dental, Professional, Institutional)—837
Enrollment and Disenrollment in a Health Plan—834
Eligibility for a Health Plan—270, 271
Health Care Payment and Remittance Advice—835
Health Plan Premium Payments—820
Health Claim Status—276, 277
Referral Certification and Authorization—278
Coordination of Benefits—837
Addenda schemas use the following naming structure, for example:
HIPAA_Health_Care_Claim_Remittance_Advice.X091.A1.X12.4010.835.1.0.ds.jsn
Definitions of each of the naming structure components:
Health Care Payment and Remittance Advice—formal document name
X091—HIPAA designation (describes form as HIPAA-compliant)
A1—Addenda designation; implementation guide
X12—ANSI project designation
4010—version of the document
835—form number of the primary schema
Note:  Before July 2003, addenda schemas used a "0.<n>.ds.jsn" naming structure. Later naming structures use a "1.<n>.ds.jsn" convention.
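The naming components above can be pulled apart programmatically. The sketch below is illustrative only; the helper name and the key labels are assumptions, not part of the product.

```python
# Sketch: split an addenda schema filename into the components described
# above. The field order is an assumption based on the example filename.
def parse_addenda_schema_name(filename):
    # Strip the trailing ".ds.jsn" extension first.
    stem = filename[:-len(".ds.jsn")]
    parts = stem.split(".")
    # Expected order: name, HIPAA designation, addenda, ANSI project,
    # version, form number, then a "major.minor" revision pair.
    keys = ["document_name", "hipaa_designation", "addenda",
            "ansi_project", "version", "form_number", "major", "minor"]
    return dict(zip(keys, parts))

info = parse_addenda_schema_name(
    "HIPAA_Health_Care_Claim_Remittance_Advice.X091.A1.X12.4010.835.1.0.ds.jsn")
print(info["form_number"])   # 835
print(info["version"])       # 4010
```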
Version Mismatch Errors
The ANSI standards reflect generic supersets designed to address a wide range of data exchange requirements. You may decide to define your own "subset" and develop rules that describe the data you submit. If you elect to develop your own schema, be aware that many errors can be traced to schema "version mismatch". It is important that all trading partners agree on the specific version of the schema to be used in transactions. Addenda schemas especially should be examined to verify version match with all parties involved in the transaction.
Note:  To verify the version of a schema, open the schema in a text editor. Check the header information. In the header, notice the date created and date (last) modified fields. These fields offer a convenient method to determine precise version match.
Changes to, or customization of, the schema must be communicated among and mutually agreed upon by all trading partners prior to "live" data exchange. A customized or modified schema should be verified by each partner to ensure version match at each point of exchange within the trading group. Complete preparation is the key to minimizing production errors.
Schema File Required
You are provided with a template file that contains the structure for your HIPAA file connections. If your HIPAA file structure is different, you may need to modify the template.
The template file is designed to parse source data to generate a target schema. You cannot use this file to parse source data in a map. Doing this will cause an error.
To make changes to a HIPAA template file
1. Import the file into Project Explorer.
2. Open the file in the Schema Editor.
3. Make your changes.
4. Save the file with a different name.
To obtain a template file, contact your sales representative. These template files are standard schemas.
To connect to a HIPAA source or target file, you must set a schema in the HIPAASchemaFile property. You cannot connect directly to the schema. You must select a schema file from your current project that contains the schema you want to use.
HIPAA Message Hierarchy
The following is the HIPAA message schema hierarchy. If you make changes to the default HIPAA schema, make sure you adhere to the HIPAA schema rules defined below to prevent the updated schema from becoming invalid.
Message/Transaction
[Figure: HIPAA message hierarchy]
Note:  The schema must include two sections: root_defs that defines the schema structure at the root level and type_defs that defines the record types.
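To illustrate the two required sections, here is a minimal schema skeleton sketched as a Python dict. Only the root_defs and type_defs section names come from the note above; everything inside them is a made-up placeholder, not the real .ds.jsn layout.

```python
# Illustrative only: a minimal skeleton with the two required sections.
# The nested keys are placeholders; a real schema has many more attributes.
schema = {
    "root_defs": {            # schema structure at the root level
        "name": "Interchange",
        "rec_type_ref": "ISA",
    },
    "type_defs": {            # record type definitions
        "ISA": {"kind": "header"},
        "IEA": {"kind": "trailer"},
    },
}

# Any valid schema must carry both sections.
assert "root_defs" in schema and "type_defs" in schema
```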
Interchange (ISA - IEA)
(Do not edit) A container of one or more functional groups. The interchange/message container must include a header (ISA) and trailer (IEA). The segments can appear only once in the message. The record type name is always Loop.Function.
Elements:
name - Required. Identifies the unique identifier for the group.
max_occurs - Optional. Defines the field requirement: 0=optional, 1=required; if min_occurs is 0 and max_occurs is 0, the field is not used.
min_occurs - Identifies the minimum number of times a repeat can occur within a group: 0, 1 (default), unbounded. An unbounded minimum occurrence means multiple functional groups can exist.
sequence - Indicates the position of the loop within the transaction.
rec_type_ref - Identifies the record definition. Examples: ISA, IEA, GS, and GE
Functional Group (GS - GE)
(Do not edit) A collection of messages. The functional group container must include a header (GS) and trailer (GE) for the functional group container and must reside inside the interchange enveloping segment. The record type name is always Loop.Function.
Elements:
name - Required. Identifies the unique identifier for the message
max_occurs - Optional. Defines the field requirement: 0=optional, 1=required; if min_occurs is 0 and max_occurs is 0, the field is not used.
min_occurs - Optional. Identifies the minimum number of times a repeat can occur within a group: 0, 1 (default), unbounded. An unbounded minimum occurrence means multiple transactions can exist in the group.
rec_type_ref - Identifies the record definition. Examples: ISA, IEA, GS, and GE.
Transaction (ST - SE)
(Do not edit) A container of one or more segments and loops; also referred to as a message. The transaction container must include a header (ST) and trailer (SE) for the transaction container, and reside inside of the functional group. The record type name is always Loop.Message.
Elements:
name - Required. Identifies the unique identifier for the segment or loop.
max_occurs - Optional. Defines the field requirement: 0=optional, 1=required; if min_occurs is 0 and max_occurs is 0, the field is not used.
min_occurs - Optional. Identifies minimum number of times a repeat can occur within a group: 0, 1 (default), unbounded. An unbounded minimum occurrence means multiple functional groups can exist in the transaction.
rec_type_ref - Identifies the record definition. Examples: ISA, IEA, GS, and GE.
Choice
Defines a group of loops or segments in which only one loop or segment can be selected. The record type name always includes Choice as the prefix. For example, Choice.2000
Elements:
name - Required. Identifies the unique identifier for the group of segments or loops.
recognition_rule - Required. Identifies the recognition rule name.
discriminator_field - Required. The fully qualified name of a referred record or structure in the Choice group. Uses the format: recordname/field. A forward slash (/) separates the record name and field name. For example, HL.0010/E.735.03
max_occurs - Optional. Defines the field requirement: 0=optional, 1=required; if min_occurs is 0 and max_occurs is 0, the field is not used.
min_occurs - Identifies the minimum number of times a repeat can occur within a group: 0, 1 (default), unbounded. An unbounded minimum occurrence means that multiple functional groups can exist.
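The recordname/field format of discriminator_field can be split in a single step. This is a minimal sketch; the helper name is made up.

```python
# Sketch: split a discriminator_field value into its record and field
# halves. The "recordname/field" format is taken from the description above.
def split_discriminator(value):
    record, field = value.split("/", 1)
    return record, field

record, field = split_discriminator("HL.0010/E.735.03")
print(record)  # HL.0010
print(field)   # E.735.03
```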
Loop
Repeating data within the message that is mapped to the JSON field. The record type name always includes the Loop prefix. For example, Loop.1000A
Elements:
name - Required. Identifies the loop name.
max_occurs - Optional. Defines the field requirement: 0=optional, 1=required; if min_occurs is 0 and max_occurs is 0, the field is not used.
min_occurs - Identifies the minimum number of times a repeat can occur within a group: 0, 1 (default), unbounded. An unbounded minimum occurrence means that multiple functional groups can exist.
sequence - Indicates the loop is defined in sequential order.
Segment
A collection of fields that share a particular data type.
Elements:
name - Required. Identifies the unique identifier for the field.
max_occurs - Optional. Defines the field requirement: 0=optional, 1=required; if min_occurs is 0 and max_occurs is 0, the field is not used.
min_occurs - Identifies the minimum number of times a repeat can occur within a group: 0, 1 (default), unbounded. An unbounded minimum occurrence means that multiple functional groups can exist.
sequence - Indicates the segment is defined in sequential order.
Composite
Defines a group of elements in a sequence that is mapped to struct_type_ref. Prefixed with C.
Elements:
name - Required. Identifies the unique identifier for the element.
max_occurs - Optional. Defines the field requirement: 0=optional, 1=required; if min_occurs is 0 and max_occurs is 0, the field is not used.
min_occurs - Identifies the minimum number of times a repeat can occur within a group: 0, 1 (default), unbounded. An unbounded minimum occurrence means that multiple functional groups can exist.
sequence - Indicates the composite is defined in sequential order.
Element
The element ID uses the prefix E. For example, E.I12.13.
Elements:
name - Required. Identifies the unique identifier for the element.
max_occurs - Optional. Defines the field requirement: 0=optional, 1=required; if min_occurs is 0 and max_occurs is 0, the field is not used.
min_occurs - Identifies the minimum number of times a repeat can occur within a group: 0, 1 (default), unbounded. An unbounded minimum occurrence means that multiple functional groups can exist.
sequence - Indicates the element is defined in sequential order.
type - Indicates the data type for the element: identifier (ID), alphanumeric (AN), date (DT), time (TM), float(R), binary (B), numeric (N0 to Nn, 0…n is the decimal size).
length - Indicates the maximum length for the element: 0, MAX_LONG, and unbounded.
min_length - The minimum length of characters allowed for the field data type.
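The element attributes above (type, length, min_length) can be checked with a small validator. The sketch below is a simplified assumption, not the connector's own validation logic: the type codes come from the list above, but the per-type rules (date and time lengths, numeric patterns) are illustrative.

```python
import re

# Sketch validator for the element attributes described above.
def check_element(value, el_type, min_length=0, max_length=None):
    if max_length is not None and len(value) > max_length:
        return False
    if len(value) < min_length:
        return False
    if el_type == "DT":                      # date, e.g. CCYYMMDD
        return value.isdigit() and len(value) in (6, 8)
    if el_type == "TM":                      # time, e.g. HHMM or HHMMSS
        return value.isdigit() and len(value) in (4, 6, 8)
    if re.fullmatch(r"N\d+", el_type):       # numeric with implied decimals
        return bool(re.fullmatch(r"-?\d+", value))
    if el_type == "R":                       # float
        return bool(re.fullmatch(r"-?\d+(\.\d+)?", value))
    return True                              # ID, AN, B: no extra checks here

print(check_element("20000302", "DT"))   # True
print(check_element("2000030", "DT"))    # False
```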
Below are the steps to create a segment or a loop. You can also add these parts of a schema by duplicating existing parts and making appropriate edits.
To add a segment or loop
1. Add a record type.
2. Highlight the new record type that is added to the bottom of the record type list.
3. Select the Properties tab in the console and edit the default name (if necessary) and add a description.
4. Expand the segment group and define the elements.
5. As a best practice to keep the schema organized, drag and drop the segment to the appropriate location in the schema.
6. Switch to Schema View to verify the hierarchy of the schema is what you want.
7. Select the group that you want to associate with the record type.
8. Add a record type reference, which is the pointer to the record type structure.
9. Edit the properties for the record type reference as needed. Make sure you select the new record type as the Reference Record Type.
10. Highlight the schema root name.
11. Click the Recognition Rules tab in the console.
12. Select an existing rule set or add a rule set.
13. (Optional) Provide a description of the rule set.
14. Add one or more rules with descriptions. Define the recognition rules (repeat the following steps for each rule you add):
Select a discriminator field from the list.
Enter the string to be matched in the Value field.
Optional. Select the Case Sensitive option if you want to match the string exactly.
Select the rule from the list to be evaluated if a record does not match this entry.
15. Save the schema.
Connector-Specific Notes
Data Type Display
The data types that you see under Type for the source or target schema are HIPAA data type names, such as DT, AN, or ID. However, the integration platform reads them as text. To use a DT (DateTime) data type in an expression, treat the data type as text that must be converted to a (DateTime) data type. For more information about converting data types, search for "commonly used functions" in the documentation.
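For example, a DT value read as text can be converted explicitly before it is used as a date. The CCYYMMDD mask below is an assumption based on the sample data in this section (such as 20000302).

```python
from datetime import datetime

# A DT element arrives as text, so convert it before treating it as a date.
raw_dt = "20000302"                      # DT field read as text
as_date = datetime.strptime(raw_dt, "%Y%m%d")
print(as_date.date())                    # 2000-03-02
```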
Building Structural Data
Loading and unloading HIPAA structural information takes a few seconds longer than with other connectors. After clicking various lists and choosing HIPAA Health Claim adapter files, you may notice a pause of a few seconds. The status text displays the message "Building tree structure, please wait." Other status messages indicate that schemas are being built.
This limitation affects HIPAA selections at the following locations:
Source or target type
Source or target file
Source or target data browsers
Save Transformation (Save icon or File > Save)
Property Options
You can set the following source (S) and target (T) properties.
Property - S/T - Description
SchemaFile - ST - Allows you to choose a document schema file (.ds.xml) as a source or target. Used for parsing data and generating a new schema during design time. Select a schema file that contains the schema you want to use.
CodeSetFile - ST - A CodeSet file (.dsc.xml) is automatically generated when you save a document schema in Document Schema Designer. CodeSet files are stored in the same location as document schemas. CodeSets contain CodeTable values, any date-time masks used in the schema, and segment qualifiers.
Validation - ST - None is the default setting. The following validation methods are available:
Composite/Element - Validates maxuse, requirement, min/max length, and default value.
Loop/Segment - Validates maxuse and requirement.
Composite/Element+Loop/Segment - Validates both composite/element and loop/segment.
QualifiedFieldNames - ST - Qualified Name is the name path of the parents of the segment in the schema tree. If QualifiedFieldNames is true, the Qualified Name is put before a field name. The default is false.
Skip - S - When the Skip property is set to true, segments that do not match those in the applied schema file are not read. If you are reading large HIPAA files and have many mismatches, parsing time increases. To improve performance, set this property to false.
SegmentTerminator - T - A character that terminates a segment. Select a segment terminator from the list. The default is CR-LF. For the available options, see Element Separators.
ElementSeparator - T - Select an element separator from the list. The default is * (002A). For the available options, see Element Separators.
SubElementSeparator - T - Supersedes any values mapped in the ISA segment and allows overrides of values from the command line or through the API. The default is : (003A). For the available options, see Element Separators.
RepetitionSeparator - T - Separates multiple occurrences of a field. The default is ^ (005E). For the available options, see Element Separators.
LineWrap - T - Sets the type of line wrap in your source file. The default is None. Some files may consist of a constant stream of data with no line separators. LineWrap "forces" a temporary line-wrapping behavior. If you attempt to connect to your file and receive parse error messages, change the setting to CR-LF, CR, LF, or LF-CR. This property is available when the SegmentTerminator property is not set to CR or LF.
WrapLength - T - Sets the length of line wrap in your target file. The default is 80. This property is available when the LineWrap property is not set to None.
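To illustrate how the default separators combine when a target segment is written, here is a minimal sketch. The DMO segment name and the element values are made up; only the separator characters come from the property descriptions above.

```python
# Default separators from the property list: element "*", sub-element ":",
# repetition "^", segment terminator CR-LF.
ELEMENT, SUB, REP, TERM = "*", ":", "^", "\r\n"

composite = SUB.join(["004010X098", "A1"])      # sub-elements joined by ":"
repeating = REP.join(["V1", "V2"])              # repeats joined by "^"
segment = ELEMENT.join(["DMO", composite, repeating]) + TERM
print(segment)   # DMO*004010X098:A1*V1^V2 followed by CR-LF
```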
Element Separators
The following element separators are supported:
CR-LF
SOH (0001)
STX (0002)
ETX (0003)
EOT (0004)
ENQ (0005)
ACK (0006)
BEL (0007)
BS (0008)
HT (0009)
LF (000A)
VT (000B)
FF (000C)
CR (000D)
SO (000E)
SI (000F)
DLE (0010)
DC1 (0011)
DC2 (0012)
DC3 (0013)
DC4 (0014)
NAK (0015)
SYN (0016)
ETB (0017)
CAN (0018)
EM (0019)
SUB (001A)
ESC (001B)
FS (001C)
GS (001D)
RS (001E)
US (001F)
SP (0020)
! (0021)
" (0022)
# (0023)
$ (0024)
% (0025)
& (0026)
' (0027)
( (0028)
) (0029)
* (002A)
+ (002B)
, (002C)
- (002D)
. (002E)
/ (002F)
: (003A)
; (003B)
< (003C)
= (003D)
> (003E)
? (003F)
@ (0040)
[ (005B)
\ (005C)
] (005D)
^ (005E) - (the default)
_ (005F)
` (0060)
{ (007B)
| (007C)
} (007D)
~ (007E)
DEL (007F)
Checking the Validity of HIPAA EDI Structures
To verify that you are working with valid HIPAA EDI files, check the value of the second record in the file. This should be something like "004010XNNN" where "NNN" is the numeric value that corresponds to the appropriate HIPAA X-code transaction set (for example, "004010X098" for the Professional 837).
Example
ISA*00* *00* *30*34-39495350 *30*11-22233344
*000302*1541*U*00401*000000742*0*T*:
GS*HC*TEST-SND*TEST-RCV*20000302*1541*1*X*004010X098
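The check above can be sketched in Python: read the second segment (GS) and pull its final element. The separators and sample content are condensed from the example; the variable names are made up.

```python
import re

# Sample interchange, condensed from the example above: an ISA segment
# followed by a GS segment whose last element carries the X-code.
sample = (
    "ISA*00*          *00*          *30*34-39495350    *30*11-22233344"
    "*000302*1541*U*00401*000000742*0*T*:\n"
    "GS*HC*TEST-SND*TEST-RCV*20000302*1541*1*X*004010X098\n"
)

gs_segment = sample.splitlines()[1]          # the second record (GS)
version = gs_segment.split("*")[-1]          # last element: 004010X098
print(version)                               # 004010X098
print(bool(re.fullmatch(r"004010X\d{3}", version)))  # True
```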
How HIPAA Connectors Can Help
The integration platform includes the HIPAA (eDoc) connector for helping to speed the creation, parsing, transformation and integration of HIPAA transactions with existing systems. It provides built-in HIPAA transaction parsing logic and compliant target templates for mapping native data (without the need to modify existing applications). The connector provides a tool to develop and test transactions while maintaining data integrity.
The integration platform offers bi-directional translation of the ANSI ASC X12N 837 Health Care Claims (837) transaction, for professional (X098), dental (X097), and institutional (X096) claims, to any format required by existing systems. For incoming transactions, even the most complex 837 is automatically parsed into a flat presentation for drag-and-drop mapping of hierarchical EDI elements to native target structures or schemas. For outbound 837 transactions, the integration platform provides empty 837-compliant target structures, stored as templates, for mapping. Rules for reconciling data syntax and semantic differences required by the output are constructed through an event-driven programming model coupled with a powerful visual RIFL Script Editor. During the entire map process, actual "live" data is used, not the metadata, which eases the design process and provides immediate testing and debugging of the map designs.
Tip:  You can purchase document schemas for all other HIPAA transaction sets. Please contact your sales representative for more information.
HIPAA Transaction Set Synopsis
The following list of HIPAA transaction sets includes the Guide ID, Guide Description, Primary Trading Partners, and Exceptions/Limitations:
270/271: Health Care Eligibility/Benefit Inquiry and Information Response
GUIDE ID: 004010X092
Guide Description: The Health Care Eligibility/Benefit Inquiry and Information Response Implementation Guide describes the use of the Eligibility, Coverage or Benefit Inquiry (270) Version/Release 3070 transaction set and the Eligibility, Coverage, or Benefit Information (271) Version/Release 3070 transaction set for the following business usage:
Determine if an Information Source organization, such as an insurance company, has a particular subscriber or dependent on file
Determine the details of health care eligibility or benefit information
PRIMARY TRADING PARTNERS: Health care providers, such as hospitals and physicians. Health care payers, such as insurance companies, HMOs, PPOs and state and federal agencies, such as Medicare, Medicaid and CHAMPUS.
EXCEPTIONS/LIMITATIONS: Intended for use with health care eligibility/benefits. It does not provide a history of benefit usage, and is not intended for property and casualty or workers' compensation usage.
Other Considerations
To learn about how HIPAA/EDI terminology differs from other data types used in integration tools, see Property Options.
Valid EDI/HIPAA
The ISA Interchange Control Header is the first segment in all transaction set files. If this segment does not appear first, or is not exactly 105 bytes, the file is not a valid HIPAA/EDI format.
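The rule above lends itself to a quick programmatic check. This is a minimal sketch assuming the 105-byte figure stated in the text, with the terminator taken to follow as byte 106; the helper name and sample bytes are made up.

```python
# Sketch: the file must begin with an ISA segment, and the fixed-width
# ISA segment occupies the first 105 bytes (the terminator follows).
def looks_like_valid_interchange(data):
    # ISA must be the first segment, and the file must be long enough to
    # hold the full 105-byte header plus a terminator.
    return data.startswith(b"ISA") and len(data) > 105

sample = b"ISA" + b"X" * 102 + b"~GS*HC*..."
print(looks_like_valid_interchange(sample))          # True
print(looks_like_valid_interchange(b"GS*HC*..."))    # False
```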
Separators
If you include expressions in your transformation, be sure to include element separators for the segments that are unused (no data in segments that are not required, called optional or situational). This is essential if there are any segments that are required after the unused segment(s). The separators should be used to indicate the location of each segment string. For instance, if you do not have data in the optional NM107 (Name Suffix) segment, you still must put in a placeholder, since there are other segments that are required after the NM107 (Name Suffix) segment.
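The placeholder rule above can be seen by joining elements with the element separator: an unused element contributes an empty string, so the separators still mark its position. The NM1 element values below are made up for illustration.

```python
# Build an NM1 segment where the optional NM106/NM107 elements are empty
# but later elements still line up, as described above.
elements = ["NM1", "IL", "1", "DOE", "JOHN", "A",
            "",            # NM106 prefix - unused, placeholder kept
            "",            # NM107 suffix - unused, placeholder kept
            "MI", "123456789"]
segment = "*".join(elements)
print(segment)   # NM1*IL*1*DOE*JOHN*A***MI*123456789
```

Joining on "*" leaves an empty slot between separators for every unused element, which is exactly the placeholder behavior the paragraph describes.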
Optional and Situational Segments
Optional segments, as the name implies, are not required. Situational segments are also optional, however, they are required if another segment is used that requires the situational segment. Do not use optional/situational segments to reset any flags you have set. If that flagged segment is not used, the flag does not operate as you originally planned. Use flags on required segments only.
HIPAA (eDoc)
The Health Insurance Portability and Accountability Act of 1996 (HIPAA) mandates that health care organizations adopt transaction and information security standards to support the electronic exchange of administrative and financial health care data.
Under HIPAA regulations, the service provider is responsible for the accurate and timely submission of claims and other reporting documents. If you are a service provider:
Become conversant with HIPAA requirements.
Participate actively with the vendor to develop solutions that meet these requirements with a minimum impact on the professional practice.
If you are a vendor, understand:
That the complicated regulations and documentation can be overwhelming to service providers, whose focus is on the care and treatment of their patients.
The other party's perspective, since you have already initiated a business plan.
The HIPAA (eDoc) connector supports the HIPAA 5010 guideline, which features the use of repeating field elements and is defined in ISA11 as 165.
EDI X12 and HIPAA connectors are not interchangeable as they were in version 9. In version 10, you must use the correct connector type and schema. To decide which connector to use, do either of the following:
Check the EDI/HIPAA data file content, for example, GS8=005010X279A1, which states the type. In this case, it is HIPAA 270/271 005010X279A1.
If you use a HIPAA schema, use the HIPAA connector. Otherwise, use the EDI (X12) connector.
HIPAA Structure
HIPAA (Health Insurance Portability and Accountability Act of 1996) uses certain standard formats for documentation of medical claims related to services and other transactions associated with maintaining medical records. The following table provides the HIPAA form titles and the corresponding form numbers.
 
HIPAA Form Title - Form Number
Health Claims or Equivalent Encounter Information (Dental, Professional, Institutional) - 837
Enrollment and Disenrollment in a Health Plan - 834
Eligibility for a Health Plan - 270, 271
Health Care Payment and Remittance Advice - 835
Health Plan Premium Payments - 820
Health Claim Status - 276, 277
Referral Certification and Authorization - 278
Coordination of Benefits - 837
Addenda schemas use the following naming structure:
HIPAA_Health_Care_Claim_Remittance_Advice.X091.A1.X12.4010.835.1.0.ds.jsn
where:
Health Care Payment and Remittance Advice: Formal document name
X091: HIPAA designation (describes form as HIPAA-compliant)
A1: Addenda designation; implementation guide
X12: ANSI project designation
4010: Version of the document
835: Form number of the primary schema
Note:  Prior to July 2003, addenda schemas used a "0.<n>.ds.jsn" naming structure. Later naming structures use a "1.<n>.ds.jsn" convention.
Version Mismatch Errors
The ANSI standards reflect generic supersets designed to address a wide range of data exchange requirements. You may decide to define your own "subset" and develop rules that describe the data you submit. If you develop your own schema, be aware that many errors can be traced to schema "version mismatch". Hence, all trading partners must agree on the specific version of the schema to be used in transactions. Addenda schemas must be examined to verify version match with all parties involved in the transaction.
To verify the version of a schema, open the schema in a text editor and check the header information. In the header, note the date created and date (last) modified fields. These fields offer a convenient method to determine precise version match.
Any schema changes or customizations must be mutually agreed upon by all the trading partners prior to "live" data exchange. A customized or modified schema must be verified by each trading partner to make sure that the version matches at each point of exchange within the trading group.
Schema File Required
You are provided with a template file that contains the structure for your HIPAA (eDoc) file connections. If your HIPAA (eDoc) file structure is different, you may have to modify the template.
The template file is designed to parse source data to generate a target schema. You cannot use this file to parse source data in a map. This causes an error. Instead, perform the following steps to generate a schema from a target connector that you can use in a map.
To make changes to a HIPAA template file
1. Import the template file into Project Explorer.
2. Open the file in the Schema Editor.
3. Make your changes.
4. Save the file with a different name in the required project folder.
To obtain a template file, contact your sales representative. These template files are standard schemas.
To connect to a HIPAA source or target file, you must set a schema in the HIPAASchemaFile property. You cannot connect directly to the schema. You must select a schema file from your current project that contains the schema you want to use.
HIPAA Message Hierarchy
If you make changes to the default HIPAA schema, make sure you adhere to the following HIPAA schema rules to prevent the updated schema from becoming invalid.
[Figure: HIPAA message hierarchy]
Note:  The schema must include two sections—root_defs that defines the schema structure at the root level and type_defs that defines the record types.
Interchange (ISA - IEA)
A container of one or more functional groups. The interchange/message container must include a header (ISA) and trailer (IEA). The segments can appear only once in the message. The record type name is always Loop.Function.
Note:  Do not edit the Interchange (ISA - IEA).
Elements:
name - Required. Identifies the unique identifier for the group.
max_occurs - Optional. Defines the field requirement:
0: optional
1: required
If min_occurs is 0 and max_occurs is 0, the field is not used.
min_occurs - Identifies the minimum number of times a repeat can occur within a group: 0, 1 (default), or unbounded. An unbounded minimum occurrence means multiple functional groups can exist.
sequence - Indicates the position of the loop within the transaction.
rec_type_ref - Identifies the record definition. Examples: ISA, IEA, GS, and GE.
Functional Group (GS - GE)
A collection of messages. The functional group container must include a header (GS) and trailer (GE) and must reside inside the interchange enveloping segment. The record type name is always Loop.Function.
Note:  Do not edit Functional Group (GS - GE).
 
Elements:
name - Required. Identifies the unique identifier for the message.
max_occurs - Optional. Defines the field requirement:
0: optional
1: required
If min_occurs is 0 and max_occurs is 0, the field is not used.
min_occurs - Optional. Identifies the minimum number of times a repeat can occur within a group: 0, 1 (default), or unbounded. An unbounded minimum occurrence means multiple transactions can exist in the group.
rec_type_ref - Identifies the record definition. Examples: ISA, IEA, GS, and GE.
Transaction (ST - SE)
A container of one or more segments and loops; also referred to as a message. The transaction container must include a header (ST) and trailer (SE) and reside inside the functional group. The record type name is always Loop.Message.
Note:  Do not edit the Transaction (ST - SE).
Elements:
name - Required. Identifies the unique identifier for the segment or loop.
max_occurs - Optional. Defines the field requirement:
0: optional
1: required
If min_occurs is 0 and max_occurs is 0, the field is not used.
min_occurs - Optional. Identifies the minimum number of times a repeat can occur within a group: 0, 1 (default), or unbounded. An unbounded minimum occurrence means multiple functional groups can exist in the transaction.
rec_type_ref - Identifies the record definition. Examples: ISA, IEA, GS, and GE.
Choice
Defines a group of loops or a group of segments in which only one loop or segment can be selected. The record type name always includes Choice as the prefix. For example, Choice.2000
Elements:
name - Required. Identifies the unique identifier for the group of segments or loops.
recognition_rule - Required. Identifies the recognition rule name.
discriminator_field - Required. The fully qualified name of a referred record or structure in the Choice group. Uses the format: recordname/field. A forward slash (/) separates the record name and field name. For example, HL.0010/E.735.03.
max_occurs - Optional. Defines the field requirement:
0: optional
1: required
If min_occurs is 0 and max_occurs is 0, the field is not used.
min_occurs - Optional. Identifies the minimum number of times a repeat can occur within a group: 0, 1 (default), or unbounded. An unbounded minimum occurrence means multiple functional groups can exist.
Loop
Repeating data within the message that is mapped to the JSON field. The record type name always contains the Loop prefix. For example, Loop.1000A
Elements:
name - Required. Identifies the loop name.
max_occurs - Optional. Defines the field requirement:
0: optional
1: required
If min_occurs is 0 and max_occurs is 0, the field is not used.
min_occurs - Optional. Identifies the minimum number of times a repeat can occur within a group: 0, 1 (default), or unbounded. An unbounded minimum occurrence means multiple functional groups can exist.
sequence - Indicates the loop is defined in sequential order.
Segment
A collection of fields that share a particular data type.
Elements:
name - Required. Identifies the unique identifier for the field.
max_occurs - Optional. Defines the field requirement:
0: optional
1: required
If min_occurs is 0 and max_occurs is 0, the field is not used.
min_occurs - Optional. Identifies the minimum number of times a repeat can occur within a group: 0, 1 (default), or unbounded. An unbounded minimum occurrence means multiple functional groups can exist.
sequence - Indicates the segment is defined in sequential order.
Composite
Defines a group of elements in a sequence that is mapped to struct_type_ref. Prefixed with C.
Elements:
name - Required. Identifies the unique identifier for the element.
max_occurs - Optional. Defines the field requirement:
0: optional
1: required
If min_occurs is 0 and max_occurs is 0, the field is not used.
min_occurs - Optional. Identifies the minimum number of times a repeat can occur within a group: 0, 1 (default), or unbounded. An unbounded minimum occurrence means multiple functional groups can exist.
sequence - Indicates the composite is defined in sequential order.
Element
The element ID uses the prefix E. For example, E.I12.13.
Elements:
name - Required. Identifies the unique identifier for the element.
max_occurs - Optional. Defines the field requirement:
0: optional
1: required
If min_occurs is 0 and max_occurs is 0, the field is not used.
min_occurs - Optional. Identifies the minimum number of times a repeat can occur within a group:
0: Default value
1: unbounded
An unbounded minimum occurrence means multiple functional groups can exist.
sequence - Indicates the element is defined in a sequential order.
type - Indicates the data type for the element:
identifier (ID)
alphanumeric (AN)
date (DT)
time (TM)
float (R)
binary (B)
numeric (N0 to Nn, where n is the number of decimal places).
length - Indicates the maximum length for the element:
0
MAX_LONG
unbounded
min_length - The minimum length of characters allowed for the field data type.
Note:  You cannot edit the min_length property using the Schema Editor. To edit this value, open the schema file in JSON editor and change the value.
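As an illustration, an element definition in a schema (.ds.jsn) file might combine the properties described above as follows. This is a hypothetical fragment; the exact file layout depends on your schema version:

```json
{
  "name": "E.I12.13",
  "type": "AN",
  "min_occurs": 0,
  "max_occurs": 1,
  "length": 15,
  "min_length": 1
}
```

Because min_length cannot be edited in the Schema Editor, a change such as this is made directly in the JSON file.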
Connector-Specific Notes
This section provides information that is specific to the connector.
Data Type Display
The data types displayed under Type for a source or target schema are HIPAA data type names, such as DT, AN, or ID. However, the integration platform reads them all as text. To use a DT (DateTime) value in an expression, treat it as text that must be converted to a DateTime data type.
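For example, a field of type DT arrives as text and must be converted before it can be used in date arithmetic. The following Python sketch assumes the common eight-digit CCYYMMDD layout; adjust the format string if your schema uses a different representation:

```python
from datetime import datetime

def dt_field_to_datetime(value):
    """Convert a HIPAA DT field, which the platform reads as text,
    to an actual datetime value (assumes a CCYYMMDD layout)."""
    return datetime.strptime(value.strip(), "%Y%m%d")

print(dt_field_to_datetime("20000302").date())  # 2000-03-02
```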
Building Structural Data
Loading and unloading HIPAA structural information takes a few seconds longer than the other connectors. After clicking various lists and choosing HIPAA Health Claim adapter files, you may notice a pause for a few seconds. The status text displays the message "Building tree structure, please wait." Other status messages indicate that schemas are being built.
This limitation affects HIPAA selections at the following locations:
Source or target type
Source or target file
Source or target data browsers
Save Transformation (Save icon or File > Save)
Property Options
You can set the following source (S) and target (T) properties.
Property
S/T
Description
Encoding
ST
To specify encoding for reading source and writing target data, select a code page translation table.
The default value is OEM.
To use ANSI, select ISO8859-1.
For Windows CodePage 1252, select CP1252.
HIPAASchemaFile
ST
Used for parsing data and generating a new schema during design time. Select a schema file that contains the schema you want to use.
Skip
S
If set to true, segments that do not match those in the specified schema file are not read. When reading large HIPAA files with many mismatches, this increases parsing time; to improve performance, set this option to false.
SegmentTerminator
T
A character that terminates a segment. Select a Segment Terminator from the list. The default is CR-LF. For the options, see Element Separators.
ElementSeparator
T
Select an element separator from the list. The default is *(002A). For the options, see Element Separators.
SubElementSeparator
T
Supersedes any values mapped in the ISA segment. Allows overrides of values from the command line or through the API. The default is :(003A). For the options, see Element Separators.
RepetitionSeparator
T
Separates multiple occurrences of a field. The default is ^(005E). For the options, see Element Separators.
LineWrap
T
Sets the type of line wrap in your source file. The default is None.
Some files may consist of a constant stream of data with no line separators. LineWrap forces a temporary line-wrapping behavior. If you attempt to connect to your file and receive parse error messages, change the setting to CR-LF, CR, LF, or LF-CR.
Note:  This property is available when the SegmentTerminator property is not set to CR or LF.
WrapLength
T
Sets the length of line wrap in the target file. The default is 80.
Note:  This property is available when the LineWrap property is not set to None.
Element Separators
The following element separators are supported:
CR-LF
STX (0001)
SOT (0002)
ETX (0003)
EOT (0004)
ENQ (0005)
ACK (0006)
BEL (0007)
BS (0008)
HT (0009)
LF (000A)
VT (000B)
FF (000C)
CR (000D)
SO (000E)
SI (000F)
DLE (0010)
DC1 (0011)
DC2 (0012)
DC3 (0013)
DC4 (0014)
NAK (0015)
SYN (0016)
ETB (0017)
CAN (0018)
EM (0019)
SUB (001A)
ESC (001B)
FS (001C)
GS (001D)
RS (001E)
US (001F)
SP (0020)
! (0021)
" (0022)
# (0023)
$ (0024)
% (0025)
& (0026)
' (0027)
( (0028)
) (0029)
* (002A)
+ (002B)
, (002C)
- (002D)
. (002E)
/ (002F)
: (003A)
; (003B)
< (003C)
= (003D)
> (003E)
? (003F)
@ (0040)
[ (005B)
\ (005C)
] (005D)
^ (005E) - (the default)
_ (005F)
` (0060)
{ (007B)
| (007C)
} (007D)
~ (007E)
DEL (007F)
This connector sets field width in bytes. The number of characters allowed for a field varies. For more details, see Determining Field Width in Characters or Bytes.
Checking Validity of HIPAA EDI Structures
To verify that you are working with valid HIPAA EDI files, check the value of the second record in the file. This value must be similar to "004010XNNN" where "NNN" is the numeric value that corresponds to the appropriate HIPAA X-code transaction set (for example, "004010X098" for the Professional 837).
Example
ISA*00* *00* *30*34-39495350 *30*11-22233344
*000302*1541*U*00401*000000742*0*T*:
GS*HC*TEST-SND*TEST-RCV*20000302*1541*1*X*004010X098
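The check above can be scripted. This Python sketch (an illustration, not part of the connector) pulls the version element from a GS segment and tests it against the "004010XNNN" pattern:

```python
import re

def hipaa_x_code(gs_segment, element_separator="*"):
    """Return the HIPAA X-code (for example, '004010X098') from a GS
    segment, or None if the version element is not HIPAA-compliant."""
    elements = gs_segment.rstrip("~\r\n").split(element_separator)
    version = elements[-1]          # GS08, the version/release code
    return version if re.fullmatch(r"004010X\d{3}", version) else None

gs = "GS*HC*TEST-SND*TEST-RCV*20000302*1541*1*X*004010X098"
print(hipaa_x_code(gs))  # 004010X098
```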
How HIPAA Connectors Can Help
The integration platform includes the HIPAA (eDoc) connector to speed the creation, parsing, transformation, and integration of HIPAA transactions with existing systems. It provides built-in HIPAA transaction parsing logic and compliant target templates for mapping native data (without the need to modify existing applications). The connector provides a tool to develop and test transactions while maintaining data integrity.
The integration platform offers bi-directional translation of the ANSI ASC X12N 837 Health Care Claims (837) transaction, for professional (X098), dental (X097), and institutional (X096) claims, to any format required by existing systems. For incoming transactions, even the most complex 837 is automatically parsed into a flat presentation for drag-and-drop mapping of hierarchical EDI elements to native target structures or schemas. For outbound 837 transactions, the integration platform provides empty 837-compliant target structures, stored as templates, for mapping. Rules for reconciling the data syntax and semantic differences required by the output are constructed through an event-driven programming model coupled with a powerful visual Script Editor. Throughout the mapping process, actual "live" data is used rather than metadata, which eases the design process and allows immediate testing and debugging of map designs.
Tip:  You can purchase document schemas for all other HIPAA transaction sets. Contact your sales representative for more information.
HIPAA Transaction Set Synopsis
A listing of HIPAA transaction sets, which includes Guide ID, Guide Description, Primary Trading Partners, and Exceptions/Limitations, follows:
270/271: Health Care Eligibility/Benefit Inquiry and Information Response
GUIDE ID: 004010X092
Guide Description: The Health Care Eligibility/Benefit Inquiry and Information Response Implementation Guide describes the use of the Eligibility, Coverage or Benefit Inquiry (270) Version/Release 3070 transaction set and the Eligibility, Coverage, or Benefit Information (271) Version/Release 3070 transaction set for the following business usage:
Determine if an Information Source organization, such as an insurance company, has a particular subscriber or dependent on file.
Determine the details of health care eligibility or benefit information.
PRIMARY TRADING PARTNERS: Health care providers, such as hospitals and physicians. Health care payers, such as insurance companies, HMOs, PPOs and state and federal agencies, such as Medicare, Medicaid and CHAMPUS.
EXCEPTIONS/LIMITATIONS: Intended for use for health care eligibility/benefits. Does not provide a history of benefit usage. Is not intended for property and casualty or workers' compensation usage.
Other Considerations
To learn about how HIPAA/EDI terminology differs from other data types used in integration tools, see Property Options.
Valid EDI/HIPAA
The ISA Interchange Control Header is the first segment in all transaction set files. If this segment does not appear first, or is not exactly 105 bytes, the file is not a valid HIPAA/EDI format.
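A minimal sanity check for this rule can be written in a few lines of Python. This is a sketch of the check described above (the fixed ISA element widths are assumed from the X12 standard), not the connector's own validation:

```python
def looks_like_edi(data):
    """Check that data begins with a fixed-width ISA Interchange
    Control Header: 'ISA' first, exactly 105 bytes, holding 16
    elements (the element separator is the 4th byte)."""
    if len(data) < 105 or not data.startswith("ISA"):
        return False
    element_separator = data[3]
    return data[:105].count(element_separator) == 16

# Build a structurally valid 105-byte ISA header from the standard
# fixed element widths (values are dummy padding).
widths = [2, 10, 2, 10, 2, 15, 2, 15, 6, 4, 1, 5, 9, 1, 1, 1]
isa = "ISA" + "".join("*" + "0" * w for w in widths)
print(len(isa), looks_like_edi(isa))  # 105 True
```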
Separators
If you include expressions in your transformation, make sure that element separators are included for unused segments (optional or situational segments that contain no data). This is essential when required segments follow the unused ones, because the separators indicate the position of each segment string. For instance, if you have no data for the optional NM107 (Name Suffix) element, you must still add a placeholder separator, since required elements follow NM107 in the segment.
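The placeholder rule can be illustrated with a small helper. This hypothetical Python function keeps interior empty elements (as placeholder separators) while dropping trailing ones:

```python
def build_segment(segment_id, elements, element_separator="*",
                  segment_terminator="~"):
    """Join a segment, keeping empty strings as placeholders so that
    later elements stay in their correct positions. Trailing empty
    elements can be dropped safely; interior ones cannot."""
    while elements and elements[-1] == "":
        elements = elements[:-1]    # trailing separators are not needed
    return (segment_id + element_separator
            + element_separator.join(elements) + segment_terminator)

# NM107 (suffix) is empty, but elements follow it, so the
# placeholder separators must remain.
seg = build_segment("NM1", ["IL", "1", "DOE", "JOHN", "", "", "", "MI", "123456"])
print(seg)  # NM1*IL*1*DOE*JOHN****MI*123456~
```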
Optional and Situational Segments
Optional segments are not required. Situational segments are also optional; however, they become required when another segment that depends on them is used. Do not use optional or situational segments to reset any flags you have already set. If the flagged segment is not used, the flag does not operate as you originally planned. Use flags on required segments only.
Hitachi Hi-RDB (ODBC)
Hitachi Hi-RDB is a program that the integration platform can connect to with an ODBC 3.x driver.
This connector sets field width in bytes. What actually varies is the number of characters that fit into a given field. For more details, see Determining Field Width in Characters or Bytes.
See Also
Installing an ODBC Driver
ODBC Connectivity Tips
The integration platform connects to Hitachi Hi-RDB tables with ODBC 3.x. For the procedure, and information about the property options, limitations, and source and target schemas, see ODBC 3.x.
HL7
Health Level Seven (HL7) is an organization that develops messaging standard specifications that enable disparate health care applications to exchange key sets of clinical and administrative data. The HL7 standard provides an electronic interchange of clinical, financial, and administrative information among independent health care-oriented computer systems, such as hospital information systems, clinical laboratory systems, enterprise systems, and pharmacy systems.
The HL7 standard provides the layout of messages exchanged between two or more applications. A message is comprised of multiple segments that must be sent in a particular order and which may or may not repeat. Segments are collections of data elements that typically share a common subject. For example, the PID segment contains patient identification data elements such as name, address and social security number. The NK1 next of kin segment contains similar data elements about the patient's next of kin.
HL7 defines which data elements are to be sent and determines the data type and the suggested length of each. It also indicates whether the data element is required or optional and whether it may repeat. Fields must be sent in the order in which they are presented in the Standard. This way, both the sending and receiving systems know what data is expected, in what order and in what format. HL7 also enables one system to query another for relevant information.
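As a sketch of this layout, the following Python snippet splits one segment into fields and components using the default HL7 separators (an illustration only, not the connector's parser):

```python
def parse_hl7_segment(line, field_separator="|", component_separator="^"):
    """Split an HL7 segment into fields, and each field into its
    components, using the default separators."""
    fields = line.rstrip("\r").split(field_separator)
    return [field.split(component_separator) for field in fields]

# PID-5 (patient name) carries components: family name ^ given name.
pid = "PID|1||12345||DOE^JOHN||19700101|M"
print(parse_hl7_segment(pid)[5])  # ['DOE', 'JOHN']
```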
This connector sets field width in bytes. What actually varies is the number of characters that fit into a given field. For more details, see Determining Field Width in Characters or Bytes.
Schema File Required
You are provided a template file that contains the structure for your HL7 file connections. If your HL7 file structure is different, you may need to modify the template.
To make changes to an HL7 template file
1. Import the file into Project Explorer.
2. Open the file in the Schema Editor.
3. Make your changes.
4. Save the file with a different name.
To obtain a template file, contact your sales representative. These template files are standard schemas.
To connect to an HL7 source or target file, you must set a schema in the SchemaFile property. You cannot connect directly to the schema. You must select a schema file from your current project that contains the schema you want to use.
Connector-Specific Notes
Defining HL7 Structure
It is important to define the structure of your HL7 files before opening them in the integration platform. This is because HL7 files may be loosely structured and most often need to have additional rules applied to them.
Output Modes
Replace File/Table is the only Output Mode available for HL7 target files.
Real-time Messaging Support
The HL7 connector is designed to work with one message at a time. Files containing two or more messages must be split before they can be processed. Each message begins with an MSH segment, and MSH must appear only once in a message, as the first segment. HL7 was designed as a real-time messaging interface in which discrete messages are generated and routed immediately to their destination. Some users employ a batch mode process, where multiple messages are written to a single file before transport occurs. This is not the intended use for HL7. Split batch files into discrete messages before processing.
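If you receive batch files, they must be split before the connector can process them. A minimal Python sketch of such a splitter (assuming the default CR segment terminator) follows:

```python
def split_hl7_batch(data, segment_terminator="\r"):
    """Split a batch of HL7 messages into discrete messages, each
    starting at an MSH segment."""
    messages, current = [], []
    for segment in data.split(segment_terminator):
        if segment.startswith("MSH") and current:
            messages.append(segment_terminator.join(current))
            current = []
        if segment:
            current.append(segment)
    if current:
        messages.append(segment_terminator.join(current))
    return messages

batch = "MSH|^~\\&|APP1\rPID|1\rMSH|^~\\&|APP2\rPID|2"
print(len(split_hl7_batch(batch)))  # 2
```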
Property Options
You can set the following source (S) and target (T) properties.
Property
S/T
Description
SchemaFile
ST
Used for parsing data and generating a new schema during design time. Select a schema file that contains the schema you want to use.
SegLibFile
ST
Segment library (.dsl.xml) file. This .dsl.xml file is created in Document Schema Designer. The segment library file is used during design time, not run time.
FldLibFile
ST
Field library (.dfl.xml) file. This .dfl.xml file is created in Document Schema Designer. The field library file is used during design time, not run time.
SegmentTerminator
ST
Character that terminates a segment. Select a Segment Terminator from the list. The default is CR(000D). For your options, see Segment Terminators.
Validation
ST
If set to true, the adapter validates:
Field requirement
Field default value (if this value is defined)
CheckSum
Default value is False.
Validation works differently for source and target files. On the source side, validation occurs when you click Apply in Source Properties; the first error is recorded in the log file, and the transformation then aborts before any other errors are recorded. On the target side, validation occurs at run time, so all errors are recorded in the log file. Successful validation is also indicated in the log file.
Unicode
ST
This property is important if your HL7 file is Unicode encoded. The default is false; if your source or target HL7 file is Unicode encoded, change this setting to true.
StuffedLength
ST
Specify the carriage return length. This property is not required by the HL7 standard, but is used by Lower Layer Protocols. By default, StuffedLength is zero. HL7 encoding rules do not place any limits on the length of a segment, but many languages and operating systems have terminal-oriented input disciplines that place a limit on the number of characters that can be received before a carriage return character must be received. Use this property to overcome that limitation. If the specified number of characters is transmitted without a carriage return, the sending system inserts a carriage return character into the output stream. The receiving system also counts characters from the last carriage return seen; when the limit is reached, the receiving system knows that the next character is a stuffed carriage return and must be removed from the data.
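The stuffing behavior can be sketched in Python as follows. This illustrates the mechanism described above in both directions; it is not the connector's implementation:

```python
def stuff_carriage_returns(data, limit):
    """Insert a carriage return after every `limit` characters, as a
    sending system with a non-zero StuffedLength would."""
    return "\r".join(data[i:i + limit] for i in range(0, len(data), limit))

def unstuff_carriage_returns(data, limit):
    """Remove stuffed carriage returns on the receiving side by
    counting characters since the last carriage return seen."""
    out, count = [], 0
    for ch in data:
        if ch == "\r" and count == limit:
            count = 0
            continue          # this CR was stuffed; drop it
        out.append(ch)
        count = 0 if ch == "\r" else count + 1
    return "".join(out)

data = "A" * 25
stuffed = stuff_carriage_returns(data, 10)
print(unstuff_carriage_returns(stuffed, 10) == data)  # True
```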
FieldSeparator
T
Separates two adjacent data fields within a segment.
An HL7 file is presumed by the integration platform to have a pipe | (007C) between each field. To specify a different field separator, click the FieldSeparator arrow to display the list of options. The default is | (007C). For your options, see Segment Terminators.
ComponentSeparator
T
Separates adjacent components of data fields.
The default is ^(005E). For your options, see Segment Terminators.
RepetitionSeparator
T
Separates multiple occurrences of a field. The default is ~(007E). For your options, see Segment Terminators.
EscapeCharacter
T
Escape character for use with any field represented by an ST, TX or FT data type, or for use with the data (fourth) component of the ED data type.
For your options, see Segment Terminators.
SubcomponentSeparator
T
Separates adjacent subcomponents of data fields.
The default is &(0026). For your options, see Segment Terminators.
StuffedLength
T
This property is not required by the HL7 standard, but is used by Lower Layer Protocols. By default, StuffedLength=0. HL7 encoding rules do not place any limits on the length of a segment. Many languages or operating systems have terminal oriented input disciplines that place a limit on the number of characters that can be received before they must receive a carriage return character. To overcome this problem, you may use the StuffedLength property. If a certain number of characters have been transmitted without a carriage return, the sending system inserts a carriage return character into the output stream. The receiving system is also counting characters from the last carriage return seen. If the limit is reached, the receiving system knows that the next character is a stuffed carriage return and must be removed from the data.
Segment Terminators
Options are as follows:
CR-LF
STX (0001)
SOT (0002)
ETX (0003)
EOT (0004)
ENQ (0005)
ACK (0006)
BEL (0007)
BS (0008)
HT (0009)
LF (000A)
VT (000B)
FF (000C)
CR (000D)
SO (000E)
SI (000F)
DLE (0010)
DC1 (0011)
DC2 (0012)
DC3 (0013)
DC4 (0014)
NAK (0015)
SYN (0016)
ETB (0017)
CAN (0018)
EM (0019)
SUB (001A)
ESC (001B)
FS (001C)
GS (001D)
RS (001E)
US (001F)
SP (0020)
! (0021)
" (0022)
# (0023)
$ (0024)
% (0025)
& (0026)
' (0027)
( (0028)
) (0029)
* (002A)
+ (002B)
, (002C)
- (002D)
. (002E)
/ (002F)
: (003A)
; (003B)
< (003C)
= (003D)
> (003E)
? (003F)
@ (0040)
[ (005B)
\ (005C)
] (005D)
^ (005E)
_ (005F)
` (0060)
{ (007B)
| (007C)
} (007D)
~ (007E)
DEL (007F)
Data Types
These are the data types of the fields in your HL7 target data table. The following data types are available:
DT
FT
ID
IS
Loop
NM
Segment
SI
ST
TM
TN
TS
TX
If you are writing to a new table, you may use any of these available data types from the list. Click the Type arrow to make a new selection. If you are appending data to an existing table, the data type of each field uses the data type in the selected table by default.
HL7 (eDoc)
Health Level Seven (HL7) is an organization that develops messaging standard specifications that enable disparate health care applications to exchange key sets of clinical and administrative data. The HL7 standard provides an electronic interchange of clinical, financial, and administrative information among independent health care oriented computer systems, such as hospital information systems, clinical laboratory systems, enterprise systems, and pharmacy systems.
The HL7 standard provides a layout of the messages exchanged between two or more applications. A message is composed of multiple segments that must be sent in a particular order and that may or may not repeat. Segments are collections of data elements that typically share a common subject. For example, the PID segment contains patient identification data elements such as name, address, and social security number. The NK1 next of kin segment contains similar data elements about the patient's next of kin.
HL7 defines the data elements that must be sent and determines the data type and the suggested length of each. It also indicates whether a data element is required or optional and whether it may repeat. Fields must be sent in the order in which they are presented in the Standard. This way, both the sending and receiving systems know what data is expected, in what order, and in what format. HL7 also enables one system to query another for relevant information.
The HL7 connector sets field width in bytes. The number of characters allowed for a field varies. For more details, see Determining Field Width in Characters or Bytes.
Schema File Required
You can obtain an HL7 (eDoc) schema file from HL7 that contains the structure for your HL7 file connections. If your HL7 file structure is different, you may have to modify the template.
To make changes to an HL7 template file
1. Import the file into Project Explorer.
2. Open the file in the Schema Editor.
3. Make your changes.
4. Save the file with a different name in the required project folder.
To obtain a template file, contact your sales representative. These template files are standard schemas.
To connect to an HL7 source or target file, you must set a schema in the SchemaFile property.
Note:  You cannot connect directly to the schema. You must select a schema file from your current project that contains the schema you want to use.
Connector-Specific Notes
This section provides information that is specific to the connector.
Defining HL7 Structure
It is important to define the structure of your HL7 files before opening them in the integration platform. This is because HL7 files may be loosely structured and most often must have additional rules applied to them.
Real-time Messaging Support
The HL7 connector works with one message at a time. Files containing two or more messages must be split before they can be processed. Each message begins with an MSH segment, and MSH must appear only once in a message, as the first segment. HL7 is a real-time messaging interface in which discrete messages are generated and routed immediately to their destination. Some users employ a batch mode process, where multiple messages are written to a single file before transport occurs. This is not the intended use for HL7. Split batch files into discrete messages before processing.
Property Options
You can set the following source (S) and target (T) properties.
Property
S/T
Description
Encoding
ST
To specify encoding for reading source and writing target data, select a code page translation table.
Default is OEM.
To use ANSI, select ISO8859-1.
For Windows CodePage 1252, select CP1252.
SchemaFile
ST
Used for parsing data and generating a new schema during design time. Select a schema file that contains the schema that you want to use.
SegmentTerminator
ST
Character that terminates a segment. Select a Segment Terminator from the list. The default is CR(000D). For other options, see Segment Terminators.
StuffedLength
ST
Specify carriage return length. The default value is zero.
HL7 encoding rules do not place any limits on segment length. However, many languages and operating systems have terminal-oriented input disciplines that set a limit on the number of characters that can be received before a carriage return character must be received. Use this property to overcome that limitation.
If the specified number of characters is transmitted without a carriage return, the sending system inserts a carriage return character into the output stream. The receiving system also counts characters from the last carriage return seen; when the limit is reached, the receiving system knows that the next character is a stuffed carriage return and must be removed from the data.
Note:  This property is not required by the HL7 standard, but is used by Lower Layer Protocols.
FieldSeparator
T
Separates two adjacent data fields within a segment.
The integration platform presumes that an HL7 (eDoc) file has a pipe | (007C) between each field. The default is | (007C). For the options, see Segment Terminators.
ComponentSeparator
T
Separates adjacent components of data fields.
The default is ^(005E). For the options, see Segment Terminators.
RepetitionSeparator
T
Separates multiple occurrences of a field. The default is ~(007E). For the options, see Segment Terminators.
EscapeCharacter
T
Escape character to use with any field represented by ST, TX, or FT data type, or to use with the data (fourth) component of the ED data type.
For the options, see Segment Terminators.
SubcomponentSeparator
T
Separates adjacent subcomponents of data fields.
The default is &(0026). For the options, see Segment Terminators.
Segment Terminators
Options are as follows:
CR-LF
STX (0001)
SOT (0002)
ETX (0003)
EOT (0004)
ENQ (0005)
ACK (0006)
BEL (0007)
BS (0008)
HT (0009)
LF (000A)
VT (000B)
FF (000C)
CR (000D)
SO (000E)
SI (000F)
DLE (0010)
DC1 (0011)
DC2 (0012)
DC3 (0013)
DC4 (0014)
NAK (0015)
SYN (0016)
ETB (0017)
CAN (0018)
EM (0019)
SUB (001A)
ESC (001B)
FS (001C)
GS (001D)
RS (001E)
US (001F)
SP (0020)
! (0021)
" (0022)
# (0023)
$ (0024)
% (0025)
& (0026)
' (0027)
( (0028)
) (0029)
* (002A)
+ (002B)
, (002C)
- (002D)
. (002E)
/ (002F)
: (003A)
; (003B)
< (003C)
= (003D)
> (003E)
? (003F)
@ (0040)
[ (005B)
\ (005C)
] (005D)
^ (005E)
_ (005F)
` (0060)
{ (007B)
| (007C)
} (007D)
~ (007E)
DEL (007F)
HTML
HTML is one of the data publishing file formats used in many web browser applications, such as Internet Explorer. The integration platform can write HTML files in specific structured formats.
This connector sets field width in bytes. What actually varies is the number of characters that fit into a given field. For more details, see Determining Field Width in Characters or Bytes.
To read HTML data sources
HTML is not available as a source connector. However, you can read HTML data sources by using one of the following methods:
To connect to a web site page containing HTML source code, search for "URI" in the documentation for details on the procedure.
To read an HTML file, you may choose a file-based connector, such as ASCII (Fixed), ASCII (Delimited), Excel 97, or XML. For procedural information, see the specific sections for those connector types.
See Also
Intermediate Methods to Connect to Data Formats
Connector-Specific Notes
When HTML is your target connector, the maximum number of fields that can be transformed is 255 fields.
HTML is not supported as a source connector.
Property Options
You can set the following source (S) and target (T) properties.
Property
S/T
Description
Index
T
Setting this option to true (the default) causes the integration platform to create a list of hypertext-linked keys at the beginning of the HTML document you are writing. The keys are the data in the first field of your target schema. Currently, a limit of 16,000 key entries is allowed.
After the HTML document is created, you can open the document in your browser and click the key entries to jump down to the appropriate block of information in the body of the document.
Mode
T
The mode options allow you to select the format in which the integration platform writes out the HTML file. The following options are available:
List (default) – Writes out the information in a list format. Each field is written to a new line in the document (long text wraps).
Label – Writes out the field name (in bold) and the data for each record. A colon is inserted between the field name and the data. Each field is written to a new line in the document (long text wraps).
Table – Writes out the data in a 2-dimensional matrix format. This format is supported in HTML 3. This format is particularly appropriate for data publishing projects involving such items as part numbers and price lists.
TableBorder
T
Turns the table border on or off. Off (false) is the default. If set to true, a border is generated around the table (set the border attribute to 1 in the Table element). This property only works if the Mode property is set to Table (see Mode above).
CodePage
T
This translation table determines which encoding to use for reading and writing data. The default is ANSI, the standard in the US.
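The Table mode output described above can be approximated in a few lines of Python. This is a rough sketch of what Table mode with TableBorder produces, not the connector's exact markup:

```python
from html import escape

def write_html_table(records, border=False):
    """Render records (first row = field names) as an HTML table;
    border=True mimics TableBorder by setting border="1"."""
    attr = ' border="1"' if border else ""
    lines = ["<table%s>" % attr]
    lines.append("<tr>" + "".join("<th>%s</th>" % escape(h) for h in records[0]) + "</tr>")
    for row in records[1:]:
        lines.append("<tr>" + "".join("<td>%s</td>" % escape(v) for v in row) + "</tr>")
    lines.append("</table>")
    return "\n".join(lines)

html = write_html_table([["Part", "Price"], ["A-100", "9.99"]], border=True)
print(html.splitlines()[0])  # <table border="1">
```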
Data Types
All data in an HTML file is Text.
IBM DB2 9.7
IBM DB2 9.7 is a source and target connector that connects directly to an IBM DB2 9.7 database, or connects with an ODBC 3.5 driver. Version 9.7 is supported.
This connector specifies field width in number of bytes, not characters. For details, see Determining Field Width in Characters or Bytes.
See Also
Installing an ODBC Driver
ODBC Connectivity Tips
Connectivity Pointers
Make sure the IBM DB2 9.7 client driver is installed and configured properly on your workstation.
If possible, create query statements using the source database application so that the native application can check syntax and validity. You can then copy the query statement and paste it in the Query statement field.
Connector-Specific Notes
This connector does not support Unicode. Use the ODBC 3.5 connector to enable Unicode support in an IBM DB2 9.7 database.
Property Options
You can set the following source (S) and target (T) properties in IBM DB2 9.7 connections.
Property
S/T
Description
AutoCommit
T
Allows you to automatically commit changes as they are made by each SQL statement, instead of waiting until the end of the transaction. If AutoCommit is set to true, you cannot roll back changes once they have been made. Default is false.
BulkOperations
T
Specifies whether the integration platform uses bulk operations for faster inserts. If set to false (the default, and the slower setting), an insert statement is executed for each record. Set this property to true to maximize speed.
CommitFrequency
T
Controls how often data is committed to the database. The default is zero, which means that data is committed at the end of the transformation, allowing rollback on error. This is the slowest setting. When doing large transformations, this is not practical as it may produce a significant number of transaction log entries. Setting the CommitFrequency to some nonzero value tells the connector to do a database commit after inserting or updating the specified number of records. This keeps the transaction log from getting too large but limits the restartability of the transformation.
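The effect of CommitFrequency can be illustrated with a DB-API style loop. This Python sketch (using SQLite as a stand-in database, with assumed table and property names) commits after every N inserts and once more at the end:

```python
import sqlite3

def load_records(records, connection, commit_frequency=0):
    """Insert records, committing every commit_frequency rows;
    0 means a single commit at the end (the default behavior)."""
    cursor = connection.cursor()
    for i, record in enumerate(records, start=1):
        cursor.execute("INSERT INTO t VALUES (?)", (record,))
        if commit_frequency and i % commit_frequency == 0:
            connection.commit()
    connection.commit()  # final commit covers any remainder

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (v)")
load_records(["a", "b", "c"], conn, commit_frequency=2)
print(conn.execute("SELECT COUNT(*) FROM t").fetchone()[0])  # 3
```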
ConstraintDDL
T
Specify additional SQL data definition language statements to be executed after their target table is created. This is similar to support provided for SQL pass-through in the SQL import connectors. Each line must be a valid ODBC DDL statement. This property has no default. For an example, see ConstraintDDL Example.
CursorType
S
Type of cursor to use for retrieving records from the source table. Available options are forward only, static, and dynamic. The default setting is forward only. For more details, see Cursors.
DriverCompletion
ST
Allows you to control the driver prompt for information.
The available options:
prompt - prompts the user for all connection information.
complete (default) - prompts the user only for information missing from the connection string.
complete required - prompts the user only for information required to complete the connection.
no prompt - does not prompt the user for any information.
DriverOptions
ST
Specify valid ODBC connection string options. There is no default.
Encoding
ST
Type of encoding to use with source and target files.
IdentifierQuotes
ST
All databases have what are called quoted identifiers. You use quoted identifiers to make SQL statements parseable and to distinguish between columns and character data in SQL statements. For example, Oracle uses double quotes for column and table names in SQL statements and single quotes for character data. In a SQL statement, enclose identifiers that contain special characters or that match keywords in identifier quote characters (such identifiers are also known as delimited identifiers in SQL-92).
For example, the Accounts Receivable identifier is quoted in the following SELECT statement:
SELECT * FROM "Accounts Receivable"
If you do not use identifier quotes, the parser assumes there are two tables, Accounts and Receivable, and returns a syntax error because they are not separated by a comma. The integration platform provides four options for IdentifierQuotes: Default, None, double quotes ("), and a single quote (').
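The effect of the IdentifierQuotes setting can be illustrated with a small helper; the function name and signature here are illustrative, not part of the connector:

```python
def quote_identifier(name, identifier_quotes='"'):
    """Wrap an identifier in the configured quote character.

    With no quoting (None), a name containing a space would be parsed
    as two separate identifiers.
    """
    if identifier_quotes is None:
        return name
    return f"{identifier_quotes}{name}{identifier_quotes}"

# "Accounts Receivable" must be quoted so the parser sees one table name:
stmt = f"SELECT * FROM {quote_identifier('Accounts Receivable')}"
# stmt == 'SELECT * FROM "Accounts Receivable"'
```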
MaxDataLength
ST
Specifies the maximum data length for long data types, that is, the maximum number of characters to write to a field. Default is 1 MB. Reset this number based on your available memory and target requirements.
When this connector requests the column field size for these data types, it checks for a returned value greater than the MaxDataLength value. If the value is greater, the MaxDataLength value is used.
Some ODBC drivers have maximum data length limitations. If you choose an ODBC source or target connector and the default setting is not 1 MB, the integration platform sets the value for that particular ODBC driver. Under those conditions, do not set the MaxDataLength property to a higher value.
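The size check described above amounts to clamping the column size reported by the driver; the following is a sketch of that rule, not the connector's actual code:

```python
# Sketch of the MaxDataLength clamp for long data types.
ONE_MB = 1 * 1024 * 1024  # default MaxDataLength

def effective_field_size(reported_size, max_data_length=ONE_MB):
    """Clamp the column field size returned by the driver.

    If the driver reports a value greater than MaxDataLength,
    the MaxDataLength value is used instead.
    """
    return min(reported_size, max_data_length)

assert effective_field_size(10_000_000) == ONE_MB  # clamped to 1 MB
assert effective_field_size(500) == 500            # small fields unchanged
```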
ModifyDriverOptions
ST
Allows you to store the ODBC connection, modifying the driver options with information returned from the connector. Default is true. If you select false, you are prompted for connection information each time you run the transformation.
PrimaryKey
T
Specify a list of field names used to create a primary key. Field names are delimited by commas. If this property contains one or more field names, these names are included in the SQL CREATE statement when the connector is in Replace mode. This property has no default.
To use this property, your ODBC driver must support integrity enhancement facility (IEF). Only the more advanced ODBC drivers support this.
SystemTables
ST
If set to true, this property allows you to see all tables in the database that were created by the DBA. The system table names appear in the table list. Default is false.
Note:  This property is applicable only if the user is logged onto the database as the database administrator. Only the DBA has access to system tables.
TransactionIsolation
ST
Allows you to specify an isolation level when reading from or writing to a database table with ODBC. The default is serializable.
The ANSI SQL 2 standard defines three specific ways in which serializability of a transaction may be violated: P1 (Dirty Read), P2 (Nonrepeatable Read), and P3 (Phantoms).
The isolation levels:
read_uncommitted – Permits P1, P2, and P3.
read_committed – Permits P2 and P3. Does not permit P1.
repeatable_read – Permits P3. Does not permit P1 and P2.
serializable – Does not permit P1, P2, or P3.
For further details about TransactionIsolation levels, refer to IBM DB2 database ODBC documentation.
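The level-to-phenomena mapping above can be restated as data; the dictionary below is purely illustrative, not a connector API:

```python
# Which serializability violations each isolation level permits
# (P1 = dirty read, P2 = nonrepeatable read, P3 = phantoms).
PERMITTED = {
    "read_uncommitted": {"P1", "P2", "P3"},
    "read_committed":   {"P2", "P3"},
    "repeatable_read":  {"P3"},
    "serializable":     set(),
}

# Each level permits a strict subset of the phenomena of the one above it:
assert (PERMITTED["serializable"] < PERMITTED["repeatable_read"]
        < PERMITTED["read_committed"] < PERMITTED["read_uncommitted"])
```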
Views
ST
If set to true (default), allows you to see table views. View names appear in the table list along with table names.
WhereStmt
S
Provides a pass-through mechanism for SQL connectors that lets advanced users construct the Where clause of the SQL query. It is also an alternative to writing lengthy query statements in the Query Statement box. Consider using this property to instruct the SQL database server to filter data based on a condition before it is sent to the integration platform. Omit the Where keyword when you enter the clause. This property has no default.
Note:  When the source connection is a Select statement, do not apply the WhereStmt. Instead, include the Where clause in your Select statements. This property enables data filtering when you select a table.
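Because the Where keyword is omitted from the property value, the connector effectively prepends it when building the query. A minimal sketch, with a hypothetical function name:

```python
def build_query(table, where_stmt=None):
    """Append the WhereStmt property to the generated SELECT.

    The clause is entered WITHOUT the leading 'Where' keyword;
    it is added here when the query is assembled.
    """
    query = f"SELECT * FROM {table}"
    if where_stmt:
        query += f" WHERE {where_stmt}"
    return query

q = build_query("Orders", where_stmt="amount > 1000 AND region = 'WEST'")
# q == "SELECT * FROM Orders WHERE amount > 1000 AND region = 'WEST'"
```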
UseCursors
T
The UseCursors property allows you to turn cursor support on or off. The default is false. If set to true and the specified ODBC driver does not support cursor inserts, the integration platform uses the SQL INSERT mode of adding records.
For exports, cursor support is supposed to enhance the performance of inserting records. This appears to be the case for desktop databases. For database servers, there is no noticeable change in insert speed; they seem to execute prepared queries about as quickly as they handle cursor inserts.
Another complication of cursor inserts is that some drivers require that the target table be indexed, otherwise positioned updates (cursors) are not allowed. Two additional properties in the ODBC export connector address this issue: PrimaryKey and ConstraintDDL.
ArraySize
T
Determines the number of rows to be sent to the server at one time. The default value is 1, meaning each row is individually sent to the server. Larger values will buffer multiple rows and send them all at once. While this improves the speed, it affects error reporting (a server error won't be detected/reported until the next batch of records is sent to the server).
The maximum value allowed for this property is 100000. While the connector allows the value to be set that high, many drivers have lower limits. The connector will log a message indicating if the driver is forcing a lower value for the array size. In addition, the connector does not support arrays when there is a LOB-type field in the table, or when the (maximum) length of a character-type field is longer than 32767 characters. In these cases, a message will be logged indicating the array size has been reduced to 1.
Due to the way the connector attempts to support older drivers, the array support requires either BulkOperations be true (which means UseCursors must also be true), or both must be false. The condition where BulkOperations is false and UseCursors is true will cause the array size to be reset to 1, with a message logged indicating this condition occurred.
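The constraints above can be summarized in one function. This is a restatement of the documented rules for illustration, not connector source code:

```python
MAX_ARRAY_SIZE = 100_000  # documented maximum for the ArraySize property

def effective_array_size(requested, *, bulk_operations, use_cursors,
                         has_lob=False, max_char_len=0, driver_limit=None):
    """Apply the documented ArraySize rules; return the value actually used."""
    size = min(requested, MAX_ARRAY_SIZE)
    if driver_limit is not None:
        size = min(size, driver_limit)   # driver may force a lower limit
    if has_lob or max_char_len > 32767:
        return 1                         # arrays unsupported for LOB/long char
    if use_cursors and not bulk_operations:
        return 1                         # unsupported property combination
    return size

# BulkOperations true (with UseCursors true) keeps the requested size:
assert effective_array_size(500, bulk_operations=True, use_cursors=True) == 500
# UseCursors true without BulkOperations resets the size to 1:
assert effective_array_size(500, bulk_operations=False, use_cursors=True) == 1
# A LOB field in the table also resets the size to 1:
assert effective_array_size(500, bulk_operations=True, use_cursors=True,
                            has_lob=True) == 1
```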
ConstraintDDL Example
In these example statements, we create two indices on the table called "mytable". The first index does not allow duplicates and the index values are stored in ascending order. The second index is a compound index on fields Field2 and Field3.
CREATE UNIQUE INDEX index1 ON mytable (Field1 ASC)
CREATE INDEX index2 ON mytable (Field2, Field3)
ConstraintDDL is executed only if replace mode is used for the target. Any errors are written to the error and event log file. An error during transformation opens the transformation error dialog box; to ignore the DDL errors, continue the transformation.
ConstraintDDL also supports an escaping mechanism to specify DDL in the native SQL of the DBMS. Any statement preceded by an "@" is sent to the DBMS.
The following is a DDL statement for creating a primary key for the table mytable.
@CREATE INDEX pk_mytable ON mytable (Field1, Field2) WITH PRIMARY
Some ODBC drivers do not support the SQL extensions needed to create a primary key with the ODBC variant of the SQL CREATE statement. To create a primary key in these cases, use native SQL.
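The "@" escape amounts to routing each line to one of two paths: unprefixed statements go through ODBC SQL, while "@"-prefixed statements are sent to the DBMS verbatim. A sketch of that split, with hypothetical function and variable names:

```python
def split_constraint_ddl(constraint_ddl):
    """Separate ConstraintDDL lines into ODBC-level statements and
    native statements (those escaped with a leading '@')."""
    odbc_stmts, native_stmts = [], []
    for line in constraint_ddl.splitlines():
        line = line.strip()
        if not line:
            continue
        if line.startswith("@"):
            native_stmts.append(line[1:])  # strip escape; send as-is to DBMS
        else:
            odbc_stmts.append(line)
    return odbc_stmts, native_stmts

odbc, native = split_constraint_ddl(
    "CREATE UNIQUE INDEX index1 ON mytable (Field1 ASC)\n"
    "@CREATE INDEX pk_mytable ON mytable (Field1, Field2) WITH PRIMARY"
)
# odbc   == ["CREATE UNIQUE INDEX index1 ON mytable (Field1 ASC)"]
# native == ["CREATE INDEX pk_mytable ON mytable (Field1, Field2) WITH PRIMARY"]
```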
Data Types
The following data types are supported:
bigint
blob
char
character
char () for bit data
date
datetime
decimal
float
integer
long raw
long varchar
number
numeric
raw
real
rowid
smallint
time
timestamp
varchar
varchar () for bit data
varchar2
IBM DB2 9.7 Multimode
DB2 can be a source or target connection either directly or through an ODBC driver. You still need to set up an ODBC data source to make the connection. For more information, see the procedure under ODBC 3.5. Also see IBM DB2 9.7.
The integration platform allows for concurrent writes to multiple target tables. Multimode connections allow you to perform table drop and table insert operations directly on your target database.
This connector sets field width in bytes. What actually varies is the number of characters that fit into a given field. For more details, see Determining Field Width in Characters or Bytes.
Connector-Specific Notes
In multimode targets, modifications to column names, data types, and sizes are not permitted.
Property Options
You can set the following target properties for IBM DB2 9.7 Multimode connections.
Property
Use
AutoCommit
Allows you to automatically commit changes as they are made by each SQL statement, instead of waiting until the end of the transaction. If AutoCommit is set to true, you cannot roll back changes once they have been made. Default is false.
DriverCompletion
Allows you to control the driver prompt for information.
The available options:
prompt - asks the user for all information.
complete (default) - asks the user for information they forgot to enter.
complete required - asks the user only for information required to complete the connection.
no prompt - does not ask the user for any information.
DriverOptions
Specify valid ODBC connect string options. There is no default.
Encoding
Allows you to select the type of encoding used with your target file.
IdentifierQuotes
All databases have what are called quoted identifiers. Quoted identifiers make SQL statements parseable and distinguish between columns and character data. For example, Oracle uses double quotes for column and table names in SQL statements and single quotes for character data. In a SQL statement, enclose identifiers that contain special characters or match keywords in identifier quote characters; these are known as delimited identifiers in SQL-92.
For example, the Accounts Receivable identifier is quoted in the following SELECT statement:
SELECT * FROM "Accounts Receivable"
If you do not use identifier quotes, the parser assumes there are two tables, Accounts and Receivable, and returns a syntax error because they are not separated by a comma. The integration platform provides four options for IdentifierQuotes: Default, None, double quotes ("), and a single quote (').
MaxDataLength
Specifies the maximum data length for long data types, that is, the maximum number of characters to write to a field. Default is 1 MB. Reset this number based on your available memory and target requirements.
Some ODBC drivers have maximum data length limitations. If you choose an ODBC source or target connector and the default setting is not 1 MB, the integration platform sets the value for that particular ODBC driver. Under those conditions, do not set the MaxDataLength property to a higher value.
ModifyDriverOptions
Allows you to store the ODBC connection. Default is true. If you select false, you are prompted for connection information each time you run the transformation.
SQL Log
SQL log file. The default is sql.log in the default installation directory. To use a different log, browse to the file, or enter the path and file name.
SQL Output
Allows you to specify bound or unbound mode and whether or not to write SQL statements to a SQL log. Keep in mind that bound mode is faster, as bind variables are used.
Select from the following:
Target Only (default) - Use bound mode, which uses bind variables. SQL statements are sent to the target only.
Target Only (Unbound Mode) - Use unbound mode. Does not use bind variables and sends the literal SQL statement to the database engine. SQL statements are sent to the target and not to the SQL log file specified in the SQL Log property.
Target and SQL Log - Sends SQL statements to the target and the SQL log file specified in the SQL Log property.
SQL Log Only - Sends SQL statements only to the SQL log file specified in the SQL Log property.
SystemTables
If set to true, this property allows you to see all tables in the database that were created by the DBA. The system table names appear in the table list. Default is false.
Note:  This property is applicable only if the user is logged onto the database as the database administrator. Only the DBA has access to system tables.
TransactionIsolation
Allows you to specify an isolation level to use when reading from or writing to a database table with ODBC. The default is serializable.
The ANSI SQL 2 standard defines three specific ways in which serializability of a transaction may be violated: P1 (Dirty Read), P2 (Nonrepeatable Read), and P3 (Phantoms).
The isolation levels:
read_uncommitted – Permits P1, P2, and P3.
read_committed – Permits P2 and P3. Does not permit P1.
repeatable_read – Permits P3. Does not permit P1 and P2.
serializable – Does not permit P1, P2, or P3.
For further details about TransactionIsolation levels, refer to the IBM DB2 database ODBC documentation.
UpdateNullFields
Null values are sent to the database when inserting or updating records. The default is true.
If you select false, null values are not sent to the database when you insert or update records. This forces the connector to operate in unbound mode, which may cause slower performance.
Views
If set to true (default), allows you to see table views. View names appear in the table list along with table names.
Data Types
The following data types are supported for data fields:
bigint
blob
char
character
char () for bit data
date
datetime
decimal
float
integer
long raw
long varchar
number
numeric
raw
real
rowid
smallint
time
timestamp
varchar
varchar () for bit data
varchar2
IBM DB2 10
IBM DB2 10 is a source or target connector that connects directly to an IBM DB2 10 database, or connects with an ODBC 3.5 driver. Version 10 is supported.
This connector specifies field width in number of bytes, not characters. For details, see Determining Field Width in Characters or Bytes.
See Also
Installing an ODBC Driver
ODBC Connectivity Tips
Connectivity Pointers
Make sure the IBM DB2 10 client driver is installed and configured properly on your workstation.
IBM provides two drivers:
libdb2.so when SQLLEN is 32 bits
libdb2o.so when SQLLEN is 64 bits
The Linux ODBC version of the connectors treats SQLLEN as 64 bits. Therefore, the odbc.ini file must use the libdb2o.so driver.
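For example, a data source entry in odbc.ini might look like the following. The DSN name, database name, and driver path are placeholders; adjust them to your DB2 client installation:

```ini
[DB2_10]
Description = IBM DB2 10 data source (64-bit SQLLEN)
; Example path only - point this at the libdb2o.so from your DB2 client
Driver      = /opt/ibm/db2/lib64/libdb2o.so
Database    = SAMPLE
```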
If possible, create query statements using the source database application, so that syntax and validity can be checked by the native application. You can then copy the query statement and paste it in the Query statement box.
Connector-Specific Notes
This connector does not support Unicode. Use the ODBC 3.5 connector to enable Unicode support in an IBM DB2 10 database.
Property Options
You can set the following source (S) and target (T) properties in IBM DB2 10 connections.
Property
ST
Description
AutoCommit
T
Allows you to automatically commit changes as they are made by each SQL statement, instead of waiting until the end of the transaction. If AutoCommit is set to true, you cannot roll back changes once they have been made. Default is false.
BulkOperations
T
Specify whether to instruct the integration platform to use bulk operations for faster inserts. If set to true, a bulk add is executed instead of an insert statement for each record, which maximizes speed. The default is false, the slower setting.
CommitFrequency
T
Controls how often data is committed to the database. The default is zero, which means that data is committed at the end of the transformation, allowing rollback on error. This is the slowest setting. When doing large transformations, this is not practical as it may produce a significant number of transaction log entries. Setting the CommitFrequency to some nonzero value tells the connector to do a database commit after inserting or updating the specified number of records. This keeps the transaction log from getting too large but limits the restartability of the transformation.
ConstraintDDL
T
Specify additional SQL data definition language statements to be executed after the target table is created. This is similar to the support provided for SQL pass-through in the SQL import connectors. Each line must be a valid ODBC DDL statement. This property has no default. For an example, see ConstraintDDL Example.
CursorType
S
Type of cursor to use for retrieving records from the source table. Available options are forward only, static, and dynamic. The default setting is forward only. For more details, see Cursors.
DriverCompletion
ST
Allows you to control the driver prompt for information.
The available options:
prompt - asks the user for all information.
complete (default) - asks the user for information they forgot to enter.
complete required - asks the user only for information required to complete the connection.
no prompt - does not ask the user for any information.
DriverOptions
ST
Specify valid ODBC connect string options. There is no default.
Encoding
ST
Type of encoding to use with source and target files.
IdentifierQuotes
ST
All databases have what are called quoted identifiers. Quoted identifiers make SQL statements parseable and distinguish between columns and character data. For example, Oracle uses double quotes for column and table names in SQL statements and single quotes for character data. In a SQL statement, enclose identifiers that contain special characters or match keywords in identifier quote characters; these are known as delimited identifiers in SQL-92.
For example, the Accounts Receivable identifier is quoted in the following SELECT statement:
SELECT * FROM "Accounts Receivable"
If you do not use identifier quotes, the parser assumes there are two tables, Accounts and Receivable, and returns a syntax error because they are not separated by a comma. The integration platform provides four options for IdentifierQuotes: Default, None, double quotes ("), and a single quote (').
MaxDataLength
ST
Specifies the maximum data length for long data types, that is, the maximum number of characters to write to a field. Default is 1 MB. Reset this number based on your available memory and target requirements.
When this connector requests the column field size for these data types, it checks for a returned value greater than the MaxDataLength value. If the value is greater, the MaxDataLength value is used.
Some ODBC drivers have maximum data length limitations. If you choose an ODBC source or target connector and the default setting is not 1 MB, the integration platform sets the value for that particular ODBC driver. Under those conditions, do not set the MaxDataLength property to a higher value.
ModifyDriverOptions
ST
Allows you to store the ODBC connection. Default is true. If you select false, you are prompted for connection information each time you run the transformation.
PrimaryKey
T
Specify a list of field names used to create a primary key. Field names are delimited by commas. If this property contains one or more field names, these names are included in the SQL CREATE statement when the connector is in replace mode. This property has no default.
To use this property, your ODBC driver must support integrity enhancement facility (IEF). Only the more advanced ODBC drivers support this.
SystemTables
ST
If set to true, this property allows you to see all tables in the database that were created by the DBA. The system table names appear in the table list. Default is false.
Note:  This property is applicable only if the user is logged onto the database as the database administrator. Only the DBA has access to system tables.
TransactionIsolation
ST
Allows you to specify an isolation level to use when reading from or writing to a database table with ODBC. The default is serializable.
The ANSI SQL 2 standard defines three specific ways in which serializability of a transaction may be violated: P1 (Dirty Read), P2 (Nonrepeatable Read), and P3 (Phantoms).
The isolation levels:
read_uncommitted – Permits P1, P2, and P3.
read_committed – Permits P2 and P3. Does not permit P1.
repeatable_read – Permits P3. Does not permit P1 and P2.
serializable – Does not permit P1, P2, or P3.
For further details about TransactionIsolation levels, refer to IBM DB2 Universal Database ODBC documentation.
Views
ST
If set to true (default), allows you to see table views. View names appear in the table list along with table names.
WhereStmt
S
Provides a pass-through mechanism for SQL connectors that lets advanced users construct the Where clause of the SQL query. It is also an alternative to writing lengthy query statements in the Query Statement box. Consider using this property to instruct the SQL database server to filter data based on a condition before it is sent to the integration platform. Omit the Where keyword when you enter the clause. This property has no default.
Note:  When the source connection is a Select statement, do not apply the WhereStmt. Instead, include the Where clause in your Select statements. This property enables data filtering when you select a table.
ArraySize
T
Determines the number of rows to be sent to the server at one time. The default value is 1, meaning each row is individually sent to the server. Larger values will buffer multiple rows and send them all at once. While this improves the speed, it affects error reporting (a server error won't be detected/reported until the next batch of records is sent to the server).
The maximum value allowed for this property is 100000. While the connector allows the value to be set that high, many drivers have lower limits. The connector will log a message indicating if the driver is forcing a lower value for the array size. In addition, the connector does not support arrays when there is a LOB-type field in the table, or when the (maximum) length of a character-type field is longer than 32767 characters. In these cases, a message will be logged indicating the array size has been reduced to 1.
Due to the way the connector attempts to support older drivers, the array support requires either BulkOperations be true (which means UseCursors must also be true), or both must be false. The condition where BulkOperations is false and UseCursors is true will cause the array size to be reset to 1, with a message logged indicating this condition occurred.
ConstraintDDL Example
In these example statements, we create two indices on the table called "mytable". The first index does not allow duplicates and the index values are stored in ascending order. The second index is a compound index on fields Field2 and Field3.
CREATE UNIQUE INDEX index1 ON mytable (Field1 ASC)
CREATE INDEX index2 ON mytable (Field2, Field3)
ConstraintDDL is executed only if replace mode is used for the target. Any errors are written to the error and event log file. An error during transformation opens the transformation error dialog box; to ignore the DDL errors, continue the transformation.
ConstraintDDL also supports an escaping mechanism to specify DDL in the native SQL of the DBMS. Any statement preceded by an "@" is sent to the DBMS.
The following is a DDL statement for creating a primary key for the table mytable.
@CREATE INDEX pk_mytable ON mytable (Field1, Field2) WITH PRIMARY
Some ODBC drivers do not support the SQL extensions needed to create a primary key with the ODBC variant of the SQL CREATE statement. To create a primary key in these cases, use native SQL.
Data Types
The following data types are supported:
bigint
blob
char
character
char () for bit data
date
datetime
decimal
float
integer
long raw
long varchar
number
numeric
raw
real
rowid
smallint
time
timestamp
varchar
varchar () for bit data
varchar2
IBM DB2 10 Multimode
DB2 can be a source or target connection either directly or through an ODBC driver. You still need to set up an ODBC data source to make the connection. For more information, see the procedure under ODBC 3.5. Also see IBM DB2 10.
The integration platform allows for concurrent writes to multiple target tables. Multimode connections allow you to perform table drop and table insert operations directly on your target database.
This connector sets field width in bytes. What actually varies is the number of characters that fit into a given field. For more details, see Determining Field Width in Characters or Bytes.
Connector-Specific Notes
In multimode targets, modifications to column names, data types, and sizes are not permitted.
Property Options
You can set the following target properties for IBM DB2 10 Multimode connections.
Property
Use
AutoCommit
Allows you to automatically commit changes as they are made by each SQL statement, instead of waiting until the end of the transaction. If AutoCommit is set to true, you cannot roll back changes once they have been made. Default is false.
DriverCompletion
Allows you to control the driver prompt for information.
The available options:
prompt - asks the user for all information.
complete (default) - asks the user for information they forgot to enter.
complete required - asks the user only for information required to complete the connection.
no prompt - does not ask the user for any information.
DriverOptions
Specify valid ODBC connect string options. There is no default.
Encoding
Allows you to select the type of encoding used with your target file.
IdentifierQuotes
All databases have what are called quoted identifiers. Quoted identifiers make SQL statements parseable and distinguish between columns and character data. For example, Oracle uses double quotes for column and table names in SQL statements and single quotes for character data. In a SQL statement, enclose identifiers that contain special characters or match keywords in identifier quote characters; these are known as delimited identifiers in SQL-92.
For example, the Accounts Receivable identifier is quoted in the following SELECT statement:
SELECT * FROM "Accounts Receivable"
If you do not use identifier quotes, the parser assumes there are two tables, Accounts and Receivable, and returns a syntax error because they are not separated by a comma. The integration platform provides four options for IdentifierQuotes: Default, None, double quotes ("), and a single quote (').
MaxDataLength
Specifies the maximum data length for long data types, that is, the maximum number of characters to write to a field. Default is 1 MB. Reset this number based on your available memory and target requirements.
Some ODBC drivers have maximum data length limitations. If you choose an ODBC source or target connector and the default setting is not 1 MB, the integration platform sets the value for that particular ODBC driver. Under those conditions, do not set the MaxDataLength property to a higher value.
ModifyDriverOptions
Allows you to store the ODBC connection. Default is true. If you select false, you are prompted for connection information each time you run the transformation.
SQL Log
The default is sql.log in the default installation directory. To use a different log, browse to the file, or enter the path and file name.
SQL Output
Allows you to specify bound or unbound mode and whether or not to write SQL statements to a SQL log. Keep in mind that bound mode is faster, as bind variables are used.
Select from the following:
Target Only (default) - Use bound mode, which uses bind variables. SQL statements are sent to the target only.
Target Only (Unbound Mode) - Use unbound mode. Does not use bind variables and sends the literal SQL statement to the database engine. SQL statements are sent to the target and not to the SQL log file specified in the SQL Log property.
Target and SQL Log - Sends SQL statements to the target and the SQL log file specified in the SQL Log property.
SQL Log Only - Sends SQL statements only to the SQL log file specified in the SQL Log property.
SystemTables
If set to true, this property allows you to see all tables created by the DBA in the database. The system table names appear in the table list. Default is false.
Note:  This property is applicable only if the user is logged onto the database as the database administrator. Only the DBA has access to system tables.
TransactionIsolation
Allows you to specify an isolation level to use when reading from or writing to a database table with ODBC. The default is serializable.
The ANSI SQL 2 standard defines three specific ways in which serializability of a transaction may be violated: P1 (Dirty Read), P2 (Nonrepeatable Read), and P3 (Phantoms).
The isolation levels:
read_uncommitted – Permits P1, P2, and P3.
read_committed – Permits P2 and P3. Does not permit P1.
repeatable_read – Permits P3. Does not permit P1 and P2.
serializable – Does not permit P1, P2, or P3.
For further details about TransactionIsolation levels, refer to the IBM DB2 database ODBC documentation.
UpdateNullFields
Null values are sent to the database when inserting or updating records. The default is true.
If you select false, null values are not sent to the database when you insert or update records. This forces the connector to operate in unbound mode, which may cause slower performance.
Views
If set to true (default), allows you to see table views. View names appear in the table list along with table names.
Data Types
The following data types are supported for data fields:
bigint
blob
char
character
char () for bit data
date
datetime
decimal
float
integer
long raw
long varchar
number
numeric
raw
real
rowid
smallint
time
timestamp
varchar
varchar () for bit data
varchar2
IBM DB2 Loader
The integration platform can write DB2 Loader files. The files can be imported quickly into DB2 using the DB2 Loader utility.
DB2 Loader is available as a source connector. It appears on the source connector list primarily for reading and verifying Loader files created by the integration platform and is useful when you do not have the application to check the files.
This connector sets field width in bytes. What actually varies is the number of characters that fit into a given field. For more details, see Determining Field Width in Characters or Bytes.
Connector-Specific Notes
No known limitations.
Property Options
You can set the following source (S) and target (T) properties.
Specifying source properties is important for you if:
Your source connector requires you to define Source Properties.
Upon viewing your source data, you learn that the field names and data appear incorrectly, either in the wrong record, field, or containing the wrong data.
Property
ST
Description
MessageFile
T
You can designate a message file to write errors to when loading a database. To select the message file, navigate to the drive, directory path, and file. Then click OK.
Null Indicators
T
Generate Null indicators. The default is true.
OtherOptions
T
Use the OtherOptions Property to enter additional text options. For the OtherOptions property, enter options in the text box, and click OK.
SaveCount
T
The SaveCount property is set to prevent total loss of data from a failed load operation. Set SaveCount to create a consistency point after a specified number of rows are loaded; if the LOAD operation then fails, the rows saved before the failure are not rolled back. The benefit of this parameter is realized only in a recovery situation, where you can restart the load from a consistency point. The default for SaveCount is zero, indicating no consistency points.
System Type
T
Type of DB2 system. The default is "other", which includes Linux, DOS, and Windows. The alternative is MVS (EBCDIC) DB2.
WarnCount
T
Use WarnCount to set the number of warnings the utility can return before a load operation is forced to terminate. The load operation stops after the WarnCount number is reached. By using WarnCount, you can fix problems that occur early in the process without waiting for the entire load to complete. The default is zero.
Control File
ST
File containing load commands. There is no default.
RecordSeparator
ST
A DB2 Loader target file is presumed to have a carriage return-line feed (CR-LF) between records (default setting). To specify some other characters for a record separator, click Record Separator and select the desired record separator in the list. The choices are carriage return-line feed (CR-LF), line feed (LF), and no record separator (None).
Note:  When the unload file is from a mainframe system, the data is in EBCDIC and there is no record separator. In these cases, the record separator is saved and loaded as None. When the unload file is not from a mainframe system, the record separator property can be set and saves and loads correctly.
CodePage
ST
This translation table determines which encoding to use for reading and writing data. The default is ANSI, the standard in the US.
Data Types
The only data type available is Text.
Length
These are field lengths in your data. If you need to change field lengths, reset them in the schema.
Caution!  A field length shorter than the default may truncate data.
Target Schema
The Target Schema icon is active in the toolbar when DB2 Loader is the target connector.
The following options are applicable for DB2 Loader files.
Target Field Name
These are field names that appear in your target data table.
You may enter a name for each field.
Data Types
The only data type available is Text.
Length
These are field lengths in your data. If you need to change field lengths, reset them in the schema.
Caution!  A field length shorter than the default may truncate data.
IBM DB2 Universal Database 9.5
IBM DB2 Universal Database is a source or target connector that connects directly to IBM DB2 Universal Database, or connects with an ODBC 3.5 driver. Version 9.5 is supported.
This property is set in number of bytes, not characters. For details, see Determining Field Width in Characters or Bytes.
See Also
Installing an ODBC Driver
ODBC Connectivity Tips
Connectivity Pointers
Make sure the IBM DB2 9.5 client driver is installed and configured properly on your workstation.
If possible, create query statements using the source database application, so that syntax and validity can be checked by the native application. You can then copy the query statement and paste it in the Query Statement box.
Connector-Specific Notes
This connector does not support Unicode. Use the ODBC 3.5 connector to enable Unicode support in an IBM DB2 Universal Database.
Property Options
You can set the following source (S) and target (T) properties in IBM DB2 Universal Database connections.
Property
ST
Version
Use
AutoCommit
T
9.5
Allows you to automatically commit changes as they are made by each SQL statement, instead of waiting until the end of the transaction. If AutoCommit is set to true, you cannot roll back changes once they have been made. Default is false.
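To illustrate the AutoCommit behavior, here is a minimal sketch using Python's built-in SQLite module as a stand-in for the DB2 connection: in autocommit mode, each statement is committed as it executes, so a later rollback has nothing to undo.

```python
import sqlite3

# Illustration of the AutoCommit idea (SQLite stands in for DB2): with
# autocommit on, each statement is committed as it executes, so a later
# rollback has nothing to undo.
conn = sqlite3.connect(":memory:", isolation_level=None)  # autocommit mode
conn.execute("CREATE TABLE t (id INTEGER)")
conn.execute("INSERT INTO t VALUES (1)")
conn.rollback()  # no effect: the insert was already committed
count = conn.execute("SELECT COUNT(*) FROM t").fetchone()[0]
print(count)  # 1
```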
BulkOperations
T
9.5
Use bulk operations for faster insert. This property determines if an insert statement is executed for each record or a bulk add is executed for each record. The default is false, the slower setting. If you want to maximize speed and instruct the integration platform to use a bulk add, change this setting to true.
CommitFrequency
T
9.5
Controls how often data is committed to the database. Default is zero, meaning that data is committed at the end of the transformation, allowing rollback on error. This is the slowest setting. When doing large transformations, this is not practical as it may produce too many transaction log entries. Setting the CommitFrequency to some nonzero value tells the connector to do a database commit after inserting or updating the specified number of records. This keeps the transaction log from getting too large but limits the restartability of the transformation.
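The commit-every-N-records behavior can be sketched as follows, again using SQLite rather than the connector itself; the value 100 is hypothetical, and 0 would mean "commit only at the end".

```python
import sqlite3

# Sketch of the CommitFrequency idea: commit after every N inserted rows
# instead of once at the end of the transformation.
COMMIT_FREQUENCY = 100  # hypothetical value

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE target (id INTEGER, name TEXT)")

pending = 0
for i in range(250):
    conn.execute("INSERT INTO target VALUES (?, ?)", (i, "row%d" % i))
    pending += 1
    if COMMIT_FREQUENCY and pending >= COMMIT_FREQUENCY:
        conn.commit()  # commit point: keeps the transaction log small
        pending = 0
conn.commit()          # final commit for the remaining rows
count = conn.execute("SELECT COUNT(*) FROM target").fetchone()[0]
print(count)  # 250
```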
ConstraintDDL
ST
9.5
Specify additional SQL data definition language statements to be executed after their target table is created. This is similar to support provided for SQL pass-through in the SQL import connectors. Each line must be a valid ODBC DDL statement. This property has no default. For an example, see ConstraintDDL Example.
CursorType
S
9.5
Type of cursor to use for retrieving records from the source table. Available options are Forward Only, Static and Dynamic. The default setting is Forward Only. For more details, see CursorType.
DriverCompletion
ST
9.5
Allows you to control how the driver prompts for connection information. The options are Prompt, Complete (default), Complete Required, and No Prompt. The Prompt option asks the user for all connection information. The Complete option asks only for information that is missing. The Complete Required option asks only for information required to complete the connection. The No Prompt option does not ask the user for any information.
DriverOptions
ST
9.5
Enter any valid ODBC connect string options here. There is no default.
Encoding
ST
9.5
Type of encoding to use with source and target files.
IdentifierQuotes
ST
9.5
All databases have what are called quoted identifiers. You use quoted identifiers to make a SQL statement parseable and to distinguish between columns and character data. For example, Oracle uses double quotes for column and table names in SQL statements and single quotes for character data. In a SQL statement, enclose identifiers that contain special characters or that match keywords in identifier quote characters (such identifiers are also known as delimited identifiers in SQL-92).
For example, the Accounts Receivable identifier is quoted in the following SELECT statement:
SELECT * FROM "Accounts Receivable"
If you do not use identifier quotes, the parser assumes there are two tables, Accounts and Receivable, and returns a syntax error that they are not separated by a comma. The integration platform provides four options for IdentifierQuotes: Default, None, ", and '.
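The quoting rule can be demonstrated with a short sketch using SQLite, which also treats the double quote as the identifier quote character; the table name is taken from the example above.

```python
import sqlite3

# Sketch of the IdentifierQuotes idea: an identifier containing a space must
# be wrapped in the identifier quote character (here, the double quote),
# while character data uses single quotes.
conn = sqlite3.connect(":memory:")
conn.execute('CREATE TABLE "Accounts Receivable" (id INTEGER, note TEXT)')
conn.execute("""INSERT INTO "Accounts Receivable" VALUES (42, 'open')""")
row = conn.execute('SELECT * FROM "Accounts Receivable"').fetchone()
print(row)  # (42, 'open')
```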
MaxDataLength
ST
9.5
Specifies the maximum number of characters to write to a field, that is, the maximum data length for long data types. Default is 1 MB. Reset this number based on your available memory and target requirements.
When this connector requests the column field size for these data types, it checks for a returned value greater than the MaxDataLength value. If the value is greater, the MaxDataLength value is used.
Some ODBC drivers have maximum data length limitations. If you choose an ODBC source or target connector and the default setting is not 1 MB, the integration platform sets the value for that particular ODBC driver. Under those conditions, do not set the MaxDataLength property to a higher value.
Maximum Array Size
T
9.5
Maximum number of records fetched or inserted with each cursor operation. The default is 1.
ModifyDriverOptions
ST
9.5
Allows you to store the ODBC connection. Default is true. If you select false, you are prompted for connection information each time you run the transformation.
PrimaryKey
T
9.5
Specify a list of field names used to create a primary key. Field names are delimited by commas. If this property contains one or more field names, these names are included in the SQL CREATE statement when the connector is in replace mode. This property has no default.
To use this property, your ODBC driver must support integrity enhancement facility (IEF). Only the more advanced ODBC drivers support this.
SystemTables
ST
9.5
If set to true, this property allows you to see all tables created by the DBA in the database. The system table names appear in the table list. Default is false.
Note:  This property is applicable only if the user is logged onto the database as the database administrator. Only the DBA has access to system tables.
TransactionIsolation
ST
9.5
Allows you to specify any one of four isolation levels when reading from or writing to a database table with ODBC. The default is Serializable.
The ANSI SQL 2 standard defines three specific ways in which serializability of a transaction may be violated: P1 (Dirty Read), P2 (Nonrepeatable Read), and P3 (Phantoms).
The four isolation levels are as follows:
READ_UNCOMMITTED – Permits P1, P2, and P3.
READ_COMMITTED – Permits P2 and P3. Does not permit P1.
REPEATABLE_READ – Permits P3. Does not permit P1 and P2.
SERIALIZABLE – Does not permit P1, P2 or P3.
For further details about TransactionIsolation levels, refer to IBM DB2 Universal Database ODBC documentation.
Views
ST
9.5
If set to true (default), allows you to see views. View names appear in the table list along with table names.
WhereStmt
S
9.5
Provides a pass-through mechanism for SQL connectors where advanced users can construct the Where clause of the SQL query. It is also used as an alternative to writing lengthy query statements in the Query Statement box. Consider using this statement to instruct the SQL database server to filter data based upon a condition before it is sent to the integration platform. Omit Where when you enter the clause. This property has no default.
Note:  When the source connection is a Select statement, do not apply the WhereStmt. Instead, include the Where clause in your Select statement. This property enables data filtering when you select a table.
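The benefit of pushing the filter into the Where clause can be sketched as follows, using SQLite and hypothetical table and column names: the database filters rows before they are returned, instead of every row being fetched and filtered afterward.

```python
import sqlite3

# Sketch of the WhereStmt idea: the server filters rows before sending them.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER, region TEXT)")
conn.executemany("INSERT INTO orders VALUES (?, ?)",
                 [(1, "EAST"), (2, "WEST"), (3, "EAST")])

# The property holds only the clause body -- the word WHERE is omitted.
where_stmt = "region = 'EAST'"
rows = conn.execute("SELECT * FROM orders WHERE " + where_stmt).fetchall()
print(len(rows))  # 2
```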
UseCursors
T
9.5
The UseCursors property allows you to turn cursor support on or off. The default is false. If set to true and the specified ODBC driver does not support cursor inserts, the integration platform uses the SQL INSERT mode of adding records.
For exports, cursor support is supposed to enhance the performance of inserting records. This appears to be the case for desktop databases. For database servers, there is no noticeable change in insert speed, because they execute prepared queries about as quickly as they handle cursor inserts.
Another complication of cursor inserts is that some drivers require that the target table be indexed, otherwise positioned updates (cursors) are not allowed. Two additional properties in the ODBC export connector address this issue: PrimaryKey and ConstraintDDL.
ArraySize
T
9.5
Determines the number of rows to be sent to the server at one time. The default value is 1, meaning each row is sent to the server individually. Larger values buffer multiple rows and send them all at once. While this improves speed, it affects error reporting: a server error is not detected or reported until the next batch of records is sent to the server.
The maximum value allowed for this property is 100000. While the connector allows the value to be set that high, many drivers have lower limits. The connector will log a message indicating if the driver is forcing a lower value for the array size. In addition, the connector does not support arrays when there is a LOB-type field in the table, or when the (maximum) length of a character-type field is longer than 32767 characters. In these cases, a message will be logged indicating the array size has been reduced to 1.
Due to the way the connector attempts to support older drivers, array support requires that either BulkOperations be true (which requires UseCursors to also be true) or that both be false. If BulkOperations is false and UseCursors is true, the array size is reset to 1 and a message is logged indicating that this condition occurred.
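The batching trade-off described above can be sketched with SQLite's `executemany`, which sends a batch of rows in one call; the value 50 is illustrative.

```python
import sqlite3

# Sketch of the ArraySize idea: ArraySize=1 sends one row per statement,
# while a larger value buffers rows and sends them together, trading
# per-row error reporting for speed.
ARRAY_SIZE = 50  # illustrative value

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (id INTEGER)")

rows = [(i,) for i in range(120)]
for start in range(0, len(rows), ARRAY_SIZE):
    # One round trip per batch; an error is reported for the batch as a
    # whole, not for the individual offending row.
    conn.executemany("INSERT INTO t VALUES (?)", rows[start:start + ARRAY_SIZE])
conn.commit()
count = conn.execute("SELECT COUNT(*) FROM t").fetchone()[0]
print(count)  # 120
```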
CursorType
The CursorType option specifies the type of cursor to use for retrieving records from the source table. The choices available are Forward Only, Static and Dynamic. The default setting is Forward Only. For more information, see Cursors.
ConstraintDDL Example
In these example statements, we create two indices on the table called "mytable". The first index does not allow duplicates and the index values are stored in ascending order. The second index is a compound index on fields Field2 and Field3.
CREATE UNIQUE INDEX index1 ON mytable (Field1 ASC)
CREATE INDEX index2 ON mytable (Field2, Field3)
ConstraintDDL is executed only if replace mode is used for the target. If there are errors, the errors are written to the error and event log file, and the transformation error dialog box appears. You can ignore the DDL errors and continue the transformation.
ConstraintDDL also supports an escaping mechanism to specify DDL in the native SQL of the DBMS. Any statement preceded by an "@" is sent to the DBMS.
The following is a DDL statement for creating a primary key for the table mytable.
@CREATE INDEX pk_mytable ON mytable (Field1, Field2) WITH PRIMARY
Some ODBC drivers do not support the SQL extensions needed to create a primary key with the ODBC variant of the SQL CREATE statement. To create a primary key in these cases, use native SQL.
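The execution model for ConstraintDDL can be sketched as follows, using SQLite to stand in for the target database: after the table is created in replace mode, each line of the property is executed as a separate DDL statement. The statements mirror the index examples above.

```python
import sqlite3

# Sketch of the ConstraintDDL idea: run each DDL line after table creation.
constraint_ddl = [
    "CREATE UNIQUE INDEX index1 ON mytable (Field1 ASC)",
    "CREATE INDEX index2 ON mytable (Field2, Field3)",
]

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE mytable (Field1 INTEGER, Field2 TEXT, Field3 TEXT)")
for stmt in constraint_ddl:
    conn.execute(stmt)

indexes = sorted(r[1] for r in conn.execute("PRAGMA index_list('mytable')"))
print(indexes)  # ['index1', 'index2']
```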
Data Types
The following data types are supported:
bigint
blob
char
character
char () for bit data
date
datetime
decimal
float
integer
long raw
long varchar
number
numeric
raw
real
rowid
smallint
time
timestamp
varchar
varchar () for bit data
varchar2
Length
The length of fields stored in your data file.
IBM DB2 Universal Database 9.5 Multimode
DB2 can be a source or target connection either directly or through an ODBC driver. You still need to set up an ODBC data source to make the connection. For more information, see the procedure under ODBC 3.5. Also see IBM DB2 Universal Database 9.5.
The integration platform allows for concurrent writes to multiple target tables. Multimode connections allow you to perform table drop and table insert operations directly on your target database.
This connector sets field width in bytes. What actually varies is the number of characters that fit into a given field. For more details, see Determining Field Width in Characters or Bytes.
Connector-Specific Notes
In multimode targets, modifications to column names, data types, and sizes are not permitted.
Property Options
You can set the following target (T) properties.
Property
ST
Description
AutoCommit
T
Allows you to automatically commit changes as they are made by each SQL statement, instead of waiting until the end of the transaction. If AutoCommit is set to true, you cannot roll back changes once they have been made. Default is false.
CommitFrequency
T
Controls how often data is committed to the database. By default, CommitFrequency is zero, which means that data is committed at the end of the transformation. When doing large transformations, this is not practical as it may fill up the transaction logs. Setting the CommitFrequency to some nonzero value tells the connector to do a database commit after the specified number of records have been inserted or updated in the table. This keeps the transaction log from getting too full, but limits the ability to restart the transformation.
ConstraintDDL
T
Specify additional SQL data definition language statements to be executed after their target table is created. This is similar to support provided for SQL pass-through in the SQL import connectors. Each line must be a valid ODBC DDL statement. This property has no default. For an example, see ConstraintDDL Example.
DriverCompletion
T
Allows you to control the driver prompt for information. The options are Prompt, Complete (default), Complete Required, and No Prompt.
Prompt option: Asks the user for all connection information.
Complete option: Asks only for information that is missing.
Complete Required option: Asks only for information required to complete the connection.
No Prompt option: Does not ask the user for any information.
DriverOptions
T
Enter any valid ODBC connect string options here. There is no default.
IdentifierQuotes
T
All databases have what are called quoted identifiers. You use quoted identifiers to make a SQL statement parseable and to distinguish between columns and character data. For example, Oracle uses double quotes for column and table names in SQL statements and single quotes for character data. In a SQL statement, enclose identifiers that contain special characters or that match keywords in identifier quote characters (such identifiers are also known as delimited identifiers in SQL-92).
For example, the Accounts Receivable identifier is quoted in the following SELECT statement:
SELECT * FROM "Accounts Receivable"
If you do not use identifier quotes, the parser assumes there are two tables, Accounts and Receivable, and returns a syntax error that they are not separated by a comma. The integration platform provides four options for IdentifierQuotes: Default, None, ", and '.
Maximum Array Size
T
Maximum number of records fetched or inserted with each cursor operation. The default is 1.
MaxDataLength
T
Specifies the maximum number of characters to write to a field. The maximum data length for long data types. Default is 1 MB. Reset this number based upon your available memory and target requirements.
Some ODBC drivers have maximum data length limitations. If you choose an ODBC source or target connector and the default setting is not 1 MB, the integration platform sets the value for that particular ODBC driver. Under those conditions, do not set the MaxDataLength property to a higher value.
ModifyDriverOptions
T
Allows you to store the ODBC connection. Default is true. If you select false, you are prompted for connection information each time you run the transformation.
Encoding
T
Allows you to select the type of encoding used with your source and target files.
PrimaryKey
T
Specify a list of field names used to create a primary key. Field names are delimited by commas. If this property contains one or more field names, these names are included in the SQL CREATE statement when the connector is in replace mode. This property has no default.
To use this property, your ODBC driver must support integrity enhancement facility (IEF). Only the more advanced ODBC drivers support this.
SQL Log
T
The default is sql.log in the default installation directory. To use a different log, browse to the file, or enter the path and file name.
SystemTables
T
If set to true, this property allows you to see all tables created by the DBA in the database. The system table names appear in the table list. Default is false.
Note:  This property is applicable only if the user is logged onto the database as the database administrator. Only the DBA has access to system tables.
TransactionIsolation
T
Allows you to specify any one of four isolation levels when reading from or writing to a database table with ODBC. The default is Serializable.
The ANSI SQL 2 standard defines three specific ways in which serializability of a transaction may be violated: P1 (Dirty Read), P2 (Nonrepeatable Read), and P3 (Phantoms).
The four isolation levels are as follows:
READ_UNCOMMITTED – Permits P1, P2, and P3.
READ_COMMITTED – Permits P2 and P3. Does not permit P1.
REPEATABLE_READ – Permits P3. Does not permit P1 and P2.
SERIALIZABLE – Does not permit P1, P2 or P3.
For further details about TransactionIsolation levels, refer to Microsoft ODBC SDK documentation.
UpdateNullFields
T
Determines whether null values are sent to the database when records are inserted or updated. The default is true. If set to false, null values are not sent to the database, and the connector is forced to operate in unbound mode, which may cause slower performance.
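The effect of skipping null fields can be sketched as follows, using SQLite and a hypothetical helper (`build_insert` is not part of the connector): columns whose value is null are left out of the generated INSERT, so the database applies its own defaults.

```python
import sqlite3

# Sketch of the UpdateNullFields idea: when false, None-valued columns are
# omitted from the INSERT so database defaults apply.
def build_insert(table, record, update_null_fields=True):
    """Return (sql, params) for one record, optionally skipping nulls."""
    items = list(record.items())
    if not update_null_fields:
        items = [(k, v) for k, v in items if v is not None]
    cols = ", ".join(k for k, _ in items)
    marks = ", ".join("?" for _ in items)
    sql = "INSERT INTO %s (%s) VALUES (%s)" % (table, cols, marks)
    return sql, [v for _, v in items]

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (a TEXT, b TEXT DEFAULT 'x')")
sql, params = build_insert("t", {"a": "hello", "b": None},
                           update_null_fields=False)
conn.execute(sql, params)
row = conn.execute("SELECT a, b FROM t").fetchone()
print(row)  # ('hello', 'x')
```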
Views
T
If set to true (default), allows you to see views. View names appear in the table list along with table names.
BulkOperations
T
Use bulk operations for faster insert. This property determines if an insert statement is executed for each record or a bulk add is executed for each record. The default is false, the slower setting. To maximize speed and instruct the integration platform to use a bulk add, change this setting to true.
UseCursors
T
The UseCursors property allows you to turn cursor support on or off. The default is false. If set to true and the specified ODBC driver does not support cursor inserts, the integration platform uses the SQL INSERT mode of adding records.
For exports, cursor support is supposed to enhance the performance of inserting records. This appears to be the case for desktop databases. For database servers, there is no noticeable change in insert speed, because they execute prepared queries about as quickly as they handle cursor inserts.
Another complication of cursor inserts is that some drivers require that the target table be indexed, otherwise positioned updates (cursors) are not allowed. Two additional properties in the ODBC export connector address this issue: PrimaryKey and ConstraintDDL.
ArraySize
T
Determines the number of rows to be sent to the server at one time. The default value is 1, meaning each row is sent to the server individually. Larger values buffer multiple rows and send them all at once. While this improves speed, it affects error reporting: a server error is not detected or reported until the next batch of records is sent to the server.
The maximum value allowed for this property is 100000. While the connector allows the value to be set that high, many drivers have lower limits. The connector will log a message indicating if the driver is forcing a lower value for the array size. In addition, the connector does not support arrays when there is a LOB-type field in the table, or when the (maximum) length of a character-type field is longer than 32767 characters. In these cases, a message will be logged indicating the array size has been reduced to 1.
Due to the way the connector attempts to support older drivers, array support requires that either BulkOperations be true (which requires UseCursors to also be true) or that both be false. If BulkOperations is false and UseCursors is true, the array size is reset to 1 and a message is logged indicating that this condition occurred.
ConstraintDDL Example
In these example statements, we create two indices on the table called "mytable". The first index does not allow duplicates and the index values are stored in ascending order. The second index is a compound index on fields Field2 and Field3.
CREATE UNIQUE INDEX index1 ON mytable (Field1 ASC)
CREATE INDEX index2 ON mytable (Field2, Field3)
ConstraintDDL is executed only if replace mode is used for the target. If there are errors, the errors are written to the error and event log file, and the transformation error dialog box appears. You can ignore the DDL errors and continue the transformation.
ConstraintDDL also supports an escaping mechanism to specify DDL in the native SQL of the DBMS. Any statement preceded by an "@" is sent to the DBMS.
The following is a DDL statement for creating a primary key for the table mytable.
@CREATE INDEX pk_mytable ON mytable (Field1, Field2) WITH PRIMARY
Some ODBC drivers do not support the SQL extensions needed to create a primary key with the ODBC variant of the SQL CREATE statement. To create a primary key in these cases, use native SQL.
Data Types
The following data types are supported for data fields:
bigint
blob
char
character
char () for bit data
date
datetime
decimal
float
integer
long raw
long varchar
number
numeric
raw
real
rowid
smallint
time
timestamp
varchar
varchar () for bit data
varchar2
Length
These are field lengths in your data. If you need to change field lengths, reset them in the schema.
IBM Informix Dynamic Server 10
IBM Informix Dynamic Server 10 is a relational database management system. The integration platform can read from and write to IBM Informix Dynamic Server 10 data tables. This connector uses the ODBC 3.5 standard.
You must have the IBM Informix Dynamic Server 10 client installed on your system.
This connector sets field width in bytes. What varies is the number of characters that fit into a given field. For details, see Determining Field Width in Characters or Bytes.
Connectivity Pointers
If your database has many tables, selecting the table from a list may take more time than typing the UserID.Tablename value in the Source Table/View box. If you enter the table name without the user ID and then connect to the database, you can reselect the table in the source data browser to view it.
If possible, create query statements using the source database application so that syntax and validity can be checked by the native application. You can then copy the query statement and paste it into the Query Statement field.
Connector-Specific Notes
No known limitations.
Properties
You can set the following source (S) and target (T) properties for IBM Informix Dynamic Server 10 files.
Property
ST
Description
CodePage
ST
Translation table that determines which encoding to use for reading and writing data. The default is ANSI, the standard in the US.
CommitFrequency
T
Controls how often data is committed to the database. By default, CommitFrequency is zero, which means that data is committed at the end of the transformation. When doing large transformations, this is not practical as it may fill up the transaction logs. Setting the CommitFrequency to some nonzero value tells the connector to do a database commit after the specified number of records have been inserted or updated in the table. This keeps the transaction log from getting too full, but limits the ability to restart the transformation.
ConstraintDDL
T
Allows you to specify some additional SQL data definition language statements to be executed after the target table is created. This is similar to the support provided for SQL pass-through in the SQL import connectors. Each line must be a valid ODBC DDL statement.
For example, the following statements create two indices on the table mytable. The first index does not allow duplicates and the index values are stored in ascending order. The second is a compound index on Field2 and Field3.
CREATE UNIQUE INDEX index1 ON mytable (Field1 ASC)
CREATE INDEX index2 ON mytable (Field2, Field3)
ConstraintDDL is executed only if the replace mode is used for the target. If there are errors, they are written to the error and event log file. During transformation, the Transformation Error dialog box appears. You can ignore the DDL errors and continue the transformation.
ConstraintDDL also supports an escape mechanism that allows you to specify DDL in the native SQL of the database management system. Any statement that is preceded by an at sign (@) is sent straight to the DBMS.
The following statement is a DDL statement for creating a primary key for the table mytable.
@CREATE INDEX pk_mytable ON mytable (Field1, Field2) WITH PRIMARY
Some ODBC drivers do not support the SQL extensions needed to create a primary key with the ODBC variant of the SQL CREATE statement. To create a primary key, use native SQL.
Note:  This option only works in REPLACE mode.
CursorType
S
The type of cursor to use for retrieving records from the source table. The choices are Forward Only, Static, and Dynamic. The default setting is Forward Only. For more information, see Cursors.
DataTypes
S
Data types of the fields in the table.
MaxDataLength
ST
The maximum data length for long data types. The default is one megabyte, or 1,048,576 bytes. You can reset this number as needed, based on the available memory and target requirements.
Some ODBC drivers can only handle limited data lengths. If you choose an application (ODBC) source or target connector and the default setting is not 1 MB, then the integration platform has preset the default for that particular ODBC driver. Do not set the MaxDataLength property any higher when this happens.
PrimaryKey
ST
Allows you to specify a list of field names that are used to make the primary key. The field names are delimited by commas. If the PrimaryKey property contains one or more field names, these names are included in the SQL CREATE statement when the connector is in replace mode.
When using the PrimaryKey property, the ODBC driver must support integrity enhancement facility (IEF). Only advanced ODBC drivers support this.
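How a field list becomes a primary key in the CREATE statement can be sketched as follows, using SQLite and hypothetical field names and types: the comma-delimited list is folded into the statement issued in replace mode.

```python
import sqlite3

# Sketch of the PrimaryKey idea: fold the comma-delimited field list into
# the CREATE statement.
primary_key = "Field1, Field2"  # value of the PrimaryKey property
fields = [("Field1", "INTEGER"), ("Field2", "TEXT"), ("Field3", "TEXT")]

cols = ", ".join("%s %s" % (name, sqltype) for name, sqltype in fields)
ddl = "CREATE TABLE mytable (%s, PRIMARY KEY (%s))" % (cols, primary_key)

conn = sqlite3.connect(":memory:")
conn.execute(ddl)
# PRAGMA table_info: column 1 is the name, column 5 is nonzero for PK members.
pk_cols = [r[1] for r in conn.execute("PRAGMA table_info('mytable')") if r[5]]
print(pk_cols)  # ['Field1', 'Field2']
```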
Synonyms
ST
If set to true, allows you to see synonyms. The alias names appear in the table list along with the tables. Default is false.
SystemTables
ST
If set to true, this property allows you to see all tables created by the DBA in the database. The system table names appear in the table list. Default is false.
Note:  This property is applicable only if the user is logged onto the database as the database administrator. Only the DBA has access to system tables.
TransactionIsolation
ST
Allows you to specify any one of four isolation levels when reading from or writing to a database table with ODBC. The default is Serializable.
The ANSI SQL 2 standard defines three ways in which serializability of a transaction may be violated: P1 (Dirty Read), P2 (Nonrepeatable Read), and P3 (Phantoms).
The four isolation levels are as follows:
READ_UNCOMMITTED - Permits P1, P2, and P3.
READ_COMMITTED - Permits P2 and P3. Does not permit P1.
REPEATABLE_READ - Permits P3. Does not permit P1 and P2.
SERIALIZABLE - Does not permit P1, P2, or P3.
For details about transaction isolation levels, see the Microsoft ODBC SDK documentation.
WhereStmt
S
Provides a pass-through mechanism for SQL connectors so that advanced users can construct the WHERE clause of the SQL query themselves and avoid the need to write a lengthy query statement. Consider using this property to instruct the SQL database server to filter the data based on a condition before it is sent to the integration platform. Omit WHERE when you enter the clause. No default exists for this property.
Note:  When the source connection is a SELECT statement, do not apply the WhereStmt property. Instead, include the WHERE clause in your SELECT statements. This property enables data filtering when you select a table.
UseCursors
T
Allows you to turn cursor support on or off. The default is false. When UseCursors is true and the specified ODBC driver does not support cursor inserts, the SQL INSERT mode of adding records is used instead.
For exports, cursor support is supposed to enhance the performance of record insertion. This appears to be the case for desktop databases but not for database servers, which execute prepared queries about as quickly as they handle cursor inserts.
Some drivers require that the target table be indexed to enable positioned updates (cursors). The PrimaryKey and ConstraintDDL properties address this issue for the ODBC export (see details above).
Views
ST
If set to true (default), allows you to see views. View names appear in the table list along with table names.
IBM Informix Dynamic Server 11
IBM Informix Dynamic Server 11 is a relational database management system. You must have it installed on your system.
The integration platform can write to IBM Informix Dynamic Server 11 target files. It is not available as a source connector. This connector uses the ODBC 3.5 standard.
This connector sets field width in characters. For more details, see Determining Field Width in Characters or Bytes.
Connector-Specific Notes
Target Schema Modification: In multimode targets, modifications to column names, data types, and sizes are not permitted.
Query Statement Support: The target format must support query statements (such as Oracle and SQL Server statements) for transformations to work with this connector.
Auto-Increment Fields: If the target table contains an auto-increment column, you may get an error at run time. If this occurs, delete the field that is causing the problem.
Property Options
You can set the following target (T) properties for IBM Informix Dynamic Server 11.
Property
ST
Description
Data Types
T
The only available data type is Text.
Length
T
Specifies field lengths for data. If you need to change field lengths, reset them in the schema.
MaxDataLength
T
The maximum data length for long data types. Default is one megabyte, or 1,048,576 bytes. You can reset this number as needed, based on the amount of memory available and the target requirements.
SQL Log
T
The default is sql.log, which is in the default installation directory. To use a different log, either browse to select the file or enter the path and file name.
SQL Output
T
Allows you to select bound or unbound mode and whether to write SQL statements to a SQL log. SQL Output has these options:
Target Only (the default) - Uses bound mode, which uses bind variables and is faster. SQL statements are sent to the target and not to the SQL log specified by the SQL Log property.
Target Only (Unbound mode) - Uses unbound mode, which does not use bind variables and sends the literal SQL statement to the database engine. SQL statements are sent to the target and not to the SQL log.
Target and SQL Log - Sends SQL statements to the target and to the SQL log file.
SQL Log Only - Sends SQL statements to the SQL log file only.
Synonyms
T
If set to true, allows you to see synonyms. The alias names appear in the table list along with the tables. Default is false.
SystemTables
 
If set to true, this property allows you to see all tables created by the DBA in the database. The system table names appear in the table list. Default is false.
Note:  This property is applicable only if the user is logged onto the database as the database administrator. Only the DBA has access to system tables.
UpdateNullFields
T
When set to True (the default), null values are sent to the database when records are inserted or updated. When set to false, null values are not sent to the database, and UpdateNullFields forces the connector to operate in unbound mode, which may result in slower performance.
Views
T
When set to True (default) you can see views. The view names appear in the table list along with the tables.
Informix (ODBC 3.x)
Informix SQL databases such as Informix Online can be reached using ODBC 3.x. See ODBC 3.x.
This connector sets field width in bytes. What actually varies is the number of characters that fit into a given field. For more details, see Determining Field Width in Characters or Bytes.
See Also
Installing an ODBC Driver
ODBC Connectivity Tips
Note:  The integration platform connects to Informix tables with ODBC 3.x. For the procedure, information about the property options, and source and target schemas, see ODBC 3.x.
Connector-Specific Notes
If you attempt to connect to an Informix database in the SQL Query Builder, you receive a Microsoft run time error:
Run-time error [Informix][Informix ODBC Driver][Informix]Environment variable INFORMIXSERVER must be set. [Microsoft][ODBC Driver Manager] Driver's SQLSetConnectAttr failed.
Solution: Use the Microsoft OLEDB provider for ODBC drivers for the connection to Informix. Then use this connection in the SQL Query Builder.
Informix-Online DS
This connector provides DSN-less source and target connections to the Informix-Online Dynamic Server relational database management system.
This connector sets field width in bytes. What actually varies is the number of characters that fit into a given field. For more details, see Determining Field Width in Characters or Bytes.
Informix Client Installation
Install the following Informix Client SDK version on the same system as DataConnect:
Windows: CSDK 3.0
Solaris: CSDK 2.9
Connection strings from the Informix client configuration file can be referenced in the DriverOptions connector property as documented below under the Procedure topic. The following table lists standard Informix connection keywords.
Keyword – Short Version
driver – driver
dsn – dsn
filedsn – filedsn
uid – uid
database – db
host – host
pwd – pwd
server – srvr
service – serv
protocol – pro
client_locale – cloc
db_locale – dloc
translationdll – tdll
translationoption – topt
connectdatabase – condb
exclusive – xcl
cursorbehavior – curb
savefile – savefile
options – opt
description – desc
enablescrollablecursors – scur
enableinsertcursors – icur
optimizeautocommit – oac
optofc – optofc
needodbctypesonly – odtyp
reportkeysetcursors – rkc
fetchbuffersize – fbc
describedecimalfloatpoint – ddfp
donotuselvarchar – dnl
reportcharcolaswidecharcol – rcwc
Simulated Database Connections
If you try to make a live connection to a database that you do not have direct access to, Map Editor displays a message that the connection is simulated. The simulated mode may be unable to display all property options. To see all property options, install and configure the database client before connecting in the Map Editor.
Limitations
No known limitations.
Property Options
 
Property
S/T
Description
CursorType
S
The CursorType option specifies the type of cursor to use for fetching records from the source table. The choices are Forward Only, Static, and Dynamic. The default setting is Forward Only. For more information, see Cursors.
WhereStmt
S
Provides a pass-through mechanism for SQL connectors where advanced users can construct the Where clause of the SQL query themselves. It can be used as an alternative to writing a lengthy query statement in the Query Statement text box. This might be used to get the SQL database server to filter the data based on a condition before it is sent to Map Designer. Omit "WHERE" when you enter the clause. No default exists for this property.
Note:  Do not apply the WhereStmt when the source connection is a Select statement. If you are using a Select statement, you should include the Where clause in that statement. The intent of this property is to enable data filtering when selecting a table.
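As an illustration, a table-based source connection composes its query from the WhereStmt value roughly like this (a hypothetical sketch; build_query is an invented name, not part of the product):

```python
# Hypothetical sketch: the WhereStmt value is entered WITHOUT the
# leading "WHERE" keyword; the connector prepends it when building
# the final query for a table-based source connection.
def build_query(table, where_stmt=None):
    sql = f"SELECT * FROM {table}"
    if where_stmt:
        sql += f" WHERE {where_stmt}"
    return sql

# The user enters only the condition, never the WHERE keyword.
print(build_query("accounts", "state = 'TX' AND balance > 100"))
```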
CommitFrequency
T
This allows you to control how often data is committed to the database. By default, CommitFrequency is zero. This means that data is committed at the end of the transformation, allowing rollback on error. This is the slowest setting. If you are doing large transformations, this is not practical, since it may fill up your transaction logs. Setting the CommitFrequency to some nonzero value tells the connector to do a database commit after the specified number of records have been inserted or updated in the table. This keeps the transaction log from getting too full but limits the restartability of the transformation.
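The commit behavior described above can be sketched as follows (a hypothetical illustration; the function and the commit counting are invented for clarity):

```python
# Hypothetical sketch of CommitFrequency: commit after every N
# inserted or updated records when N > 0; with N == 0 (the default),
# commit once at the end so the whole transformation can be rolled
# back on error.
def run_transformation(records, commit_frequency=0):
    commits = 0
    for i, _record in enumerate(records, start=1):
        # ... insert or update the record in the target table ...
        if commit_frequency and i % commit_frequency == 0:
            commits += 1          # intermediate commit
    commits += 1                  # final commit at end of transformation
    return commits

print(run_transformation(range(1000), commit_frequency=0))    # 1 commit
print(run_transformation(range(1000), commit_frequency=100))  # 11 commits
```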
ConstraintDDL
T
Allows you to specify some additional SQL data definition language statements to be executed after their target table is created. This is similar to the support provided for SQL Pass-thru in the SQL import connectors. Each line must be a valid DDL statement.
For example, you could have the statements
CREATE UNIQUE INDEX index1 ON mytable (Field1 ASC)
CREATE INDEX index2 ON mytable (Field2, Field3)
These statements would create two indices on the table mytable. The first one does not allow duplicates and the index values are stored in ascending order. The second index is a compound index on fields Field2 and Field3.
The ConstraintDDL is executed only if the replace mode is used for the target. If there are any errors, the errors are written to the error and event log file. During the transformation, the transformation error dialog box appears. If you want to ignore the DDL errors, you can continue the transformation.
ConstraintDDL also supports an escaping mechanism that allows users to specify DDL in the native SQL of the DBMS. Any statement that is preceded by an @ is sent straight to the DBMS.
The statement
@CREATE INDEX pk_mytable ON mytable (Field1, Field2) WITH PRIMARY
is a DDL statement for creating a primary key for the table mytable.
Note:  This option works only in REPLACE mode.
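The escape rule can be sketched in a few lines (a hypothetical illustration; prepare_ddl is an invented name):

```python
# Hypothetical sketch of the ConstraintDDL escape rule: a statement
# beginning with "@" is sent to the DBMS verbatim (native SQL), with
# the "@" stripped; other statements go through normal DDL handling.
def prepare_ddl(statements):
    prepared = []
    for stmt in statements:
        if stmt.startswith("@"):
            prepared.append(("native", stmt[1:]))
        else:
            prepared.append(("standard", stmt))
    return prepared

ddl = [
    "CREATE UNIQUE INDEX index1 ON mytable (Field1 ASC)",
    "@CREATE INDEX pk_mytable ON mytable (Field1, Field2) WITH PRIMARY",
]
for kind, stmt in prepare_ddl(ddl):
    print(kind, "->", stmt)
```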
UseCursors
T
The UseCursors property allows you to turn cursor support on and off. The default is false. If you set the property to true and the specified driver does not support cursor inserts, the SQL INSERT mode of adding records is used.
In principle, cursor support should enhance the performance of inserting records during exports. In practice, database servers show no noticeable change in insert speed: they execute prepared queries about as quickly as they handle cursor inserts.
DriverOptions
ST
Enter Informix client connection information, such as database, server, and protocol, in the format shown in this example:
Database=sysutils;Server=ol_qadb1;Protocol=onsoctcp
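If you assemble DriverOptions values programmatically, a helper like the following illustrates the expected key=value;key=value format (a hypothetical sketch; driver_options is an invented name):

```python
# Hypothetical helper: build a DriverOptions value from keyword/value
# pairs in the "key=value;key=value" format the property expects.
def driver_options(**kwargs):
    return ";".join(f"{key}={value}" for key, value in kwargs.items())

opts = driver_options(Database="sysutils", Server="ol_qadb1",
                      Protocol="onsoctcp")
print(opts)  # Database=sysutils;Server=ol_qadb1;Protocol=onsoctcp
```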
DriverCompletion
ST
Allows you to control whether the driver prompts you for information. The options are Prompt, Complete (the default), Complete Required, and No Prompt. Prompt requests every individual piece of information. Complete prompts only for information that was not entered. Complete Required prompts only for information that is essential to complete the connection. No Prompt never prompts the user for information.
MaxDataLength
ST
The maximum data length for long data types. Default is 1 MB (1,048,576 bytes). You can reset this number based on your available memory and target requirements.
Some drivers have limitations concerning the maximum data length they can handle. If you choose a source or target connector and the default setting is not 1 MB, Map Designer has preset the default to match the capabilities of that particular driver. Setting the MaxDataLength property any higher is not recommended under those conditions.
PrimaryKey
ST
The PrimaryKey property allows you to specify a list of field names that are used to make the primary key. The field names are delimited by commas. If the PrimaryKey property contains one or more field names, these names are included in the SQL CREATE statement when the connector is in replace mode.
There is one additional requirement for using the PrimaryKey property. The driver must support integrity enhancement facility (IEF). Only the more advanced drivers support this.
Synonyms
ST
If set to true, this property allows you to see synonyms. The alias names appear in the table list along with the tables. Default is false.
SystemTables
ST
The SystemTables property is applicable only if the user is logged onto the database as the database administrator. Only the DBA has access to SystemTables. If set to true, this property allows you to see all the tables created by the DBA. The system table names appear in the table list. The default is false.
TransactionIsolation
ST
The TransactionIsolation option allows you to specify any one of five different isolation levels when reading from or writing to a database table. The default for version 3.x, 3.5 and IBM DB2 Universal Database is Serializable.
The ANSI SQL 2 standard defines three specific ways in which serializability of a transaction may be violated: P1 (Dirty Read), P2 (Nonrepeatable Read) and P3 (Phantoms).
The five isolation levels are:
READ_UNCOMMITTED – Permits P1, P2, and P3.
READ_COMMITTED – Permits P2 and P3. Does not permit P1.
REPEATABLE_READ – Permits P3. Does not permit P1 and P2.
SERIALIZABLE – Does not permit any of P1, P2, and P3.
VERSIONING – Provides SERIALIZABLE transactions, but does so without a significant impact on concurrency.
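The relationship between the levels and the three phenomena can be summarized as a lookup table (an illustration of the ANSI definitions above, not connector code):

```python
# The five isolation levels and the serializability violations
# (P1 dirty reads, P2 nonrepeatable reads, P3 phantoms) each
# permits, expressed as a lookup table for illustration.
PERMITTED = {
    "READ_UNCOMMITTED": {"P1", "P2", "P3"},
    "READ_COMMITTED":   {"P2", "P3"},
    "REPEATABLE_READ":  {"P3"},
    "SERIALIZABLE":     set(),
    "VERSIONING":       set(),   # serializable, with better concurrency
}

def permits(level, phenomenon):
    return phenomenon in PERMITTED[level]

print(permits("READ_COMMITTED", "P1"))  # False: no dirty reads
print(permits("READ_COMMITTED", "P3"))  # True: phantoms possible
```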
Views
ST
If set to true, this property allows you to see views. The view names appear in the table list along with the tables. Default is false.
CodePage
ST
The code page translation table tells Map Designer which encoding to use for reading and writing data. The default is ANSI, which is the standard in the US. The following code pages are available:
ANSI
OEM
0037 US (EBCDIC)
0273 Germany (EBCDIC)
0277 Norway (EBCDIC)
0278 Sweden (EBCDIC)
0280 Italy (EBCDIC)
0284 Spain (EBCDIC)
0285 UK (EBCDIC)
0297 France (EBCDIC)
0437 MSDOS United States
0500 Belgium (EBCDIC)
0850 MSDOS Multilingual (Latin 1)
0860 MSDOS Portuguese
0861 MSDOS Icelandic
0863 MSDOS Canadian French
0865 MSDOS Nordic
1051 Roman-8
Informix-Online DS Multimode
This connector provides DSN-less source and target connections to the Informix-Online Dynamic Server relational database management system.
This connector sets field width in bytes. What actually varies is the number of characters that fit into a given field. For more details, see Determining Field Width in Characters or Bytes.
This connector is a multimode connector. The main distinction between single and multimode connectors is in the use of event actions:
Single mode connectors – Use event actions to access one target file or table at a time in a transformation.
Multimode connectors – Use multimode event actions to access multiple target tables in the same transformation, each time the transformation runs.
Use the event actions intended for multimode connectors to enable access to multiple target tables.
Informix Client Installation
Install the following Informix Client SDK version on the same system as DataConnect:
Windows: CSDK 3.0
Solaris: CSDK 2.9
Connection strings from the Informix client configuration file can be referenced in the DriverOptions connector property as documented below under the Procedure topic. The following table lists standard Informix connection keywords.
Keyword – Short Version
driver – driver
dsn – dsn
filedsn – filedsn
uid – uid
database – db
host – host
pwd – pwd
server – srvr
service – serv
protocol – pro
client_locale – cloc
db_locale – dloc
translationdll – tdll
translationoption – topt
connectdatabase – condb
exclusive – xcl
cursorbehavior – curb
savefile – savefile
options – opt
description – desc
enablescrollablecursors – scur
enableinsertcursors – icur
optimizeautocommit – oac
optofc – optofc
needodbctypesonly – odtyp
reportkeysetcursors – rkc
fetchbuffersize – fbc
describedecimalfloatpoint – ddfp
donotuselvarchar – dnl
reportcharcolaswidecharcol – rcwc
Simulated Database Connections
If you try to make a live connection to a database that you do not have direct access to, Map Editor displays a message that the connection is simulated. The simulated mode may be unable to display all property options. To see all property options, install and configure the database client before connecting in the Map Editor.
Limitations
Target Schema modification - In multimode targets, modifications to column names, data types, and sizes are not permitted.
Query Statement Support - The target format must support query statements (for example, Access, Oracle, or SQL Server) for transformations written with this connector to work.
Auto-Increment Fields - If your target table contains an auto-increment column, you may get an error at run time. If this occurs, you must delete the field that is causing the problem from the record.
Property Options
 
Property
S/T
Description
MaxDataLength
T
The maximum data length for long data types. Default is 1 MB (1,048,576 bytes). You can reset this number based on your available memory and target requirements.
SQL Log
T
The default is sql.log in the default installation directory. If you want to use a different log, click once in the value cell, then select the ellipsis to browse to the location of the file, or enter the path and file name.
UpdateNullFields
T
Null values are sent to the database when inserting or updating records. The default is true. If you select False, null values are not sent to the database when you insert or update records. When set to false, this property forces the connector to operate in unbound mode, which may cause slower performance.
SQL Output
T
Allows you to select bound or unbound mode and whether you want to write SQL statements to a SQL log or not. Keep in mind that bound mode is faster, since bind variables are used. Target Only is the default output:
Target Only – Uses bound mode, which uses bind variables. SQL statements are sent to the Target and not to the SQL log specified in the SQL Log property.
Target Only (Unbound Mode) – Uses unbound mode, which does not use bind variables and sends the literal SQL statement to the database engine. SQL statements are sent to the Target and not to the SQL log specified in the SQL Log property.
Target and SQL Log – Sends SQL statements to the Target and to the SQL log specified in the SQL Log property.
SQL Log Only – Sends SQL statements only to the SQL log file specified in the SQL Log property.
Synonyms
T
If set to true, this property allows you to see synonyms. The alias names appear in the table list along with the tables. Default is false.
SystemTables
T
The SystemTables property is applicable only if you are logged onto the database as the database administrator. Only the DBA has access to SystemTables. If set to true, this property allows you to see all the tables created by the DBA. The system table names appear in the table list. The default is false.
Views
T
If set to true, this property allows you to see views. The view names appear in the table list along with the tables. Default is true.
Data Type
The only available data type is text.
Integration Log File
The Data Junction Log File connector gives you the ability to read the log file generated by the Map Editor. A script file created in the Extract Schema Designer is used for the source file structure. The log file information can then be written to a specified target file or table.
This connector is also called Integration Log File.
If you are interested in parsing log files that contain process errors and messages, see Process and Transformation LogFile Parser. That connector offers more schema data types and can parse process logs as well as transformation logs.
This connector specifies field width in characters, so a field's width is its number of characters. For more details, see Determining Field Width in Characters or Bytes.
Connector-Specific Notes
Data Junction Log File is not available as a target connector.
Property Options
 
Property
S/T
Description
ByteOrder
S
Allows you to specify the byte order of Unicode (wide) characters. The default is Auto and is determined by the architecture of your computer. The list box options are Auto (default), Little Endian, and Big Endian. Little Endian byte order is generally used by Intel machines and DEC Alphas and places the least significant portion of a byte value in the left portion of the memory used to store the value. Big Endian byte order is used by IBM 370 computers, Motorola microprocessors and most RISC-based systems and stores the values in the same order as the binary representation.
ProgramVariables
S
Type the program variable.
ReportReadingScriptFile
S
The script file used for the source structure. The default file is djwinlog.djp.
Encoding
S
Select the type of encoding used with source and target files. The default encoding is OEM.
Shift-JIS encoding is meaningful only in Japanese operating systems.
StartOffSet
S
If your source data file starts with characters that need to be excluded from the transformation, set the StartOffset option to specify at which byte of the file to begin. The default value is zero. The correct value may be determined by using the Hex Browser.
Note:  This property is set in number of bytes, not characters.
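The byte-based offset behaves like a seek before reading, as this minimal sketch shows (hypothetical illustration; read_with_offset is an invented name):

```python
import io

# Hypothetical sketch of StartOffset: skip a fixed number of BYTES
# (not characters) before reading the source data.
def read_with_offset(stream, start_offset=0):
    stream.seek(start_offset)      # offset is counted in bytes
    return stream.read()

data = io.BytesIO(b"HEADER--actual log data")
print(read_with_offset(data, start_offset=8))  # b'actual log data'
```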
JSON Connector
JavaScript Object Notation (JSON) is a text-based open standard designed for human readable data interchange. It is derived from the JavaScript scripting language for representing simple data structures and associative arrays, called objects. Despite its relationship with JavaScript, it is language-independent with parsers available for many languages. The official Internet media type for JSON is application/json. The JSON filename extension is .json. The JSON format is often used for serializing and transmitting structured data over a network connection. It is used primarily to transmit data between a server and web application, serving as an alternative to XML. "JSON." Wikipedia, The Free Encyclopedia, http://en.wikipedia.org/wiki/JSON (accessed May 11, 2017).
Note:  Normally, schemas for target multimode connectors are not provided. However, for the JSON connector, if the target file exists, the UI connects using a source JSON connection and copies the schema to the target.
JSON is built on two structures:
A collection of name/value pairs. In various languages, this is realized as an object, record, struct, dictionary, hash table, keyed list, or associative array.
An ordered list of values. In most languages, this is realized as an array, vector, list, or sequence.
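Both structures are easy to see with Python's standard json module (an illustration; the values reuse form numbers mentioned elsewhere in this section):

```python
import json

# The two JSON structures side by side: an object (a collection of
# name/value pairs) and an array (an ordered list of values).
obj = json.loads('{"name": "claim", "form": 837}')   # object -> dict
arr = json.loads('[270, 271, 276, 277]')             # array  -> list

print(obj["form"])   # 837
print(arr[0])        # 270
```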
Connectivity Requirements
The JSON connector allows you to store and connect to your source and target data using the following methods:
Source and target files can be stored on the local file system where DataConnect is installed, and then accessed through shared storage locations.
Source and target files can be stored in an external system and accessed through HTTP(S) requests.
Property Options
 
Property
S/T
Description
Batch Size
S
Number of source records the connector caches before processing them in a map. Default is 100.
Read Nulls
S
If set to True, the connector reads null values in the source file.
If set to False, null values in the source file are skipped. The null field is added to the schema, but it is skipped when the map is run.
Default is True.
Read Empty Tokens
S
If set to True, the connector reads empty array ([]) and empty object ({}) values in the source file.
If set to False, empty array ([]) and empty object ({}) values in the source file are skipped. The schema is updated with a reference to a new record and the JSON INPUT field holds the {} and [] characters, respectively. The characters in the JSON INPUT field are not mapped to the target.
Default is True.
Data Types
The supported data types correspond to the fields in the target data table. If you are appending data to an existing table, then the data type of each field uses the data type in the selected table by default.
JSON connector supports the following data types:
Base64Binary
Boolean
Byte
Char
Date
DateTime
Decimal
Double
Float
HexBinary
Integer
Long
Short
String
Time
Additional Information
Batch Response
A batch response file contains a group of entries for each connector used in a batch operation. Each action in the operation generates a batch response entry. These entries often provide detailed information, such as errors, that can be used for troubleshooting.
Example
<BatchResponse:batchResult xmlns:BatchResponse="http://www.actian.com/BatchResponse">
<BatchResponse:results>
</BatchResponse:results>
<BatchResponse:componentInfo>
<BatchResponse:returnCode>0</BatchResponse:returnCode>
<BatchResponse:componentName>JSON-DMS</BatchResponse:componentName>
</BatchResponse:componentInfo>
</BatchResponse:batchResult>
Schema Considerations
The JSON connector automatically builds the schema when the datasets are created (provided there is valid JSON data in the file) based on the following assumptions:
The Root record of the structured schema is always named “ROOT”, regardless of whether the starting type is an Object or an Array.
If the Root record is an Array, all elements of the array must be either an object or an array. The JSON syntax does not contain names for these objects; however, DataConnect requires names for these objects or arrays. If the field value is an Object, the connector will name the field “Object” in the structured schema.
For example:
[
 {
  "field":"value" }
]
will look like this in Schema Designer:
  Root (type = Root_._type)
    Object (type = object_._type)
      field (type = String)
If a field value is an array, the connector will name the field "Array" in the structured schema.
For example:
{
 "field":[
   {
 "field2" : "value"
   }
      ]
}
will look like this in Schema Designer:
  Root (type = Root_._type)
    field (type = field_._type)
      Array (type = Array_._type)
        Object (type = Object_._type)
          field2 (type = String)
Note:  The target side of the connector omits the names of these objects to preserve the JSON syntax.
JSON Syntax allows for records with the same name on the same level in the JSON hierarchy. If the connector detects a record with the same name on the same hierarchy level, it will aggregate all the fields on both records into one record.
For example:
{
   "record":{
      "field1":true,
      "field2":123
   },
   "record":{
      "field3":"string"
   }
}
will look like this in Schema Designer:
  Root (type = Root_._type)
    record (type = record_._type)
      record_._iteration_1 (type = record_._iteration_1_._type)
        record_._iteration_2 (type = record_._iteration_2_._type)
          field1 (type = Boolean)
Note:  When records with the same name that appear on both the same level and different levels, the connector will aggregate all fields on the same level and treat any fields on other levels as new records.
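The aggregation rule can be imitated with Python's json module and an object_pairs_hook (an illustration of the concept only, not the connector's implementation):

```python
import json

# Illustration of the aggregation rule: when two records with the
# same name appear on the same level, merge their fields into one
# record. json.loads normally keeps only the last duplicate key, so
# an object_pairs_hook is used to merge instead.
def merge_duplicates(pairs):
    merged = {}
    for key, value in pairs:
        if key in merged and isinstance(merged[key], dict) and isinstance(value, dict):
            merged[key].update(value)   # aggregate fields of duplicates
        else:
            merged[key] = value
    return merged

text = '''{
   "record": {"field1": true, "field2": 123},
   "record": {"field3": "string"}
}'''
print(json.loads(text, object_pairs_hook=merge_duplicates))
```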
Each record in the schema has a string field named “JSON_INPUT”. The source schema uses this field to represent empty objects or empty arrays that are defined in the source file. If the JSON_INPUT field is mapped on the target side, the connector will treat all data in the field as Custom JSON.
Custom JSON
If the JSON_INPUT field is mapped on the target side, the field is a string field and the connector will treat the mapped string as Custom JSON. There are some considerations you need to take into account when using the JSON_INPUT field:
The JSON connector will not validate the data in the JSON_INPUT field. You are responsible for providing syntactically correct JSON.
Warning!  The JSON connector may output invalid JSON if you do not validate the JSON when mapping the Custom JSON field.
If the JSON_INPUT field is mapped on a record, all child records of this record will be skipped, even if they are mapped. The remaining fields on the record will be skipped as well (even if they are mapped).
Since the JSON_INPUT field is a string, you must escape any string contained in the custom JSON data field. The following sample shows proper string syntax when mapping to the JSON_INPUT field:
User input to JSON_INPUT field: "{""Field"":""Value"", ""Field2"":12}"
Connector output for JSON_INPUT: {"Field":"Value", "Field2":12}
In this example, Field, Value, and Field2 are strings. The value of 12 is an integer.
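The doubling of embedded quotes can be sketched as a pair of helper functions (hypothetical names; an illustration of the quoting rule, not connector code):

```python
import json

# Sketch of the quoting rule for the JSON_INPUT field: the custom
# JSON is carried as a string, so every embedded double quote must
# be doubled and the whole value wrapped in quotes.
def to_json_input(custom_json):
    return '"' + custom_json.replace('"', '""') + '"'

def from_json_input(field_value):
    return field_value[1:-1].replace('""', '"')

payload = '{"Field":"Value", "Field2":12}'
escaped = to_json_input(payload)
print(escaped)
print(json.loads(from_json_input(escaped))["Field2"])  # 12
```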
LDIF
Lightweight Directory Interchange Format (LDIF) is a data interchange format for Lightweight Directory Access Protocol (LDAP), a directory service database. The LDAP database has ObjectClass entries with standard address book information, such as employee names, e-mail addresses, phone and fax numbers, etc.
LDIF files are standard text files (ASCII) that the integration platform can write.
Property Options
See ASCII (Fixed) target property options
Lotus 123
Lotus 123 is a spreadsheet application. With the Lotus 123 connector, you can read and write Lotus 123 DOS and the following Windows releases:
r1A
r2.01
r2.2
r3
r4
It is important to remember that the integration platform transforms field and record (column and row) oriented data. Title sections, formulas, macros, and display formatting are not transformed. Column headings in a single row can be transformed as field names.
This connector sets field width in bytes. What actually varies is the number of characters that fit into a given field. For example, if a field width is 10 bytes and the encoding is Shift-JIS, the field can hold a minimum of 5 characters and a maximum of 10. This means that if you try to write eight Japanese characters, data is truncated (the Kanji characters all take 2 bytes).
Connector-Specific Notes
Some basic limitations exist when transforming data to a spreadsheet.
Field Length
Lotus 123 has a limitation of 255 characters in a single cell. If you try to transform data to Lotus 123 containing more than 255 characters in any field, that data is truncated at the 255th character.
The integration platform does not automatically truncate data in a field to 255 characters when transforming to a spreadsheet. You may either change the field length to 255 in the target schema grid or you may leave it at the default value. If you set it to greater than 255 characters, when you open the spreadsheet in Lotus 123, the data is truncated.
Field length in Lotus 123 is actually a measure of the column width of a cell, not a measure of character length. When you view the length that is read in the integration platform, you are seeing the column width set in the Lotus 123 application, rounded to the nearest whole number. If you have truncated data, set the length to a higher value in the schema.
Number of Fields
Lotus 123 has a limitation of 254 fields. If you try to transform data to Lotus 123 that has more than 254 fields, the data is truncated at the 254th field.
Number of Records
Lotus 123 has a limitation of 8192 records. If you try to transform data to Lotus 123 that has more than 8192 records, the data is truncated at the 8192nd record.
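The three limits above can be enforced with a simple pre-flight check before writing (a hypothetical sketch; fit_to_lotus is an invented name):

```python
# Hypothetical pre-flight check against the Lotus 123 limits noted
# above: 255 characters per cell, 254 fields, 8192 records.
MAX_CELL_CHARS = 255
MAX_FIELDS = 254
MAX_RECORDS = 8192

def fit_to_lotus(records):
    trimmed = []
    for record in records[:MAX_RECORDS]:
        trimmed.append([str(value)[:MAX_CELL_CHARS]
                        for value in record[:MAX_FIELDS]])
    return trimmed

rows = [["x" * 300, "ok"]] * 9000
out = fit_to_lotus(rows)
print(len(out))          # 8192 records kept
print(len(out[0][0]))    # 255 characters kept in the long cell
```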
Property Options
These are properties that you may need to specify for your source or target data.
Property
Use
Auto Type Text Fields
Automatic styling changes the way Lotus 123 data is read or written. For example, when Delimited ASCII is the source connector, all data is "Text" (except when you change the Delimited ASCII Source Property AutomaticStyling to true, or when you change the "Type" from Text to a numeric data type in the Delimited ASCII source schema grid. For details on these options, see the ASCII Delimited section.) When that data is transformed to a spreadsheet, the user may need the fields that contain numbers to be formatted as Numeric (to perform calculations in the spreadsheet).
The default for this option is true. To transform the number fields as Text in the Target Lotus 123 file, change the setting to false.
Header Record Row
For Lotus 123 source files, this is the row number of any column headings. For Lotus 123 target files, this sets the row number where the integration platform writes column headings.
If there are no column headings in the data, leave the value at the default setting of 0. If the column heading is the first record (row 1), change the value to 1.
Worksheet
For versions of Lotus 123 that support multiple worksheets within one file (often referred to as 3D spreadsheets), this option allows you to specify which of those worksheets that the integration platform uses for the transformation. If you want the integration platform to read the first worksheet, leave the default value of "0". To read the second sheet, change the setting to "1"; for the third sheet, change the setting to "2" and so on.
CodePage
This translation table determines which encoding to use for reading and writing data. The default is ANSI, the standard in the US.
Data Types
All data is transformed to Text in a spreadsheet, unless the user has selected true for the Auto Type Text Fields option in Target Properties.
Length
These are field lengths in your data. If you need to change field lengths, reset them in the schema.
Caution!
A field size shorter than the default may truncate data.
The integration platform does not automatically truncate data in a field to 255 characters when transforming to a spreadsheet. You may either change the field length to 255 in the schema or you may leave it at the default value. If you set it to greater than 255 characters, when you open the spreadsheet in Lotus 123, the data is truncated.
Lotus Notes Structured Text
Lotus Notes Structured Text is a special file format that can be imported into a Lotus Notes database. With this connector, the integration platform can write to a Lotus Notes Structured Text target file.
This connector sets field width in bytes. What actually varies is the number of characters that fit into a given field. For example, if a field width is 10 bytes and the encoding is Shift-JIS, the field can hold a minimum of 5 characters and a maximum of 10. This means that if you try to write 8 Japanese characters, data is truncated (the Kanji characters all take 2 bytes).
Connector-Specific Notes
Lotus Notes Structured Text is a legacy format that was developed before there was an official Lotus Notes API. Use this connector to connect to Lotus Notes data earlier than version 4.
Property Options
You can set the following source (S) and target (T) properties.
Property
S/T
Description
CodePage
ST
This translation table determines which encoding to use for reading and writing data. The default is ANSI, the standard in the US.
Data Types
All data in a Lotus Notes Structured Text file is Text.
MAILERS+4 (dBASE)
MAILERS+4 is a database application that uses a dBASE format. If the data file references a memo field file, the memo file must exist for the connection to occur. The primary data file usually has a .dbf extension and the memo file usually has a .dbt extension. MAILERS+4 files are structured; for example, both the data and the file structure are stored inside the primary data file. Using this dBASE connector, the integration platform reads and writes MAILERS+4 data files.
This connector sets field width in bytes. What actually varies is the number of characters that fit into a given field. For example, if a field width is 10 bytes and the encoding is Shift-JIS, the field can hold a minimum of 5 characters and a maximum of 10. This means that if you try to write 8 Japanese characters, data is truncated (the Kanji characters all take 2 bytes).
Property Options
See dBASE IV Property Options.
Microsoft Dynamics AX 4
Microsoft Dynamics AX is an enterprise resource planning software product. This connector reads and writes data from and to Dynamics AX 4 as a web service.
Limitations
Upserts are not supported at this time.
Property Options
You can set the following source (S) and target (T) properties.
Property
ST
Version
Description
BatchResponse
T
4
Optional. Creates a Batch Response file, which is a reporting mechanism for the connector. The file provides detailed results for each object in a batch where the batch size is greater than 1. Obtaining detailed results is useful in the following cases:
You need to capture system-generated object IDs for use in future updates.
You need to correlate an error with its object and have enough information about the error for exception handling and error diagnosis.
For more information, see "Batch Response File".
BatchSize
S
4
Optional. Number of source records the connector caches before processing them in a map. Default is zero.
Flush Frequency
T
4
Leave at the default; this setting cannot be changed.
Page Size
S
4
The number of records stored in memory.
DestinationEndpoint
ST
4
The receiver of the message.
SourceEndpoint
ST
4
The sender of the message.
SourceEndpointUser
ST
4
The user who sends the message. The endpoint user is defined by the Windows account of the endpoint user in a Domain\User format.
ShowChildren
ST
4
Determines whether to add the child record types to the list of available record types.
False – List only the parent-level record types.
True – Include child record types.
The following table lists all Microsoft Dynamics AX 4 entities and the supported and unsupported actions.
Entity Name             Create   Read   Update   Delete
ASN                     No       Yes    No       No
Address                 Yes      Yes    Yes      Yes
CashDisc                No       Yes    No       No
ChartofAccounts         Yes      Yes    No       No
ConfigTable             No       Yes    No       No
ContactPersons          Yes      Yes    No       Yes
Customer                Yes      Yes    Yes      Yes
CustomerGroups          No       Yes    No       No
DeliveryModes           No       Yes    No       No
Dimensions              Yes      Yes    No       No
ExchangeRates           No       Yes    No       No
InventColor             No       Yes    No       No
InventDimCombination    No       Yes    No       No
InventItemGroup         No       Yes    No       No
InventLocation          No       Yes    No       No
InventSize              No       Yes    No       No
InventoryOnhand         No       Yes    No       No
InventoryTransactions   No       Yes    No       No
Item                    Yes      Yes    No       No
PaymentTerms            No       Yes    No       No
PurchaseRequisition     No       Yes    No       No
SalesInvoice            No       Yes    No       No
SalesOrder              Yes      Yes    No       No
Unit                    No       Yes    No       No
UnitConvert             No       Yes    No       No
Debugging Tips
When something does not work as expected, consult the Microsoft Dynamics AX exception log, a valuable troubleshooting resource, under Basic > Periodic > Application Integration Framework > Exceptions.
Source Schema
To view the structured schema of a Microsoft Dynamics AX 4 source table, select the Source Schema icon in the toolbar.
Target Schema
To set up the structured schema of a Microsoft Dynamics AX 4 target table, select the Target Schema icon in the toolbar.
Type
These are the data types of the fields in a table. If you are appending data to an existing table, the data type of each field uses the data type in the selected table by default.
The data types displayed for the fields of tables available in Microsoft Dynamics AX are the defaults specified by the web service interface.
boolean
byte
date
datetime
decimal
double
float
int
short
string
Note:  To insert or update a date value in a Microsoft Dynamics AX application, open the Microsoft Dynamics Exceptions Console on the application server. If any exception errors are related to date fields, enter zeroes for the hour, minute, and second values.
Microsoft Dynamics CRM 4.0
Microsoft Dynamics CRM 4.0 is a customer relationship management (CRM) software package that is available in two deployment styles: client-server and online. A client-server deployment is done on-premise within an organization and made available to users within an intranet or on the Internet. An online or cloud-based deployment using Microsoft Dynamics CRM allows users to subscribe to and access an instance from anywhere on the Internet. Both types of deployments expose the application functionalities through a web services interface.
This connector allows users to retrieve, insert, update, and delete data from both Microsoft Dynamics CRM on-premise and online instances.
Microsoft Dynamics CRM Versions Supported
This connector supports the following versions:
Microsoft Dynamics CRM 4.0 (on-premise and online)
Connector-Specific Notes
There are two requirements for the Microsoft Dynamics CRM 4.0 connector:
Version 3.0 or above of the .NET Framework must be installed on the system running DataConnect
The .dll files associated with the connector must be registered
The DataConnect installer automatically registers Microsoft CRM .dll files for .NET Framework version 3.0 and above.
To verify the version installed on your system, select the Microsoft CRM entry in the Add/Remove Programs section of the Windows Control Panel or look in the .NET Framework directory (for example, C:\Windows\Microsoft.NET\Framework). If the Microsoft CRM 4.0 .dll files were not registered automatically, use the following steps to register them.
Note:  You must be logged in as Administrator to run the batch file to register the Microsoft CRM .dll files.
To register Microsoft CRM 4.0 .dll files
1. Ensure that the Microsoft .NET Framework version 3.0 or later is installed.
2. Run the registerDJMsCrmComObj4.bat file located in the <installdir>\runtime\di9 directory.
A message confirming that the operation completed is displayed.
Connection Information
Note:  If you receive an error stating that the connector may not be licensed when you select the Microsoft Dynamics CRM 4.0 connector in Map Designer, you may need an updated license file. Contact your account executive.
To establish a connection to Microsoft Dynamics CRM 4.0, you must provide the following information:
Server
Enter one of the following:
Server Name
IP Address of the Server, or
localhost (if DataConnect and the Microsoft Dynamics CRM server are on the same machine)
User ID
Username for connecting to the Microsoft Dynamics CRM Server.
For On-premise – Specify the domain name, a backslash, and the user name (DomainName\Username).
For Online – Specify the email ID in the username field.
Password
Password for connecting to the Microsoft Dynamics CRM Server.
Organization
On-premise – Specify the company organization name.
Online – The organization name will be fetched automatically for the given server and username.
Properties
You can set the following source (S) and target (T) properties.
Property
S/T
Description
Authentication
ST
The authentication protocol for your source or target data:
Default - Uses Windows cached credentials. If necessary, add or edit your user name and password information in the Manage Passwords section of the Windows Control Panel.
Basic
Digest
NTLM
Anonymous
ServiceType
ST
Select the service type for connecting to Microsoft Dynamics CRM:
On-premise (default)
Online
Update Nulls
T
Select this option to allow null values from source rows to overwrite existing data in the target rows during an update operation. The available options are Always (the default), Null only, and Never nulls.
Batch Response
T
In delete operations, one batch response entry exists for every delete action.
In insert and update operations, one batch response entry exists for every insert or update action.
The return code for an individual operation shows the result of that operation. If the operation succeeds, the return code is 0.
For failed operations, the error Description tag shows the detailed error message, including any messages returned from Microsoft Dynamics CRM 4.0 as well as errors generated by DataConnect. When the return code is 0 (success), this tag returns the unique ID of the entity for which the operation was successful.
MaxBatch
S
Controls how many records the server is asked to send at one time (read operations only). If you use a large number in this setting, you may see faster transformation run times. However, larger numbers may increase the need for memory. If you experience memory problems, lower the number.
The default is 500 and the maximum is 2000. To change this property, overtype the default with a new value. For inserts, the recommended maximum is 500; for queries, values up to 2000 are acceptable.
WhereStmt
S
Provides a pass-through mechanism where advanced users can construct the Where clause of the query themselves. It can be used as an alternative to writing a lengthy query statement in the Query Statement text box. This might be used to get the CRM server to filter the data based on a condition before it is sent to Map Designer. Omit WHERE when you enter the clause. There is no default for this setting. Do not apply the WhereStmt when the source connection is a Select statement. If you are using a Select statement, you should include the Where clause in that statement. The intent of this property is to enable data filtering when selecting a table.
Operations
The following operations are supported by the Microsoft Dynamics CRM Connector.
Create (C)
Read (R)
Update (U)
Delete (D)
Entity Support
The Microsoft Dynamics CRM Connector supports standard and custom entities.
Data Types
These are the data types of the fields in your target data table. If you are appending data to an existing table, the data type of each field uses the data type in the selected table by default. The following data types are available in Microsoft Dynamics CRM 4.0:
Boolean
Big Decimal
Double
Float
Integer
String
Batch Response
A batch response file contains a group of entries for each connector used in a batch operation. Each action in the operation generates a batch response entry. These entries often give detailed information, such as errors, that can be used for troubleshooting.
Sample XML
In delete operations, one batch response entry exists for every delete action.
In insert and update operations, one batch response entry exists for every insert or update action.
The return code for an individual operation shows the result of that operation. If the operation succeeds, the return code is 0.
For failed operations, the error Description tag shows the detailed error message, including any messages returned from Microsoft Dynamics CRM 4.0 as well as errors generated by DataConnect. When the return code is 0 (success), this tag returns the unique ID of the entity for which the operation was successful.
<BatchResponse:batchResult xmlns:BatchResponse="http://www.pervasive.com/BatchResponse">
  <BatchResponse:results>
    <BatchResponse:result>
      <BatchResponse:returnCode>0</BatchResponse:returnCode>
      <BatchResponse:objectContainerName>PhoneCall</BatchResponse:objectContainerName>
      <BatchResponse:objectID>eb770db8-b581-e011-9673-1cc1de9bf0ef</BatchResponse:objectID>
      <BatchResponse:updateOperation>UPDATE</BatchResponse:updateOperation>
    </BatchResponse:result>
    <BatchResponse:result>
      <BatchResponse:returnCode>0</BatchResponse:returnCode>
      <BatchResponse:errorDescription>13</BatchResponse:errorDescription>
      <BatchResponse:objectContainerName>PhoneCall</BatchResponse:objectContainerName>
      <BatchResponse:updateOperation>UPDATE</BatchResponse:updateOperation>
    </BatchResponse:result>
  </BatchResponse:results>
  <BatchResponse:componentInfo>
    <BatchResponse:returnCode>0</BatchResponse:returnCode>
    <BatchResponse:componentName>MSCrm</BatchResponse:componentName>
  </BatchResponse:componentInfo>
</BatchResponse:batchResult>
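A batch response file of the form shown above can also be inspected programmatically, for example to capture the system-generated object IDs mentioned earlier. The following Python sketch is illustrative only; the result values in the embedded document are made up, and the sketch simply separates successes from failures by return code:

```python
import xml.etree.ElementTree as ET

NS = {"br": "http://www.pervasive.com/BatchResponse"}

# Minimal batch-response document mirroring the sample structure above
# (the objectID and error text are illustrative values, not real output).
batch_xml = """\
<br:batchResult xmlns:br="http://www.pervasive.com/BatchResponse">
  <br:results>
    <br:result>
      <br:returnCode>0</br:returnCode>
      <br:objectContainerName>PhoneCall</br:objectContainerName>
      <br:objectID>eb770db8-b581-e011-9673-1cc1de9bf0ef</br:objectID>
      <br:updateOperation>UPDATE</br:updateOperation>
    </br:result>
    <br:result>
      <br:returnCode>13</br:returnCode>
      <br:errorDescription>Example error text</br:errorDescription>
    </br:result>
  </br:results>
</br:batchResult>"""

succeeded, failed = [], []
for result in ET.fromstring(batch_xml).findall(".//br:result", NS):
    # Per the description above: return code 0 means success, and the
    # object ID is present; otherwise the error description explains why.
    if result.findtext("br:returnCode", namespaces=NS) == "0":
        succeeded.append(result.findtext("br:objectID", namespaces=NS))
    else:
        failed.append(result.findtext("br:errorDescription", namespaces=NS))

print(succeeded, failed)
```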
FilterXML
This option applies filter conditions that limit the fields and rows fetched from the source for the selected table.
Sample XML
<fetch mapping='logical'> <entity name='contact'><all-attributes/></entity></fetch>
<fetch mapping='logical'> <entity name='contact'><attribute name = 'contactid'/></entity></fetch>
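Fetch strings like the samples above can also be assembled programmatically rather than typed by hand. In this hypothetical Python sketch, the entity, attribute, and filter values are assumptions chosen for illustration; the filter and condition elements follow standard FetchXML syntax:

```python
import xml.etree.ElementTree as ET

# Build a fetch document that limits both fields (one attribute) and
# rows (a filter condition) for the contact entity.
fetch = ET.Element("fetch", mapping="logical")
entity = ET.SubElement(fetch, "entity", name="contact")
ET.SubElement(entity, "attribute", name="contactid")
flt = ET.SubElement(entity, "filter")
ET.SubElement(flt, "condition", attribute="statecode", operator="eq", value="0")

fetch_str = ET.tostring(fetch, encoding="unicode")
print(fetch_str)
```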
Additional Information
Field Width
Microsoft Dynamics CRM 4.0 does not return the field size for all entities. If the field width is not available, it defaults to 40. If a value, such as a text string, appears to be truncated, manually adjust the field length.
Source Keys and Sorting and Source Filters
All sorting and filtering is done client side. All records will be read before any filtering or sorting is done.
Supported Authentications
The current version of Microsoft Dynamics CRM 4.0 supports Windows Live token-based authentication, claims-based authentication, and NTLM authentication. This connector supports Windows Live token-based authentication and NTLM authentication, since claims-based authentication is limited to .NET-based application access.
Microsoft Dynamics GP 9 and 10
Microsoft Dynamics GP is an accounting software package that runs on top of a Microsoft SQL Server database. Before its purchase by Microsoft, it was called Great Plains Software. This connector provides a means of retrieving and manipulating data in the database back end of a Microsoft Dynamics GP system. It supports connections to versions 9 and 10.
These connectors rely on Microsoft Dynamics GP eConnect, which uses SQL stored procedures to implement business logic rules, validate data for Great Plains, and ensure good performance.
These connectors set field width in bytes. What varies is the number of characters that fit into a given field. For more details, see Determining Field Width in Characters or Bytes.
Before You Begin
The following are requirements before you can use the connector.
Install a Microsoft Dynamics GP 9 or 10 compliant web service. For details, see the MSDN library.
Version 9 only: To avoid receiving an ISO 4217 currency code error, install web server software update 924546, which you can download from the Microsoft Dynamics GP knowledge base.
Connectivity Pointers
If you have problems with data truncation, see the Microsoft Dynamics GP Web Services Reference to determine the length of each key field. Then you can manually change the field sizes using the integration platform. You can download this document from the Learn tab of the Microsoft Dynamics GP Developer Center home page.
Connector-Specific Notes
Date fields. Date fields sent from Microsoft Dynamics GP contain a time value, even though the time value is not relevant for these fields. For example, assume that the web service response contains <DateCreated>2008-07-30 00:00:00</DateCreated>. If you map this field to an ASCII Delimited file with no date conversions, the following data is returned: 7/30/2008 12:00:00 am.
CRUD Operations. The Microsoft interface to Microsoft Dynamics GP data does not support all fields in a Dynamics GP system, and some fields are available only for read access. The create, delete, get, and update methods of the Microsoft interface provide insight into which CRUD operations this connector can perform.
Error Messages. This connector sometimes returns error messages from the originating server that are not useful when troubleshooting server-side issues. Microsoft Dynamics GP offers information that may be more helpful in these situations.
Query Statements. Query statements are not supported in this connector.
For more information on the above limitations, see the Microsoft Dynamics GP Web Services Reference. You can download this document from the Learn tab of the Microsoft Dynamics GP Developer Center home page.
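As a rough illustration of the date-field behavior noted above (a sketch, not connector code), the irrelevant time portion can be dropped by parsing the value and reformatting only the date:

```python
from datetime import datetime

# The GP web service sends values such as "2008-07-30 00:00:00";
# parsing them and formatting only the date discards the time portion.
raw = "2008-07-30 00:00:00"
created = datetime.strptime(raw, "%Y-%m-%d %H:%M:%S")
date_only = created.strftime("%m/%d/%Y")
print(date_only)  # 07/30/2008
```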
Property Options
You can set the following source (S) and target (T) properties.
Property
S/T
Version
Description
BatchResponse
T
9, 10
Optional. Creates a BatchResponse file, which is a reporting mechanism for the connector. The file provides detailed results for each object in a batch where the batch size is greater than 1.
Obtaining detailed results is useful in the following cases:
You need to capture system-generated object IDs for use in future updates.
You need to correlate an error with its object and have enough information about the error for exception handling and error diagnosis.
For more details, see Batch Response File.
BatchSize
S
9, 10
Optional. Number of source records cached before processing them. Default is zero.
CultureName
ST
9, 10
Optional. Specifies the culture (locale) of the user making the web service call. Enter a culture name.
CurrencyType
ST
9, 10
Optional. Specifies how currency information is used for the web service call. This property is used only for documents that support multicurrency. If multicurrency is not supported, Local is used.
Transactional (default) – The originating currency amount is used.
Local – The local currency amount is used.
Domain
ST
9, 10
Required. Enter the domain name to use for NTLM authentication.
FlushFrequency
T
9, 10
Optional. Number of operations the connector caches before writing them to the target. Default is zero.
IgnoreRecordErrors
S
10
Returns an error when retrieving any record if set to false (default). If true, records that cause errors are ignored.
OrganizationKey
ST
9, 10
Required. Specifies the unique identifier of the company where the operation should be executed. The web service determines the appropriate company database to use.
PreValidate
T
9, 10
Optional. Analyzes the data before sending it to the server, which can be time-consuming because this validation is not optimized. Default is false.
RoleKey
ST
9, 10
Optional. Permissions that allow access to specific areas of Microsoft Dynamics GP. If this property is supplied, the role is used to choose the policy instance and appropriate behavior options for the web service call. If you do not supply a role key, the web service attempts to find a role for the user and company specified in the context object. If only one role is found, that role is used. If more than one role is found, or no roles are found, the default role is used.
For more information, see the Microsoft Dynamics GP Web Services Reference. You can download this document from the Learn tab of the Microsoft Dynamics GP Developer Center home page.
ShowChildren
ST
9, 10
Optional. Determines whether to add the child record types to the list of available record types.
False (default in target) – List only the parent-level record types.
True (default in source) – Include child record types, allowing you to write data to them.
Note:  On the target, if you want to manipulate any child record types, set this option to true.
Data Types
These are the data types of the fields in a table.
If you are appending data to an existing table, the data type of each field uses the data type in the selected table by default.
The data types displayed for the fields of tables available in Microsoft Dynamics GP are the defaults specified by the web service interface.
bigdecimal
biginteger
boolean
byte
bytes
character
date
double
float
integer
long
record
short
string
text
Additional Considerations
The following sections discuss various options supported in this connector.
Viewing Hierarchical Data
Understanding Record Names
Data That Can Be Flattened
Data That Represents a Choice
Data That Represents a List of Hierarchical Data
Viewing Hierarchical Data
The hierarchical data that is delivered from GP is unrolled into a series of flat records. Source child records are associated with their appropriate parents by ordering of the data.
The hierarchical data contained within a top-level record type, such as Customer, falls into three categories:
Data that can be flattened
Data that represents a choice
Data that represents a list of more hierarchical data
Understanding Record Names
Record names follow a certain pattern. For instance, if the record name does not start with an open bracket, it is a top-level parent. Otherwise, it uses the following pattern:
[parent!child1!child2!...childN]child
where child1 is a child of parent, child2 is a child of child1, and so on.
For example, sales orders can contain multiple lines and each line can contain multiple freight taxes. In this case, the connector has the following record types:
SalesOrder
[SalesOrder]Lines
[SalesOrder!Lines]FreightTaxes
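The naming pattern can be decoded mechanically. The following Python sketch (a hypothetical helper, not part of the connector) splits a record-type name into its ancestor chain and record name:

```python
def parse_record_name(name: str):
    """Split a record-type name of the form [parent!child1!...]child into
    (ancestor list, record name). Top-level parents have no ancestors."""
    if not name.startswith("["):
        return [], name
    path, _, child = name.partition("]")
    return path[1:].split("!"), child

print(parse_record_name("SalesOrder"))                      # ([], 'SalesOrder')
print(parse_record_name("[SalesOrder!Lines]FreightTaxes"))  # (['SalesOrder', 'Lines'], 'FreightTaxes')
```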
Data That Can Be Flattened
Hierarchical data can be flattened into a number of fields contained within its parent. This data is not exposed as a separate record type, and is broken down into a series of fields that are attached to its parent.
For example, the Customer record type contains the following fields:
CashGLAccountKey_Id
CashGLAccountKey_IsEncrypted
The GP back-end sends a Customer record type containing a CashGLAccountKey with two child fields:
Id
IsEncrypted
The depth of flattening correlates directly to the number of underscores present in the field name.
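The flattening rule can be sketched as a recursive walk that joins child field names to the parent name with underscores. This is an illustration under the assumption that the hierarchical record is represented as nested dictionaries; the field values are made up:

```python
def flatten(record: dict, prefix: str = "") -> dict:
    """Flatten nested child records into parent-level fields whose names
    are joined by underscores, one underscore per level of nesting."""
    flat = {}
    for key, value in record.items():
        name = f"{prefix}{key}"
        if isinstance(value, dict):
            flat.update(flatten(value, f"{name}_"))
        else:
            flat[name] = value
    return flat

customer = {"CashGLAccountKey": {"Id": "GL-100", "IsEncrypted": False}}
print(flatten(customer))  # {'CashGLAccountKey_Id': 'GL-100', 'CashGLAccountKey_IsEncrypted': False}
```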
Data That Represents a Choice
Sections of hierarchical data can sometimes differ when data is delivered. The connector creates record types to deal with this. A marker record type with the suffix _C acts as a flag indicating that the record type that follows represents the data that was actually delivered. Each marker record type is followed by one of a number of record types that represent the available choices.
In the GLTransaction record type, each GLTransaction can contain [GLTransaction]Lines records.
Each [GLTransaction]Lines instance contains the record type, [GLTransaction!Lines]DebitAmount_C.
Each [GLTransaction!Lines]DebitAmount_C can be followed by either a [GLTransaction!Lines!DebitAmount_C]MoneyAmount or a [GLTransaction!Lines!DebitAmount_C]Quantity.
Assume that you received the following from GP:
<gltransaction>
  <id>100</id>
  <line><num>1</num><money_amount><value>100</value></money_amount></line>
  <line><num>2</num><quantity><number>5</number></quantity></line>
</gltransaction>
The connector produces the following record series:
GLTransaction: id = 100
[GLTransaction]Lines: num = 1
[GLTransaction!Lines]DebitAmount_C
[GLTransaction!Lines!DebitAmount_C]MoneyAmount: value = 100
[GLTransaction]Lines: num = 2
[GLTransaction!Lines]DebitAmount_C
[GLTransaction!Lines!DebitAmount_C]Quantity: number = 5
Data That Represents a List of Hierarchical Data
Some record types contain multiple instances of a grouping of fields. For instance, a GLTransaction can contain many Lines that are treated as record types and are given prefixes that indicate the parents.
For example, assume GP delivers two sales orders and that each order contains two lines. The connector produces the following records:
SalesOrder: field = value, ...
[SalesOrder]Lines: line_field = value, ...
[SalesOrder]Lines: line_field = value, ...
SalesOrder: field = value, ...
[SalesOrder]Lines: line_field = value, ...
[SalesOrder]Lines: line_field = value, ...
The record type is [SalesOrder]Lines as opposed to [SalesOrder]Line because record-type names are taken from the GP variable names. Remember that a record of type [SalesOrder]Lines represents one line on a sales order.
Microsoft IIS Extended Logfile Format
The Microsoft Internet Information Server (IIS) produces a log file called the IIS Extended Logfile Format file. With this connector, the integration platform can read IIS Extended Logfile Format source files but cannot write them. Therefore, this log file format is not available as a target.
The log file format includes the information in the Microsoft IIS standard log file format, plus user agent and referrer data for Web, Gopher and FTP servers that run under the Microsoft Internet Information Server.
The following is an example of the format:
153.36.62.27, -, 3/1/96, 0:00:00, W3SVC, WWW, 198.105.232.4, 3565, 245, 2357, 200, 0, GET, /MSOffice/Images/button7a.PNG, Mozilla/1.22 (compatible; MSIE 5.0; Windows XP), http://www.microsoft.com/msoffice/
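Because the sample record above is comma-separated, it can be split into its fields directly. In this Python sketch, only the splitting is taken from the sample; the field names are assumptions based on the standard IIS log layout plus the user-agent and referrer fields that the extended format adds:

```python
# The sample IIS Extended Logfile record from the documentation above.
record = ("153.36.62.27, -, 3/1/96, 0:00:00, W3SVC, WWW, 198.105.232.4, "
          "3565, 245, 2357, 200, 0, GET, /MSOffice/Images/button7a.PNG, "
          "Mozilla/1.22 (compatible; MSIE 5.0; Windows XP), "
          "http://www.microsoft.com/msoffice/")

# Assumed field names for illustration (standard IIS fields plus the
# user-agent and referrer fields of the extended format).
names = ["client_ip", "username", "date", "time", "service", "server",
         "server_ip", "time_taken", "bytes_received", "bytes_sent",
         "status", "win32_status", "method", "target",
         "user_agent", "referrer"]

fields = dict(zip(names, record.split(", ")))
print(fields["status"], fields["user_agent"])
```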
This connector sets field width in bytes. What actually varies is the number of characters that fit into a given field. For example, if a field width is 10 bytes and the encoding is Shift-JIS, the field can hold a minimum of 5 characters and a maximum of 10. This means that if you try to write 8 Japanese characters, data is truncated (the Kanji characters all take 2 bytes).
Connector-Specific Notes
The integration platform can read Microsoft IIS Extended Logfile Format files but cannot write them. Therefore, this log file format is not available as a target.
Property Options
You can set the following source (S) and target (T) properties.
Property
S/T
Description
AlternateFieldSeparator
S
Most data files have only one field separator between all the fields; however, it is possible to have more than one field separator. If your source file has one field separator between some of the fields and a different separator between other fields, you can specify the second field separator here. Otherwise, you should leave the setting at None (the default).
The alternate field separators available from the list are none (default), comma, tab, space, carriage return-line feed, line feed, carriage return, line feed-carriage return, ctrl-R, and pipe (|). To select a separator from the list, click AlternateFieldSeparator. If you have an alternate field separator other than one from the list, you can type it here.
If the alternate field separator is not one of the choices in the list and is a printable character, highlight the CR-LF and then type the correct character. For example, if the separator is an asterisk, type an asterisk from the keyboard.
If the field separator is not a printable character, replace CR-LF with a backslash, an X, and the hexadecimal value for the separator. For example, if the separator is a check mark, then enter \XFB. For a list of the 256 standard and extended ASCII characters, search for "hex values" in the documentation.
AutomaticStyling
S
Automatic styling changes the way Microsoft IIS Extended Logfile data is read or written. By default, AutomaticStyling is set to false, causing all data to be read or written as Text. When set to true, the integration platform automatically detects and formats particular data types, such as numeric and date fields.
During the transformation process, automatic styling ensures that a date field in the source file is formatted as a date field in the target file, as opposed to character or text data. If your source file contains zip code data, you may want to leave AutomaticStyling set to false, so that leading zeros in some zip codes in the eastern United States are not deleted.
FieldEndDelimiter
S
The default delimiter is None (to read from or write to a file with no delimiters). If your source file contains separators or delimiters, you must specify them by clicking the Source Properties icon in the toolbar and choosing the correct separators and delimiters for your source data file. If you do not know what the separators or delimiters are in your source file, search for "hex browser" in the documentation.
FieldSeparator
S
The integration platform assumes that Microsoft IIS Extended Logfile Format files use a comma between each field. To specify some other field separator, click FieldSeparator to display the list of options. The options are comma (default), tab, space, carriage return-line feed, line feed, carriage return, line feed-carriage return, Ctrl+R, a pipe (|), and no field separator. If you need a field separator other than one from the list, you can type it here.
If the field separator is not one of the choices from the list and is a printable character, highlight the CR-LF and then type the correct character. For example, if the separator is an asterisk (*), type an asterisk from the keyboard.
If the field separator is not a printable character, replace CR-LF with a backslash, an X, and the hexadecimal value for the separator. For example, if the separator is a check mark, then enter \XFB. For a list of the 256 standard and extended ASCII characters, search for "hex values" in the documentation.
FieldStartDelimiter
S
The default delimiter is None (to read from or write to a file with no delimiters). If your source file contains separators or delimiters, you must specify them by clicking the Source Properties icon in the toolbar and choosing the correct separators and delimiters for your source data file. If you do not know what the separators or delimiters are in your source file, search for "hex browser" in the documentation.
Header
S
In some files, the first record is a header record. For source data, you can remove it from the input data and cause the header titles to be used automatically as field names. For target data, you can cause the field names in your source data to automatically create a header record in your target file. To identify a header record, set Header to true. The default is false.
RecordFieldCount
S
If your Microsoft IIS Extended Logfile Format data file has field separators, but no record separator, or if it has the same separator for both the fields and the records, you should specify the RecordSeparator (most likely a blank line), leave the AlternateFieldSeparator option blank and enter the exact number of fields per record in this box. The default value is zero.
RecordSeparator
S
A Microsoft IIS Extended Logfile Format file is presumed to have a carriage return-line feed (CR-LF) between records. To use a different character, select from the choices in the list. To use a separator other than one in the list, enter it here.
If the record separator is not one of the choices from the list and is a printable character, highlight the CR-LF and then type the correct character. For example, if the separator is a pipe ( | ), type a pipe from the keyboard.
If the record separator is not a printable character, replace CR-LF with a backslash, an X, and the hexadecimal value for the separator. For example, if the separator is a check mark, then enter \XFB. For a list of the 256 standard and extended ASCII characters, search for "hex values" in the documentation.
StartOffset
S
If your source data file starts with characters that need to be excluded from the transformation, set the StartOffset option to specify at which byte of the file to begin. The default value is 106. The correct value may be determined by using the Hex Browser. For a list of the 256 standard and extended ASCII characters, search for "hex values" in the documentation.
Note:  This property is set in number of bytes, not characters.
StripLeadingBlanks
S
Allows you to determine if leading blanks are stripped from all data fields. The default is false; leading blanks are not stripped from Microsoft IIS Extended Logfile Format data. To remove them, set StripLeadingBlanks to true.
StripTrailingBlanks
S
Determines whether trailing blanks are stripped from the data fields. By default, the integration platform does not strip trailing blanks in Microsoft IIS Extended Logfile Format data. To remove them, set StripTrailingBlanks to true.
StyleSampleSize
S
Set the number of records (starting with record 1) that are analyzed to set a default width for each source field. The default is 5000. You can change the value to any number between 1 and the total number of records in your source file. As the number gets larger, more time is required to analyze the file, and it may be necessary to analyze every record to ensure no data is truncated.
To change the value, highlight the default value, and enter a new one.
CodePage
S
This translation table determines which encoding to use for reading and writing data. The default is ANSI, the standard in the US.
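Several separator properties above accept a non-printable separator written as a backslash, an X, and a hexadecimal value (for example, \XFB). A hedged sketch of how such a specification maps to a raw byte; the helper name is hypothetical:

```python
def decode_separator(spec: str) -> bytes:
    """Turn a separator spec such as r'\XFB' into the raw byte it names;
    printable characters pass through unchanged."""
    if spec.upper().startswith("\\X"):
        return bytes([int(spec[2:], 16)])
    return spec.encode("latin-1")

print(decode_separator(r"\XFB"))  # b'\xfb'
print(decode_separator("|"))      # b'|'
```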
Microsoft SQL Azure
The Microsoft SQL Azure Database is a cloud-based platform that uses SQL Server technologies. The relational database management service leverages the Microsoft distribution data center, which provides enterprise-level security and scalability.
The Microsoft SQL Azure connector allows you to create, read, update, merge, and delete data in Microsoft SQL Azure database tables.
Connector-Specific Notes
To connect to a data table in a Microsoft SQL Azure database, you must have SQL Server 2008 Native Client installed on your server.
To connect to a Microsoft SQL Azure database, enter the name of the server hosting the client software, your user ID, password (if needed), and database name.
The name of the server must be the fully qualified instance name in the format: instancename.database.windows.net
where instancename is the unique name of your instance.
For example: tcp:myinstancename.database.windows.net
Note:  The Microsoft SQL Azure Multimode connector makes a direct connection to the Microsoft SQL Azure ODBC driver. No DSN is required.
You can write data to an existing Microsoft SQL Azure target either in Replace or Append mode. The Replace output mode removes and recreates the target file or table and destroys any data contained in that file or table. The Append output mode adds data to your tables.
The WhereStmt property, which allows you to filter table data, is supported with only the Table/View option.
You can use an EZScript to auto-generate a GUID for a variable declared as the uniqueidentifier data type. See Generating a Random Unique Identifier.
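The EZscript approach is covered in Generating a Random Unique Identifier; as a language-neutral illustration (a sketch using Python's standard library, not the EZscript syntax itself), a random GUID in the format accepted by a uniqueidentifier column can be produced like this:

```python
import uuid

def new_guid() -> str:
    """Return a random (version 4) GUID in the canonical
    8-4-4-4-12 hex format used by uniqueidentifier columns."""
    return str(uuid.uuid4()).upper()

print(new_guid())  # e.g. 3F2504E0-4F89-41D3-9A0C-0305E82C3301
```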
Property Options
You can set the following source (S) and target (T) properties.
Property
S/T
Description
Autocommit
T
Allows automatic commitment of changes as they are made by each SQL statement, instead of waiting until the end of the transaction. If set to true, you cannot roll back changes. Default is false.
BulkOperations
T
Determines if an insert statement is executed for each record or a bulk add is executed for each record. Default is false, the slower setting. To maximize speed and instruct the integration platform to use a bulk add, set to true.
CommitFrequency
T
Controls how often data is committed to the database. Default is zero, meaning that data is committed at the end of the transformation, allowing rollback on error. This is the slowest setting. When doing large transformations, this is not practical as it may produce too many transaction log entries. Setting the CommitFrequency to some nonzero value tells the connector to do a database commit after inserting or updating the specified number of records. This keeps the transaction log from getting too large but limits the restartability of the transformation.
ConstraintDDL
T
Specifies additional SQL data definition language statements to be executed after the target table is created. This is similar to the support provided for SQL passthrough in the SQL import connectors. Each line must be a valid SQL DDL statement. No default exists for this property. This option works only in Replace mode.
CursorType
S
Type of cursor to use for retrieving records from source table. Choices are Forward Only, Static, and Dynamic. Default is Forward Only. For more information, see Cursors.
Encoding
ST
Character encoding used with XML source and target files.
Note:  Shift-JIS encoding is used only in Japanese operating systems. UCS-2 is no longer a valid encoding name, but you may use UCS2 by changing the string value in the XML file.
Default is OEM.
IdentifierQuotes
ST
Quoted identifiers are used to make the SQL statement parsable and to distinguish between columns and character data in SQL statements. For example, Oracle uses double quotes for column and table names in SQL statements and uses single quotes for character data. In a SQL statement, you should enclose identifiers that contain special characters or that match keywords in identifier quote characters (such identifiers are known as delimited identifiers in SQL-92). For example, the Accounts Receivable identifier is quoted in the following Select statement:
SELECT * FROM "Accounts Receivable"
If you do not use identifier quotes, the parser assumes two tables, Accounts and Receivable, and returns a syntax error that they are not separated by a comma.
The integration platform provides four options for IdentifierQuotes: Default, None, " and '. Default is the default.
MaxDataLength
ST
Maximum data length for long data types. Default is 1 MB. You can reset this number based on available memory and target requirements.
Some ODBC drivers have limitations concerning the maximum data length they can handle. If you choose an application (ODBC) source or target connector and the default setting is not 1 MB, the integration platform presets the default to respect the capabilities of that particular ODBC driver, and it is not recommended to set MaxDataLength any higher.
PrimaryKey
T
Sets a list of field names used to make the primary key. Field names are delimited by commas. If the PrimaryKey property contains one or more field names, these names are included in the SQL Create statement when the connector is in replace mode. No default exists for this property.
This property has one additional requirement for use. The driver must support integrity enhancement facility (IEF).
SystemTables
ST
If set to true, this property allows you to see all tables created by the DBA in the database. The system table names appear in the table list. Default is false.
Note:  This property is applicable only if the user is logged onto the database as the database administrator. Only the DBA has access to system tables.
TransactionIsolation
ST
Allows setting of the transaction isolation level for reading or writing tables.
The ANSI SQL 2 standard defines three specific ways in which serializability of a transaction may be violated: P1 (Dirty Read), P2 (Nonrepeatable Read), and P3 (Phantoms).
The following isolation levels are supported. Default is Serializable.
Read_Uncommitted – Permits P1, P2, and P3.
Read_Committed – Permits P2 and P3. Does not permit P1.
Repeatable_Read – Permits P3. Does not permit P1 and P2.
Serializable (default) – Does not permit P1, P2, or P3.
For more information, see the Microsoft ODBC SDK documentation.
UseCursors
T
Turns cursor support on and off. Default is false. If set to true and the specified driver does not support cursor inserts, the integration platform uses the SQL INSERT mode of adding records.
For exports, cursor support is supposed to enhance the performance of inserting records. This appears to be the case for desktop databases. For database servers, insert speed shows no noticeable change. These servers execute prepared queries as quickly as they handle cursor inserts.
Some drivers require that the target table be indexed, and if not, then positioned updates (cursors) are not allowed. Two additional properties in the ODBC export connector address this issue: PrimaryKey and ConstraintDDL.
Views
ST
If set to true (default), allows you to see views. View names appear in the table list along with table names.
WhereStmt
S
Provides a passthrough mechanism for SQL connectors where advanced users can construct the Where clause of the SQL query. Omit WHERE when you enter the clause. No default exists for this property.
Note:  When the source connection is a Select statement, do not apply the WhereStmt. Instead, include the Where clause in your Select statements. This property enables data filtering when you select a table.
ArraySize
T
Determines the number of rows to be sent to the server at one time. The default value is 1, meaning each row is individually sent to the server. Larger values will buffer multiple rows and send them all at once. While this improves the speed, it affects error reporting (a server error won't be detected/reported until the next batch of records is sent to the server).
The maximum value allowed for this property is 100000. While the connector allows the value to be set that high, many drivers have lower limits. The connector will log a message indicating if the driver is forcing a lower value for the array size. In addition, the connector does not support arrays when there is a LOB-type field in the table, or when the (maximum) length of a character-type field is longer than 32767 characters. In these cases, a message will be logged indicating the array size has been reduced to 1.
Due to the way the connector attempts to support older drivers, the array support requires either BulkOperations be true (which means UseCursors must also be true), or both must be false. The condition where BulkOperations is false and UseCursors is true will cause the array size to be reset to 1, with a message logged indicating this condition occurred.
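The trade-offs behind CommitFrequency and ArraySize can be sketched with Python's standard sqlite3 module (an illustration of periodic commits and buffered multi-row inserts, not the connector's actual implementation; the table and values are invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE target (id INTEGER, name TEXT)")

rows = [(i, f"rec{i}") for i in range(10)]
COMMIT_FREQUENCY = 4   # commit after every 4 records (0 would mean one commit at the end)
ARRAY_SIZE = 2         # buffer 2 rows per round trip to the server

buffer = []
written = 0
for row in rows:
    buffer.append(row)
    if len(buffer) == ARRAY_SIZE:
        # Send the whole batch in one call instead of one insert per record.
        conn.executemany("INSERT INTO target VALUES (?, ?)", buffer)
        written += len(buffer)
        buffer.clear()
    if written and written % COMMIT_FREQUENCY == 0:
        # Keeps the transaction log small, but only the last uncommitted
        # batch can be rolled back on error.
        conn.commit()
if buffer:
    conn.executemany("INSERT INTO target VALUES (?, ?)", buffer)
conn.commit()

print(conn.execute("SELECT COUNT(*) FROM target").fetchone()[0])  # 10
```

Larger ARRAY_SIZE values reduce round trips but delay error detection, matching the behavior described for the ArraySize property above.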
Data Types
The following data types (the types of the fields in a table) are supported.
Note:  If you are appending data to an existing table, the data type of each field uses the data type in the selected table by default.
bigint
bigint identity
binary
bit
char
date
datetime
datetime2
datetimeoffset
decimal
decimal() identity
float
image
int
int identity
money
nchar
ntext
nvarchar
numeric
numeric() identity
real
smalldatetime
smallint
smallint identity
smallmoney
sql_variant
sysname
text
time
timestamp
tinyint
tinyint identity
uniqueidentifier
varbinary
varchar
xml
Microsoft SQL Azure Multimode
Microsoft SQL Azure Database is a cloud-based relational database management service that is based on SQL Server database technologies. Microsoft provides the physical administration of the database management service through its distribution data center.
You can use the Microsoft SQL Azure Multimode connector to write to Microsoft SQL Azure database tables. It is not available as a source connector.
Connector-Specific Notes
Connectivity
To connect to a data table in a Microsoft SQL Azure database, you must have SQL Server 2008 Native Client installed on your server.
You can use an EZScript to auto-generate a GUID for a variable declared as the uniqueidentifier data type. See Generating a Random Unique Identifier.
To connect to a Microsoft SQL Azure database, enter the name of the server that is hosting the software, your user ID, password (if needed), and database name.
The name of the server must be the fully qualified instance name in the format: instancename.database.windows.net
where instancename is the unique name of your instance.
For example: tcp:myinstancename.database.windows.net
Note:  The Microsoft SQL Azure Multimode connector makes a direct connection to the Microsoft SQL Azure ODBC driver. No DSN is required.
Target schema modification
Modification to column names, data types, and sizes are not permitted.
Query statement support
Microsoft SQL Azure supports most Transact-SQL statements. For a complete list of unsupported Transact-SQL statements, go to http://msdn.microsoft.com/en-us/library/ee336281.aspx.
The target format must support query statements for transformation to work with this connector.
Property Options
You can set the following source (S) and target (T) properties.
Property
S/T
Description
Autocommit
T
Allows automatic commitment of changes as they are made by each SQL statement, instead of waiting until the end of the transaction. If set to true, you cannot roll back changes. Default is false.
Encoding
T
Character encoding used with XML source and target files.
Note:  Shift-JIS encoding is used only in Japanese operating systems. UCS-2 is no longer a valid encoding name, but you may use UCS2 by changing the string value in the XML file.
Default is OEM.
MaxDataLength
T
Maximum data length for long data types. Default is 1 MB. You can reset this number based on available memory and target requirements.
Some ODBC drivers have limitations concerning the maximum data length they can handle. If you choose an application (ODBC) source or target connector and the default setting is not 1 MB, the integration platform presets the default in respect for the capabilities of that particular ODBC driver, and it is not recommended to set MaxDataLength any higher.
Default is 1048576 bytes (1 MB).
SQL Log
T
The default is sql.log in the default installation directory. To use a different log, browse to the file or enter the path and file name.
SQL Output
T
Allows you to select bound or unbound mode and whether you want to write SQL statements to a SQL log or not. Keep in mind that bound mode is faster, since bind variables are used. Target Only is the default output:
Target Only - Uses bound mode, which uses bind variables. SQL statements are sent to the Target and not to the SQL log specified in the SQL Log property.
Target Only (Unbound mode) - Uses unbound mode, which does not use bind variables and sends the literal SQL statement to the database engine. SQL statements are sent to the Target and not to the SQL log specified in the SQL Log property.
Target and SQL Log - Sends SQL statements to the Target and to the SQL log specified in the SQL Log property.
SQL Log Only - Sends SQL statements only to the SQL log file specified in the SQL Log property.
SystemTables
T
If set to true, this property allows you to see all tables created by the DBA in the database. The system table names appear in the table list. Default is false.
Note:  This property is applicable only if the user is logged onto the database as the database administrator. Only the DBA has access to system tables.
TransactionIsolation
T
Allows setting of the transaction isolation level for reading or writing tables.
The ANSI SQL 2 standard defines three specific ways in which serializability of a transaction may be violated: P1 (Dirty Read), P2 (Nonrepeatable Read), and P3 (Phantoms).
The following isolation levels are supported. Default is Serializable.
Read_Uncommitted – Permits P1, P2, and P3.
Read_Committed – Permits P2 and P3. Does not permit P1.
Repeatable_Read – Permits P3. Does not permit P1 and P2.
Serializable (default) – Does not permit P1, P2, or P3.
For more information, see the Microsoft ODBC SDK documentation.
UpdateNullFields
T
Null values are sent to the database when inserting or updating records. Default is true. If set to false, null values are not sent to the database when you insert or update records; this forces the connector to operate in unbound mode, which may cause slower performance.
Views
T
If set to true (default), allows you to see views. View names appear in the table list along with table names.
IdentifierQuotes
T
All databases have what are called quoted identifiers. You use quoted identifiers to make the SQL statement parsable and to distinguish between columns and character data in SQL statements. For example, Oracle uses double quotes for column and table names in SQL statements and uses single quotes for character data. In a SQL statement, you should enclose identifiers that contain special characters or that match keywords in identifier quote characters (such identifiers are known as delimited identifiers in SQL-92). For example, the Accounts Receivable identifier is quoted in the following SELECT statement:
SELECT * FROM "Accounts Receivable"
If you do not use identifier quotes, the parser assumes there are two tables, Accounts and Receivable, and returns a syntax error that they are not separated by a comma. Options for IdentifierQuotes are Default, None, " and '.
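The effect of identifier quoting can be demonstrated with Python's standard sqlite3 module, which also accepts SQL-92 double-quoted delimited identifiers (an illustration only, reusing the Accounts Receivable example above):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# Without the double quotes, the parser would read "Accounts" and
# "Receivable" as two separate identifiers and raise a syntax error.
conn.execute('CREATE TABLE "Accounts Receivable" (invoice INTEGER, amount REAL)')
conn.execute('INSERT INTO "Accounts Receivable" VALUES (1, 250.0)')

rows = conn.execute('SELECT * FROM "Accounts Receivable"').fetchall()
print(rows)  # [(1, 250.0)]
```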
Data Types
The following data types (the types of the fields in a table) are supported:
bigint
bigint identity
binary
bit
char
date
datetime
datetime2
datetimeoffset
decimal
decimal() identity
float
image
int
int identity
money
nchar
ntext
nvarchar
numeric
numeric() identity
real
smalldatetime
smallint
smallint identity
smallmoney
sql_variant
sysname
text
time
timestamp
tinyint
tinyint identity
uniqueidentifier
varbinary
varchar
xml
MUMPS (ODBC)
Massachusetts General Hospital Utility Multi-Programming System (MUMPS) is a programming language that provides tools to support database management systems. MUMPS was originally used only by hospitals to access medical records. Now MUMPS is also used by other entities (such as banks, stock exchanges, and travel agencies) where multiple users need to access the same databases at the same time.
Because MUMPS values are stored as text strings, you can share the same MUMPS database between disparate architectures, such as VAX, DEC Alpha, SUN, IBM PC and HP workstations.
This connector sets field width in bytes. What actually varies is the number of characters that fit into a given field. For more details, see Determining Field Width in Characters or Bytes.
See Also
Installing an ODBC Driver
ODBC Connectivity Tips
Note:  The integration platform reads MUMPS tables using ODBC 3.x. For the procedure, and information about the property options, limitations, and source schemas, see ODBC 3.x.
MySQL (ODBC 3.5)
MySQL is an open-source, relational database management system. The integration platform connects to MySQL databases using the ODBC 3.5 connector.
For ODBC 3.5 property and source and target schema information, see ODBC 3.5.
Connector-Specific Notes
Case-Sensitive Names in Statements - Case sensitivity is controlled by the operating system, so consider case when naming databases and tables. Database and table names are case-insensitive in Windows and case-sensitive in Linux. Regardless of the operating system, case must be used consistently within a statement. For example, the following query fails because it refers to a table that is named "my_table" as "MY_TABLE" in the query statement:
SELECT * FROM my_table WHERE MY_TABLE.col=1;
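Written with consistent case, the same query succeeds on either operating system:

```sql
SELECT * FROM my_table WHERE my_table.col=1;
```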
Note:  Support for the native MySQL connector has been discontinued. Use the ODBC 3.5 connector to connect to MySQL databases. For connection information, see ODBC 3.5. For current driver information, see http://www.mysql.com/products/myodbc/faq_1.html and look for MyODBC_3.51.
Allowing Duplicate Values Upon Upsert - When you are using an Upsert action, you may receive an error saying that you cannot insert a duplicate value. To allow duplicate values, do the following:
1. In the Windows Control Panel, open Set up data sources (ODBC).
2. Select your MySQL data source and then click Configure.
3. Click the Details >> button at the bottom of the MySQL Connector/ODBC Data Source Configuration window.
4. Go to the Cursors/Results tab.
5. Check the Return matched rows instead of affected rows check box.
6. Click OK.
Navision Financials (ODBC 3.x)
Navision Financials is a product offering by Navision Software Inc., an international corporation that specializes in business integration tools. Navision offers solutions for Financial Management, Distribution, Manufacturing, Customer Relationship Management (CRM), eCommerce, and Business Intelligence (BI).
The integration platform connects to Navision Financials tables with ODBC 3.x. For the procedure, and information about the property options, limitations, and source and target schemas, see ODBC 3.x.
This connector sets field width in bytes. What actually varies is the number of characters that fit into a given field. For more details, see Determining Field Width in Characters or Bytes.
See Also
Installing an ODBC Driver
ODBC Connectivity Tips
Netezza
Netezza is a data warehouse appliance that integrates a relational database with server and storage hardware. The integration platform connects to Netezza databases using the ODBC 3.5 connector.
This connector sets field width in bytes. What actually varies is the number of characters that fit into a given field. For more details, see Determining Field Width in Characters or Bytes.
Connectivity Pointers
If your database has many tables, selecting the table from a list may take more time than typing the UserID.Tablename value in the Source Table/View box. If you enter the table name without the user ID and then connect to the database, you can reselect the table in the source data browser to view it.
If possible, create query statements using the source database application, so that syntax and validity can be checked by the native application. You can then copy the query statement and paste it in the Query Statement Box.
Connector-Specific Notes
Netezza can use three ODBC drivers: 3.0.5, 3.14, and 4.0, with the following limitations and known issues:
Only the 4.0 driver supports national character data types, such as NCHAR and NVARCHAR.
In testing with a Netezza Latin-9 database, the 3.14 driver gives best performance when uploading and downloading.
When using the 4.0 driver, do not enter a password to create the Netezza ODBC data source because a defect in the driver causes the connection to fail.
Properties Options
These are properties that you may need to specify for your source or target data.
Caution!  Changing property option values other than WhereStmt is not supported. Leave the source and target property options set to their defaults.
You can set the following source (S) and target (T) properties.
Property
S/T
Description
AutoCommit
T
Allows automatic commitment of changes as they are made by each SQL statement, instead of waiting until the end of the transaction. If set to true, you cannot roll back changes. Default is false.
BulkOperations
T
Determines if an insert statement is executed for each record or a bulk add is executed for each record. Default is false, the slower setting. If you want to maximize speed and instruct the integration platform to use a bulk add, set to true.
CommitFrequency
T
Controls how often data is committed to the database. Default is zero, meaning that data is committed at the end of the transformation, allowing rollback on error. This is the slowest setting. When doing large transformations, this is not practical as it may produce too many transaction log entries. Setting the CommitFrequency to some nonzero value tells the connector to do a database commit after inserting or updating the specified number of records. This keeps the transaction log from getting too large but limits the restartability of the transformation.
ConstraintDDL
T
Specifies additional SQL data definition language statements to be executed after the target table is created. This is similar to the support provided for SQL passthrough in the SQL import connectors. Each line must be a valid SQL DDL statement. No default exists for this property. This option works only in Replace mode.
CursorType
S
Type of cursor to use for retrieving records from source table. Choices are Forward Only, Static, and Dynamic. Default is Forward Only. For more information, see Cursors.
DriverCompletion
ST
Controls whether the driver prompts the user for information.
Prompt – Prompts for every individual bit of information.
Complete (default) – Prompts only for missing information.
Complete Required – Prompts only for information needed to complete a connection.
No Prompt – Does not prompt for any information.
DriverOptions
ST
Enter any valid ODBC connect string options. No default.
DSNType
ST
The integration platform allows you to specify data sources or drivers to connect to in the Data Source Name list. DSNs are listed in the ODBC Data Source Administrator of Windows Administrative Tools in the Control Panel. The options are User & System, System, User, Driver, and File. Default is User & System.
Encoding
ST
Character encoding used with XML source and target files.
Note:  Shift-JIS encoding is used only in Japanese operating systems. UCS-2 is no longer a valid encoding name, but you may use UCS2 by changing the string value in the XML file.
IdentifierQuotes
ST
Quoted identifiers are used to make the SQL statement parsable and to distinguish between columns and character data in SQL statements. For example, Oracle uses double quotes for column and table names in SQL statements and uses single quotes for character data. In a SQL statement, you should enclose identifiers that contain special characters or that match keywords in identifier quote characters (such identifiers are known as delimited identifiers in SQL-92). For example, the Accounts Receivable identifier is quoted in the following Select statement:
SELECT * FROM "Accounts Receivable"
If you do not use identifier quotes, the parser assumes two tables, Accounts and Receivable, and returns a syntax error that they are not separated by a comma. The integration platform provides four options for IdentifierQuotes: Default, None, " and '.
MaxDataLength
ST
Maximum data length for long data types. Default is 1 MB. You can reset this number based on available memory and target requirements.
Some ODBC drivers have limitations concerning the maximum data length they can handle. If you choose an application (ODBC) source or target connector and the default setting is not 1 MB, the integration platform presets the default to respect the capabilities of that particular ODBC driver, and it is not recommended to set MaxDataLength any higher.
ModifyDriverOptions
ST
Allows storing of the ODBC connection. Default is true. If set to false, prompts for connection information each time the transformation runs.
PrimaryKey
ST
Sets a list of field names used to make the primary key. Field names are delimited by commas. If the PrimaryKey property contains one or more field names, these names are included in the SQL Create statement when the connector is in replace mode. No default exists for this property. This property appears only in the target properties, not the source properties.
This property has one additional requirement for use. The driver must support integrity enhancement facility (IEF).
Synonyms
ST
If set to true, allows you to see synonyms. The alias names appear in the table list along with the tables. Default is false.
SystemTables
ST
If set to true, this property allows you to see all tables created by the DBA in the database. The system table names appear in the table list. Default is false.
Note:  This property is applicable only if the user is logged onto the database as the database administrator. Only the DBA has access to system tables.
TransactionIsolation
ST
Allows setting of the transaction isolation level for reading or writing tables.
The ANSI SQL 2 standard defines three specific ways in which serializability of a transaction may be violated: P1 (Dirty Read), P2 (Nonrepeatable Read), and P3 (Phantoms).
The following isolation levels are supported. Default is Serializable.
Read_Uncommitted – Permits P1, P2, and P3.
Read_Committed – Permits P2 and P3. Does not permit P1.
Repeatable_Read – Permits P3. Does not permit P1 and P2.
Serializable – Does not permit P1, P2, or P3.
Versioning – Provides Serializable transactions but does so without significant impact on concurrency.
For more information, see the Microsoft ODBC SDK documentation.
UseCursors
T
Turns cursor support on and off. Default is false. If set to true and the specified driver does not support cursor inserts, the integration platform uses the SQL INSERT mode of adding records.
For exports, cursor support is supposed to enhance the performance of inserting records. This appears to be the case for desktop databases. For database servers, insert speed shows no noticeable change. These servers execute prepared queries as quickly as they handle cursor inserts.
Some drivers require that the target table be indexed, and if not, then positioned updates (cursors) are not allowed. Two additional properties in the ODBC export connector address this issue: PrimaryKey and ConstraintDDL.
Views
ST
If set to true (default), allows you to see views. View names appear in the table list along with table names.
WhereStmt
S
Provides a passthrough mechanism for SQL connectors where advanced users can construct the Where clause of the SQL query. Omit WHERE when you enter the clause. No default exists for this property.
Note:  When the source connection is a Select statement, do not apply the WhereStmt. Instead, include the Where clause in your Select statements. This property enables data filtering when you select a table.
ArraySize
T
Determines the number of rows to be sent to the server at one time. The default value is 1, meaning each row is individually sent to the server. Larger values will buffer multiple rows and send them all at once. While this improves the speed, it affects error reporting (a server error won't be detected/reported until the next batch of records is sent to the server).
The maximum value allowed for this property is 100000. While the connector allows the value to be set that high, many drivers have lower limits. The connector will log a message indicating if the driver is forcing a lower value for the array size. In addition, the connector does not support arrays when there is a LOB-type field in the table, or when the (maximum) length of a character-type field is longer than 32767 characters. In these cases, a message will be logged indicating the array size has been reduced to 1.
Due to the way the connector attempts to support older drivers, the array support requires either BulkOperations be true (which means UseCursors must also be true), or both must be false. The condition where BulkOperations is false and UseCursors is true will cause the array size to be reset to 1, with a message logged indicating this condition occurred.
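The WhereStmt passthrough described above amounts to splicing the user-supplied clause (entered without the WHERE keyword) into the generated query. The mechanics can be sketched with Python's standard sqlite3 module (an illustration, not the connector's code; the table and clause are invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER, region TEXT, amount REAL)")
conn.executemany("INSERT INTO orders VALUES (?, ?, ?)",
                 [(1, "EAST", 50.0), (2, "WEST", 900.0), (3, "EAST", 700.0)])

# The property value omits the WHERE keyword; the connector supplies it.
where_stmt = "region = 'EAST' AND amount > 100"
query = f"SELECT * FROM orders WHERE {where_stmt}"

rows = conn.execute(query).fetchall()
print(rows)  # [(3, 'EAST', 700.0)]
```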
ConstraintDDL Example
The following example illustrates use of the ConstraintDDL setting.
CREATE UNIQUE INDEX index1 ON mytable (Field1 ASC)
CREATE INDEX index2 ON mytable (Field2, Field3)
These statements create two indices on the table mytable. The first index does not allow duplicates and index values are stored in ascending order. The second is a compound index on fields Field2 and Field3.
The ConstraintDDL is executed only if replace mode is used for the target. If there are errors, they are written to the error and event log file. An error during transformation brings up the transformation error dialog box. If you want to ignore the DDL errors, you may continue the transformation.
ConstraintDDL also supports an escaping mechanism that allows users to specify DDL in the native SQL of the DBMS. Any statement that is preceded by an "@" is sent straight to the DBMS.
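For example, the following hypothetical line passes a Netezza-specific CREATE TABLE (its DISTRIBUTE ON clause would not be understood by the generic SQL parser) straight through to the database:

```sql
@CREATE TABLE mytable (Field1 INTEGER, Field2 CHAR(10)) DISTRIBUTE ON (Field1)
```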
Data Types
This connector supports the major Netezza data types.
Length
These are field lengths in your data. If you need to change field lengths, reset them in the schema.
NetSuite
NetSuite is a web-based Customer Relationship Management (CRM) software platform that enables access to information using a Web Services interface. The NetSuite connector supports most entities exposed by the NetSuite Web Service, an open XML-based online platform that provides Simple Object Access Protocol (SOAP) access. This version of the NetSuite connector supports WSDL 2018_1_0.
The NetSuite connector supports the following authentication methods to connect to NetSuite:
User ID/password-based authentication - Specify the following to connect to NetSuite:
Account ID
Email
Password
Application ID
Token-based authentication - Specify the following to connect to NetSuite:
Account
TokenId
TokenSecret
ConsumerKey
ConsumerSecret
For record entities, all CRUD operations (including Upsert) are supported. For Regular List names, only the read operation is supported.
The Account information that is specified connects you to the required NetSuite instance (sandbox or production instance).
Prerequisites
Before using the NetSuite connector, the following prerequisites must be met:
The NetSuite connector must be installed.
You must have login credentials for user ID/password-based authentication or token-based authentication. Also, make sure that users have permission to access entities.
NetSuite Connector Properties
You can set the following source (S) and target (T) properties.
Property
S/T
Description
BatchResponse
T
Path of the batch response file, which provides detailed results for each object in a batch. These results are useful for the following:
Capturing system-generated object IDs for use in future updates.
Associating errors with objects for exception handling and error diagnosis.
A batch response entry is generated for each top-level record. For example, when a Customer record is inserted with an Addressbooklist, only one entry appears in the batch response. For more information about Batch Response File, see Batch Response File.
CustomizationLevel
ST
Specify the allowed level of NetSuite customizations:
Full (default) - Use this option if using custom record types. Also, provides access to custom fields defined for the standard record types.
CustomFieldsOnly - Use this option if using custom fields defined for the standard record types. Custom fields are requested based on your requirements. For example, the connector does not retrieve custom fields for sales orders if retrieving only customer records.
None - Use this option when custom record types and custom fields are not defined for the standard record types. It improves performance because this option does not require generating API calls at startup.
Note:  This property is applicable only when the DataType property is set to Record.
DataType
S
Specify how the connector obtains allowable values for the Table list:
Record (default) – Queries the normal record types, such as Customer, Contact, SalesOrder, and your custom record types.
Dynamic – Queries using NetSuite saved searches.
CustomList – Queries the NetSuite custom, user-defined lists of ID and value pairs.
DisableMandatoryCustomFieldValidation
T
Enable or disable validation for the required custom fields:
Default (default) – Connector uses default settings in NetSuite.
True – Data for a custom field (required for NetSuite UI submissions) is not required when submitting a web services request.
False - Connector returns an error when data for a required custom field is not provided.
DisableSystemNotesCustomFields
T
Enable or disable system-generated notes when importing data from custom fields:
Default - Connector uses the default setting in NetSuite
True - Enable system-generated notes when importing data from custom fields
False - Disable system-generated notes when importing data from custom fields
Note:  When importing large records, you may consider disabling system notes to improve performance. For more information, see the disableSystemNotesForCustomFields property in the NetSuite documentation.
FlushFrequency
T
Number of operations the connector caches before writing to the target. Default is zero.
IgnoreReadOnlyFields
T
Specify whether read-only fields are ignored:
Default - Connector uses the default setting in NetSuite
True - Connector ignores read-only fields.
False - Connector does not ignore read-only fields and enables error messages as required.
NestedChildren
ST
Set the connector to retrieve records (parents) and their associated lists (children):
True – Connector retrieves records (parents) and their associated lists (children).
False (default) - Connector retrieves records (parents) only. This can significantly improve performance.
Note:  This property is applicable only when the DataType property is set to Record.
PageSize
S
Indicates the number of records returned per page in each web service call. It is applicable only when DataType is set to Record.
If DataType is set to Dynamic or CustomList, the PageSize property does not apply; however, the connector currently displays it anyway.
The default value is 50.
Note:  This property is intended for advanced users who have to fine-tune performance. For more information, see the NetSuite documentation.
Role
ST
Specify the NetSuite role ID. A role specifies the permissions required to access different parts of NetSuite. If role ID is not provided, the connector uses the user's default role. For more information, see the NetSuite documentation.
SavedSearchID
ST
The ID of a NetSuite saved search. The SavedSearchId and the Table selection determine the selected saved search.
TreatWarningAsError
T
Controls whether warning messages generated by NetSuite are treated the same as errors, which causes a request to be rejected. The options are:
Default - Connector uses the default setting in NetSuite.
True - Warning messages generated by NetSuite are treated the same as errors.
False - Warning messages generated by NetSuite are not treated the same as errors.
UpdateNulls
T
The NetSuite target may contain empty or null values. The UpdateNulls property controls whether these values are sent to NetSuite:
True - Connector sends null values and empty strings to the NetSuite server.
False (the default) - Connector does not send null values and empty strings to the NetSuite server.
Note:  The setting for this property affects only parent-level record types because child record types ignore null values and empty strings.
UseUILabel
ST
When creating a custom field, you must specify a name (label), and the system automatically generates an ID. The UseUILabel option allows you to display either the label or the custom ID for a custom field. The available options are:
False (default) - Displays labels for custom fields.
True - Displays custom IDs for custom fields.
Data Types
The following are the data types for the fields in your target database table. If you are appending data to an existing table, then the data type of each field uses the data type in the selected table by default.
Boolean
Date
Double
Integer
Long
String
Limitations
The following are the limitations of the NetSuite connector:
Concurrent Connections - NetSuite allows a finite number of concurrent connections for an account only if that account has the concurrency feature enabled. If the account does not have this feature, attempts to use the connector concurrently may fail randomly. The concurrency feature is controlled by NetSuite, not the connector.
Invalid Query Error Messaging - The NetSuite connector does not support detailed error messaging for invalid Map Editor queries.
ExternalId Ignored - NetSuite does not support searching transaction type entities, such as Sales Order, by external ID.
Additional Information
This section provides additional information that you must know about the NetSuite connector.
Using Map Editor
To use Map Editor with NetSuite, you must enable the API for your organization within NetSuite.
Entity Names
Entity names change with each version of the API. Custom entities are also supported.
Field Width
The NetSuite connector sets field width in bytes. The number of characters that fit into a given field varies. For more details, see Determining Field Width in Characters or Bytes.
Custom Field and Record Type Support
The NetSuite connector supports custom fields and record types. There is no distinction in the way custom and non-custom fields and record types appear in the connector. For example, if you define a custom record type called MyCustomRecordType, the NetSuite connector adds MyCustomRecordType to its record type list. You can then work with the custom record type in the same way as the Customer type.
You can insert, update, and delete records of custom record types. You can also retrieve records representing instances of custom record types. However, you cannot add or change existing custom fields or add new custom record types.
The NetSuite connector supports the following custom fields.
Custom FieldType
NetSuite Display Name
Boolean
Check box
Date
Date
Time of Day
Double
Decimal Number
Currency
Percent
Long
Integer Number
MultiSelect
Multiple Select
Select
List/Record
Image
String
Free-Form Text
Text Area
Phone Number
E-mail Address
HyperLink
Rich Text
Help
Long Text
Password
Each NetSuite field has a unique identifier. All field types, except MultiSelect and Select, have field names that match the unique identifiers.
Select field - Represents an item in a list, where each item is identified by an internal ID. For each Select field, the connector creates a field name consisting of its identifier with "_InternalId" appended. For example, CUSTFIELD10_InternalId. You must specify an internal ID for each Select field.
MultiSelect field - Allows you to select multiple items. For any MultiSelect field, the connector creates a field name consisting of its identifier with "_Values" appended. You must specify a comma-delimited list of internal IDs for MultiSelect fields.
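The naming and value conventions above can be sketched in a short example. This is an illustrative sketch only, not connector code; the field IDs and internal IDs are invented:

```python
# Hypothetical sketch: building connector field names and values for
# NetSuite custom Select and MultiSelect fields.

def select_field(field_id, internal_id):
    # A Select field name is its identifier with "_InternalId" appended,
    # and its value is a single internal ID.
    return {f"{field_id}_InternalId": str(internal_id)}

def multiselect_field(field_id, internal_ids):
    # A MultiSelect field name is its identifier with "_Values" appended,
    # and its value is a comma-delimited list of internal IDs.
    return {f"{field_id}_Values": ",".join(str(i) for i in internal_ids)}

record = {}
record.update(select_field("CUSTFIELD10", 42))
record.update(multiselect_field("CUSTFIELD11", [7, 9, 13]))
print(record)  # {'CUSTFIELD10_InternalId': '42', 'CUSTFIELD11_Values': '7,9,13'}
```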
Parent and Child Records
NetSuite allows you to perform actions on child records only within the context of the parent records. The NetSuite connector uses sequential ordering of records to merge children with their respective parents. For example, you submit the following for inserts (in the exact order):
1. Customer 1
2. CustomerAddressbookList
3. Customer Address 1
4. Customer Address 2
5. Customer 2
6. CustomerAddressbookList
7. Customer Address 3
The connector submits two customers to NetSuite. Customer Address 1 and Customer Address 2 are inserted with Customer 1 and Customer Address 3 is inserted with Customer 2. NetSuite returns a response with two entries and this is similar to invoking the addList operation with two customers. The number of batch response entries equals the number of parent records submitted. In this example, the batch response has two entries.
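The sequential merge described above can be sketched as follows. This is an illustrative model, not the connector's actual implementation; the record-type labels mirror the example:

```python
# Sketch: merging a sequential record stream into parents with nested
# child lists, as in the Customer/Address example above.

def merge_children(stream):
    """Each item is (record_type, data). A 'Customer' starts a new parent;
    'Customer Address' rows attach to the most recent parent's list."""
    parents = []
    for rtype, data in stream:
        if rtype == "Customer":
            parents.append({"name": data, "AddressbookList": []})
        elif rtype == "CustomerAddressbookList":
            continue  # list marker: the child list lives inside the parent
        elif rtype == "Customer Address":
            parents[-1]["AddressbookList"].append(data)
    return parents

stream = [
    ("Customer", "Customer 1"),
    ("CustomerAddressbookList", None),
    ("Customer Address", "Customer Address 1"),
    ("Customer Address", "Customer Address 2"),
    ("Customer", "Customer 2"),
    ("CustomerAddressbookList", None),
    ("Customer Address", "Customer Address 3"),
]
batch = merge_children(stream)
print(len(batch))  # 2 parents, so the batch response has two entries
```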
When working with child records:
Insert and update operations at the child level are ignored.
In many cases, you cannot update or add to child records without recreating the original list. For these child records, the ReplaceAll field is ignored, and you must resend the original information to update or add child records. There are a few exceptions, such as SalesOrderItems: you can update or add to these child records by providing an identifier such as a line number or internal ID. In such cases, you can use the parent ReplaceAll${child_name} field to control whether the submitted child records replace the original list or append to it.
The Target option UpdateNulls has no impact on child fields. Null values for child fields are always ignored.
Filtering Records from NetSuite
This section provides information for the following:
Accessing a Saved Search
Using a SQL-Like Statement
Accessing a Saved Search
NetSuite allows you to create a saved search that is useful for filtering fields and records, and joining records. You can define saved searches in the NetSuite UI and access them using the connector by performing the following:
1. Set the DataType property to Dynamic.
2. Select the appropriate Table.
3. Enter the SavedSearchId in the space provided.
Using a SQL-Like Statement
You can use a query statement to restrict received fields and records. The following sections provide additional information:
Example SQL Statements
NetSuite-Specific Operators
Tips
SQL-Like Grammar
Example SQL Statements
The following table provides example SQL statements.
Example SQL
SQL Statement
Simple query with 'where' clause and field list
Select ExternalId, Title, Status, Message, CUSTEVENT4 from PhoneCall where Status is 'Scheduled'
Search With Numeric Filter
Select Probability, LeadSource_InternalId, Status, ProjectedTotal from Opportunity where ProjectedTotal notGreaterThanOrEqualTo 1396.70
Search with String Filter
Select Title, Status, Message from PhoneCall where Title doesNotStartWith 'TC24889'
Search with Date Filter
Select Title, StartDate, EndDate, Status, NoEndDate from CalendarEvent where StartDate notOnOrBefore '2006-07-25'
Simple Subquery
Select InternalId, (select (select Addr1 from 'Addressbook') from 'AddressbookList') from Customer
Multiple 'and' Clauses
Select EntityId, FirstName, LastName, Email from Contact where EntityId contains 'John' and Email startsWith 'John' and LastName is 'Boy'
Address-Contact Primary Joined Search
Select CompanyName, Email from Vendor where Email notEmpty '' and ContactPrimaryJoin(Title is 'Quality')
Customer-SubCustomer Joined Search
Select EntityId, Email from Customer where Email notEmpty '' and SubCustomerJoin(EntityId contains 'subcust')
Query Using a Single Quote in a Field Value
Select EntityId from Contact where Email is "JohnO'Doe@company.com"
Query Using Double Quotes in a Field Value
Select EntityId from Contact where NickName is `John "the bomb"`
Query Using a Backtick in a Field Value
Select EntityId from Contact where Height is "5`3"
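The last three examples show that the string delimiter must be a character that does not occur in the field value. A small helper can make that rule concrete. This is a hypothetical sketch, not part of the connector:

```python
# Sketch: pick a field-value delimiter (single quote, backtick, or
# double quote) that does not appear in the value itself.

def quote_value(value):
    for delim in ("'", "`", '"'):
        if delim not in value:
            return f"{delim}{value}{delim}"
    raise ValueError("value uses all three delimiter characters")

print(quote_value("TC24889"))                # 'TC24889'
print(quote_value("JohnO'Doe@company.com"))  # `JohnO'Doe@company.com`
print(quote_value("5`3"))                    # '5`3'
```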
NetSuite-Specific Operators
The following table provides the operators that are specific to NetSuite.
 
Operator Category
Operator
Date-Only Operators
after
before
on
onOrAfter
onOrBefore
notAfter
notBefore
notOn
notOnOrAfter
notOnOrBefore
notWithin
within
String-only Operators
contains
doesNotContain
doesNotStartWith
is
isNot
startsWith
Numeric-Only Operators
between
equalTo
greaterThan
greaterThanOrEqualTo
lessThan
lessThanOrEqualTo
notBetween
notEqualTo
notGreaterThan
notGreaterThanOrEqualTo
notLessThan
notLessThanOrEqualTo
ID/Select Field-Only Operators
anyof
noneof
Tips
When working with NetSuite query statements:
Query field names do not always match field names. You must use the query fields that NetSuite provides.
A joined search specification may contain custom fields.
Reserved words, table names, field names, and query field names are case-sensitive.
You cannot query a child directly. You must query through the parent.
You can use single quotes, double quotes, or backticks as field value delimiters. If a field value contains one of these characters, use a different character as the delimiter.
For more information on querying NetSuite, see the NetSuite documentation.
SQL-Like Grammar
Use the following grammar as a guideline for writing NetSuite query statements.
Grammar
Query Statements
query
select field_list from table (where query_terms)?
field_list
('(' child_query ')' ',')* ('*' (',' '(' child_query ')')) | (field (',' (field | '(' child_query ')'))*)
child_query
select field_list from table
field
label
table
label | string_literal | string_literal_2 | string_literal_3
query_terms
simple_term | joined_search (and (simple_term | joined_search))*
simple_term
label (operation | label)? value (',' value)*
operation
'=' | '>' | '<' | '<>' | '>=' | '<='
value
string_literal | decimal | macrodef
joined_search
label '(' simple_term (and simple_term)* ')'
label
character (character | digit | "_")*
character
['A' - 'Z'] | ['a' - 'z']
digit
['0' - '9']
select
'SELECT' | 'Select' | 'select'
from
'FROM' | 'From' | 'from'
and
'AND' | 'And' | 'and'
where
'WHERE' | 'Where' | 'where'
string_literal
"'"
( (~["'","\\","\n","\r"])
| ("\\"
( ["n","t","b","r","f","'","\\","\""]
| ["0"-"7"] ( ["0"-"7"] )?
| ["0"-"3"] ["0"-"7"] ["0"-"7"]
)
)
)* "'"
string_literal_2
"`"
 
( (~["`","\\","\n","\r"])
| ("\\"
( ["n","t","b","r","f","`","\\","\'","\""]
| ["0"-"7"] ( ["0"-"7"] )?
| ["0"-"3"] ["0"-"7"] ["0"-"7"]
)
)
)* "`"
string_literal_3
"\""
 
( (~["\"","\\","\n","\r"])
| ("\\"
( ["n","t","b","r","f","`","\\","\'","\""]
| ["0"-"7"] ( ["0"-"7"] )?
| ["0"-"3"] ["0"-"7"] ["0"-"7"]
)
)
)* "\""
decimal
number ('.' number)?
number
digit+
macrodef
'$(' ~[')']+ ')'
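The top-level shape of the grammar (a field list, a table, and optional query terms) can be illustrated with a simplified parser. This sketch is not the connector's parser; it ignores subqueries and, unlike the grammar (which allows only three casings of each reserved word), matches keywords case-insensitively:

```python
# Simplified sketch of the query statement shape:
#   select field_list from table (where query_terms)?
import re

QUERY_RE = re.compile(
    r"^\s*select\s+(?P<fields>.+?)\s+from\s+(?P<table>\S+)"
    r"(?:\s+where\s+(?P<terms>.+))?\s*$",
    re.IGNORECASE,
)

def parse_query(text):
    m = QUERY_RE.match(text)
    if not m:
        raise ValueError("not a valid query statement")
    fields = [f.strip() for f in m.group("fields").split(",")]
    return fields, m.group("table"), m.group("terms")

fields, table, terms = parse_query(
    "Select Title, Status, Message from PhoneCall "
    "where Title doesNotStartWith 'TC24889'"
)
print(table)  # PhoneCall
```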
NonStop SQL MX (ODBC)
NonStop SQL MX is object relational database management software that supports standard data and multi-media data. It is used in complex query environments such as business intelligence, online analytical processing (OLAP) and operational data store (ODS). The ODBC connector connects to NonStop SQL MX database files.
This connector specifies field width in characters. The width of a field is the number of characters. For more details, see Determining Field Width in Characters or Bytes.
See Also
Installing an ODBC Driver
ODBC Connectivity Tips
Connector-Specific Notes
No known limitations at this time.
Property Options
You can set the following source (S) and target (T) properties.
Property
S/T
Description
AutoCommit
T
Allows you to automatically commit changes as they are made by each SQL statement, instead of waiting until the end of the transaction. If AutoCommit is set to true, there is no way to rollback changes once they have been made. The default is false.
BulkOperations
T
Use bulk operations for faster insert. This property determines if an insert statement is executed for each record or a bulk add is executed for each record. The default is false, the slower setting. To maximize speed and instruct the integration platform to use a bulk add, change this setting to true.
CommitFrequency
T
Controls how often data is committed to the database. Default is zero, meaning that data is committed at the end of the transformation, allowing rollback on error. This is the slowest setting. When doing large transformations, this is not practical as it may produce too many transaction log entries. Setting the CommitFrequency to some nonzero value tells the connector to do a database commit after inserting or updating the specified number of records. This keeps the transaction log from getting too large but limits the restartability of the transformation.
ConstraintDDL
T
This is a pass-through mechanism that allows you to specify any additional data definition language (DDL) statements that need to be executed when a new table is created. DDL is used to define keys, indexes, constraints and integrity rules for SQL databases. There is no default.
DriverCompletion
T
Allows you to control whether or not the driver prompts you for information. The options are Prompt, Complete (default), Complete Required, and No Prompt. The Prompt option prompts the user for every individual bit of information. Complete prompts the user for any information they forgot to enter. The Complete Required option prompts the user only for information that is essential to complete the connection. The No Prompt option does not prompt the user for any information.
DriverOptions
T
Enter any valid ODBC connect string options here. There is no default.
DSNType
T
You can specify data sources or drivers to connect to in the Data Source Name list. DSNs are listed in the ODBC Data Source Administrator of Windows Administrative Tools in the Control Panel. The options are User & System, System, User, Driver, and File. Default is User & System.
IdentifierQuotes
T
All databases have what are called quoted identifiers. You use quoted identifiers to make the SQL statement parseable and to distinguish between columns and character data in SQL statements. For example, Oracle uses double quotes for column and table names in SQL statements and uses single quotes for character data. In a SQL statement, you should enclose identifiers containing special characters or matching keywords in identifier quote characters (also known as delimited identifiers in SQL-92). For example, the Accounts Receivable identifier is quoted in the following SELECT statement:
SELECT * FROM "Accounts Receivable"
If you do not use identifier quotes, the parser assumes there are two tables, Accounts and Receivable, and returns a syntax error because they are not separated by a comma. The integration platform provides four options for IdentifierQuotes: Default, None, " and `.
MaxDataLength
T
This is the maximum data length for Long data types. The default is 1 MB. You can reset this number as you choose based on your available memory capabilities and target requirements.
Some ODBC drivers have limitations concerning the maximum data length they can handle. If you choose an Application (ODBC) Source or Target connector and the default setting is not 1 MB, the integration platform presets the default for the capabilities of that particular ODBC driver. It is not recommended that the MaxDataLength property be set any higher under those conditions.
ModifyDriverOptions
T
Allows you to store the ODBC connection. The default is true. If you set this to false, you are prompted for your connection information each time you run the Transformation.
PrimaryKey
T
The PrimaryKey property allows the user to specify a list of field names that are used to make the primary key. Field names are delimited by commas. If the PrimaryKey property contains one or more field names, these names are included in the SQL CREATE statement when the connector is in replace mode. No default exists for this property.
There is one additional requirement for using the PrimaryKey property. The ODBC driver must support integrity enhancement facility (IEF). Only the more advanced ODBC drivers support this.
Synonyms
T
If set to true, this property allows you to see synonyms. The alias names appear in the table drop-down list along with the tables. Default is false.
SystemTables
T
The SystemTables property is only applicable if the user is logged onto the database as the database administrator. Only the DBA has access to SystemTables. If set to true, this property allows you to see all the tables created by the DBA. The system table names appear in the table drop down list. The default is false.
TransactionIsolation
T
The TransactionIsolation option allows the user to specify any one of five different isolation levels when reading from or writing to a database table with ODBC. The default for version 2.x is Versioning; otherwise, the default is Serializable.
The ANSI SQL 2 standard defines three specific ways in which serializability of a transaction may be violated: P1 (Dirty Read), P2 (Nonrepeatable Read) and P3 (Phantoms).
The following lists the five supported isolation levels.
READ_UNCOMMITTED - Permits P1, P2, and P3.
READ_COMMITTED - Permits P2 and P3. Does not permit P1.
REPEATABLE_READ - Permits P3. Does not permit P1 and P2.
SERIALIZABLE - Does not permit any of P1, P2, and P3.
VERSIONING - Provides SERIALIZABLE transactions, but does so without a significant impact on concurrency.
For further details about TransactionIsolation levels, see the Microsoft ODBC SDK documentation.
UseCursors
T
The UseCursors property allows you to turn cursor support on and off. The default is false. When set to true and the specified ODBC driver does not support cursor inserts, the integration platform falls back on the SQL INSERT mode of adding records.
For exports, cursor support is supposed to enhance the performance of inserting records. This appears to be the case for desktop databases. For database servers, there is no noticeable change in insert speed; they seem to execute prepared queries about as quickly as they handle cursor inserts.
Another complication of cursor inserts is that some drivers require that the target table be indexed, otherwise positioned updates (cursors) are not allowed. Two additional properties in the ODBC export connector address this issue: PrimaryKey and ConstraintDDL.
Views
T
If set to true, this property allows you to see views. The view names appear in the table drop-down list along with the tables. Default is true.
Encoding
T
Sets the character encoding for NonStop SQL MX (ODBC) source and target files.
Note:  Shift-JIS encoding is used only in Japanese operating systems.
ArraySize
T
Determines the number of rows to be sent to the server at one time. The default value is 1, meaning each row is individually sent to the server. Larger values will buffer multiple rows and send them all at once. While this improves the speed, it affects error reporting (a server error won't be detected/reported until the next batch of records is sent to the server).
The maximum value allowed for this property is 100000. While the connector allows the value to be set that high, many drivers have lower limits. The connector will log a message indicating if the driver is forcing a lower value for the array size. In addition, the connector does not support arrays when there is a LOB-type field in the table, or when the (maximum) length of a character-type field is longer than 32767 characters. In these cases, a message will be logged indicating the array size has been reduced to 1.
Due to the way the connector attempts to support older drivers, the array support requires either BulkOperations be true (which means UseCursors must also be true), or both must be false. The condition where BulkOperations is false and UseCursors is true will cause the array size to be reset to 1, with a message logged indicating this condition occurred.
Null
Using the Null connector, you can create transformations even when you do not have access to live source data. On the target side, you can generate test target data files.
When you do not have access to the source table, you can bypass a live data connection by selecting the Null connector as your source type. You can connect to a dummy source file of 100 records (this number can be adjusted in source properties). You can then define the source file as you would any other source file, by adjusting field properties, source properties, data types, and field size.
To use the Null connector to generate test data, simply specify the number of records you want to test in source properties. Then you can map the source to the dummy target file and run the transformation. A Null target with 100 records is created (default) under the field name Record Number.
This connector sets field width in bytes. What actually varies is the number of characters that fit into a given field. For example, if a field width is 10 bytes and the encoding is Shift-JIS, the field can hold a minimum of 5 characters and a maximum of 10. This means that if you try to write 8 Japanese characters, data is truncated (the Kanji characters all take 2 bytes).
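The byte-versus-character distinction can be demonstrated with a small truncation sketch (illustrative only, not connector code):

```python
# Sketch: a field width is measured in bytes, so the number of characters
# that fit depends on the encoding. In Shift-JIS, ASCII characters take
# 1 byte and Kanji characters take 2 bytes.

def fit_to_field(text, width_bytes, encoding="shift_jis"):
    # Trim whole characters until the encoded value fits the byte width.
    while text and len(text.encode(encoding)) > width_bytes:
        text = text[:-1]
    return text

ascii_val = fit_to_field("ABCDEFGHIJKL", 10)  # 12 ASCII chars -> 10 fit
kanji_val = fit_to_field("日" * 8, 10)         # 8 Kanji (2 bytes each) -> 5 fit
print(len(ascii_val), len(kanji_val))  # 10 5
```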
Connector-Specific Notes
No known limitations at this time.
Property Options
You can set the following source (S) and target (T) properties.
Property
S/T
Description
Record Count
S
The number of records supplied or accepted. Default is 100.
OData
This is an add-on connector and is not included in the default installation of Actian DataConnect Studio IDE. The connector is available at esd.actian.com.
The Open Data Protocol (OData) is a web protocol that Microsoft has released as a set of open extensions and conventions under the Open Specification Promise (OSP) for the creation of Representational State Transfer (REST)-based data services. The protocol is based on Web technologies such as JavaScript Object Notation (JSON), Atom Publishing Protocol (ATOM), and HTTP to allow people on different platforms to query and update data that is accessible from a variety of sources, including services and applications.
You can use the OData connector to create, read, update, merge, and delete data from a variety of sources such as websites, content management systems, and relational databases.
This connector writes multimode output, so replace/append target mode does not apply.
Querying OData
When you create an OData connection, the Entity field is automatically populated with the available entity types the endpoint supports. The following OData query options are available:
$expand - Expands related entries inline.
$filter - Filters the results.
$format - Formats the results. The default is ATOM.
$orderby - Sorts the results.
$select - The fields to return.
$skip - The number of rows to bypass before starting to return results.
$top - The maximum number of items returned in the result set for each page.
For example, if the endpoint http://odata.someservice.com/catalog features a collection of cellular phones with an entity called Name that includes the fields Type, Price, and Rank, you could enter the following query in the Entity field to sort the results by Rank:
Name?$orderby=Rank
More information about OData query options is available on the OData website: http://www.odata.org/developers/protocols/uri-conventions#QueryStringOptions
You can also request a specific parent or child record. Using the previous endpoint example, you could enter the following statement in the Entity field to return all cellular phone records with a specific Name:
Name(iPhone)
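Query strings like the ones above can be composed programmatically. This is a hedged sketch using the hypothetical endpoint and entity from the example, not part of the connector:

```python
# Sketch: composing an OData query string from system query options.
from urllib.parse import urlencode

def odata_query(base, entity, **options):
    # OData system query options are prefixed with '$' ($orderby, $top, ...).
    params = {f"${k}": v for k, v in options.items()}
    query = urlencode(params, safe="$")  # keep '$' readable in the URL
    return f"{base}/{entity}" + (f"?{query}" if query else "")

url = odata_query("http://odata.someservice.com/catalog", "Name",
                  orderby="Rank", top=10)
print(url)  # http://odata.someservice.com/catalog/Name?$orderby=Rank&$top=10
```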
Connector-Specific Notes
To connect to an OData service you must specify its URL.
The OData connector is a multimode connector; that is, the integration platform allows concurrent writes to the entity (table) defined in the dataset. When using the dataset in a map, use the Insert function to perform create, update, merge, or delete operations. The actual operation that the system performs is defined by the HTTP Method (POST, PUT, MERGE, DELETE) you set in the dataset.
In OData multimode targets, modifications to data types are not permitted.
The OData target schema has a root element called Entries that enables you to create multiple entities at once. You must output the Entries record once per map (not per entity).
To use the OData connector to do update, merge, and delete operations, you must specify the Record Id in the Entity field.
Property Options
You can set the following source (S) and target (T) properties.
Property
S/T
Description
Batch Response
T
Creates a batch response file, which serves as a reporting mechanism to the connector. The file provides detailed results for each object in a batch where the batch size is greater than one. This is useful in the following cases:
You need to capture system-generated object IDs for use in future updates
You need to correlate an error with its object and have enough information about the error for exception handling and error diagnosis
For more information, see Batch Response File.
Batch Size
S
The maximum number of source records to retrieve from the connector at one time. The default is zero, which means to read all.
FlushFrequency
T
Number of operations the connector caches before writing them to the target. Default is zero.
HTTP Method
T
HTTP Method to be executed.
Methods supported:
POST - Create the entity. Default.
PUT - Replace the entity with the one specified.
MERGE - Update the entity fields that are specified (ignores null fields).
DELETE - Delete the entity.
See the HTTP 1.1 specification for additional details.
Truststore File
ST
The full path to the trust store certificate from the default keystore location.
Truststore Password
ST
The password for the specified trust store.
Data Types
The OData connector supports the following data types.
Binary
Boolean
Byte
DateTime
DateTimeOffset
Decimal
Double
Guid
Int16
Int32
Int64
Single
String
Time
ODBC 3.5
The term ODBC stands for "Open Database Connectivity". This is a standard that allows client-side Windows applications to access various back-end databases through a single common interface. The ODBC 3.5 connector allows you to access data from any application that has a Windows- or Unix/AS400-compliant ODBC driver.
Note:  This connector only supports 64-bit ODBC drivers.
In most cases, the integration platform connects to the file formats directly through knowledge of their physical file formats (for example, ASCII, Lotus, xBASE...), or through the native interfaces supplied by vendors who define a particular data format such as Oracle. ODBC provides another method (and sometimes the only method) for connecting to a particular file format.
Tip:  This connector is designed for Unicode ODBC 3.5x-compliant data sources. If your data source is not a Unicode-based, ODBC 3.5x-compliant data source, use the ODBC 3.x connector. See ODBC 3.x.
This connector specifies field width in characters. The width of a field is the number of characters. For more details, see Determining Field Width in Characters or Bytes.
For information on installing ODBC drivers, see Installing an ODBC Driver.
Connectivity Pointers
If your database has many tables, selecting the table from a list may take more time than typing the UserID.Tablename value in the Source Table/View box. If you enter the table name without the user ID and then connect to the database, you can reselect the table in the source data browser to view it.
If possible, create query statements using the source database application, so that syntax and validity can be checked by the native application. You can then copy the query statement and paste it in the Query Statement Box.
Connector-Specific Notes
Connecting with the Client AS400 ODBC driver can result in slow run times. Other ODBC driver vendors may offer better performance.
Although Excel has a Windows-compliant ODBC driver and "Excel Files" may appear as a data source option when you use the ODBC connector, the integration platform cannot connect with Excel files because they comprise worksheets rather than tables.
SQL Server: The use of Regional settings in ODBC DSN connections to SQL Server databases is not supported in this release. In order for currency, numbers, dates and times to be interpreted correctly, ensure that the Use regional settings when outputting currency, numbers, dates and times checkbox in the SQL Server DSN settings is not checked. Then in the ODBC connector DriverOptions property, type Regional=No.
The unixODBC version of the connectors must use the libdb2o.so driver in the odbc.ini file.
Switching Between iODBC and unixODBC on Linux
The ODBC 3.5 and ODBC 3.x connectors are configured during installation to work with the iODBC driver manager. Users with the unixODBC driver manager must switch to the unixODBC versions.
To switch to the unixODBC connectors, run the following commands as root.
cd <installdir>/runtime/di9
rm -f djodbc3.so djodbc35.so
ln -s djodbc3.unixodbc.so djodbc3.so
ln -s djodbc35.unixodbc.so djodbc35.so
chown diuser.diusers djodbc3.so djodbc35.so
To revert to the iODBC connectors, run the following commands as root.
cd <installdir>/runtime/di9
rm -f djodbc3.so djodbc35.so
ln -s djodbc3.iodbc.so djodbc3.so
ln -s djodbc35.iodbc.so djodbc35.so
Property Options
You can set the following source (S) and target (T) properties.
Note:  A subset of these properties is displayed based on the selected connector.
Property
S/T
Description
CursorType
S
The type of cursor to use for retrieving records from the source table. The choices available are Forward Only, Static and Dynamic. The default setting is Forward Only. For more information, see Cursors.
WhereStmt
S
Provides a pass-through mechanism for SQL connectors where advanced users can construct the Where clause of the SQL query themselves. It can be used as an alternative to writing a lengthy query statement. You might use this to instruct the SQL database server to filter the data based on a particular condition before sending it to the integration platform. Omit Where when you enter the clause. No default exists for this property.
Note:  When the source connection is a Select statement, do not apply the WhereStmt. Instead, include the Where clause in your Select statements. This property enables data filtering when you select a table.
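As a rough illustration of the WhereStmt behavior described above, the sketch below shows how a clause entered without the Where keyword could be appended to the SELECT a connector generates for a table source. The helper name build_select is hypothetical, not part of the product.

```python
def build_select(table, where_stmt=None):
    """Append a WhereStmt value (entered WITHOUT the Where keyword)."""
    sql = f"SELECT * FROM {table}"
    if where_stmt:
        # The connector supplies the WHERE keyword itself, which is why
        # the property value must omit it.
        sql += f" WHERE {where_stmt}"
    return sql

# The server then filters on this condition before sending data.
query = build_select("Orders", "region = 'West' AND amount > 100")
```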
AutoCommit
T
Allows you to automatically commit changes as they are made by each SQL statement, instead of waiting until the end of the transaction. If AutoCommit is set to true, there is no way to roll back changes once they have been made. The default is false.
BulkOperations
T
Use bulk operations for faster insert. This property determines if an insert statement is executed for each record or a bulk add is executed for each record. The default is false, the slower setting. To maximize speed and instruct the integration platform to use a bulk add, change this setting to true.
CommitFrequency
T
Controls how often data is committed to the database. Default is zero, meaning that data is committed at the end of the transformation, allowing rollback on error. This is the slowest setting. When doing large transformations, this is not practical as it may produce too many transaction log entries. Setting the CommitFrequency to some nonzero value tells the connector to do a database commit after inserting or updating the specified number of records. This keeps the transaction log from getting too large but limits the restartability of the transformation.
ConstraintDDL
T
Allows you to specify additional SQL data definition language statements to be executed after their target table is created. This is similar to the support we provide for SQL pass-through in the SQL import connectors. Each line must be a valid ODBC DDL statement. No default exists for this property.
For example, you could have the statements:
CREATE UNIQUE INDEX index1 ON mytable (Field1 ASC)
CREATE INDEX index2 ON mytable (Field2, Field3)
These statements would create two indices on the table "mytable". The first one does not allow duplicates and the index values are stored in ascending order. The second index is a compound index on fields Field2 and Field3.
The ConstraintDDL is executed only if the replace mode is used for the target. If there are any errors, the errors are written to the error and event log file. An error during transformation brings up the transformation error dialog box. If you want to ignore the DDL errors, you may continue the transformation.
ConstraintDDL also supports an escaping mechanism that allows users to specify DDL in the native SQL of the DBMS. Any statement that is preceded by an "@" is sent straight to the DBMS.
The statement
@CREATE INDEX pk_mytable ON mytable (Field1, Field2) WITH PRIMARY
is a DDL statement for creating a primary key for the table mytable. Some ODBC drivers do not support the SQL extensions needed to create a primary key with the ODBC variant of the SQL CREATE statement. So to create primary keys in these cases, use native SQL.
Note:  This option works only in REPLACE mode.
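The "@" escape convention described above can be sketched as a simple classification step: lines beginning with "@" are passed to the DBMS verbatim as native SQL, while the rest are treated as ODBC-level DDL. The function name is illustrative only.

```python
def classify_ddl(lines):
    """Split ConstraintDDL lines into native-SQL and ODBC DDL statements."""
    native, odbc = [], []
    for line in lines:
        line = line.strip()
        if line.startswith("@"):
            native.append(line[1:])   # strip the escape; send as-is to the DBMS
        elif line:
            odbc.append(line)
    return native, odbc

native, odbc = classify_ddl([
    "CREATE INDEX index2 ON mytable (Field2, Field3)",
    "@CREATE INDEX pk_mytable ON mytable (Field1, Field2) WITH PRIMARY",
])
```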
UseCursors
T
The UseCursors property allows you to turn cursor support on or off. The default is false. If set to true and the specified ODBC driver does not support cursor inserts, the integration platform uses the SQL INSERT mode of adding records.
For exports, cursor support is supposed to enhance the performance of inserting records. This appears to be the case for desktop databases. For database servers, there is no noticeable change in insert speed; they seem to execute prepared queries about as quickly as they handle cursor inserts.
Another complication of cursor inserts is that some drivers require that the target table be indexed, otherwise positioned updates (cursors) are not allowed. Two additional properties in the ODBC export connector address this issue: PrimaryKey and ConstraintDDL.
ArraySize
T
Determines the number of rows to be sent to the server at one time. The default value is 1, meaning each row is individually sent to the server. Larger values will buffer multiple rows and send them all at once. While this improves the speed, it affects error reporting (a server error won't be detected/reported until the next batch of records is sent to the server).
The maximum value allowed for this property is 100000. While the connector allows the value to be set that high, many drivers have lower limits. The connector will log a message indicating if the driver is forcing a lower value for the array size. In addition, the connector does not support arrays when there is a LOB-type field in the table, or when the (maximum) length of a character-type field is longer than 32767 characters. In these cases, a message will be logged indicating the array size has been reduced to 1.
Due to the way the connector attempts to support older drivers, the array support requires either BulkOperations be true (which means UseCursors must also be true), or both must be false. The condition where BulkOperations is false and UseCursors is true will cause the array size to be reset to 1, with a message logged indicating this condition occurred.
DriverCompletion
ST
Allows you to control whether or not the driver prompts you for information. The options available are Prompt, Complete (default), Complete Required and No Prompt. The Prompt option prompts you for every individual bit of information. Complete prompts you for any information you forgot to enter. The Complete Required option prompts you only for information that is essential to complete the connection. The No Prompt option does not prompt you for any information.
DriverOptions
ST
Default is empty for a new map. This property is automatically populated with the ODBC connect string upon successful connection.
For ODBC connectors, the map saves the database information in the DriverOptions property as literal values (not macros) during the last successful connection. To run the map against a different database server, delete the DriverOptions value and use the correct macros for the connection information. The map repopulates DriverOptions when it successfully connects to the new database server.
DSNType
ST
You can specify data sources or drivers to connect to in the Data Source Name list. DSNs are listed in the ODBC Data Source Administrator of Windows Administrative Tools in the Control Panel. The options are User & System, System, User, Driver, and File. Default is User & System.
IdentifierQuotes
ST
All databases have what are called quoted identifiers. You use quoted identifiers to make the SQL statement parseable and to distinguish between columns and character data in SQL statements. For example, Oracle uses double quotes for column and table names in SQL statements and single quotes for character data. In a SQL statement, enclose identifiers that contain special characters or that match keywords in identifier quote characters (also known as delimited identifiers in SQL-92). For example, the Accounts Receivable identifier is quoted in the following SELECT statement:
SELECT * FROM "Accounts Receivable"
If you do not use identifier quotes, the parser assumes that there are two tables, Accounts and Receivable, and returns a syntax error indicating that they are not separated by a comma. The options for IdentifierQuotes are Default, None, " and '.
MaxDataLength
ST
The maximum data length for long data types. The default is 1 MB. You can reset this number as you choose based on your available memory capabilities and Target requirements.
Some ODBC drivers have limitations concerning the maximum data length they can handle. If you choose an Application (ODBC) source or target connector and the default setting is not 1 MB, the integration platform presets the default for the capabilities of that particular ODBC driver. It is not recommended that the MaxDataLength property be set any higher under those conditions.
ModifyDriverOptions
ST
Allows you to store the ODBC connection. The default is true. If you set this to false, you are prompted for your connection information each time you run the transformation.
PrimaryKey
ST
The PrimaryKey property allows you to specify a list of field names that are used to make the primary key. Field names are delimited by commas. If the PrimaryKey property contains one or more field names, these names are included in the SQL CREATE statement when the connector is in replace mode. No default exists for this property.
There is one additional requirement for using the PrimaryKey property. The ODBC driver must support integrity enhancement facility (IEF). Only the more advanced ODBC drivers support this.
Synonyms
ST
If set to true, this property allows you to see synonyms. The alias names appear in the table list along with the tables. Default is false.
SystemTables
ST
The SystemTables property is only applicable if you are logged onto the database as the database administrator. Only the DBA has access to SystemTables. If set to true, this property allows you to see all the tables created by the DBA. The system table names appear in the table list. The default is false.
TransactionIsolation (v3.x and v3.5 only)
ST
The TransactionIsolation option allows you to specify one of four isolation levels when reading from or writing to a database table with ODBC. The default is Serializable.
The ANSI SQL 2 standard defines three specific ways in which serializability of a transaction may be violated: P1 (Dirty Read), P2 (Nonrepeatable Read) and P3 (Phantoms).
The following lists the four supported isolation levels.
READ_UNCOMMITTED – Permits P1, P2, and P3.
READ_COMMITTED – Permits P2 and P3. Does not permit P1.
REPEATABLE_READ – Permits P3. Does not permit P1 and P2.
SERIALIZABLE – Does not permit any of P1, P2, and P3.
For further details about TransactionIsolation levels, see the Microsoft ODBC SDK documentation.
Views
ST
If set to true, this property allows you to see views. The view names appear in the table list along with the tables. Default is true.
Encoding
ST
Sets the character encoding for ODBC 3.x source and target files.
Note:  Shift-JIS encoding is used only in Japanese operating systems. UCS-2 is no longer a valid encoding name, but you may use UCS2 by changing the string value in the XML file.
Data Types
The following data types are available:
Binary
Bit
Byte
Char
Counter
Currency
Datetime
Double
Guid
Integer
LongBinary
LongChar
Real
SmallInt
Varbinary
VarChar
If you are appending data to an existing table, the data type of each field uses the data type in the selected table by default.
ODBC 3.5 Mass Insert
The term ODBC stands for "Open Database Connectivity". This is a standard to allow client-side Windows applications to access various back-end databases through a single common interface. The ODBC 3.5 connector allows you to access data from any application that has a Windows- or UNIX/AS400-compliant ODBC driver.
Note:  This connector only supports 64-bit ODBC drivers.
ODBC 3.5 Mass Insert is a rapid way of inserting records into a relational database. It bypasses the transactional layer of the database and adds information directly to the storage tables. ODBC 3.5 Mass Insert is a useful option if you are dealing with large tables and performance is of paramount importance, but you do not have the option of a native connection.
In most cases, the integration platform connects to the file formats directly through knowledge of their physical file formats (for example, ASCII, Lotus, xBASE...), or through the native interfaces supplied by vendors who define a particular data format such as Oracle. ODBC provides another method (and sometimes the only method) for connecting to a particular file format.
Tip:  This connector is designed for Unicode ODBC 3.5x-compliant data sources. If your data source is not a Unicode-based, ODBC 3.5x-compliant data source, use the ODBC 3.x connector. See ODBC 3.5.
This connector specifies field width in characters, not bytes. For more details, see Determining Field Width in Characters or Bytes.
For information on installing ODBC drivers, see Installing an ODBC Driver.
Connectivity Pointers
If your database has many tables, selecting the table from a list may take more time than typing the UserID.Tablename value in the Source Table/View box. If you enter the table name without the user ID and then connect to the database, you can reselect the table in the source data browser to view it.
If possible, create query statements using the source database application, so that syntax and validity can be checked by the native application. You can then copy the query statement and paste it in the Query Statement Box.
Connector-Specific Notes
Connecting with the Client AS400 ODBC driver can result in slow run times. Other ODBC driver vendors may offer better performance.
Although Excel has a Windows-compliant ODBC driver and "Excel Files" may appear as a data source option when you use the ODBC connector, the integration platform cannot connect with Excel files because they comprise worksheets rather than tables.
SQL Server: The use of Regional settings in ODBC DSN connections to SQL Server databases is not supported in this release. In order for currency, numbers, dates and times to be interpreted correctly, ensure that the Use regional settings when outputting currency, numbers, dates and times checkbox in the SQL Server DSN settings is not checked. Then in the ODBC connector DriverOptions property, type Regional=No.
Switching Between iODBC and unixODBC on Linux
The ODBC 3.5 and ODBC 3.x connectors are configured during installation to work with the iODBC driver manager. Users with the unixODBC driver manager must switch to the unixODBC versions.
To switch to the unixODBC connectors, run the following commands as root.
cd <installdir>/runtime/di9
rm -f djodbc3.so djodbc35.so
ln -s djodbc3.unixodbc.so djodbc3.so
ln -s djodbc35.unixodbc.so djodbc35.so
chown diuser.diusers djodbc3.so djodbc35.so
To revert to the iODBC connectors, run the following commands as root.
cd <installdir>/runtime/di9
rm -f djodbc3.so djodbc35.so
ln -s djodbc3.iodbc.so djodbc3.so
ln -s djodbc35.iodbc.so djodbc35.so
Property Options
You can set the following source (S) and target (T) properties.
Property
S/T
Description
Encoding
T
Sets the character encoding for ODBC 3.5 Mass Insert source and target files.
Note:  Shift-JIS encoding is used only in Japanese operating systems. UCS-2 is no longer a valid encoding name, but you may use UCS2 by changing the string value in the XML file.
ConstraintDDL
T
Allows you to specify some additional SQL data definition language statements to be executed after their target table is created. This is similar to the support we provide for SQL pass-through in the SQL import connectors. Each line must be a valid ODBC DDL statement.
For example, you could have the statements:
CREATE UNIQUE INDEX index1 ON mytable (Field1 ASC)
CREATE INDEX index2 ON mytable (Field2, Field3)
These statements would create two indices on the table "mytable". The first one does not allow duplicates and the index values are stored in ascending order. The second index is a compound index on fields Field2 and Field3.
The ConstraintDDL is executed only if the replace mode is used for the target. If there are any errors, the errors are written to the error and event log file. An error during transformation brings up the transformation error dialog box. If you want to ignore the DDL errors, you may continue the transformation.
ConstraintDDL also supports an escaping mechanism that allows users to specify DDL in the native SQL of the DBMS. Any statement that is preceded by an '@' is sent straight to the DBMS.
The statement:
@CREATE INDEX pk_mytable ON mytable (Field1, Field2) WITH PRIMARY
is a DDL statement for creating a primary key for the table mytable. Some ODBC drivers do not support the SQL extensions needed to create a primary key with the ODBC variant of the SQL CREATE statement. So to create primary keys in these cases, use native SQL.
Note:  This option works only in REPLACE mode.
CommitFrequency
T
Controls how often data is committed to the database. Default is zero, meaning that data is committed at the end of the transformation, allowing rollback on error. This is the slowest setting. When doing large transformations, this is not practical as it may produce too many transaction log entries. Setting the CommitFrequency to some nonzero value tells the connector to do a database commit after inserting or updating the specified number of records. This keeps the transaction log from getting too large but limits the restartability of the transformation.
SystemTables
T
The SystemTables property is only applicable if you are logged onto the database as the database administrator. Only the DBA has access to SystemTables. If set to true, this property allows you to see all the tables created by the DBA. The system table names appear in the table list. The default is false.
Views
T
If set to true, this property allows you to see views. The view names appear in the table list along with the tables. Default is true.
Synonyms
T
If set to true, this property allows you to see synonyms. The alias names appear in the table list along with the tables. Default is false.
DSNType
T
You can specify data sources or drivers to connect to in the Data Source Name list. DSNs are listed in the ODBC Data Source Administrator of Windows Administrative Tools in the Control Panel. The options are User & System, System, User, Driver, and File. Default is User & System.
AutoCommit
T
Allows you to automatically commit changes as they are made by each SQL statement, instead of waiting until the end of the transaction. If AutoCommit is set to true, there is no way to roll back changes once they have been made. The default is false.
BulkOperations
T
Use bulk operations for faster insert. This property determines if an insert statement is executed for each record or a bulk add is executed for each record. The default is true. To execute an insert statement for each record (the slower setting), set this property to false.
DriverCompletion
T
Allows you to control whether or not the driver prompts you for information. The options are Prompt, Complete (default), Complete Required, and No Prompt. The Prompt option prompts you for every individual bit of information. Complete prompts you for any information you forgot to enter. The Complete Required option prompts you only for information that is essential to complete the connection. The No Prompt option does not prompt you for any information.
IdentifierQuotes
T
All databases have what are called quoted identifiers. You use quoted identifiers to make the SQL statement parseable and to distinguish between columns and character data in SQL statements. For example, Oracle uses double quotes for column and table names in SQL statements and single quotes for character data. In a SQL statement, enclose identifiers that contain special characters or that match keywords in identifier quote characters (also known as delimited identifiers in SQL-92). For example, the Accounts Receivable identifier is quoted in the following SELECT statement:
SELECT * FROM "Accounts Receivable"
If you do not use identifier quotes, the parser assumes that there are two tables, Accounts and Receivable, and returns a syntax error indicating that they are not separated by a comma. The options for IdentifierQuotes are Default, None, " and '.
ModifyDriverOptions
T
Allows you to store the ODBC connection. The default is true. If you set this to false, you are prompted for your connection information each time that you run your transformation.
DriverOptions
T
Enter any valid ODBC connect string options here.
MaxDataLength
T
The maximum data length for long data types. Default is 1 MB. You can reset this number as you choose based on your available memory capabilities and target requirements.
Some ODBC drivers have limitations concerning the maximum data length they can handle. If you choose an Application (ODBC) Source or target connector and the default setting is not 1 MB, then the integration platform sets the default for the capabilities of that particular ODBC driver. It is not recommended that the MaxDataLength property be set any higher under those conditions.
PrimaryKey
T
The PrimaryKey property allows you to specify a list of field names that are used to make the primary key. The field names are delimited by commas. If the PrimaryKey property contains one or more field names, these names are included in the SQL CREATE statement when the connector is in replace mode.
There is one additional requirement for using the PrimaryKey property. The ODBC driver must support integrity enhancement facility (IEF). Only the more advanced ODBC drivers support this.
TransactionIsolation
T
The TransactionIsolation option allows you to specify one of four isolation levels when reading from or writing to a database table with ODBC. The default is Serializable.
The ANSI SQL 2 standard defines three specific ways in which serializability of a transaction may be violated: P1 (Dirty Read), P2 (Nonrepeatable Read), and P3 (Phantoms).
The following lists the four supported isolation levels.
READ_UNCOMMITTED - Permits P1, P2, and P3.
READ_COMMITTED - Permits P2 and P3. Does not permit P1.
REPEATABLE_READ - Permits P3. Does not permit P1 and P2.
SERIALIZABLE - Does not permit any of P1, P2, and P3.
For more information, see the Microsoft ODBC SDK documentation.
UseCursors
T
The UseCursors property allows you to turn cursor support on and off. The default is false. If you set the property to true and the specified ODBC driver does not support cursor inserts, the integration platform uses the SQL INSERT mode of adding records.
For exports, cursor support is supposed to enhance the performance of inserting records. This appears to be the case for desktop databases. For database servers, there is no noticeable change in insert speed; they seem to execute prepared queries about as quickly as they handle cursor inserts.
Another complication of the cursor inserts is that some drivers require that the target table be indexed, otherwise positioned updates (cursors) are not allowed. Two additional properties in the ODBC export connector address this issue: PrimaryKey and ConstraintDDL (see details above).
ArraySize
T
Determines the number of rows to be sent to the server at one time. The default value is 1, meaning each row is individually sent to the server. Larger values will buffer multiple rows and send them all at once. While this improves the speed, it affects error reporting (a server error won't be detected/reported until the next batch of records is sent to the server).
The maximum value allowed for this property is 100000. While the connector allows the value to be set that high, many drivers have lower limits. The connector will log a message indicating if the driver is forcing a lower value for the array size. In addition, the connector does not support arrays when there is a LOB-type field in the table, or when the (maximum) length of a character-type field is longer than 32767 characters. In these cases, a message will be logged indicating the array size has been reduced to 1.
Due to the way the connector attempts to support older drivers, the array support requires either BulkOperations be true (which means UseCursors must also be true), or both must be false. The condition where BulkOperations is false and UseCursors is true will cause the array size to be reset to 1, with a message logged indicating this condition occurred.
Data Types
The following data types are available:
Binary
Bit
Byte
Char
Counter
Currency
Datetime
Double
Guid
Integer
LongBinary
LongChar
Real
SmallInt
Varbinary
VarChar
If you are appending data to an existing table, the data type of each field uses the data type in the selected table by default.
Length
These are field lengths in your data. If you need to change field lengths, reset them in the schema.
ODBC 3.5 Multimode
In most cases, the integration platform connects to the file formats you are using either directly through knowledge of their physical file formats (for example, ASCII, Lotus, xBASE...), or through the native interfaces supplied by those vendors who define a particular data format such as Oracle. ODBC provides another method (and sometimes the only method) for connecting to a particular file format. The ODBC 3.5 Multimode connector allows you to perform multiple operations (such as table drops and table inserts) directly on your target database.
Note:  This connector only supports 64-bit ODBC drivers.
The integration platform can write to the ODBC 3.5 Multimode format as a target type. It is not available as a source type.
Tip:  This connector is designed for Unicode data sets. If your data is not Unicode, use the ODBC 3.x Multimode connector. See ODBC 3.x Multimode.
This connector specifies field width in characters, not bytes. For more details, see Determining Field Width in Characters or Bytes.
Connectivity Pointers
On the surface, it may appear that this connector would allow you to make a connection to any source that has an ODBC driver. However, for transformations written with this connector to work, your target format must support SQL statements.
As a general rule, when connecting to a database with ODBC, most applications require the database engine to be up and running at the time the connection is attempted.
Connector-Specific Notes
Target Schema modification: In multimode targets, modifications to column names, data types, and sizes are not permitted.
Query Statement Support: For transformations written with this connector to work, your target format must support SQL statements, as Oracle and SQL Server do. This allows you to access multiple tables in the same transformation each time that transformation runs.
Auto-Increment Fields: If your target table contains an auto-increment column, you may get an error at run time. If this occurs, you must delete the field that is causing the problem from the record.
SQL Server: The use of Regional settings in ODBC DSN connections to SQL Server databases is not supported in this release. In order for currency, numbers, dates and times to be interpreted correctly, ensure that the Use regional settings when outputting currency, numbers, dates and times checkbox in the SQL Server DSN settings is not checked. Then in the ODBC connector DriverOptions property, type Regional=No.
Property Options
You can set the following source (S) and target (T) properties.
Property
S/T
Description
AutoCommit
T
Allows you to automatically commit changes as they are made by each SQL statement, instead of waiting until the end of the transaction. If AutoCommit is set to true, there is no way to roll back changes once they have been made. The default is false.
DriverCompletion
T
Allows you to control whether or not the driver prompts you for information. The options available are Prompt, Complete, Complete Required, No Prompt. The Prompt option prompts the user for every individual bit of information. Complete prompts the user for any information they forgot to enter. The Complete Required option prompts the user only for information that is essential to complete the connection. The No Prompt option does not prompt the user for any information. The default is Complete.
DriverOptions
T
Enter any valid ODBC connect string options here.
DSNType
T
You can specify data sources or drivers to connect to in the Data Source Name list. DSNs are listed in the ODBC Data Source Administrator of Windows Administrative Tools in the Control Panel. The options are User & System, System, User, Driver, and File. Default is User & System.
IdentifierQuotes
T
All databases have what are called quoted identifiers. You use quoted identifiers to make the SQL statement parseable and to distinguish between columns and character data in SQL statements. For example, Oracle uses double quotes for column and table names in SQL statements and single quotes for character data. In a SQL statement, enclose identifiers that contain special characters or that match keywords in identifier quote characters (also known as delimited identifiers in SQL-92).
For example, the Accounts Receivable identifier is quoted in the following SELECT statement:
SELECT * FROM "Accounts Receivable"
If you do not use identifier quotes, the parser assumes that there are two tables, Accounts and Receivable, and returns a syntax error indicating that they are not separated by a comma. IdentifierQuotes has four options: Default, None, ", and '.
MaxDataLength
T
The maximum data length for long data types. The default is 1 MB. You can reset this number as you choose based on your available memory capabilities and Target requirements.
Some ODBC drivers have limitations concerning the maximum data length they can handle. If you choose an Application (ODBC) source or target connector and the default setting is not 1 MB, then the integration platform sets the default for the capabilities of that particular ODBC driver. It is not recommended that the MaxDataLength property be set any higher under those conditions.
ModifyDriverOptions
T
Allows you to store the ODBC connection. The default is true. If you set this to false, you are prompted for your connection information each time that you run your transformation.
SQL Log
T
The default is sql.log in the default installation directory. To use a different log, browse to the file, or enter the path and file name.
UpdateNullFields
T
Null values are sent to the database when inserting or updating records. The default is true. If set to false, null values are not sent to the database when you insert or update records. When set to false, this property forces the connector to operate in unbound mode, which may cause slower performance.
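A hedged sketch of what the false setting implies: null fields are dropped from the generated UPDATE instead of being sent. The helper below is illustrative only, not the connector's SQL generator.

```python
def build_update(table, record, key, update_null_fields=True):
    """Build an UPDATE, optionally skipping fields whose value is None."""
    items = [(k, v) for k, v in record.items() if k != key]
    if not update_null_fields:
        # UpdateNullFields=false: null values are not sent to the database.
        items = [(k, v) for k, v in items if v is not None]
    sets = ", ".join(f"{k} = ?" for k, _ in items)
    sql = f"UPDATE {table} SET {sets} WHERE {key} = ?"
    params = [v for _, v in items] + [record[key]]
    return sql, params

rec = {"id": 7, "name": "Ada", "phone": None}
sql, params = build_update("people", rec, "id", update_null_fields=False)
```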
SQL Output
T
Allows you to select bound or unbound mode and whether you want to write SQL statements to a SQL log or not. Keep in mind that bound mode is faster, since bind variables are used. Target Only is the default output:
Target Only - Uses bound mode, which uses bind variables. SQL statements are sent to the Target and not to the SQL log specified in the SQL Log property.
Target Only (Unbound Mode) - Uses unbound mode, which does not use bind variables and sends the literal SQL statement to the database engine. SQL statements are sent to the Target and not to the SQL log specified in the SQL Log property.
Target and SQL Log - Sends SQL statements to the Target and to the SQL log specified in the SQL Log property.
SQL Log Only - Sends SQL statements only to the SQL log file specified in the SQL Log property.
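The bound/unbound distinction above can be sketched as follows: bound mode keeps a parameterized statement with bind variables, while unbound mode interpolates the literal values into the SQL text, which is the form that would appear in a SQL log. The function name and quoting rules are simplifying assumptions for illustration.

```python
def render(sql, params, bound=True):
    """Return either a parameterized statement or a literal SQL string."""
    if bound:
        return sql, tuple(params)          # bind variables stay as ?
    # Unbound: substitute literals, quoting string values.
    out = sql
    for p in params:
        lit = f"'{p}'" if isinstance(p, str) else str(p)
        out = out.replace("?", lit, 1)
    return out, ()

bound_sql, bound_params = render(
    "INSERT INTO t VALUES (?, ?)", [1, "abc"], bound=True)
unbound_sql, _ = render(
    "INSERT INTO t VALUES (?, ?)", [1, "abc"], bound=False)
```

Bound mode is faster because the database can reuse the prepared statement across rows; unbound mode trades that for a literal statement the log can record verbatim.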
Synonyms
T
If set to true, this property allows you to see synonyms. The alias names appear in the table list along with the tables. Default is false.
SystemTables
T
The SystemTables property is only applicable if you are logged onto the database as the database administrator. Only the DBA has access to SystemTables. If set to true, this property allows you to see all the tables created by the DBA. The system table names appear in the table list. The default is false.
TransactionIsolation
T
The TransactionIsolation option allows you to specify one of four isolation levels when reading from or writing to a database table with ODBC. The default is Serializable.
The ANSI SQL 2 standard defines three specific ways in which serializability of a transaction may be violated: P1 (Dirty Read), P2 (Nonrepeatable Read) and P3 (Phantoms).
The following lists the four supported isolation levels.
READ_UNCOMMITTED - Permits P1, P2, and P3.
READ_COMMITTED - Permits P2 and P3. Does not permit P1.
REPEATABLE_READ - Permits P3. Does not permit P1 and P2.
SERIALIZABLE - Does not permit any of P1, P2, and P3.
For further details about TransactionIsolation levels, see the Microsoft ODBC SDK documentation.
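The relationship between the isolation levels and the P1/P2/P3 phenomena above can be summarized in a small lookup table. The following is a conceptual sketch in Python (not platform code), restating the list for quick reference:

```python
# Conceptual summary of the isolation levels described above.
# P1 = Dirty Read, P2 = Nonrepeatable Read, P3 = Phantoms
PERMITTED_PHENOMENA = {
    "READ_UNCOMMITTED": {"P1", "P2", "P3"},
    "READ_COMMITTED":   {"P2", "P3"},
    "REPEATABLE_READ":  {"P3"},
    "SERIALIZABLE":     set(),
}

def permits(level: str, phenomenon: str) -> bool:
    """Return True if the given isolation level permits the phenomenon."""
    return phenomenon in PERMITTED_PHENOMENA[level]
```

Each level strictly removes one phenomenon relative to the level below it, which is why Serializable (the default) is the safest but potentially slowest choice.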
Views
T
If set to true, this property allows you to see views. The view names appear in the table list along with the tables. Default is true.
Encoding
T
Sets the character encoding for ODBC 3.5 source and target files.
Note:  Shift-JIS encoding is used only in Japanese operating systems. UCS-2 is no longer a valid encoding name, but you may use UCS2 by changing the string value in the XML file.
Data Types
The only available data type is Text.
Length
These are field lengths in your data. If you need to change field lengths, reset them in the schema.
ODBC 3.x
The term ODBC stands for Open Database Connectivity. This is a standard promulgated by Microsoft to allow client-side Windows applications to access various backend databases through a single common interface. The ODBC 3.x connector provides a method (sometimes the only method) of connecting to a particular file format.
Note:  This connector only supports 64-bit ODBC drivers.
In most cases, the integration platform connects to the file formats directly through knowledge of physical file formats (for example, ASCII, Lotus, xBASE), or through the native interfaces supplied by the vendors who define a particular data format such as Oracle.
The integration platform's ODBC 3.x support allows you to access data from any application that has a Windows- or Unix/AS400-compliant ODBC driver.
Tip:  This connector is designed for non-Unicode ODBC 3.x data sources with an ODBC version earlier than ODBC 3.5. If your data source is a Unicode-based, ODBC 3.5x-compliant data source, use the ODBC 3.5 connector. See ODBC 3.5.
This connector specifies field width in characters. The width of a field is the number of characters. For more details, see Determining Field Width in Characters or Bytes.
We do not sell or distribute ODBC drivers, so you must obtain ODBC drivers from another vendor.
Some basic ODBC drivers are installed automatically with some Windows operating systems. However, the drivers are not configured automatically, so you must use the operating system's ODBC Administrator to configure the driver. The ODBC Admin is called Data Sources (ODBC) and is found in the Control Panel under Administrative Tools.
For information on installing ODBC drivers, see Installing an ODBC Driver.
Connectivity Pointers
If your database has many tables, selecting the table from a list may take more time than typing the UserID.Tablename value in the Source Table/View box. If you enter the table name without the user ID and then connect to the database, you can reselect the table in the source data browser to view it.
If possible, create query statements using the source database application, so that syntax and validity can be checked by the native application. You can then copy the query statement and paste it in the Query Statement Box.
Connector-Specific Notes
Connecting with the Client AS400 ODBC driver can result in slow run times. Other ODBC driver vendors may offer better performance.
Although Excel has a Windows-compliant ODBC driver and the Excel File option may appear for the data source when you use the ODBC connector, the integration platform cannot connect with Excel files because they consist of worksheets rather than tables.
SQL Server: The use of Regional settings in ODBC DSN connections to SQL Server databases is not supported in this release. In order for currency, numbers, dates and times to be interpreted correctly, ensure that the Use regional settings when outputting currency, numbers, dates and times checkbox in the SQL Server DSN settings is not checked. Then in the ODBC connector DriverOptions property, type Regional=No.
The unixODBC version of the connectors must use the libdb2o.so driver in the odbc.ini file.
Switching Between iODBC and unixODBC on Linux
The ODBC 3.5 and ODBC 3.x connectors are configured during installation to work with the iODBC driver manager. Users with the unixODBC driver manager must switch to the unixODBC versions.
To switch to the unixODBC connectors, run the following commands as root.
cd <installdir>/runtime/di9
rm -f djodbc3.so djodbc35.so
ln -s djodbc3.unixodbc.so djodbc3.so
ln -s djodbc35.unixodbc.so djodbc35.so
chown diuser.diusers djodbc3.so djodbc35.so
To revert to the iODBC connectors, run the following commands as root.
cd <installdir>/runtime/di9
rm -f djodbc3.so djodbc35.so
ln -s djodbc3.iodbc.so djodbc3.so
ln -s djodbc35.iodbc.so djodbc35.so
Property Options
You can set the following source (S) and target (T) properties.
Property
S/T
Description
CursorType
S
The type of cursor to use for retrieving records from the source table. The choices available are Forward Only, Static and Dynamic. The default setting is Forward Only. For more information, see Cursors.
WhereStmt
S
Provides a pass-through mechanism for SQL connectors where advanced users can construct the Where clause of the SQL query themselves. It can be used as an alternative to writing a lengthy query statement. You might use this to instruct the SQL database server to filter the data based on a particular condition before sending it to the integration platform. Omit the Where keyword when you enter the clause. No default exists for this property.
Note:  When the source connection is a Select statement, do not apply the WhereStmt. Instead, include the Where clause in your Select statements. This property enables data filtering when you select a table.
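The WhereStmt convention above (enter the condition without the Where keyword) can be illustrated with a short sketch. This is a hypothetical helper showing how such a value might be combined into the query the connector issues; the actual connector internals may differ:

```python
def build_select(table: str, where_stmt: str = "") -> str:
    """Sketch: append a WhereStmt value (entered WITHOUT the WHERE
    keyword) to the generated SELECT. Hypothetical illustration only."""
    sql = f"SELECT * FROM {table}"
    if where_stmt:
        # The connector supplies the WHERE keyword itself,
        # which is why the property value must omit it.
        sql += f" WHERE {where_stmt}"
    return sql

# e.g. build_select("Orders", "Region = 'West' AND Amount > 100")
```

Because the server evaluates the clause before sending rows, this can substantially reduce the data transferred to the integration platform.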
AutoCommit
T
Allows you to automatically commit changes as they are made by each SQL statement, instead of waiting until the end of the transaction. If AutoCommit is set to true, there is no way to roll back changes once they have been made. The default is false.
BulkOperations
T
Use bulk operations for faster insert. This property determines if an insert statement is executed for each record or a bulk add is executed for each record. The default is false, the slower setting. To maximize speed and instruct the integration platform to use a bulk add, change this setting to true.
CommitFrequency
T
Controls how often data is committed to the database. Default is zero, meaning that data is committed at the end of the transformation, allowing rollback on error. This is the slowest setting. When doing large transformations, this is not practical as it may produce too many transaction log entries. Setting the CommitFrequency to some nonzero value tells the connector to do a database commit after inserting or updating the specified number of records. This keeps the transaction log from getting too large but limits the restartability of the transformation.
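The CommitFrequency behavior described above (commit every N records, or only once at the end when set to zero) can be sketched as follows. This is a hypothetical illustration of the semantics, not the connector's actual code:

```python
class CommitCounter:
    """Sketch of CommitFrequency semantics: commit after every
    `frequency` written records; 0 means a single commit at the end."""
    def __init__(self, frequency: int):
        self.frequency = frequency
        self.pending = 0   # records written since the last commit
        self.commits = 0   # total commits issued

    def record_written(self):
        self.pending += 1
        if self.frequency and self.pending >= self.frequency:
            self.commit()

    def commit(self):
        if self.pending:
            self.commits += 1
            self.pending = 0

    def finish(self):
        # End of transformation: flush whatever is still uncommitted.
        self.commit()
```

With frequency 0, everything stays in one transaction (full rollback on error, but a large transaction log); with a nonzero frequency, the log stays small at the cost of restartability.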
ConstraintDDL
T
Allows you to specify additional SQL data definition language statements to be executed after their target table is created. This is similar to the support we provide for SQL pass-through in the SQL import connectors. Each line must be a valid ODBC DDL statement. No default exists for this property.
For example, you could have the statements:
CREATE UNIQUE INDEX index1 ON mytable (Field1 ASC)
CREATE INDEX index2 ON mytable (Field2, Field3)
These statements would create two indices on the table "mytable". The first one does not allow duplicates and the index values are stored in ascending order. The second index is a compound index on fields Field2 and Field3.
The ConstraintDDL is executed only if the replace mode is used for the target. If there are any errors, the errors are written to the error and event log file. An error during transformation brings up the transformation error dialog box. If you want to ignore the DDL errors, you may continue the transformation.
ConstraintDDL also supports an escaping mechanism that allows users to specify DDL in the native SQL of the DBMS. Any statement that is preceded by an "@" is sent straight to the DBMS.
The statement
@CREATE INDEX pk_mytable ON mytable (Field1, Field2) WITH PRIMARY
is a DDL statement for creating a primary key for the table mytable. Some ODBC drivers do not support the SQL extensions needed to create a primary key with the ODBC variant of the SQL CREATE statement. So to create primary keys in these cases, use native SQL.
Note:  This option works only in REPLACE mode.
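The "@" escaping convention for ConstraintDDL can be sketched as a simple split: statements prefixed with "@" are passed verbatim to the DBMS as native SQL, and all other lines are treated as ODBC-level DDL. This is a hypothetical helper illustrating the rule, not connector source:

```python
def split_constraint_ddl(lines):
    """Sketch of the ConstraintDDL '@' escape: '@'-prefixed statements
    go straight to the DBMS; the rest are ODBC DDL statements."""
    odbc_ddl, native_sql = [], []
    for line in lines:
        stmt = line.strip()
        if not stmt:
            continue
        if stmt.startswith("@"):
            native_sql.append(stmt[1:])  # strip the '@' marker
        else:
            odbc_ddl.append(stmt)
    return odbc_ddl, native_sql
```

This is how the primary-key example above reaches drivers whose ODBC SQL dialect lacks the needed CREATE extensions.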
UseCursors
T
The UseCursors property allows you to turn cursor support on or off. The default is false. If set to true and the specified ODBC driver does not support cursor inserts, the integration platform uses the SQL INSERT mode of adding records.
For exports, cursor support is supposed to enhance the performance of inserting records. This appears to be the case for desktop databases. For database servers, there is no noticeable change in insert speed. They seem to execute prepared queries about as quickly as they handle cursor inserts.
Another complication of cursor inserts is that some drivers require that the target table be indexed, otherwise positioned updates (cursors) are not allowed. Two additional properties in the ODBC export connector address this issue: PrimaryKey and ConstraintDDL.
ArraySize
T
Determines the number of rows to be sent to the server at one time. The default value is 1, meaning each row is individually sent to the server. Larger values will buffer multiple rows and send them all at once. While this improves the speed, it affects error reporting (a server error won't be detected/reported until the next batch of records is sent to the server).
The maximum value allowed for this property is 100000. While the connector allows the value to be set that high, many drivers have lower limits. The connector will log a message indicating if the driver is forcing a lower value for the array size. In addition, the connector does not support arrays when there is a LOB-type field in the table, or when the (maximum) length of a character-type field is longer than 32767 characters. In these cases, a message will be logged indicating the array size has been reduced to 1.
Due to the way the connector attempts to support older drivers, the array support requires either BulkOperations be true (which means UseCursors must also be true), or both must be false. The condition where BulkOperations is false and UseCursors is true will cause the array size to be reset to 1, with a message logged indicating this condition occurred.
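The ArraySize rules above (the 100000 cap, the driver's own limit, the LOB and long-character restrictions, and the BulkOperations/UseCursors combination) can be collected into one sketch. This is a hypothetical helper restating the documented rules, not connector source:

```python
def effective_array_size(requested, driver_max, has_lob,
                         max_char_len, bulk_operations, use_cursors):
    """Sketch of the effective ArraySize per the rules above:
    - capped at 100000 and at the driver's own limit
    - forced to 1 if the table has a LOB field or a character
      field longer than 32767 characters
    - forced to 1 when BulkOperations=false but UseCursors=true"""
    size = min(requested, 100000, driver_max)
    if has_lob or max_char_len > 32767:
        return 1
    if use_cursors and not bulk_operations:
        return 1
    return max(size, 1)
```

In each forced-to-1 case, the connector logs a message explaining why the requested value was reduced.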
DriverCompletion
ST
Allows you to control whether or not the driver prompts you for information. The options available are Prompt, Complete (default), Complete Required and No Prompt. The Prompt option prompts you for every individual bit of information. Complete prompts you for any information you forgot to enter. The Complete Required option prompts you only for information that is essential to complete the connection. The No Prompt option does not prompt you for any information.
DriverOptions
ST
Default is empty for a new map. This property is automatically populated with the ODBC connect string upon successful connection.
For ODBC connectors, the map saves the database information from the last successful connection in the DriverOptions property with real values (not macros). If you want to run the map against a different database server, delete the DriverOptions value and use the correct macros for the connection information. The map repopulates DriverOptions when it successfully connects to the new database server.
DSNType
ST
You can specify data sources or drivers to connect to in the Data Source Name list. DSNs are listed in the ODBC Data Source Administrator of Windows Administrative Tools in the Control Panel. The options are User & System, System, User, Driver, and File. Default is User & System.
IdentifierQuotes
ST
All databases have what are called quoted identifiers. You use quoted identifiers to make the SQL statement parseable and to distinguish between columns and character data in SQL statements. For example, Oracle uses double quotes for column and table names in SQL statements and uses single quotes for character data. In a SQL statement, you should enclose identifiers containing special characters or matching keywords in identifier quote characters (also known as delimited identifiers in SQL-92). For example, the Accounts Receivable identifier is quoted in the following SELECT statement:
SELECT * FROM "Accounts Receivable"
If you do not use identifier quotes, the parser assumes that there are two tables, Accounts and Receivable, and returns a syntax error indicating that they are not separated by a comma. The options for IdentifierQuotes are Default, None, " and '.
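The quoting rule above can be sketched with a small helper. This is a hypothetical illustration (not connector code) of wrapping an identifier in the configured quote character, with embedded quote characters doubled per SQL-92:

```python
def quote_identifier(name: str, quote_char: str = '"') -> str:
    """Sketch: wrap an identifier in the configured quote character so
    names with spaces or keyword clashes parse correctly. Embedded
    quote characters are doubled, per SQL-92 delimited identifiers."""
    return quote_char + name.replace(quote_char, quote_char * 2) + quote_char

# Reproduces the example above: SELECT * FROM "Accounts Receivable"
query = f"SELECT * FROM {quote_identifier('Accounts Receivable')}"
```

With the None option, no such wrapping occurs, which is when the "two tables" parse error described above can appear.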
MaxDataLength
ST
The maximum data length for long data types. The default is 1 MB. You can reset this number as you choose based on your available memory capabilities and Target requirements.
Some ODBC drivers have limitations concerning the maximum data length they can handle. If you choose an Application (ODBC) source or target connector and the default setting is not 1 MB, the integration platform presets the default for the capabilities of that particular ODBC driver. It is not recommended that the MaxDataLength property be set any higher under those conditions.
ModifyDriverOptions
ST
Allows you to store the ODBC connection. The default is true. If you set this to false, you are prompted for your connection information each time you run the transformation.
PrimaryKey
ST
The PrimaryKey property allows you to specify a list of field names that are used to make the primary key. Field names are delimited by commas. If the PrimaryKey property contains one or more field names, these names are included in the SQL CREATE statement when the connector is in replace mode. No default exists for this property.
There is one additional requirement for using the PrimaryKey property. The ODBC driver must support integrity enhancement facility (IEF). Only the more advanced ODBC drivers support this.
Synonyms
ST
If set to true this property allows you to see synonyms. The alias names appear in the table list along with the tables. Default is false.
SystemTables
ST
The SystemTables property is only applicable if you are logged onto the database as the database administrator. Only the DBA has access to SystemTables. If set to true, this property allows you to see all the tables created by the DBA. The system table names appear in the table list. The default is false.
TransactionIsolation (v3.x and v3.5 only)
ST
The TransactionIsolation option allows you to specify one of four isolation levels when reading from or writing to a database table with ODBC. The default is Serializable.
The ANSI SQL 2 standard defines three specific ways in which serializability of a transaction may be violated: P1 (Dirty Read), P2 (Nonrepeatable Read) and P3 (Phantoms).
The following lists the four supported isolation levels.
READ_UNCOMMITTED – Permits P1, P2, and P3.
READ_COMMITTED – Permits P2 and P3. Does not permit P1.
REPEATABLE_READ – Permits P3. Does not permit P1 and P2.
SERIALIZABLE – Does not permit any of P1, P2, and P3.
For further details about TransactionIsolation levels, see the Microsoft ODBC SDK documentation.
Views
ST
If set to true this property allows you to see views. The view names appear in the table list along with the tables. Default is true.
Encoding
ST
Sets the character encoding for ODBC 3.x source and target files.
Note:  Shift-JIS encoding is used only in Japanese operating systems. UCS-2 is no longer a valid encoding name, but you may use UCS2 by changing the string value in the XML file.
Data Types
The following data types are available:
Binary
Bit
Byte
Char
Counter
Currency
Datetime
Double
Guid
Integer
LongBinary
LongChar
Real
SmallInt
Varbinary
VarChar
If you are appending data to an existing table, the data type of each field uses the data type in the selected table by default.
Length
These are field lengths in your data. If you need to change field lengths, reset them in the schema.
ODBC 3.x Mass Insert
The term ODBC stands for Open Database Connectivity. This is a standard promulgated by Microsoft to allow client-side Windows applications to access various backend databases through a single common interface. With the ODBC 3.x Mass Insert connector, the integration platform writes to target tables.
Note:  This connector only supports 64-bit ODBC drivers.
ODBC 3.x Mass Insert is a rapid way of inserting records into a relational database. It bypasses the transactional layer of the database and adds information directly to the storage tables. ODBC 3.x Mass Insert is a useful option if you are dealing with large tables and performance is of paramount importance, but you do not have the option of a native connection.
This connector specifies field width in characters. The width of a field is the number of characters. For more details, see Determining Field Width in Characters or Bytes.
Connector-Specific Notes
SQL Server: The use of Regional settings in ODBC DSN connections to SQL Server databases is not supported in this release. In order for currency, numbers, dates and times to be interpreted correctly, ensure that the Use regional settings when outputting currency, numbers, dates and times checkbox in the SQL Server DSN settings is not checked. Then in the ODBC connector DriverOptions property, type Regional=No.
Property Options
You can set the following source (S) and target (T) properties.
Property
S/T
Description
AutoCommit
T
Allows you to automatically commit changes as they are made by each SQL statement, instead of waiting until the end of the transaction. If AutoCommit is set to true, there is no way to roll back changes once they have been made. The default is false.
BulkOperations
T
Use bulk operations for faster insert. This property determines if an insert statement is executed for each record or a bulk add is executed for each record. The default is true. To cause an insert statement to be executed for each record, this should be set to false, the slower setting.
CommitFrequency
T
Controls how often data is committed to the database. Default is zero, meaning that data is committed at the end of the transformation, allowing rollback on error. This is the slowest setting. When doing large transformations, this is not practical as it may produce too many transaction log entries. Setting the CommitFrequency to some nonzero value tells the connector to do a database commit after inserting or updating the specified number of records. This keeps the transaction log from getting too large but limits the restartability of the transformation.
ConstraintDDL
T
Allows you to specify some additional SQL data definition language statements to be executed after their target table is created. This is similar to the support we provide for SQL pass-through in the SQL import connectors. Each line must be a valid ODBC DDL statement.
For example, you could have the statements:
CREATE UNIQUE INDEX index1 ON mytable (Field1 ASC)
CREATE INDEX index2 ON mytable (Field2, Field3)
These statements would create two indices on the table "mytable". The first one does not allow duplicates and the index values are stored in ascending order. The second index is a compound index on fields Field2 and Field3.
The ConstraintDDL is executed only if the replace mode is used for the target. If there are any errors, the errors are written to the error and event log file. An error during transformation brings up the transformation error dialog box. If you want to ignore the DDL errors, you may continue the transformation.
ConstraintDDL also supports an escaping mechanism that allows users to specify DDL in the native SQL of the DBMS. Any statement that is preceded by an '@' is sent straight to the DBMS.
The statement:
@CREATE INDEX pk_mytable ON mytable (Field1, Field2) WITH PRIMARY
is a DDL statement for creating a primary key for the table mytable. Some ODBC drivers do not support the SQL extensions needed to create a primary key with the ODBC variant of the SQL CREATE statement. So to create primary keys in these cases, use native SQL.
Note:  This option works only in REPLACE mode.
DriverCompletion
T
Allows you to control whether or not the driver prompts you for information. The options are Prompt, Complete (default), Complete Required, and No Prompt. The Prompt option prompts you for every individual bit of information. Complete prompts you for any information you forgot to enter. The Complete Required option prompts you only for information that is essential to complete the connection. The No Prompt option does not prompt you for any information.
DriverOptions
T
Enter any valid ODBC connect string options here.
DSNType
T
You can specify data sources or drivers to connect to in the Data Source Name list. DSNs are listed in the ODBC Data Source Administrator of Windows Administrative Tools in the Control Panel. The options are User & System, System, User, Driver, and File. Default is User & System.
IdentifierQuotes
T
All databases have what are called quoted identifiers. You use quoted identifiers to make the SQL statement parseable and to distinguish between columns and character data in SQL statements. For example, Oracle uses double quotes for column and table names in SQL statements and uses single quotes for character data. In a SQL statement, you should enclose identifiers containing special characters or matching keywords in identifier quote characters (also known as delimited identifiers in SQL-92). For example, the Accounts Receivable identifier is quoted in the following SELECT statement:
SELECT * FROM "Accounts Receivable"
If you do not use identifier quotes, the parser assumes that there are two tables, Accounts and Receivable, and returns a syntax error indicating that they are not separated by a comma. Options for IdentifierQuotes are Default, None, " and '.
MaxDataLength
T
The maximum data length for long data types. Default is 1 MB. You can reset this number as you choose based on your available memory capabilities and target requirements.
Some ODBC drivers have limitations concerning the maximum data length they can handle. If you choose an Application (ODBC) Source or target connector and the default setting is not 1 MB, then the integration platform sets the default for the capabilities of that particular ODBC driver. It is not recommended that the MaxDataLength property be set any higher under those conditions.
ModifyDriverOptions
T
Allows you to store the ODBC connection. The default is true. If you set this to false, you are prompted for your connection information each time that you run your transformation.
PrimaryKey
T
The PrimaryKey property allows you to specify a list of field names that are used to make the primary key. The field names are delimited by commas. If the PrimaryKey property contains one or more field names, these names are included in the SQL CREATE statement when the connector is in replace mode.
There is one additional requirement for using the PrimaryKey property. The ODBC driver must support integrity enhancement facility (IEF). Only the more advanced ODBC drivers support this.
Synonyms
T
If set to true, this property allows you to see synonyms. The alias names appear in the table list along with the tables. Default is false.
SystemTables
T
The SystemTables property is only applicable if you are logged onto the database as the database administrator. Only the DBA has access to SystemTables. If set to true, this property allows you to see all the tables created by the DBA. The system table names appear in the table list. The default is false.
TransactionIsolation
T
The TransactionIsolation option allows you to specify one of four isolation levels when reading from or writing to a database table with ODBC. The default is Serializable.
The ANSI SQL 2 standard defines three specific ways in which serializability of a transaction may be violated: P1 (Dirty Read), P2 (Nonrepeatable Read), and P3 (Phantoms).
The following lists the four supported isolation levels.
READ_UNCOMMITTED - Permits P1, P2, and P3.
READ_COMMITTED - Permits P2 and P3. Does not permit P1.
REPEATABLE_READ - Permits P3. Does not permit P1 and P2.
SERIALIZABLE - Does not permit any of P1, P2, and P3.
For more information, see the Microsoft ODBC SDK documentation.
UseCursors
T
The UseCursors property allows you to turn cursor support on and off. The default is false. If you set the property to true and the specified ODBC driver does not support cursor inserts, the integration platform uses the SQL INSERT mode of adding records.
For exports, cursor support is supposed to enhance the performance of inserting records. This appears to be the case for desktop databases. For database servers, there is no noticeable change in insert speed. They seem to execute prepared queries about as quickly as they handle cursor inserts.
Another complication of the cursor inserts is that some drivers require that the target table be indexed, otherwise positioned updates (cursors) are not allowed. Two additional properties in the ODBC export connector address this issue: PrimaryKey and ConstraintDDL (see details above).
ArraySize
T
Determines the number of rows to be sent to the server at one time. The default value is 1, meaning each row is individually sent to the server. Larger values will buffer multiple rows and send them all at once. While this improves the speed, it affects error reporting (a server error won't be detected/reported until the next batch of records is sent to the server).
The maximum value allowed for this property is 100000. While the connector allows the value to be set that high, many drivers have lower limits. The connector will log a message indicating if the driver is forcing a lower value for the array size. In addition, the connector does not support arrays when there is a LOB-type field in the table, or when the (maximum) length of a character-type field is longer than 32767 characters. In these cases, a message will be logged indicating the array size has been reduced to 1.
Due to the way the connector attempts to support older drivers, the array support requires either BulkOperations be true (which means UseCursors must also be true), or both must be false. The condition where BulkOperations is false and UseCursors is true will cause the array size to be reset to 1, with a message logged indicating this condition occurred.
Views
T
If set to true, this property allows you to see views. The view names appear in the table list along with the tables. Default is true.
Encoding
T
Sets the character encoding for ODBC 3.x Mass Insert source and target files.
Note:  Shift-JIS encoding is used only in Japanese operating systems. UCS-2 is no longer a valid encoding name, but you may use UCS2 by changing the string value in the XML file.
Data Types
The only available data type for an ODBC connection is Text.
Length
These are field lengths in your data. If you need to change field lengths, reset them in the schema.
ODBC 3.x Multimode
In most cases, the integration platform connects to the file formats you are using either directly through the physical file formats (for example, ASCII, Lotus, xBASE), or through the native interfaces supplied by vendors who define a particular data format such as Oracle. ODBC provides another method (sometimes the only method) for connecting to a particular file format. The ODBC 3.x multimode and ODBC 3.5 multimode connectors allow you to perform multiple operations (such as table drops and table inserts) directly on target databases.
The integration platform can write to ODBC 3.x Multimode and ODBC 3.5 Multimode formats as target types. They are not available as sources.
For transformations written from this connector to work, your target format must support SQL statements.
Note:  This connector only supports 64-bit ODBC drivers.
This connector specifies field width in characters. The width of a field is the number of characters. For more details, see Determining Field Width in Characters or Bytes.
Tip:  When connecting to a database with ODBC, most applications require the database engine to be running at the time the connection is attempted.
Connector-Specific Notes
Target Schema modification: In multimode targets, modifications to column names, data types, and sizes are not permitted.
Query Statement Support: For transformations written with this connector to work, your target format must support query statements (for example, Oracle and SQL Server).
Auto-Increment Fields: If your target table contains an auto-increment column, you may get an error at run time. If this occurs, you must delete the field that is causing the problem from the record.
SQL Server: The use of Regional settings in ODBC DSN connections to SQL Server databases is not supported in this release. In order for currency, numbers, dates and times to be interpreted correctly, ensure that the Use regional settings when outputting currency, numbers, dates and times checkbox in the SQL Server DSN settings is not checked. Then in the ODBC connector DriverOptions property, type Regional=No.
Property Options
You can set the following source (S) and target (T) properties.
Property
S/T
Description
AutoCommit
T
Allows you to automatically commit changes as they are made by each SQL statement, instead of waiting until the end of the transaction. If AutoCommit is set to true, there is no way to roll back changes once they have been made. The default is false.
DriverCompletion
T
Allows you to control whether or not the driver prompts you for information. The options available are Prompt, Complete, Complete Required, and No Prompt. The Prompt option prompts you for every individual bit of information. Complete prompts you for any information you forgot to enter. The Complete Required option prompts you only for information that is essential to complete the connection. The No Prompt option does not prompt you for any information. The default is Complete.
DriverOptions
T
Enter any valid ODBC connect string options here.
DSNType
T
You can specify data sources or drivers to connect to in the Data Source Name list. DSNs are listed in the ODBC Data Source Administrator of Windows Administrative Tools in the Control Panel. The options are User & System, System, User, Driver, and File. Default is User & System.
IdentifierQuotes
T
All databases have what are called quoted identifiers. You use quoted identifiers to make the SQL statement parseable and distinguish between columns and character data in SQL statements. For example, Oracle uses double quotes for column and table names in SQL statements and uses single quotes for character data.
In a SQL statement, you should enclose identifiers containing special characters or matching keywords in identifier quote characters (also known as delimited identifiers in SQL-92). For example, the Accounts Receivable identifier is quoted in the following SELECT statement:
SELECT * FROM "Accounts Receivable"
If you do not use identifier quotes, the parser assumes that there are two tables, Accounts and Receivable, and returns a syntax error indicating that they are not separated by a comma. IdentifierQuotes has these options: Default, None, " and '.
MaxDataLength
T
The maximum data length for long data types. Default is 1 MB. You can reset this number as you choose based on your available memory capabilities and Target requirements.
Some ODBC drivers have limitations concerning the maximum data length they can handle. If you choose an ODBC source or target connector and the default setting is not 1 MB, the integration platform sets the default based on the capabilities of that particular ODBC driver. Under those conditions, setting the MaxDataLength property any higher is not recommended.
ModifyDriverOptions
T
Allows you to store the ODBC connection. The default is true. If you set this to false, you are prompted for your connection information each time that you run your transformation.
SQL Output
T
Allows you to select bound or unbound mode and whether you want to write SQL statements to a SQL log or not. Keep in mind that bound mode is faster, since bind variables are used. Target Only is the default output:
Target Only - Uses bound mode, which uses bind variables. SQL statements are sent to the Target and not to the SQL log specified in the SQL Log property.
Target Only (Unbound Mode) - Uses unbound mode, which does not use bind variables and sends the literal SQL statement to the database engine. SQL statements are sent to the Target and not to the SQL log specified in the SQL Log property.
Target and SQL Log - Sends SQL statements to the Target and to the SQL log specified in the SQL Log property.
SQL Log Only - Sends SQL statements only to the SQL log file specified in the SQL Log property.
SQL Log
T
The default is sql.log in the default installation directory. To use a different log, browse to the file, or enter the path and file name.
UpdateNullFields
T
Null values are sent to the database when inserting or updating records. The default is true. If set to false, null values are not sent to the database when you insert or update records, and the connector is forced to operate in unbound mode, which may cause slower performance.
Synonyms
T
If set to true, this property allows you to see synonyms. The alias names appear in the table list along with the tables. Default is false.
SystemTables
T
The SystemTables property is only applicable if you are logged onto the database as the database administrator. Only the DBA has access to SystemTables. If set to true, this property allows you to see all the tables created by the DBA. The system table names appear in the table list. The default is false.
TransactionIsolation
T
The TransactionIsolation option allows you to specify one of four isolation levels when reading from or writing to a database table with ODBC. The default is Serializable.
The ANSI SQL 2 standard defines three specific ways in which serializability of a transaction may be violated: P1 (Dirty Read), P2 (Nonrepeatable Read) and P3 (Phantoms).
The following lists the four supported isolation levels.
READ_UNCOMMITTED - Permits P1, P2, and P3.
READ_COMMITTED - Permits P2 and P3. Does not permit P1.
REPEATABLE_READ - Permits P3. Does not permit P1 and P2.
SERIALIZABLE - Does not permit any of P1, P2, and P3.
For further details about TransactionIsolation levels, see the Microsoft ODBC SDK documentation.
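For reference, the mapping between the four levels and the three phenomena can be expressed as a small lookup table (an illustrative sketch, not connector code):

```python
# Which phenomena each isolation level permits:
# P1 = dirty read, P2 = nonrepeatable read, P3 = phantoms.
PERMITTED = {
    "READ_UNCOMMITTED": {"P1", "P2", "P3"},
    "READ_COMMITTED":   {"P2", "P3"},
    "REPEATABLE_READ":  {"P3"},
    "SERIALIZABLE":     set(),
}

def permits(level, phenomenon):
    return phenomenon in PERMITTED[level]

print(permits("READ_COMMITTED", "P1"))   # False
print(permits("REPEATABLE_READ", "P3"))  # True
```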
Views
T
If set to true, this property allows you to see views. The view names appear in the table list along with the tables. Default is true.
Encoding
T
Sets the character encoding for ODBC 3.x source and target files.
Note:  Shift-JIS encoding is used only in Japanese operating systems. UCS-2 is no longer a valid encoding name, but you may use UCS2 by changing the string value in the XML file.
Data Types
The only available data type is Text.
Length
These are field lengths in your data. If you need to change field lengths, reset them in the schema.
Oracle 10g, 11g, and 12c
Oracle is a relational database management system. The Oracle 10g, 11g, and 12c connectors enable the integration platform to perform create, read, update, and delete operations on Oracle 10g, 11g, and 12c data tables.
Note:  You must have the Oracle 10g, 11g, or 12c standard 64-bit client installed on your system and you must have access to Oracle. If Oracle is installed on your local PC, the drive and directory path of your Oracle application must be included in your path. The path statement is automatically modified when your Oracle client software is installed. The Oracle Database Manager must be running.
Other Oracle Connectors
You can connect to Oracle interfaces in the following ways:
Oracle Multimode: Perform multiple operations (such as table drops, table inserts) directly on your target database. See Oracle 10g, 11g, and 12c Multimode.
Oracle Direct Path: Deliver incremental loads of detail data to data warehouses and data marts. See Oracle Direct Path 10g, 11g, and 12c.
Oracle SQL Loader: Produce flat loader files that can be uploaded into an Oracle database using the Oracle SQL Loader utility. See Oracle SQL Loader.
Oracle Siebel CRM On Demand 12 and 14: Access data through the OnDemand Web Services interface with this multimode connector. See Oracle Siebel CRM On Demand 12 and 14.
This connector sets field width in bytes. What actually varies is the number of characters that fit into a given field. For example, if a field width is 10 bytes and the encoding is Shift-JIS, the field can hold a minimum of 5 characters and a maximum of 10. This means that if you try to write 8 Japanese characters, the data is truncated (each Kanji character takes 2 bytes).
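The byte arithmetic can be checked directly with Python's built-in shift_jis codec (an illustrative sketch; the connector does not expose this code):

```python
text = "データベース接続"              # 8 Japanese characters
encoded = text.encode("shift_jis")     # each character takes 2 bytes
print(len(text), len(encoded))         # 8 16

# A 10-byte field keeps only the first 5 characters.
truncated = encoded[:10].decode("shift_jis")
print(truncated)                       # データベー
```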
Global Transactions Supported
Oracle 10g, 11g, and 12c connectors support global transactions. When used with shared server mode, you can reference database links in Oracle SQL queries on source tables. For more information, see Oracle Database Global Transaction Support.
Unicode Support
The Oracle 10g, 11g, and 12c connectors include Unicode support for wide characters. To enable double-byte support, change the Regional and Language Options on your system to a double-byte language, such as Japanese.
Connectivity Pointers
The Oracle standard client must be installed to connect to Oracle 10g, 11g, and 12c databases.
If your database has many tables, selecting the table from a list may take more time than typing the UserID.Tablename value in the Source Table/View box. If you enter the table name without the user ID and then connect to the database, you can reselect the table in the source data browser to view it.
If possible, create query statements using the source database application, so that syntax and validity can be checked by the native application. You can then copy the query statement and paste it in the Query Statement box.
Connector-Specific Notes
The BINARY_FLOAT and BINARY_DOUBLE data types are not supported on Linux systems.
Property Options
You can set the following source (S) and target (T) properties.
Property
ST
Version
Use
AllowDistributedTransactions
ST
10g, 11g, 12c
If set to true (default), distributed transaction processing is allowed.
AppendHint
T
10g, 11g, 12c
Applies only to insert operations. The default value is false. If this property is set to true, the APPEND hint is used in the Insert statement.
ArraySize
T
10g, 11g, 12c
Number of records fetched or inserted with each cursor operation. Default is zero. Maximum allowed is 100.
Code Page
ST
10g, 11g
The code page translation table tells the integration platform which encoding to use for reading and writing data. The default is ANSI, which is the standard in the US. For the supported list, see CodePage.
CommitFrequency
T
10g, 11g, 12c
Controls how often data is committed to the database. Default is zero, meaning that data is committed at the end of the transformation, allowing rollback on error. This is the slowest setting. When doing large transformations, this is not practical as it may produce too many transaction log entries. Setting the CommitFrequency to some nonzero value tells the connector to do a database commit after inserting or updating the specified number of records. This keeps the transaction log from getting too large but limits the restartability of the transformation.
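The commit-every-N-records behavior can be sketched as follows (FakeCursor and load are hypothetical stand-ins for the connector's internals, not part of its API):

```python
# Sketch of the commit-every-N-records pattern that CommitFrequency
# controls. FakeCursor stands in for a live database cursor so the
# loop is runnable here.
class FakeCursor:
    def __init__(self):
        self.commits = 0
    def execute(self, stmt, row):
        pass          # a real cursor would run the INSERT/UPDATE
    def commit(self):
        self.commits += 1

def load(rows, cursor, commit_frequency=0):
    for i, row in enumerate(rows, start=1):
        cursor.execute("INSERT ...", row)
        if commit_frequency and i % commit_frequency == 0:
            cursor.commit()
    cursor.commit()   # final commit at the end of the transformation

cur = FakeCursor()
load(range(10), cur, commit_frequency=3)
print(cur.commits)    # 4 (after records 3, 6, 9, plus the final commit)
```

With the default of zero, only the single final commit occurs, which is what allows rollback on error.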
ConstraintDDL
T
10g, 11g, 12c
Pass-through mechanism that allows you to specify any additional data definition language (DDL) statements that need to be executed when a new table is created. DDL is used to define keys, indexes, constraints and integrity rules for SQL databases. There is no default.
Directory Alias
T
10g, 11g, 12c
Allows you to specify the name of the Directory Alias. No default exists for this property.
Encoding
ST
11g, 12c
Determines how to store characters in Oracle 11g and 12c source and target files. Default is OEM.
For choices, see Encoding Reference.
ParallelHint
ST
10g, 11g, 12c
Use this property to include Parallel Hints in statements. The supported ParallelHint values are listed below:
0 (default) – No hint is included.
1 – The hint is included in statements as: "parallel(table, DEFAULT)". This sets the degree of parallelism to the settings of the initialization parameters.
> 1 (greater than 1) – The value is used as the degree of parallelism, with the hint included in statements as: "parallel(table, PROPVAL)".
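The mapping from property value to hint text can be sketched as follows (parallel_hint is a hypothetical helper name, not part of the connector):

```python
# Illustrative mapping from the ParallelHint property value to the
# hint text described above.
def parallel_hint(table, propval):
    if propval == 0:
        return ""                                  # no hint included
    if propval == 1:
        return "parallel(%s, DEFAULT)" % table     # use init parameters
    return "parallel(%s, %d)" % (table, propval)   # explicit degree

print(parallel_hint("orders", 4))  # parallel(orders, 4)
```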
QueryRewrite
S
12c
Allows connection to enable query rewriting and to set integrity levels. Property behavior and integrity levels are listed below:
None - No altered session is issued and query rewriting is disabled.
Enforced - The session is altered to allow query rewriting. A materialized view is used, provided that it guarantees the same result as using the detail tables. QueryRewrite cannot use stale materialized views or trusted relationships. This is the default value.
Trusted - The session is altered to allow query rewriting. A materialized view is used, provided that it is not stale. However, QueryRewrite may use trusted relationships, such as those declared by dimension objects or constraints that have not been validated.
Stale Tolerated - The session is altered to allow query rewriting. The materialized view is used even when data in the detail tables has changed. Note that this property must be set before connecting, since the ALTER SESSION statements are issued during connection.
Synonyms
ST
10g, 11g, 12c
If set to true, allows you to see synonyms. The alias names appear in the table list along with the tables. Default is false.
SystemTables
ST
10g, 11g, 12c
If set to true, this property allows you to see all tables created by the DBA in the database. The system table names appear in the table list. Default is false.
UseArrays
T
10g, 11g, 12c
To use arrays, select true. The default is false (no arrays). UseArrays allows this connector to send batches of records (multiple rows/columns contained in one block as a blob array) to an Oracle target at one time. This reduces some of the function call and network overhead that occurs when each record is sent to Oracle one at a time. Array inserts can improve performance by roughly four times.
Limitation: Array inserts make error handling difficult. If one of the records has invalid data, such as nulls or a duplicate key, you get one error for the entire batch of records. Because of this, using reject files is not recommended with this property.
Views
ST
10g, 11g, 12c
If set to true (default), allows you to see views. View names appear in the table list along with table names.
WhereStmt
S
10g, 11g, 12c
Provides a pass-through mechanism where advanced users can construct the Where clause of the query themselves. It can be used as an alternative to writing a lengthy query statement. This might be used to get the Oracle server to filter the data based on a condition before it is sent to the integration platform. Omit WHERE when you enter the clause. No default exists for this property.
CodePage
This translation table determines which encoding to use for reading and writing data. The default is ANSI, the standard in the US. The following code pages are available:
ANSI (default)
OEM
0037 US (EBCDIC)
0273 Germany (EBCDIC)
0277 Norway (EBCDIC)
0278 Sweden (EBCDIC)
0280 Italy (EBCDIC)
0284 Spain (EBCDIC)
0285 UK (EBCDIC)
0297 France (EBCDIC)
0437 MSDOS United States
0500 Belgium (EBCDIC)
0850 MSDOS Multilingual (Latin 1)
0860 MSDOS Portuguese
0861 MSDOS Icelandic
0863 MSDOS Canadian French
0865 MSDOS Nordic
1051 Roman-8
Length
These are field lengths in your data. If you need to change field lengths, reset them in the schema.
Caution!  A field length shorter than the default may truncate data.
Data Types
The following data types are supported.
Name
Syntax
Notes
VARCHAR2
(size[BYTE|CHAR])
Variable-length character string having maximum length of size bytes or characters. Maximum size is 4000 bytes or characters, and minimum is 1 byte or 1 character. You must specify size for VARCHAR2.
BYTE indicates the column has byte length semantics; CHAR indicates the column has character semantics.
NVARCHAR2
(size)
Variable-length character string having maximum length of size characters. Maximum size is determined by the national character set definition. You must specify size for NVARCHAR2.
NUMBER
(p,s)
Number having precision p and scale s. The precision p can range from 1 to 38. The scale s can range from -84 to 127.
LONG
N/A
Character data of variable length up to 64000 bytes.
DATE
N/A
Valid date range from January 1, 4712 BC to December 31, 9999 AD.
BINARY_FLOAT
N/A
32-bit floating point number. This data type requires 5 bytes, including the length byte.
BINARY_DOUBLE
N/A
64-bit floating point number. This data type requires 9 bytes, including the length byte.
TIMESTAMP
(fractional_seconds_precision)
Year, month and day values of date, as well as hour, minute and second values of time, where fractional_seconds_precision is the number of digits in the fractional part of the SECOND datetime field. Accepted values of fractional_seconds_precision are 0 to 9. The default is 6.
TIMESTAMP with TIMEZONE
(fractional_seconds_precision)
All values of TIMESTAMP as well as TIME ZONE displacement value, where fractional_seconds_precision is the number of digits in the fractional part of the SECOND datetime field. Accepted values are 0 to 9. The default is 6.
TIMESTAMP with LOCAL TIME ZONE
(fractional_seconds_precision)
All values of TIMESTAMP with TIME ZONE, with the following exceptions:
Data is normalized to the database time zone when it is stored in the database.
When the data is retrieved, users see the data in the session time zone.
INTERVAL YEAR TO MONTH
(year_precision)
Stores a period of time in years and months, where year_precision is the number of digits in the YEAR datetime field. Accepted values are 0 to 9. The default is 2.
INTERVAL DAY TO SECOND
(day_precision) (fractional_seconds_precision)
Stores a period of time in days, hours, minutes and seconds, where
day_precision is the maximum number of digits in the DAY datetime field. Accepted values are 0 to 9. The default is 2.
fractional_seconds_precision is the number of digits in the fractional part of the SECOND field. Accepted values are 0 to 9. The default is 6.
RAW
(size)
Raw binary data length of size bytes. Maximum size is 2000 bytes. You must specify size for a RAW value.
LONG RAW
N/A
Raw binary data of variable length up to 2 GB.
ROWID
N/A
Base 64 string representing the unique address of a row in its table. This data type is primarily for values returned by the ROWID pseudocolumn.
UROWID
[(size)]
Base 64 string representing the logical address of a row of an index-organized table. The optional size is the size of a column of type UROWID. The maximum size and default is 4000 bytes.
CHAR
(size [BYTE|CHAR])
Fixed-length character data of length of size bytes. Maximum size is 2000 bytes or characters. Default and minimum size is 1 byte. BYTE and CHAR have the same semantics as VARCHAR2.
NCHAR
(size)
Fixed-length character data of length of size characters. Maximum size is determined by the national character set definition, with an upper limit of 2000 bytes. Default and minimum size is 1 character.
CLOB
N/A
A character large object containing single- or multibyte characters. Both fixed-width and variable-width character sets are supported, both using the database character set. Maximum size is (4 GB - 1 byte) * (database block size).
NCLOB
N/A
A character large object containing Unicode characters. Both fixed-width and variable-width character sets are supported, both using the database national character set. Maximum size is (4 GB - 1 byte) * (database block size). Stores national character set data.
BLOB
N/A
A binary large object. Maximum size is (4 GB - 1 byte) * (database block size).
BFILE
N/A
Contains a locator to a large binary file stored outside the database. Enables byte stream I/O access to external LOBs residing on the database server. Maximum size is 4 GB.
Oracle 10g, 11g, and 12c Multimode
Oracle is a relational database management system. With the Oracle 10g, 11g, and 12c Multimode connectors, you can perform create, read, update, and delete operations on Oracle 10g, 11g, and 12c data tables. With this connector, you can write to multiple target tables within a SQL RDBMS.
Multimode connectors allow you to perform multiple operations, such as table drops and table inserts, directly on your target database.
Note:  You must have the Oracle 10g, 11g, or 12c standard 64-bit client installed on your system and you must have access to Oracle. If Oracle is installed on your local PC, the drive and directory path of your Oracle application must be included in your path. The path statement is automatically modified when your Oracle client software is installed. The Oracle Database Manager must be running.
Other Oracle Connectors
You can connect to Oracle interfaces in the following ways:
Oracle Direct Path: Deliver incremental loads of detail data to data warehouses and data marts. See Oracle Direct Path 10g, 11g, and 12c.
Oracle SQL Loader: Produce flat loader files that can be uploaded into an Oracle database using the Oracle SQL Loader utility. See Oracle SQL Loader.
Oracle Siebel CRM On Demand 12 and 14: Access data through the OnDemand Web Services interface with this multimode connector. See Oracle Siebel CRM On Demand 12 and 14.
This connector sets field width in bytes. What actually varies is the number of characters that fit into a given field. For example, if a field width is 10 bytes and the encoding is Shift-JIS, the field can hold a minimum of 5 characters and a maximum of 10. This means that if you try to write 8 Japanese characters, the data is truncated (each Kanji character takes 2 bytes).
Connectivity Pointers
The Oracle standard client must be installed to connect to Oracle 10g, 11g, and 12c databases.
Connector-Specific Notes
Data Types: BINARY_FLOAT and BINARY_DOUBLE data types are not supported on Linux systems.
Unicode Support: The Oracle 10g, 11g, and 12c multimode connectors include Unicode support for wide characters. To enable double-byte support, change the Regional and Language Options on your system to a double-byte language, such as Japanese.
Property Options
You can set the following target properties in Oracle 10g, 11g, and 12c multimode connections.
Property
Version
Use
AllowDistributedTransactions
10g, 11g, 12c
If set to true (default), distributed transaction processing is allowed.
Code Page
10g
The code page translation table tells the integration platform which encoding to use for reading and writing data. The default is ANSI, which is the standard in the US. For the supported list, see CodePage.
Directory Alias
10g, 11g, 12c
Allows you to specify the name of the Directory Alias. There is no default.
Encoding
11g, 12c
Determines how to store characters in Oracle 11g and 12c target files. Default is OEM.
For encoding choices, see Encoding Reference.
SQL Log
10g, 11g, 12c
The default is sql.log in the default installation directory. To use a different log, browse to the file, or enter the path and file name.
SQL Output
10g, 11g, 12c
Allows you to select bound or unbound mode and whether you want to write SQL statements to a SQL log or not. Keep in mind that bound mode is faster, since bind variables are used.
Target Only – Uses bound mode, which uses bind variables. SQL statements are sent to the Target and not to the SQL log specified in the SQL Log property. Default.
Target Only (Unbound Mode) – Uses unbound mode, which does not use bind variables and sends the literal SQL statement to the database engine. SQL statements are sent to the Target and not to the SQL log specified in the SQL Log property.
Target and SQL Log – Sends SQL statements to the Target and to the SQL log specified in the SQL Log property.
SQL Log Only – Sends SQL statements only to the SQL log file specified in the SQL Log property.
Synonyms
10g, 11g, 12c
If set to true, allows you to see synonyms. The alias names appear in the table list along with the tables. Default is false.
SystemTables
10g, 11g, 12c
If set to true, this property allows you to see all tables created by the DBA in the database. The system table names appear in the table list. Default is false.
UpdateNullFields
10g, 11g, 12c
Null values are sent to the database when inserting or updating records. The default is true. If set to false, null values are not sent to the database when you insert or update records, and the connector is forced to operate in unbound mode, which may cause slower performance.
Views
10g, 11g, 12c
If set to true (default), allows you to see views. View names appear in the table list along with table names.
CodePage
This translation table determines which encoding to use for reading and writing data. The default is ANSI, the standard in the US. The following code pages are available:
ANSI (default)
OEM
0037 US (EBCDIC)
0273 Germany (EBCDIC)
0277 Norway (EBCDIC)
0278 Sweden (EBCDIC)
0280 Italy (EBCDIC)
0284 Spain (EBCDIC)
0285 UK (EBCDIC)
0297 France (EBCDIC)
0437 MSDOS United States
0500 Belgium (EBCDIC)
0850 MSDOS Multilingual (Latin 1)
0860 MSDOS Portuguese
0861 MSDOS Icelandic
0863 MSDOS Canadian French
0865 MSDOS Nordic
1051 Roman-8
Length
These are field lengths in your data. If you need to change field lengths, reset them in the schema.
Caution!  A field length shorter than the default may truncate data.
Data Types
The following data types are supported for Oracle 10g, 11g, and 12c.
Name
Syntax
Notes
VARCHAR2
(size[BYTE|CHAR])
Variable-length character string having maximum length of size bytes or characters. Maximum size is 4000 bytes or characters, and minimum is 1 byte or 1 character. You must specify size for VARCHAR2.
BYTE indicates the column has byte length semantics; CHAR indicates the column has character semantics.
NVARCHAR2
(size)
Variable-length character string having maximum length of size characters. Maximum size is determined by the national character set definition. You must specify size for NVARCHAR2.
NUMBER
(p,s)
Number having precision p and scale s. The precision p can range from 1 to 38. The scale s can range from -84 to 127.
LONG
N/A
Character data of variable length up to 64000 bytes.
DATE
N/A
Valid date range from January 1, 4712 BC to December 31, 9999 AD.
BINARY_FLOAT
N/A
32-bit floating point number. This data type requires 5 bytes, including the length byte.
BINARY_DOUBLE
N/A
64-bit floating point number. This data type requires 9 bytes, including the length byte.
TIMESTAMP
(fractional_seconds_precision)
Year, month and day values of date, as well as hour, minute and second values of time, where fractional_seconds_precision is the number of digits in the fractional part of the SECOND datetime field. Accepted values of fractional_seconds_precision are 0 to 9. The default is 6.
TIMESTAMP with TIMEZONE
(fractional_seconds_precision)
All values of TIMESTAMP as well as TIME ZONE displacement value, where fractional_seconds_precision is the number of digits in the fractional part of the SECOND datetime field. Accepted values are 0 to 9. The default is 6.
TIMESTAMP with LOCAL TIME ZONE
(fractional_seconds_precision)
All values of TIMESTAMP with TIME ZONE, with the following exceptions:
Data is normalized to the database time zone when it is stored in the database.
When the data is retrieved, users see the data in the session time zone.
INTERVAL YEAR TO MONTH
(year_precision)
Stores a period of time in years and months, where year_precision is the number of digits in the YEAR datetime field. Accepted values are 0 to 9. The default is 2.
INTERVAL DAY TO SECOND
(day_precision) (fractional_seconds_precision)
Stores a period of time in days, hours, minutes and seconds, where
day_precision is the maximum number of digits in the DAY datetime field. Accepted values are 0 to 9. The default is 2.
fractional_seconds_precision is the number of digits in the fractional part of the SECOND field. Accepted values are 0 to 9. The default is 6.
RAW
(size)
Raw binary data of length of size bytes. Maximum size is 2000 bytes. You must specify size for a RAW value.
LONG RAW
N/A
Raw binary data of variable length up to 2 GB.
ROWID
N/A
Base 64 string representing the unique address of a row in its table. This data type is primarily for values returned by the ROWID pseudocolumn.
UROWID
[(size)]
Base 64 string representing the logical address of a row of an index-organized table. The optional size is the size of a column of type UROWID. The maximum size and default is 4000 bytes.
CHAR
(size [BYTE|CHAR])
Fixed-length character data of length of size bytes. Maximum size is 2000 bytes or characters. Default and minimum size is 1 byte. BYTE and CHAR have the same semantics as VARCHAR2.
NCHAR
(size)
Fixed-length character data of length of size characters. Maximum size is determined by the national character set definition, with an upper limit of 2000 bytes. Default and minimum size is 1 character.
CLOB
N/A
A character large object containing single- or multibyte characters. Both fixed-width and variable-width character sets are supported, both using the database character set. Maximum size is (4 GB - 1 byte) * (database block size).
NCLOB
N/A
A character large object containing Unicode characters. Both fixed-width and variable-width character sets are supported, both using the database national character set. Maximum size is (4 GB - 1 byte) * (database block size). Stores national character set data.
BLOB
N/A
A binary large object. Maximum size is (4 GB - 1 byte) * (database block size).
BFILE
N/A
Contains a locator to a large binary file stored outside the database. Enables byte stream I/O access to external LOBs residing on the database server. Maximum size is 4 GB.
Oracle CRM On Demand
This is an add-on connector and is not included in the default installation of Actian DataConnect Studio IDE. The connector is available at esd.actian.com.
Oracle CRM On Demand is a web-based CRM software platform that enables its users to access information using a Web Services interface. The Oracle CRM On Demand connector supports most entities exposed by the Oracle CRM On Demand Web Service, an open XML-based online platform that provides Simple Object Access Protocol (SOAP) access.
SOAP relies on XML for its messaging format. This XML-based protocol consists of three parts: an envelope, which defines what is in the message and how to process it, a set of encoding rules for expressing instances of application-defined data-types, and a convention for representing procedure calls and responses.
This connector allows users to retrieve, insert, update, and delete Oracle CRM On Demand data. To use the Design Studio with Oracle CRM On Demand, the API for your organization must be enabled within Oracle CRM On Demand. Entity names change with each version of the API. Custom entities are also supported.
Supported Versions
Oracle CRM On Demand Release 17
Oracle CRM On Demand Release 18
Oracle CRM On Demand Release 19
Oracle CRM On Demand Release 20
Installation Requirements
You must have read/write access to a fully functional Oracle CRM On Demand account. The connector needs the Hosting Server name (supplied by Oracle), User Sign In ID (Login ID), and Password for the account to log into Oracle CRM On Demand. You must also provide the Default Access Profile name of the logged-in user, which enables the connector to dynamically determine the child objects to which the user has access.
The Oracle CRM On Demand Integration capability for your company must be enabled (an Oracle CRM On Demand Customer Care representative enables it on request at no additional cost). In addition, the Role Management service must be enabled for the Oracle CRM On Demand account.
Using the Administrator role, turn on Record Type access and the Access Profile for the user's role (available in the administration section) for all parent and child entities to be integrated.
For additional information on these requirements, see the Oracle CRM On Demand documentation.
Connectivity Requirements
Enter the following to connect to Oracle CRM On Demand:
Server - URL for your Oracle CRM On Demand instance. For example: https://secure-<server name>.crmondemand.com
User ID - User ID for logging on to the Oracle On Demand environment. If you are using the Siebel login invoker, you can replace the User ID with a SessionToken of the form <serverUrl>:jsessionid=<sessionid>. If a SessionToken is provided, then leave the Password empty.
Password - (Optional) Password for logging on to the Oracle On Demand environment.
Property Options
You can set the following source (S) and target (T) properties.
Property
S/T
Description
Query Source
S
Table/View (default option): This option lists all the objects available in Oracle CRM On Demand. After selecting this option, click the arrow next to Source Table/View to choose the table or view that you want to access from the list. You must choose at least one table or view to be able to run a transformation with this connector.
Query Statement: This option allows you to extract data from Oracle CRM On Demand using a query written in XML format. After selecting this option, click the ellipsis button to open the Query Statement box and enter the query XML. For details on writing query XML statements, see the Web Services On Demand API Calls > Query Page topic in the Oracle CRM On Demand documentation. Note that the XML formats for Web Service 1.0 and Web Service 2.0 API calls differ from those provided in the CRMOD documentation. You can identify which version (1.0 or 2.0) to use for a given object by referring to the "Entity Names" section of this document.
Query File: Use this option to extract data using a query stored in a text file. After selecting this option, click the ellipsis button to browse to and select your query file. Note that this query file should contain the XML required for the Query Page API call, as shown in the example below.
Note: The XML should start with a <ListOfxxxx> element, where xxxx is the parent entity's name, as in the following example:
<ListOfContact>
    <Contact>
        <ContactFirstName>='John'</ContactFirstName>
        <ContactLastName/>
        <ListOfAddress>
           <Address>
               <Country>='Canada'</Country>
           </Address>
        </ListOfAddress>
    </Contact>
</ListOfContact>
Table
T
Select the Table (Oracle CRM On Demand Object). This should be the transformation target object where the data will be written.
Batch Response
 
Select the file name to which the target operation responses are logged.
Notes:
In Delete operations, there is one batch response entry for every delete action.
In Insert, Upsert, and Update operations, there is one batch response entry for every insert or update action.
The returnCode for a single operation shows the result of the operation execution. If the operation executes successfully, the returnCode is 0.
For failed operations, the errorDescription tag shows a detailed error message (see below). This includes messages from both Oracle CRM On Demand and PDI.
Note:  When the returnCode is 0 (execution was successful), the errorDescription tag returns the unique ID of the Entity for which the operation was successful.
For example:
<BatchResponse:batchResult xmlns:BatchResponse="http://www.pervasive.com/BatchResponse">
    <BatchResponse:results>
        <BatchResponse:result>
            <BatchResponse:returnCode>34</BatchResponse:returnCode>
            <BatchResponse:errorDescription>No rows retrieved corresponding to the business component 'Campaign'(SBL-EAI-04378)</BatchResponse:errorDescription>
            <BatchResponse:objectContainerName>Campaign</BatchResponse:objectContainerName>
            <BatchResponse:updateOperation>DELETE</BatchResponse:updateOperation>
        </BatchResponse:result>
    </BatchResponse:results>
    <BatchResponse:componentInfo>
        <BatchResponse:returnCode>0</BatchResponse:returnCode>
        <BatchResponse:componentName>Oracle CRM On Demand</BatchResponse:componentName>
    </BatchResponse:componentInfo>
</BatchResponse:batchResult>
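A batch response file in this format can be checked for failures with standard XML tooling. The following Python sketch is illustrative only; it assumes the namespace and element names shown in the example:

```python
# Sketch: scan a batch response document for failed operations, assuming the
# http://www.pervasive.com/BatchResponse namespace shown above.
from xml.etree import ElementTree as ET

NS = {"br": "http://www.pervasive.com/BatchResponse"}

def failed_results(batch_xml):
    """Return (returnCode, errorDescription) pairs for results with a nonzero code."""
    root = ET.fromstring(batch_xml)
    failures = []
    for result in root.iterfind(".//br:result", NS):
        code = result.findtext("br:returnCode", namespaces=NS)
        if code != "0":
            failures.append((code, result.findtext("br:errorDescription", namespaces=NS)))
    return failures

# Abbreviated sample in the format shown above.
sample = (
    '<BatchResponse:batchResult '
    'xmlns:BatchResponse="http://www.pervasive.com/BatchResponse">'
    "<BatchResponse:results><BatchResponse:result>"
    "<BatchResponse:returnCode>34</BatchResponse:returnCode>"
    "<BatchResponse:errorDescription>No rows retrieved</BatchResponse:errorDescription>"
    "</BatchResponse:result></BatchResponse:results>"
    "</BatchResponse:batchResult>"
)
failures = failed_results(sample)
```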
Batch Size
S
The number of records buffered by PDI before processing them in a map and transferring data to the target.
Page Size
ST
The number of records fetched for each Query Page Web Service call. Max Page Size = 100 (this limit is imposed by Oracle CRM On Demand). Default value is 50.
Use Field Display Names
ST
When set to True, the Display Names of the fields are used as field names in the transformation.
Special characters are stripped from display names. If a name is a duplicate within an object, an integer value is appended to the duplicate.
When this property is set to True, the Field Description column shows the actual field name.
The default value is False.
UseChildAnd
S
Sets the UseChildAnd property of the Query Page web service call. Refer to the Oracle CRM On Demand documentation for details. The default value is True.
Default Access Profile Name
ST
The default Access Profile name for the Role assigned to the currently logged-in user.
To get this value, do the following:
1. Log in to CRM OnDemand.
2. Select Admin > UserManagementAndAccessControls > AccessProfiles > RoleManagement > [Edit] [roleName] > Step3-AccessProfiles > DefaultAccessProfile
RecordsPerReq
T
Allows you to set the number of records inserted or updated during each request, from 1 to 20. For example, if the batch size is set to 20 and the 10th record fails, the operation is retried one record at a time until all records in that batch are processed. As a result, the batch response contains an individual success or failure entry for each record.
Default value is 20.
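The per-record fallback described for RecordsPerReq can be pictured with the following sketch. This is not the connector's implementation; send_records is a hypothetical stand-in for the actual web service call:

```python
# Sketch of the RecordsPerReq retry behavior: if a batch of records fails,
# fall back to sending them one at a time so every record gets an individual
# success/failure entry. send_records is a hypothetical callable.
def process_batch(records, send_records, batch_size=20):
    responses = []
    for start in range(0, len(records), batch_size):
        chunk = records[start:start + batch_size]
        try:
            responses.extend(send_records(chunk))
        except Exception:
            # Batch failed: retry one record at a time for per-record results.
            for record in chunk:
                try:
                    responses.extend(send_records([record]))
                except Exception as exc:
                    responses.append(("error", record, str(exc)))
    return responses
```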
Supported Entities
This section lists the Oracle CRM On Demand entities supported by this connector. The following table shows the operations supported for each entity.
Entity Name | Create | Read | Update | Delete
----------- | ------ | ---- | ------ | ------
Account (WebService 1.0 API) | Yes | Yes | Yes | Yes
Activity (WebService 1.0 API) | Yes | Yes | Yes | Yes
Asset (WebService 1.0 API) | No | Yes | No | No
Campaign (WebService 1.0 API) | Yes | Yes | Yes | Yes
Contact (WebService 1.0 API) | Yes | Yes | Yes | Yes
CurrentUser (WebService 1.0 API) | No | Yes | No | No
CustomObject1 (WebService 1.0 API) | Yes | Yes | Yes | Yes
CustomObject2 (WebService 1.0 API) | Yes | Yes | Yes | Yes
CustomObject3 (WebService 1.0 API) | Yes | Yes | Yes | Yes
Lead (WebService 1.0 API) | Yes | Yes | Yes | Yes
Opportunity (WebService 1.0 API) | Yes | Yes | Yes | Yes
Product (WebService 1.0 API) | Yes | Yes | Yes | No
ProductCategory (WebService 1.0 API) | Yes | Yes | Yes | Yes
ServiceRequest (WebService 1.0 API) | Yes | Yes | Yes | Yes
Solution (WebService 1.0 API) | Yes | Yes | Yes | Yes
Territory (WebService 1.0 API) | Yes | Yes | Yes | Yes
User (WebService 1.0 API) | No | Yes | No | No
UserGroup (WebService 1.0 API) | Yes | Yes | Yes | Yes
Application (WebService 2.0 API) | Yes | Yes | Yes | Yes
BusinessPlan (WebService 2.0 API) | Yes | Yes | Yes | Yes
Claim (WebService 2.0 API) | Yes | Yes | Yes | Yes
ContactBestTimes (WebService 2.0 API) | Yes | Yes | Yes | Yes
Coverage (WebService 2.0 API) | Yes | Yes | Yes | Yes
CustomObject4 (WebService 2.0 API) | Yes | Yes | Yes | Yes
CustomObject5 (WebService 2.0 API) | Yes | Yes | Yes | Yes
CustomObject6 (WebService 2.0 API) | Yes | Yes | Yes | Yes
CustomObject7 (WebService 2.0 API) | Yes | Yes | Yes | Yes
CustomObject8 (WebService 2.0 API) | Yes | Yes | Yes | Yes
CustomObject9 (WebService 2.0 API) | Yes | Yes | Yes | Yes
CustomObject10 (WebService 2.0 API) | Yes | Yes | Yes | Yes
CustomObject11 (WebService 2.0 API) | Yes | Yes | Yes | Yes
CustomObject12 (WebService 2.0 API) | Yes | Yes | Yes | Yes
CustomObject13 (WebService 2.0 API) | Yes | Yes | Yes | Yes
CustomObject14 (WebService 2.0 API) | Yes | Yes | Yes | Yes
CustomObject15 (WebService 2.0 API) | Yes | Yes | Yes | Yes
Damage (WebService 2.0 API) | Yes | Yes | Yes | Yes
DealRegistration (WebService 2.0 API) | Yes | Yes | Yes | Yes
FinancialAccount (WebService 2.0 API) | Yes | Yes | Yes | Yes
FinancialAccountHolder (WebService 2.0 API) | Yes | Yes | Yes | Yes
FinancialAccountHolding (WebService 2.0 API) | Yes | Yes | Yes | Yes
FinancialPlan (WebService 2.0 API) | Yes | Yes | Yes | Yes
FinancialProduct (WebService 2.0 API) | Yes | Yes | Yes | Yes
FinancialTransaction (WebService 2.0 API) | Yes | Yes | Yes | Yes
Fund (WebService 2.0 API) | Yes | Yes | Yes | Yes
Household (WebService 2.0 API) | Yes | Yes | Yes | Yes
InsuranceProperty (WebService 2.0 API) | Yes | Yes | Yes | Yes
InvolvedParty (WebService 2.0 API) | Yes | Yes | Yes | Yes
MDFRequest (WebService 2.0 API) | Yes | Yes | Yes | Yes
MedEd (WebService 1.0 API) | Yes | Yes | Yes | Yes
Objectives (WebService 2.0 API) | Yes | Yes | Yes | Yes
Partner (WebService 2.0 API) | Yes | No | Yes | Yes
PlanAccount (WebService 2.0 API) | Yes | Yes | Yes | Yes
PlanContact (WebService 2.0 API) | Yes | Yes | Yes | Yes
PlanOpportunity (WebService 2.0 API) | Yes | Yes | Yes | Yes
Policy (WebService 2.0 API) | Yes | Yes | Yes | Yes
PolicyHolder (WebService 2.0 API) | Yes | Yes | Yes | Yes
Portfolio (WebService 2.0 API) | Yes | Yes | Yes | Yes
PriceList (WebService 2.0 API) | Yes | Yes | Yes | Yes
PriceListLineItem (WebService 2.0 API) | Yes | Yes | Yes | Yes
SPRequest (WebService 2.0 API) | Yes | Yes | Yes | Yes
SPRequestLineItem (WebService 2.0 API) | Yes | Yes | Yes | Yes
Vehicle (WebService 2.0 API) | Yes | Yes | Yes | Yes
Connector Specific Notes
Upsert with Relationships
Upsert is supported only for objects that are covered by Web Services 1.0 API as shown in the entity list. Objects covered by Web Services 2.0 API (for example, Vehicle) are not supported for the Upsert operation.
Attachments
For files and documents attached to an Oracle CRM On Demand object, the connector only supports uploading a file as an Attachment to an object (that is, file attachments are supported on the target side only). Downloading attached files from Oracle CRM On Demand is not currently supported.
Using Sessions
This connector can be used with the Siebel Login Invoker. When using this invoker, you must replace the Username with the SessionToken and leave the Password field empty. The SessionToken must use the format:
<serverUrl>:jsessionid=<sessionid>
When you use a SessionToken, all entities are listed in the Tables list, regardless of whether the user has access to them.
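As a sketch, the SessionToken value (which replaces the Username) can be assembled as follows. The server URL and session ID shown are placeholders, not real values:

```python
# Sketch: assemble a SessionToken in the <serverUrl>:jsessionid=<sessionid>
# format described above. The URL and session ID are placeholder values.
def make_session_token(server_url, session_id):
    return f"{server_url}:jsessionid={session_id}"

token = make_session_token(
    "https://example.crmondemand.com/Services/Integration",  # placeholder
    "abc123",                                                # placeholder
)
```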
Mapping API
For entities where the mapping API is not supported, all fields are listed regardless of whether a particular (custom) field is enabled for that account. Custom fields support display names in Map Designer only if the Mapping API is supported for that object.
Building Maps
When building maps using Oracle CRM On Demand as a target, refer to the Oracle CRM On Demand help file, "OnDemWebSvcs.pdf," which is at Admin > DownloadDocumentation. This document shows which records you can add as children to other records, as well as the supported operations and Key fields for each entity.
Field Lengths
The field lengths reported in the source schema and Source map tab may be too short for some Oracle CRM On Demand values. If a value such as a text string appears to be truncated, manually adjust the field length to a larger value.
Non-Accessible Child Entities
If the user does not have access to a child object of a particular entity, that child object is not displayed in the list of objects available for viewing through the data browser.
Data Type Support
BigDecimal
BigInteger
Boolean
Byte
Bytes
Character
Date
Double
Float
Integer
Record
String
Text
Debugging Tips
Check the Log file and Batch Response File for error messages returned by the Oracle CRM On Demand service.
A "Map . . . Successful" message in the transformation log file means that no serious error occurred during the transformation. However, you should also check the batch response file for individual record failures.
Oracle Direct Path 10g, 11g, and 12c
Oracle's Direct Path interface is a way for data warehouses and data marts to perform incremental loads of detail data. The Oracle Direct Path 10g, 11g, and 12c connectors enable the integration platform to write to Oracle tables.
Other Oracle Connectors
You can connect to Oracle interfaces in the following ways:
Oracle Multimode: Perform multiple operations (such as table drops, table inserts) directly on your target database. See Oracle 10g, 11g, and 12c Multimode.
Oracle SQL Loader: Produce flat loader files that can be uploaded into an Oracle database using the Oracle SQL Loader utility. See Oracle SQL Loader.
Oracle Siebel CRM On Demand 12 and 14: Access data through the OnDemand Web Services interface with this multimode connector. See Oracle Siebel CRM On Demand 12 and 14.
Note:  You must have the Oracle 10g, 11g, or 12c standard 64-bit client installed on your system and you must have access to Oracle. If Oracle is installed on your local PC, the drive and directory path of your Oracle application must be included in your path. The path statement is automatically modified when your Oracle client software is installed. The Oracle Database Manager must be running.
The Oracle Direct Path connector accesses Oracle 10g, 11g, and 12c databases. The advantage of the Oracle Direct Path API is that it is faster than the conventional ways of loading data into an Oracle database; Oracle's SQL*Loader utility uses this same API. Most of the Direct Path API's speed comes from bypassing work done by conventional methods. Four factors make inserts into an Oracle table slow:
Transaction logging
Transaction logs are used to undo changes if something goes wrong. In the case of bulk loads, this is not usually an issue, since bulk loads are generally done to new or empty tables.
Indexing
Indexing speeds up queries to the data in a table, but slows down the process of inserting data. For each record that is inserted, the DBMS adds a new entry in each index in the table. It is often faster to load the data without indexing and to rebuild the indexes as a separate step.
Integrity checking
This process checks for duplicate keys, Null values and other integrity rule violations. These DBMS data validations take time.
Transaction isolation
With transaction isolation, the DBMS makes changes visible to other users only after the work is committed.
This connector sets field width in bytes. What actually varies is the number of characters that fit into a given field. For example, if a field width is 10 bytes and the encoding is Shift-JIS, the field can hold a minimum of 5 characters and a maximum of 10. This means that if you try to write 8 Japanese characters, data is truncated (the Kanji characters all take 2 bytes).
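The byte-versus-character arithmetic above can be checked directly. The following sketch (not connector code) uses Python's Shift-JIS codec to show why a 10-byte field holds between 5 and 10 characters:

```python
# Sketch: a field width in bytes limits the number of characters that fit.
# In Shift-JIS, ASCII characters take 1 byte and Kanji characters take 2.
def fits_in_field(text, width_bytes, encoding="shift_jis"):
    """True if the encoded text fits within the field's byte width."""
    return len(text.encode(encoding)) <= width_bytes

ascii_ok = fits_in_field("abcdefghij", 10)                      # 10 chars * 1 byte = 10 bytes
kanji_ok = fits_in_field("\u65e5\u672c\u8a9e\u6587\u5b57", 10)  # 5 Kanji * 2 bytes = 10 bytes
kanji_too_long = fits_in_field("\u65e5" * 8, 10)                # 8 Kanji = 16 bytes: truncated
```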
Connector-Specific Notes
Mass load improves performance. This connector is faster than the standard Oracle connector; however, it works best in mass load mode, which means that data is committed at the end of the transformation. Mass load is not used in Append mode. Use Replace mode for the fastest run times; if you want to use Append, use one of the other Oracle connectors instead. See Oracle 10g, 11g, and 12c.
Restrictions on Using Direct Path Loads. In addition to the general load conditions described in Oracle’s documentation, the following conditions must be satisfied to use the direct path load method:
Tables are not clustered.
Tables to be loaded do not have any active transactions pending.
To check for this condition, use the Enterprise Manager command MONITOR TABLE to find the object ID for the tables you want to load. Then use the command MONITOR LOCK to see if there are any locks on the table.
No SQL strings are in the control file.
The following features are not available with Direct Path load:
loading object columns
loading LOBs
loading VARRAYs
loading nested tables
specifying OIDs for object tables with system-generated OIDs
specifying SIDs
loading REF columns
loading BFILE columns
physical records (set by the command-line option READSIZE) larger than 64k
Unicode Support for Oracle 10g, 11g, and 12c - These connectors include Unicode support for wide characters. To enable double-byte support, change the Regional and Language Options on your system to a double-byte language, such as Japanese.
Property Options
You can set the following source (S) and target (T) properties.
Property
ST
Version
Use
CodePage
T
10g
The code page translation table tells the integration platform which encoding to use for reading and writing data. The default is ANSI, which is the standard in the US. For the supported list, see CodePage.
CommitFrequency
T
10g, 11g, 12c
Controls how often data is committed to the database. Default is zero, meaning that data is committed at the end of the transformation, allowing rollback on error. This is the slowest setting. When doing large transformations, this is not practical as it may produce too many transaction log entries. Setting the CommitFrequency to some nonzero value tells the connector to do a database commit after inserting or updating the specified number of records. This keeps the transaction log from getting too large but limits the restartability of the transformation.
ConstraintDDL
T
10g, 11g, 12c
Pass-through mechanism that allows you to specify any additional data definition language (DDL) statements that need to be executed when a new table is created. DDL is used to define keys, indexes, constraints and integrity rules for SQL databases. There is no default.
Directory Alias
T
10g, 11g, 12c
Allows you to specify the name of the Directory Alias. No default exists for this property.
DPParallel
T
10g, 11g, 12c
Sets the direct path parallel attribute to allow parallel loading. The default is false. If you set this property to true, parallel loading is allowed, but it may also slow performance.
Encoding
T
11g, 12c
Type of encoding to use with Oracle 11g and 12c target files. Default is OEM.
For choices, see Encoding Reference.
Synonyms
T
10g, 11g, 12c
If set to true, allows you to see synonyms. The synonym names appear in the table list along with the tables. Default is false.
SystemTables
T
10g, 11g, 12c
If set to true, this property allows you to see all tables created by the DBA in the database. The system table names appear in the table list. Default is false.
Note:  This property is applicable only if the user is logged onto the database as the database administrator. Only the DBA has access to system tables.
Views
T
10g, 11g, 12c
If set to true (default), allows you to see views. View names appear in the table list along with table names.
CodePage
This translation table determines which encoding to use for reading and writing data. The default is ANSI, the standard in the US. The following code pages are available:
ANSI (default)
OEM
0037 US (EBCDIC)
0273 Germany (EBCDIC)
0277 Norway (EBCDIC)
0278 Sweden (EBCDIC)
0280 Italy (EBCDIC)
0284 Spain (EBCDIC)
0285 UK (EBCDIC)
0297 France (EBCDIC)
0437 MSDOS United States
0500 Belgium (EBCDIC)
0850 MSDOS Multilingual (Latin 1)
0860 MSDOS Portuguese
0861 MSDOS Icelandic
0863 MSDOS Canadian French
0865 MSDOS Nordic
1051 Roman-8
Data Types
These data types are available in Oracle Direct Path:
Char
Date
DateTime
LongRaw
LongVarChar
Number
Raw
RowID
VarChar2
Data Types Supported for Direct Path Loading
The following external data types are valid for scalar columns in a direct path load operation:
SQLT_CHR
SQLT_DAT
SQLT_INT
SQLT_UIN
SQLT_FLT
SQLT_BIN
SQLT_NUM
SQLT_PDN
The following external object data types are supported:
SQLT_NTY – column objects (FINAL and NOT FINAL) and SQL string columns
SQLT_REF – REF columns (FINAL and NOT FINAL)
The following table types are supported:
Nested tables
Object tables (FINAL and NOT FINAL)
Oracle Service Cloud
This is an add-on connector and is not included in the default installation of Actian DataConnect Studio IDE. The connector is available at esd.actian.com.
The Oracle Service Cloud connector was formerly called RightNow CX. Oracle Service Cloud is a web-based Customer Relationship Management (CRM) software platform that enables users to access information using a Web Services interface.
Supported API Version
RightNow CX API 1.3
Installation Requirements
The user must have a functional Oracle Service Cloud account. Also, the Web Service API for your organization must be enabled within the Oracle Service Cloud platform.
Property Options
The following source (S) and target (T) properties can be set for Oracle Service Cloud connections.
Property
S/T
Description
URL
ST
URL for the web services. For example:
http://<rightnowinstance>.rightnow.com/cgi-bin/<rightnowinstance>.cfg/services/soap
Username
ST
Username for the Oracle Service Cloud account you are trying to access.
Password
ST
Password for the Oracle Service Cloud account you are trying to access.
Batch Size
S
Number of source records the connector caches before processing them in a map. Default is zero.
Show Children
S
A Boolean variable to control whether sub-objects are shown in metadata and in data.
True – Shows all sub-objects.
False (default) – Shows no sub-objects except ID and Named ID type sub-objects.
Flush Frequency
T
Number of operations the connector caches before writing them to the target. Default is zero.
Batch Response
T
Sets the path name for a batch response file, which is a reporting mechanism for the connector. The file provides detailed results for each object in a batch where the batch size is greater than one. Obtaining detailed results is useful for doing the following:
Capturing system-generated object IDs for use in future updates.
Correlating an error with its object and having enough information about the error for exception handling and error diagnosis.
A batch response entry is generated only for top-level records.
Supported Entities
The connector supports both Oracle Service Cloud defined entities and custom entities. It also supports read/write from custom fields in both entity types.
Supported Operations
The Oracle Service Cloud connector supports CRUD operation types on both primary and custom entities.
Data Type Support
The data type in both source and target fields should not be changed. Changing the data type in a target field may cause write failure.
The following data types are used in Oracle Service Cloud:
Boolean
Date
DateTime
Decimal
Integer
Long
String
Connector Specific Notes
Object ID Ignored in Insert operation
When inserting a new object into Oracle Service Cloud, the ID field is ignored and a new ID will be assigned to this object automatically by Oracle Service Cloud.
File Attachment type sub-objects are not supported
Currently, the File Attachment type sub-objects are not supported. This includes FileAttachment, FileAttachmentAnswer, FileAttachmentCommon, and FileAttachmentShared.
Working with custom objects
The Oracle Service Cloud connector supports custom objects. Working with custom objects is similar to working with primary objects. The only difference is the object name used. Primary objects use the name of the class such as "Contact." Custom objects use a combination of the package name and the custom object name to uniquely identify the custom object. The package name and object name are joined by a dot (.). For example, for a custom object package called CO and a custom object named PurchasedProduct, you would reference this object as CO.PurchasedProduct.
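The Package.ObjectName convention described above is a simple join; the helper below is a hypothetical sketch using the example names from the text:

```python
# Sketch: build the Package.ObjectName reference used to identify custom
# objects. "CO" and "PurchasedProduct" are the illustrative names above.
def custom_object_name(package, object_name):
    """Join a custom object package and object name with a dot."""
    return f"{package}.{object_name}"

ref = custom_object_name("CO", "PurchasedProduct")
```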
RightNow Object Query Language (ROQL)
In a source connection, you can use a query statement to query for objects or fields. The Oracle Service Cloud connector supports RightNow Object Query Language (ROQL) only.
The RightNow Object Query Language (ROQL, pronounced "Rockwell") provides a query subsystem that allows clients of the API to perform SQL-like queries against the Oracle Service Cloud platform. The query language is based on a subset of Object Query Language (OQL), a text-based SQL-like query language that has been extended to handle object notation.
Syntax
ROQL supports both object query and fields query.
Object query:
SELECT primaryObjectReference FROM primaryObject [WHERE conditionExpression] [LIMIT number_of_rows] [OFFSET start_index]
SELECT <Parent Object>.<Parent to Child Relationship Name> FROM <Parent Object>
SELECT <Child Object>.<Child to Parent Relationship Name> FROM <Child Object>
Fields query:
SELECT fieldList from primaryObject [WHERE conditionExpression] [LIMIT number_of_rows] [OFFSET start_index]
Special usage
LIMIT - Limit can be used to limit the number of rows that will be returned in the query. For example, the following query will return the first 100 Contact objects with the Contact first name populated.
SELECT Contact.Name.First FROM Contact LIMIT 100;
OFFSET - Offset can be used to specify the offset or start index for the returned results. For example, the following query returns 100 contacts beginning at the 101st entry:
SELECT Contact FROM Contact LIMIT 100 OFFSET 100;
Relationship Queries
Relationship queries are defined to help navigate between related objects, serving as a shortcut for performing two selects during query operations. Relationships can be chained by adding a dot (.) followed by the appropriate relationship name. For example, a relationship query can start at Incident, navigate to Organization, and then to Contact to list the last names of all contacts for the organization to which the incident belongs.
Query Custom objects
Custom objects can be queried through Connect Web Services for SOAP in a similar fashion to the primary Oracle Service Cloud objects. The syntax used to query custom object data is as follows: Package.ObjectName in the SELECT clause and Package.ObjectName in the FROM clause. For example, if there were a CO package and an RMA object, the query would be this:
SELECT CO.RMA from CO.RMA;
Examples:
Search all Contact objects:
SELECT Contact FROM Contact;
Search the Incident object with ID=7:
SELECT Incident FROM Incident I WHERE I.ID = 7;
Search a field in Name sub-object in Contact object:
SELECT C.Name.First FROM Contact C;
Search a field in Name sub-object in Contact object with WHERE clause:
SELECT C.Name.First FROM Contact C WHERE C.Name.First like 'C%';
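Because the service caps result sets, LIMIT and OFFSET are commonly combined to page through large objects. The following Python sketch only builds the ROQL statements; executing them against the service is out of scope here:

```python
# Sketch: generate ROQL statements that page through results using the
# LIMIT and OFFSET clauses described above.
def paged_queries(base_query, page_size, total):
    """Yield ROQL statements covering `total` rows in `page_size` chunks."""
    for offset in range(0, total, page_size):
        yield f"{base_query} LIMIT {page_size} OFFSET {offset};"

queries = list(paged_queries("SELECT Contact FROM Contact", 100, 250))
```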
Oracle Siebel CRM On Demand 12 and 14
Oracle Siebel CRM On Demand accesses data through the OnDemand Web Services interface. This connector connects Oracle Siebel CRM On Demand data to virtually any flat file or database application. The integration platform supports Oracle Siebel CRM On Demand versions 12 and 14.
Note:  Ensure that you have administrative rights to a particular parent or child entity by contacting your system administrator. If you still cannot access the entity, contact Oracle Siebel CRM On Demand Customer Care.
Oracle Siebel CRM On Demand 14
The Oracle Siebel CRM On Demand 14 connector provides all the functionality of the version 12 connector, as well as support for deleting, inserting, and updating child entities, and for session tokens.
This is a multimode connector: the integration platform allows concurrent writes to multiple target tables (entities). Multimode connections allow you to perform create, read, update, and delete operations directly on a target database.
Other Oracle Connectors
You can connect to Oracle interfaces in the following ways:
Oracle Multimode: Perform multiple operations (such as table drops and table inserts) directly on your target database. See Oracle 10g, 11g, and 12c Multimode.
Oracle Direct Path: Deliver incremental loads of detail data to data warehouses and data marts. See Oracle Direct Path 10g, 11g, and 12c.
Oracle SQL Loader: Produce flat loader files that can be uploaded into an Oracle database using the Oracle SQL Loader utility. See Oracle SQL Loader.
Schema Definitions
Schema definitions are read from a Siebel WSDL file. Record and field names and lengths are retrieved from the WSDL file with the following conventions:
Entity is used as a record name. The first letter is upper case and the remainder is lower case.
Child entity names are qualified with parent entities by using an underscore. In the following example, Activity is the parent entity, while Note is the child entity: Activity_Note
All field names are lower case.
If a field length is not found in the WSDL schema file definition, 30 is the default length.
Custom fields are supported.
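The naming conventions listed above can be sketched as follows. These are hypothetical helpers that mirror the stated rules, not the connector's actual schema reader:

```python
# Sketch of the WSDL naming conventions described above: entity names are
# capitalized, child entities are qualified as Parent_Child, field names are
# lowercased, and 30 is the default length when the WSDL gives none.
def record_name(entity, parent=None):
    """Capitalize the entity; qualify child entities with the parent name."""
    name = entity.capitalize()
    return f"{parent.capitalize()}_{name}" if parent else name

def field_name(raw):
    """All field names are lower case."""
    return raw.lower()

def field_length(wsdl_length):
    """Fall back to 30 when the WSDL schema defines no length."""
    return wsdl_length if wsdl_length is not None else 30

child = record_name("note", parent="activity")
```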
Supported Entities for Version 12
Oracle Siebel CRM On Demand 12 and 14 support Create, Retrieve, Update and Delete operations on the following OnDemand parent entities:
Account
Activity (Task, Appointment)
Campaign
CustomObject1
CustomObject2
Contact
Dealer
Household
Lead
Opportunity
Portfolio
ProductCategory
ServiceRequest
Solution
Territory
Vehicle
Oracle Siebel CRM On Demand 12 and 14 support Create, Retrieve, and Update operations on the following OnDemand parent entities:
Product
User
The CurrentUser entity supports the Retrieve (Query) operation.
The connector also supports the child entities associated with the above parent entities. For exceptions and notes about the above entities, see Connector-Specific Notes.
Supported Entities in Version 14
Oracle Siebel CRM On Demand 14 supports all of the above methods and entities, plus the following methods and entities:
Child Entity Methods
InsertChild
UpdateChild
DeleteChild
Activity Entity: Uses the new target namespace urn:crmondemand/ws/activity/partner. The Activity entity can now be customized for both source and target. Version 12 can customize the target only.
Activity Entities - Child
SampleDropped
ProductsDetailed
Solution
Account Entity - Child
AccountContact
Contact Entity - Child
Account
Asset
Interests
Campaign
Opportunity Entity - Child
Partner
Competitor
Team
In addition, the connectors support the parent entity Asset.
Logging SOAP Messages
On the system where the integration engine runs, you can set an environment variable called DJ_SODCRM_DEBUG to capture sent and received SOAP messages in text files. Assign the value msgs to the variable. When integrations run using a SOAP-based connection, messages are appended to the following files:
sent.log – output messages
recv.log – input messages
test.log – other messages generated by the SOAP run-time environment
In a Windows system, these log files are written to InstallDir\Common. In Unix systems, they are written to the directory where the integration engine executes.
Connectivity Pointers
Query statement syntax for this CRM connector is XML-based. For details on writing query statements, refer to the application documentation.
Entity references in SQL statements are case sensitive and must begin with a capital letter. If the entity names are not specified correctly, connection to the server fails.
Using Session Tokens
Session tokens enable you to connect using a session token instead of logging in using a user name and password. This allows you to log in once by an external method, and reuse the session connection information for multiple instances or components. Reusing the session information is convenient and provides better performance than repeatedly sending a user ID and password information for each connection.
To obtain a session ID and server URL for a session token, use the Oracle Siebel Login Invoker. For information on using this component, see its help topic.
A session token consists of a session ID and server URL. The session token must be in the following format:
sid:sessionId:serverURL
where sessionId is a unique ID associated with your session and serverURL is the URL of the web service that processes subsequent URI calls. These values are returned in the loginResponse when you log in using a user name and password.
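Splitting such a token takes a little care, since the server URL itself contains colons. The sketch below is illustrative only; the session ID and URL are placeholder values:

```python
# Sketch: split a sid:sessionId:serverURL token into its parts. Because the
# server URL contains colons, only the first two separators are split.
def parse_session_token(token):
    prefix, session_id, server_url = token.split(":", 2)
    if prefix != "sid":
        raise ValueError("not a session token")
    return session_id, server_url

session_id, server_url = parse_session_token(
    "sid:abc123:https://example.crmondemand.com/Services/Integration"  # placeholders
)
```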
Connector-Specific Notes
Constraints Between Entities. Any underlying constraints between entities are detected in the connectors. For instance, if the system requires that a new Contact be created with a valid Account, then that Account must be passed to the Oracle Siebel CRM On Demand target connector either directly from the source connector or generated at execution time by customer-written script code.
Activity Entity. (Version 12 only) This entity does not support query statements for custom fields since Siebel does not provide a partner WSDL for the entity.
Product Entity. Delete actions against records in the Product entity are not supported.
CurrentUser Entity. Delete actions against records in the CurrentUser entity are not supported.
User Entity. Delete actions against records in the User entity are not supported.
Custom Object 3. Not supported.
Connection Times. When you are connecting to a large Source with many records, it may take a prolonged time to retrieve the records for viewing. If you open the Windows Task Manager, the Status may be "not responding." This is expected behavior. After the records have been read, the Status returns "running" and you can view the records.
Timeout after inactivity. Your Siebel CRM session terminates after five minutes of inactivity. If you receive the following error, it indicates a session timeout:
Internal Error: Session is not available. Aborting.
Before you attempt to continue, disconnect from the server and reconnect.
Access Rights
Records. You are only able to access or modify records in Oracle Siebel CRM On Demand that are available to the User ID that you are using. Depending upon the User ID that you are using and the security model of your Oracle Siebel CRM On Demand database, some records may only be visible to certain users, or may be read-only to certain users.
Entities. Some Oracle Siebel CRM On Demand entities are only visible to a user name, or only modifiable to a user name, depending on the user permissions or on the features enabled for your database.
Fields. Some fields in certain tables are read-only, or can only be set during insert, not during update operations. For example, the Last Modified Date and Last Modified By fields are read-only. They are implicitly set each time a user modifies a record. The integration platform can only reset owner ID fields on read-only tables during insert. To change an owner ID on existing records, you must transfer ownership within the Oracle Siebel CRM On Demand application.
Property Options
You can set the following source (S) and target (T) properties.
Property
ST
Version
Description
BatchResponse
T
12, 14
This property creates a batch response file, which serves as a reporting mechanism to the connector. The file provides detailed results for each object in a batch where the batch size is greater than 1. Obtaining detailed results is useful in the following cases:
Capturing system-generated object IDs for use in future updates.
Correlating an error with its object and have enough information about the error for exception handling and error diagnosis.
For more information, see Batch Response File.
MaxBatch
T
12, 14
The MaxBatch property controls how many records are sent in one group for each individual insert, update, or delete operation.
If you use a large number in this setting, you may see faster transformation run times. However, larger numbers may increase the need for memory. If you experience memory problems, lower the number.
Note
Oracle Siebel CRM on Demand enforces a maximum batch value limit of 20. Any higher setting returns the following error:
Connection Property: MaxBatch's value is invalid. It is greater than the maximum value allowed (20). Resetting to previous value.
PageSize
S
12, 14
When data records are arranged in blocks and the last record in each block is padded to the end of the block, you must set the page size to remove pad characters from the file during the data transfer.
To set page size, click PageSize to highlight the default value and type the correct page size value for your data.
Note:  The maximum number of records returned by the PageSize property is 100, a limitation of the Siebel server. If you enter a number greater than 100, you receive error 25504, "unable to read record".
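Because the server caps each page at 100 records, a source read effectively pages through the data. The following is a minimal sketch of that loop; `fetch_page` is a hypothetical stand-in for the connector's read call.

```python
# Sketch of paging through source records with a PageSize-style limit.
# fetch_page is a hypothetical stand-in for the connector's read call.
PAGE_SIZE = 100  # Siebel server maximum per page

def read_all(fetch_page, page_size=PAGE_SIZE):
    """Keep requesting pages until a short (or empty) page signals the end."""
    rows, offset = [], 0
    while True:
        page = fetch_page(offset, page_size)
        rows.extend(page)
        if len(page) < page_size:
            return rows
        offset += page_size

# Simulated server holding 250 records.
data = list(range(250))
fetched = read_all(lambda off, n: data[off:off + n])
print(len(fetched))  # 250
```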
UpdateNulls
T
12, 14
Allows you to decide whether nulls or empty strings are sent to the server. The following options are available:
Always (default) – Nulls and empty strings are sent to the server to set fields to null (update null and empty string values).
NullOnly – Null values are sent to the server, but empty strings are not sent. This option updates only null values.
Never – Null values and empty strings are not sent to the server. This option does not update null values or empty strings.
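The effect of the three options can be sketched as a filter on the fields sent to the server. The field names below are illustrative only.

```python
# Sketch of the three UpdateNulls modes: decide which field values are
# sent to the server. Field names are illustrative only.
def fields_to_send(record, mode="Always"):
    """Drop nulls and/or empty strings according to the UpdateNulls setting."""
    if mode == "Always":
        return dict(record)                      # send everything
    if mode == "NullOnly":
        return {k: v for k, v in record.items() if v != ""}
    if mode == "Never":
        return {k: v for k, v in record.items() if v is not None and v != ""}
    raise ValueError("unknown UpdateNulls mode: " + mode)

record = {"name": "Acme", "phone": None, "fax": ""}
print(fields_to_send(record, "NullOnly"))  # {'name': 'Acme', 'phone': None}
print(fields_to_send(record, "Never"))     # {'name': 'Acme'}
```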
UseChildAnd
S
12, 14
Allows you to perform a query on parent and child components. If set to true, it returns both parent and child components. If set to false (default), it returns either the parent or the child component. For use cases related to using this property, see Manipulating Parent and Child Relationships.
Manipulating Parent and Child Relationships
The following use cases illustrate using SQL statements, events, and actions to manipulate parent and child relationships.
Use Case
Task
How to Accomplish the Task
1
Retrieve a list of recipients where the campaign name is FreeStuff.
Run the following SQL statement in the source:
Select campaignname from Campaign where campaignname="FreeStuff";
Select * from Campaign_Recipient
2
Insert a campaign name called FreeStuff that contains three recipients.
On the target, create an Upsert action for Campaign and set the campaignname to FreeStuff. Then create three Upsert actions for Campaign_Recipient and set the contactid (this assumes that you already know the value in contactid).
3
Update deliverystatus (one recipient) to Received for the campaign FreeStuff.
On the target, create an Upsert action for Campaign and set campaignid. Create another Upsert action for Campaign_Recipient and set the contactid and deliverystatus to Received.
4
Delete one recipient in the campaign FreeStuff.
On the target, create an Update action for Campaign and set campaignid. Create two Update actions for Campaign_Recipient and set the contactid.
Note:  Because child record deletion uses the Update action, you cannot delete all child records in the child entity. One record must remain.
Deleting, Inserting, or Updating Child Entities
For the integration platform to recognize child entities, the entity name in the related event action must include the parent record name plus ".child". You must first specify the parent entity, even though only the child entity is to be deleted.
The following example shows how to delete a child entity called Account_contact.
1. Design a map with Null as the source type, Oracle Siebel On Demand 14 as the target type, and connect to Oracle Siebel On Demand.
2. In a Source record event, create an AfterEveryRecord event and add the following actions:
Action
Parameters
ClearMap
{Target},{Account},{},{}
Delete Record
{Target},{Account},{Account.child},{},{}
ClearMap
{Target},{Account_Contact},{},{}
Delete Record
{Target},{Account_Contact},{Account_Contact.child},{},{}
3. Clear the map for the Account record layout.
4. Specify a Delete Record action for the Account record layout and Account.child entity.
5. Clear the map for the Account_Contact record layout.
6. Use a Delete Record action to delete the Account_Contact record layout and the Account_Contact.child entity.
7. In the target pane, map the ID for Account called accountid.
8. Run the map, and note that the child entity Account_Contact is deleted.
Use the same syntax to insert or update child entities.
Data Types
The only data type supported for this connector is Text.
Oracle SQL Loader
An Oracle SQL Loader file is a special format to which the integration platform can write using the Oracle SQL Loader connector. This file can be imported into Oracle quickly using the Oracle SQL Loader utility.
Although the Oracle SQL Loader appears as a source connector in the integration platform, its purpose is to read and verify loader files that the platform has created when the application is not available.
This connector produces flat loader files that can be uploaded into an Oracle database. You do not need an Oracle client installed and configured on your system to use this connector with the integration platform.
This connector sets field width in bytes. What actually varies is the number of characters that fit into a given field. For example, if a field width is 10 bytes and the encoding is Shift-JIS, the field can hold a minimum of 5 characters and a maximum of 10. This means that if you try to write 8 Japanese characters, data is truncated (the Kanji characters all take 2 bytes).
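The byte-versus-character distinction can be verified directly; the following shows why a 10-byte field holds between 5 and 10 Shift-JIS characters.

```python
# Demonstrates byte-based field widths: in Shift-JIS, ASCII characters
# take 1 byte each, while Kanji and full-width Kana take 2 bytes each.
FIELD_WIDTH = 10  # bytes

ascii_text = "ABCDEFGHIJ"       # 10 one-byte characters: fits exactly
kanji_text = "日本語データ処理"   # 8 two-byte characters: 16 bytes

print(len(ascii_text.encode("shift_jis")))  # 10
print(len(kanji_text.encode("shift_jis")))  # 16

# Truncating at the 10-byte boundary keeps only 5 of the 8 characters.
truncated = kanji_text.encode("shift_jis")[:FIELD_WIDTH]
print(truncated.decode("shift_jis"))  # 日本語デー
```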
Connector-Specific Notes
DateValMask: The integration platform does not support milliseconds in DateValMask transformations. Although a mask is provided for them, the integration returns zeros in those places.
Unicode Support: The Oracle SQL Loader connector does not have Unicode support. For Unicode support, use the ODBC 3.5 connector. To have double-byte support in Oracle connectors, change Regional and Language Options on your system to a double-byte language, such as Japanese.
Property Options
You can set the following source (S) and target (T) properties.
Property
S/T
Description
BadFile
T
Name of the file to which rejected records are written when running the Oracle Loader utility. If you leave the value blank, the default file name is tabname.BAD. To change the default name, type a new one in the Value field.
CodePage
ST
Encoding to use for reading and writing data. The default is ANSI, the standard in the US.
CommitRows
T
Number of records the Oracle Loader reads before doing a Commit. Default is 50. To change the default, type a new number in the Value field.
DelimitAllFields
T
Default is false. If set to true, the integration platform inserts delimiters around all text fields.
Direct
T
Specifies whether or not to write out the control file that includes the Direct option. Default is false.
DiscFile
T
Name of the file to which discarded records are written when running the Oracle Loader utility. Default file name is tabname.DSC. To change the default, type a different name in the Value field.
DiscMax
T
Maximum number of discarded records. The default is 50.
DoublBackSlash
T
Whether or not to write double backslashes (\\) to the Oracle SQL Loader file name. Default is false.
Errors
T
Maximum number of rejected records. Default is 50.
Load
T
Maximum number of records to load into the table. Default is zero, which means to load all records. To change the default, type a different number in the Value field.
LoadOpt
T
Type of load operation to use after the SQL Loader file has been created. Options are insert, append, and replace. Default is insert.
NullCols
T
Whether or not to write out trailing null columns. The default is true.
Skip
T
Number of records the Oracle Bulk Loader utility skips before starting the load. Default is zero, which means that loading starts with the first record. To change the default, type a different number in the Value field.
StripLeadingBlanks
ST
Whether or not to strip leading blanks from data fields.
When Oracle SQL Loader is the source connector, leading blanks are not stripped from the data. To remove them, set StripLeadingBlanks to true.
When Oracle SQL Loader is the target connector, leading blanks are stripped when the data is written. To keep them, set StripLeadingBlanks to false.
StripTrailingBlanks
ST
Whether or not to strip trailing blanks from data fields.
When Oracle SQL Loader is the source connector, trailing blanks are not stripped from the data. To remove them, set StripTrailingBlanks to true.
When Oracle SQL Loader is the target connector, trailing blanks are stripped when the data is written. To keep them, set StripTrailingBlanks to false.
TableName
T
Name of the Oracle database table to which data is written. No default.
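Many of these properties correspond to clauses in the generated SQL*Loader control file. The fragment below is an illustrative sketch of such a file (the table, file names, and column layout are invented for the example); the actual file the connector writes depends on your property settings and map.

```
-- Illustrative SQL*Loader control file; names and columns are examples only.
OPTIONS (ERRORS=50, ROWS=50, SKIP=0)   -- Errors, CommitRows, Skip
LOAD DATA
INFILE 'customers.dat'
BADFILE 'customers.bad'                -- BadFile
DISCARDFILE 'customers.dsc'            -- DiscFile
DISCARDMAX 50                          -- DiscMax
APPEND                                 -- LoadOpt
INTO TABLE customers                   -- TableName
FIELDS TERMINATED BY ','
(custid   INTEGER EXTERNAL,
 name     CHAR,
 balance  FLOAT EXTERNAL,
 created  DATE "YYYY-MM-DD")
```

The column types shown match the four data types listed below.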
Data Types
Four data types are available:
Char
Date
Float External
Integer External