Source and Target Connectors P-S
This section provides information about source and target connectors from P to S.
PayChex (DJF) Import
The integration platform can write, but not read, PayChex (DJF) Import files using an underlying ASCII (Fixed) connector. These files can be read by PayChex software.
This connector sets field width in bytes; how many characters fit into a field depends on the encoding. For example, if a field width is 10 bytes and the encoding is Shift-JIS, the field can hold a minimum of 5 characters and a maximum of 10. If you try to write 8 Japanese characters, the data is truncated, because Kanji characters each take 2 bytes (8 × 2 = 16 bytes, which exceeds the 10-byte field).
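The following sketch, in plain Python and for illustration only (it is not part of the integration platform), shows the arithmetic behind this truncation:
def fit_to_field(text, width_bytes, encoding="shift_jis"):
    # Encode text and truncate at the field width without splitting a character.
    out = b""
    for ch in text:
        b = ch.encode(encoding)
        if len(out) + len(b) > width_bytes:
            break  # the next character would overflow the field
        out += b
    return out.ljust(width_bytes)  # pad with spaces to the fixed width
data = fit_to_field("データ変換テスト", 10)  # 8 characters = 16 bytes in Shift-JIS
print(len(data))  # 10 bytes: only the first 5 characters survive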
Connector-Specific Notes
No known limitations.
Property Options
You can set the following target (T) properties.
Property
S/T
Description
DatatypeSet
T
This allows you to switch between standard ASCII data types and COBOL display data types.
FieldSeparator
T
Allows you to choose a field separator character for your target file. The default is None. The other choices are comma (,), tab, space, carriage return-line feed (CR-LF), line feed (LF), carriage return (CR), line feed-carriage return (LF-CR), control-R, and pipe ( | ).
If the field separator is not one of the choices in the list and is a printable character, highlight None and then type the correct character. For example, if the separator is an asterisk ( * ), type an asterisk from the keyboard.
If the field separator is not a printable character, replace None with a backslash, an X, and the hexadecimal value for the separator. For example, if the separator is a check mark, enter \XFB. For a list of the 256 standard and extended ASCII characters, search for "hex values" in the documentation.
FillFields
T
Controls whether fields are padded to a fixed length. The default is true, which pads all fields with spaces to the end of the field length to maintain the fixed length of the records. If this property is set to false, all trailing spaces are removed from each field when the data is written, producing variable-length fields.
Ragged Right
T
Writes an ASCII data file where the last field in each record is variable length when set to true. The default is false. The false setting pads the last field with spaces to the end of the record length to maintain the fixed length of the records.
Note:  You must set FillFields to false for the Ragged Right property to take effect; Ragged Right has no effect when FillFields is true. When FillFields is false, the Ragged Right property determines whether blank fields, and fields containing only spaces, still appear at the end of the record. A sketch after this property table illustrates how these settings interact.
RecordSeparator
T
A PayChex (DJF) Import file is presumed to have a carriage return-line feed (CR-LF) between records. To use other characters as the record separator, click once in the RecordSeparator cell, click the arrow to the right of the box, and then click the desired record separator in the list. The choices are carriage return-line feed (default), line feed, carriage return, line feed-carriage return, form feed, empty line, and no record separator. To use a separator other than one from the list, you can type it here.
If the record separator is not one of the choices in the list and is a printable character, highlight the CR-LF and then type the correct character. For example, if the separator is a pipe ( | ), type a pipe from the keyboard.
If the record separator is not a printable character, replace CR-LF with a backslash, an X, and the hexadecimal value for the separator. For example, if the separator is a check mark, then enter \XFB. For a list of the 256 standard and extended ASCII characters, search for "hex values" in the documentation.
Tab Size
T
Sets the number of spaces represented by a tab character. The default is 0.
CodePage
T
This translation table determines which encoding to use for reading and writing data. The default is ANSI, the standard in the US.
ANSI
OEM
0037 US (EBCDIC)
0273 Germany (EBCDIC)
0277 Norway (EBCDIC)
0278 Sweden (EBCDIC)
0280 Italy (EBCDIC)
0284 Spain (EBCDIC)
0285 UK (EBCDIC)
0297 France (EBCDIC)
0437 MSDOS United States
0500 Belgium (EBCDIC)
0850 MSDOS Multilingual (Latin 1)
0860 MSDOS Portuguese
0861 MSDOS Icelandic
0863 MSDOS Canadian French
0865 MSDOS Nordic
1051 Roman-8
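The following sketch, in plain Python and for illustration only (the connector implements this logic internally), shows one reading of the FillFields and Ragged Right settings and of the \Xhh separator escape described above. The field values, widths, and separator are hypothetical:
def decode_separator(spec):
    # Turn a printable separator or a \Xhh hex escape (e.g. r"\XFB") into bytes.
    if spec.upper().startswith(r"\X"):
        return bytes([int(spec[2:], 16)])
    return spec.encode("ascii")
def write_record(values, widths, separator, fill_fields=True, ragged_right=False):
    last = len(values) - 1
    fields = []
    for i, (val, width) in enumerate(zip(values, widths)):
        field = val.encode("ascii")[:width]
        if fill_fields or (i == last and not ragged_right):
            field = field.ljust(width)   # pad with spaces to the field length
        else:
            field = field.rstrip()       # variable length: drop trailing spaces
        fields.append(field)
    return b"".join(fields) + separator  # record separator, CR-LF by default
print(write_record(["123456789", "Smith"], [16, 16], b"\r\n",
                   fill_fields=False, ragged_right=True))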
PayChex Structure
PayChex (DJF) Import files have a predefined structure. One hundred forty-seven fields are defined. The main fields are all 16 characters long with a 1-character separator field in between each one. The data types available vary according to the property options you select. However, all the predefined fields are considered Text.
This is a list of the predefined fields minus the separator fields:
Soc Sec No
Emp Number
First Name
Last Name
Hrly Rate
Level 1-3
Per. Start
Per. End
Department
Rate 1-3
Regular
Ovt Hrs
OT Factor
NonExSick
Exc Sick
Vacation
Holiday
Misc
Expense
Bonus/W
Bonus/P
Bonus/F
1099 Pmt
Chk Num.
Oth Earn1
Oth Earn2
Salary
Adj01 - Adj30
Hour48 - Hour60
Data Types
The following data types are available when the DatatypeSet property is at the default ASCII setting:
Boolean
Name
Number
Text
These data types are available when the DatatypeSet property is set to COBOL:
Display
Display justified
Display sign leading
Display sign leading separate
Display sign trailing
Display sign trailing separate
Text
Zoned decimal
Zoned leading sign
Zoned trailing sign
Personal Librarian
Personal Librarian is a text database. Records in a text database may contain small or large amounts of text. Fields within each record may also contain small or large amounts of information. With the Personal Librarian connector, the integration platform can write to Personal Librarian 4.11 for Windows. Personal Librarian data files have an .src file extension.
This connector sets field width in bytes. What actually varies is the number of characters that fit into a given field. For example, if a field width is 10 bytes and the encoding is Shift-JIS, the field can hold a minimum of 5 characters and a maximum of 10. This means that if you try to write 8 Japanese characters, data is truncated (the Kanji characters all take 2 bytes).
Connectivity Pointers
When writing data to Personal Librarian, the integration platform creates two files: one with an .src extension and one with a .def extension. After creating a Personal Librarian database with the integration platform, you must follow a series of steps in Personal Librarian before the database can be opened and searched:
To define your new database
1. Open PL-Admin.
2. Choose New from the File menu and select the .def file created by the integration platform.
3. In the Define Database dialog box, the database name defaults to the same name as the .src and .def files. You may give the database a new, more meaningful name if desired. You may also add or remove fields and make some other choices. Refer to the Personal Librarian user documentation for options. When you have made your selections, click OK.
4. A new dialog box opens in which you are asked if you want to add this new database to the Shortlist. Again, refer to the Personal Librarian user documentation to help you determine your answer.
5. In the next dialog box, select Add Fields, and then select the .src file created by the integration platform. Click GO.
6. When Personal Librarian has finished processing the records, click OK.
7. Exit PL-Admin.
8. Open PL 4.11. Your database is ready to open and search.
Connector-Specific Notes
No known limitations.
Property Options
You can set the following target (T) properties.
Property
S/T
Description
CodePage
T
This translation table determines which encoding to use for reading and writing data. The default is ANSI, the standard in the US.
Pivotal Business Server 5.9
Pivotal Business Server is a customer relationship management (CRM) system that enables you to access information using a Web Services interface. Using the Pivotal Business Server 5.9 connector, the integration platform can write to Pivotal Business Server tables. The Pivotal connector supports the following operations:
Insert
Update
Upsert
Delete
This connector sets field width in bytes. What actually varies is the number of characters that fit into a given field. For example, if a field width is 10 bytes and the encoding is Shift-JIS, the field can hold a minimum of 5 characters and a maximum of 10. This means that if you try to write 8 Japanese characters, data is truncated (the Kanji characters all take 2 bytes). For more details, see Determining Field Width in Characters or Bytes.
Connectivity Pointers
To connect to the Pivotal Business Server, enter the URL, the Pivotal system name, and, if needed, your user ID and password.
Before attempting to connect, verify that your path environment variable contains both the default installation directory and the Pivotal directory.
The HTTP user ID and password are not required unless the Pivotal system you are connecting to has Basic Authentication enabled. If unsure of the security settings, ask your Pivotal system administrator.
Fields marked Read Only can be mapped, but the value is not used by the Pivotal CRM system.
When performing operations, the connector refers to a field within an entity by using the entity's primary key in the database.
When an entity references another entity, the ID field of the referenced entity, such as Customer_id, is used instead of the entity name.
You must enable the API for your organization within Pivotal. In addition, you must configure security and install the Pivotal Business Server updates. For details, see Pivotal Server Administration.
Connector-Specific Notes
None.
Property Options
You can set the following source (S) and target (T) properties.
Property
S/T
Description
Batch Response
T
Creates a batch response file using a name you provide, which serves as a reporting mechanism for the connector. The file provides detailed results for each object in a batch where the batch size is greater than 1. Obtaining detailed results is useful in the following cases:
Capturing system-generated object IDs for use in future updates.
Correlating an error with its object, with enough information about the error for exception handling and error diagnosis.
For more information, see Batch Response File.
Connection Type
T
Sets the level of integration with the Pivotal CRM system:
Active Form (default)
Table
Debug Directory
T
Specifies a directory to write the debug XML files to during testing of a map. Specify only when the Debug Mode property is set to true.
Note:  Use of the Debug Directory property is for design time, not run time.
Debug Mode
T
Determines whether to produce a series of XML files containing the request and response XML data:
False (default) – Does not create XML files with request and response data.
True – Creates XML files with request and response data. You must enter a value for Debug Directory.
Note:  Use of the Debug Mode property is for design time, not run time.
FlushFrequency
T
Sets the number of operations that the Pivotal connector buffers before sending them to the Pivotal CRM system. A sketch after this property table illustrates the buffering behavior.
0 (default) – Buffers all operations before sending. Error trapping is not supported. You must include a step to process the resulting batch response file and retrieve the operation counts and return values, as needed.
1 – Allows the connector to trap individual, non-fatal errors during a transformation so you can handle them. If errors occur and no error handling is defined, the errors are considered fatal, and the map execution stops.
2 or more – Buffers the specified number of operations before sending. Error trapping is not supported.
Note:  The batch response file must be specified in the Batch Response property.
Ignore Nulls
T
Determines whether to ignore null values. Unmapped target fields have a null value; to compensate, the Pivotal Business Server connector can remove unused fields so that they are not sent to the Pivotal system.
False (default) – Sends null values for unmapped fields to the Pivotal CRM.
True – Ignores unmapped fields.
Login Type
T
Specifies the type of login to use. The default value of Active Client is normally sufficient for integrating with the Pivotal CRM system.
Note:  Only change this property if you are an experienced user of Pivotal CRM and are directed to do so by your system administrator.
Time Zone
T
Specifies the time zone to use. The default value of Not Specified is generally adequate. If you require a specific time zone, consult your administrator for which time zone to use.
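As a rough illustration of the buffering rule behind FlushFrequency, here is a conceptual sketch in plain Python. It is not the connector's implementation; the send callable stands in for the batched call to the Pivotal system:
class OperationBuffer:
    def __init__(self, flush_frequency, send):
        self.flush_frequency = flush_frequency  # 0 = buffer everything until the end
        self.send = send                        # callable that performs the batch
        self.pending = []
    def add(self, operation):
        self.pending.append(operation)
        # With flush_frequency 1, each operation is sent (and can fail) on its
        # own, which is what makes per-record error trapping possible.
        if self.flush_frequency and len(self.pending) >= self.flush_frequency:
            self.flush()
    def flush(self):
        if self.pending:
            self.send(self.pending)
            self.pending = []
buf = OperationBuffer(1, lambda batch: print("sending", batch))
buf.add({"op": "insert", "Company_Name": "TestCompany"})
buf.flush()  # send anything still buffered at the end of the run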
Target Schema
The Target Schema icon is active in the toolbar when Pivotal Business Server is the target connector.
The following options are applicable for Pivotal Business Server files.
Target Field Name
These are field names that appear in your target data table.
If you are writing data to a new table, you may enter a name for each field.
If you are writing data to an existing table, the field names default to field names in the selected table.
Length
These are field lengths in your data. If you need to change field lengths, reset them in the schema.
Caution!  A field length shorter than the default may truncate data.
Data Types
The integration platform does not support ID field types. The following data types are supported:
Boolean
Date
Double
Float
Integer
String
Text
Additional Information
Log File
The conversion log file contains the number of successful record operations performed, instead of the specific number of entities updated, deleted, and inserted.
Pivotal Record Relationships
The Pivotal Business Server connector maintains parent-child relationships by using the parent record ID. If the parent record ID is created from the previous data insertion, the connector supports the reuse of the record ID for the child record.
You can reuse record IDs by entering the designated escape sequence:
@xxxx*
The xxxx string represents the record layout name from which the desired record ID is returned. For example, "@Company*" returns the record ID generated for the Company record layout. The record layout name is passed as a string value, so the value must be enclosed in double quotes before the designator is recognized by the connector.
Note:  The designator only handles records with single, distinct record IDs. If more than one record ID is available for a record, then the designator does not process the record ID. This causes an error that you must handle.
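A small, hypothetical sketch in plain Python (not connector code) of recognizing the @xxxx* designator and extracting the layout name:
import re
def layout_from_designator(value):
    # Return the record layout name from an "@xxxx*" designator, else None.
    m = re.fullmatch(r"@(\w+)\*", value)
    return m.group(1) if m else None
print(layout_from_designator("@Company*"))   # -> Company
print(layout_from_designator("plain text"))  # -> None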
Unicode Support
The Pivotal Business Server connector supports Unicode.
Check with your system administrator to verify that your Pivotal CRM system has Unicode support enabled.
Creating DJExport Objects for Writing Data
With the Pivotal Business Server connector, you can use a DJExport object to write data into the Pivotal CRM system.
Best practices around using DJExports include the following:
Use the DJImport/DJExport wizard to build connection strings
Use macro values when creating connection strings
Use field names when adding data to the DJExport
Make sure your connection string includes the table name
The following example shows the EZscript needed to create a DJExport object that writes data to Pivotal. DJExport objects can make reference to fields added to an object by using either the field number or the field name. This example illustrates referencing DJExport objects with field names.
'Create a new DJExport object instance
Dim PVTLExport As DJExport
'Create a new DJField object instance
Dim newField As DJField
'Initialize the DJExport object
Set PVTLExport = New DJExport "Pivotal Business Server"
'Create a new dummy field
Set newField = PVTLExport.newField
'Set the DJExport object connection string
connection = "Server=URLHERE;Database=SystemName;Table=Company;FlushFrequency=1;Batch Response=;Connection Type=ACTIVE FORM;Login Type=Active Client;Time Zone=Not Specified;Ignore Nulls=True;Debug Mode=False;Debug Directory= "
'Set the connection string
PVTLExport.ConnectString = connection
'Add data values to export object by field name - only use one field reference type
PVTLExport.Records(0).Fields("Account_Manager_Id") = "0000000000000001"
PVTLExport.Records(0).Fields("Address_1") = "123456 Test Ave"
PVTLExport.Records(0).Fields("City") = "TestCity"
PVTLExport.Records(0).Fields("Company_Name") = "TestCompany"
PVTLExport.Records(0).Fields("Country") = "US"
PVTLExport.Records(0).Fields("Currency_Id") = "0000000000000001"
PVTLExport.Records(0).Fields("Phone") = "1234567890"
PVTLExport.Records(0).Fields("State_") = "ST"
PVTLExport.Records(0).Fields("Type") = "Partner"
PVTLExport.Records(0).Fields("Zip") = "12345"
'Execute the export object
PVTLExport.PutRecord
'Destroy the export object
Set PVTLExport = nothing
Note:  Adding the table is a manual process, as the DJExport object does not implicitly connect to a single table. Modify the connection string property by using EZscript to concatenate the table value to the end of the connection string.
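For illustration only, this plain-Python sketch assembles a connection string of the same shape as the EZscript sample above and appends the table name as the note describes. The option names and values come from the sample and are not a complete reference:
options = {
    "Server": "URLHERE",
    "Database": "SystemName",
    "FlushFrequency": "1",
    "Connection Type": "ACTIVE FORM",
    "Login Type": "Active Client",
}
connection = ";".join("{}={}".format(k, v) for k, v in options.items())
connection += ";Table=Company"   # the table must be appended explicitly
print(connection)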
PostgreSQL
PostgreSQL is an open-source object-relational database management system. It is derived from the original POSTGRES database management system, a research prototype. PostgreSQL retains the data model and data types of POSTGRES and replaces the PostQuel query language with an extended subset of SQL. The PostgreSQL Global Development Group website includes PostgreSQL documentation, FAQ, news, and a user forum.
Note:  You must have the PostgreSQL 64-bit client installed on the system and you must have access to PostgreSQL. If it is installed on your local PC, the drive and directory path of the PostgreSQL application must be included in your path. The path statement is automatically modified when your PostgreSQL client software is installed.
This connector specifies field width in characters, which means the width of a field is that number of characters. For more details, see Determining Field Width in Characters or Bytes.
Other PostgreSQL Connectors
You can also connect to PostgreSQL in multimode for the target database. See PostgreSQL Multimode.
Unicode Support
PostgreSQL connectors include Unicode support for wide characters. To enable double-byte support, change the Regional and Language Options on your system to a double-byte language, such as Japanese.
Connectivity Pointers
The PostgreSQL client must be installed in your system to connect to PostgreSQL databases.
If the database has many tables, selecting the table from a list may take more time than typing the UserID.Tablename value in the source Table/View box. If you enter the table name without the user ID and then connect to the database, you can reselect the table in the source data browser to view it.
If possible, create query statements using the source database application, so that syntax and validity can be checked by the native application. You can then copy the query statement and paste it in the Query Statement box.
Connector-Specific Notes
The BINARY_FLOAT and BINARY_DOUBLE data types are not supported on Linux systems.
Limitations
No known limitations at this time.
Property Options
You can set the following source (S) and target (T) properties.
Property
S/T
Description
Encoding
ST
Sets the character encoding for XML source and target files.
For encoding choices, see Unicode 4.0 Character Encodings.
WhereStmt
S
Provides a pass-through mechanism for SQL connectors so that advanced users can construct the Where clause of the SQL query. It can be used as an alternative to writing a lengthy query statement in the Query Statement text box, and it can make the SQL database server filter the data based on a condition before it is sent to Map Designer. Exclude the WHERE keyword when you enter the clause; for example, enter last_name = 'Smith' AND amount > 100. There is no default value for this property. A sketch after this property table shows how the clause is combined with the table selection.
Do not apply the WhereStmt when the source connection is a Select statement; in that case, include the Where clause in the statement itself. The intent of this property is to enable data filtering when selecting a table.
SystemTables
ST
The SystemTables property is only applicable if the user is logged into the database as the Database Administrator (DBA). Only the DBA has access to SystemTables. If set to true, this property allows you to view all the tables created by the DBA. The system table names appear in the table list. The default value is false.
Views
ST
If set to true, this property allows you to see views. The view names appear in the table list along with the tables. The default value is true.
Synonyms
ST
If set to true, this property allows you to see synonyms. The alias names appear in the table list along with the tables. The default value is false.
CommitFrequency
T
This allows you to control how often the data is committed to the database. By default, this value is zero. This means that data is committed at the end of the transformation, allowing rollback on error. This is the slowest setting. If you are performing large transformations, this is not practical, since it may fill up the transaction logs. If the CommitFrequency is set to a non-zero value, the connector does a database commit after the specified number of records have been inserted or updated in the table. This ensures the transaction log is not full but limits the restartability of the transformation.
ConstraintDDL
T
Allows you to specify additional SQL data definition language statements to be executed after the target table is created. This is similar to the support provided for SQL pass-through in the SQL import connectors. Each line must be a valid ODBC DDL statement. There is no default value for this property.
For example, you can have the statements:
CREATE UNIQUE INDEX index1 ON mytable (Field1 ASC)
CREATE INDEX index2 ON mytable (Field2, Field3)
These statements create two indexes on the table "mytable". The first statement does not allow duplicates, and the index values are stored in ascending order. The second index is a compound index on fields Field2 and Field3.
The ConstraintDDL is executed only if the replace mode is used for the target. If there are any errors, the errors are written to the error and event log file, and the transformation error dialog box appears during the transformation. If you want to ignore the DDL errors, you may continue the transformation.
ConstraintDDL also supports an escaping mechanism that allows users to specify DDL in the native SQL of the DBMS. Any statement that is preceded by an "@" is sent directly to the DBMS.
The following statement is an Access DDL statement for creating a primary key for the table mytable:
@CREATE INDEX pk_mytable ON mytable (Field1, Field2) WITH PRIMARY
The Access ODBC driver does not support the extensions to SQL required to create a primary key with the ODBC variant of the SQL CREATE statement. The only way you can create the primary key for an Access table is with the native SQL.
This option works only in REPLACE mode.
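The following sketch, in plain Python and purely illustrative (the table name and clause are hypothetical), shows how a WhereStmt value is conceptually combined with the table selection into the query sent to the server:
table = "orders"
where_stmt = "last_name = 'Smith' AND amount > 100"   # note: no WHERE keyword
query = "SELECT * FROM " + table
if where_stmt:
    query += " WHERE " + where_stmt
print(query)  # SELECT * FROM orders WHERE last_name = 'Smith' AND amount > 100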
Datatypes
The following data types are supported:
bigint
bit
bit varying
bytea
char
date
double precision
integer
numeric
oid
real
smallint
text
timestamp
varchar
Length
These are field lengths in your data. If you need to change field lengths, reset them in the schema.
Caution!  A field length shorter than the default may truncate data.
PostgreSQL Multimode
PostgreSQL is an open-source object-relational database management system. It is derived from the original POSTGRES database management system, a research prototype. PostgreSQL retains the data model and data types of POSTGRES and replaces the PostQuel query language with an extended subset of SQL. The PostgreSQL Global Development Group website includes PostgreSQL documentation, FAQ, news, and a user forum.
Note:  You must have the PostgreSQL 64-bit client installed on the system and you must have access to PostgreSQL. If it is installed on your local PC, the drive and directory path of the PostgreSQL application must be included in your path. The path statement is automatically modified when your PostgreSQL client software is installed.
This multimode connector specifies field width in characters, which means the width of a field is the specified number of characters. To determine the number of bytes in a field, both the encoding and the particular characters must be considered:
In UTF-8, a single character may be encoded into one, two, three, or four bytes. Thus a five-character field is written as at least five bytes and at most 20 bytes.
UCS-2 is a double-byte character set; characters always take two bytes and UCS-2 can only represent the first Unicode plane.
UTF-16 represents most existing characters as two bytes. However, characters that do not appear in the first Unicode plane take up four bytes. Currently UTF-16 is treated as UCS-2.
Shift-JIS is a multibyte character set. In Shift-JIS, a character takes either 1 or 2 bytes, depending on the character. Thus, a five-character wide field takes from 5 to 10 bytes.
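A quick check of these byte counts in plain Python (illustrative only; the five-character Japanese string is an arbitrary example):
s = "abcde"     # five ASCII characters
j = "データ変換"  # five Japanese characters
print(len(s.encode("utf-8")))      # 5  (one byte per ASCII character)
print(len(j.encode("utf-8")))      # 15 (three bytes per character here)
print(len(j.encode("utf-16-le")))  # 10 (two bytes per BMP character)
print(len(j.encode("shift_jis")))  # 10 (two bytes per character)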
The main distinction between single and multimode connectors is in the use of event actions:
Single mode connectors - Use event actions to access one target file or table at a time in a transformation.
Multimode connectors - Use multimode event actions to access multiple target tables in the same transformation, each time the transformation runs.
Unicode Support
PostgreSQL connectors include Unicode support for wide characters. To enable double-byte support, change the Regional and Language Options on your system to a double-byte language, such as Japanese.
Connectivity Pointers
The PostgreSQL client must be installed in your system to connect to PostgreSQL databases.
If the database has many tables, selecting the table from a list may take more time than typing the UserID.Tablename value in the source Table/View box. If you enter the table name without the user ID and then connect to the database, you can reselect the table in the source data browser to view it.
If possible, create query statements using the source database application, so that syntax and validity can be checked by the native application. You can then copy the query statement and paste it in the Query Statement box.
Connector-Specific Notes
The BINARY_FLOAT and BINARY_DOUBLE data types are not supported on Linux systems.
Limitations
DateValMask Function: Map Designer does not support milliseconds in DateValMask date conversions. Although a mask is provided for them, Map Designer always returns zeros in those places.
Property Options
You can set the following properties.
Property Name
Description
Encoding
Sets the character encoding for XML source and target files.
For encoding choices, see Unicode 4.0 Character Encodings.
Shift-JIS encoding is used only in Japanese operating systems.
SQL Output
Allows you to select bound or unbound mode and whether to write SQL statements to a SQL log. The bound mode is faster, since bind variables are used. The options available are:
Target Only - Uses bound mode that uses bind variables. SQL statements are sent to the target and not to the SQL log specified in the SQL Log property. This is the default value.
Target Only (Unbound mode) - Uses unbound mode that does not use bind variables. The literal SQL statement is sent to the database engine. SQL statements are sent to the target and not to the SQL log specified in the SQL Log property.
Target and SQL Log - Sends SQL statements to the target and to the SQL log specified in the SQL Log property.
SQL Log Only - Sends SQL statements only to the SQL log file specified in the SQL Log property.
SQL Log
Specifies the SQL log file name. By default, it is sql.log in the default installation directory. To specify a different log file name, click once in the box, click the search icon, and then browse to the appropriate location and select the file.
UpdateNullFields
If set to true, null values are sent to the database when inserting or updating records. The default value is true.
If you select false:
Null values are not sent to the database when you insert or update records.
The connector is forced to operate in unbound mode, which may cause slower performance.
SystemTables
This property is only applicable if the user is logged in to the database as the Database Administrator (DBA).
If set to true, this property allows you to see all the tables created by the DBA. The system table names appear in the table list. The default is false.
Views
If set to true, this property allows you to see views. The view names appear in the table list along with the tables. The default value is true.
Synonyms
If set to true, this property allows you to see synonyms. The alias names appear in the table list along with the tables. The default value is false.
Datatypes
The following data types are supported:
bigint
bit
bit varying
bytea
char
date
double precision
integer
numeric
oid
real
smallint
text
timestamp
varchar
Length
These are field lengths in your data. If you need to change field lengths, reset them in the schema.
Caution!  A field length shorter than the default may truncate data.
PostgreSQL (ODBC 3.x)
PostgreSQL is an open source object-relational database management system (DBMS). It has been derived from the original POSTGRES database management system, a DBMS research prototype. PostgreSQL retains the powerful data model and rich data types of POSTGRES and replaces the PostQuel query language with an extended subset of SQL. The PostgreSQL Global Development Group website includes PostgreSQL documentation, FAQ, news, and a user forum.
The integration platform reads and writes PostgreSQL (ODBC 3.x) data.
This connector specifies field width in characters. The width of a field is the number of characters. For more details, see Determining Field Width in Characters or Bytes.
PostgreSQL ODBC Driver Download
Go to the PostgreSQL web site and click the Downloads link to download the appropriate ODBC driver. After the download is complete, go back to the web site and click the Documentation link for instructions on configuring the driver. (In addition to the expected product documentation, the PostgreSQL site also posts invaluable notes from PostgreSQL users.)
Installing Drivers and Troubleshooting Connections
See the following topics:
Installing an ODBC Driver
Oracle Connectivity Guidelines
Note:  The integration platform connects to PostgreSQL tables with ODBC 3.x. For the procedure, and information about the property options, limitations, and source and target schemas, see ODBC 3.x.
Process and Transformation LogFile Parser
This connector uses a .cxl script as the default extract source schema to parse log files. You can parse log files that contain transformation or process errors and messages. The log files can contain a single run of a transformation or process or multiple runs (of transformations, processes, or a combination of both).
Users are often interested in manipulating the log file data in various ways. The following are examples of possible use cases:
Load the contents of the log file into a database
Create reports with portions of the log file
Get a different view of the log
Create custom logs from the main log
Mine the log file for specific values, and then use those values in a transformation or a process step
This connector specifies field width in characters. The width of a field is the number of characters. For more details, see Determining Field Width in Characters or Bytes.
Connectivity Pointers
Here are some things to remember when using the LogFile Parser connector:
The Extract Schema is already selected for you. This default extract script file is called ProcessTransformationLogFile.cxl.
Records do not necessarily appear in the order read by the integration platform. For a complete list of the record types and fields, see Record Types.
Because the log file parser handles multiple logs appended to one another, a given run may or may not contain steps of a particular type. You may therefore need logic that writes record types at the proper places in the target; otherwise, you can end up with orphaned records, or with records associated together that do not belong together.
In a process, the last record type to be read is Project_Execution_Stats. The log file format for the LogFile Parser map example is shown below:
Example: Log File Format for the LogFile Parser Map
Process Start1
  SQL Step1 Begin
  SQL Step1 End
Process Stop1
Process Start2
  SQL Step2 Begin
  SQL Step2 End
Process Stop2
Process Start3
  SQL Step3 End
Process Stop3
Here is how it is read into the map:
  SQL Step1 Begin
  SQL Step1 End
Process1 Information
  SQL Step2 Begin
  SQL Step2 End
Process2 Information
  SQL Step3 End
Process3 Information
Here is the XML structure we want to create for the example:
<Information>
  <Process_Information>
    <SQL_Step>
    <SQL_Step>
  </Process_Information>
  <Process_Information>
    <SQL_Step>
    <SQL_Step>
  </Process_Information>
  <Process_Information>
    <SQL_Step>
  </Process_Information>
</Information>
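A rough sketch in plain Python (illustrative only; the actual parsing is performed by ProcessTransformationLogFile.cxl) of grouping the flat record stream into that nested structure:
flat_records = [
    {"type": "SQL_Step", "name": "SQL Step1", "action": "Begin"},
    {"type": "SQL_Step", "name": "SQL Step1", "action": "End"},
    {"type": "Project_Execution_Stats", "name": "Process1"},
    {"type": "SQL_Step", "name": "SQL Step2", "action": "Begin"},
    {"type": "SQL_Step", "name": "SQL Step2", "action": "End"},
    {"type": "Project_Execution_Stats", "name": "Process2"},
]
processes = []
pending_steps = []
for rec in flat_records:
    if rec["type"] == "Project_Execution_Stats":
        # The process record arrives after its steps, so attach what was buffered.
        processes.append({"process": rec["name"], "steps": pending_steps})
        pending_steps = []
    else:
        pending_steps.append(rec)
for p in processes:
    print(p["process"], [s["action"] for s in p["steps"]])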
Connector-Specific Notes
No known limitations at this time.
Property Options
You can set the following source (S) and target (T) properties.
Property
S/T
Description
ByteOrder
S
Allows you to specify the byte order of Unicode (wide) characters. The options are Auto (default), Little Endian, and Big Endian. With Auto, the byte order is determined by the architecture of your computer. Little Endian byte order, generally used by Intel and DEC Alpha machines, stores the least significant byte at the lowest memory address. Big Endian byte order, used by IBM System/370 computers, Motorola microprocessors, and most RISC-based systems, stores the bytes in the same order as the written binary representation.
ProgramVariables
S
Allows you to set or override program variable values. There is no default.
ReportReadingScriptFile
S
The script file, usually with a .cxl extension. It only appears here for verification purposes. The default .cxl file is ProcessTransformationLogFile.cxl.
StartOffset
S
If your source data file starts with characters that need to be excluded from the transformation, set the StartOffset option to specify at which byte of the file to begin. The default value is zero. The correct value may be determined by using the Hex Browser. For details, search for "hex values" in the documentation.
Note:  This property is set in number of bytes, not characters.
Encoding
S
Sets the character encoding for source and target files.
Note:  Shift-JIS encoding is used only in Japanese operating systems. UCS-2 is no longer a valid encoding name, but you may use UCS2 by changing the string value in the XML file.
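StartOffset counts bytes, not characters. A minimal illustration in plain Python (the file name and offset are hypothetical):
with open("process.log", "rb") as f:
    f.seek(512)       # StartOffset = 512: skip the first 512 bytes
    data = f.read()   # parsing would begin from this point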
Record Types
The 29 record types that are supported in this connector represent data collected from the log for the following process and transformation elements:
Processes
Process steps
Process sessions
Transformations
Transformation totals
Record Types for Process Steps
The following steps are specific to running a process. The listed record types are NOT part of a transformation-only log file.
Record Type Name
Field Name for Record Type
Description
Start_Step_End
 
Represents the data for a successful or unsuccessful Start Step in a process log file
 
Start_Step_Status
Status of the step after the run (for example, complete, failed)
 
Start_Step_Begin_Time
Time the Start Step begins
 
Start_Step_End_Time
Time the Start Step ends
 
Error_Description
Description of the error (if an error occurred)
 
Error_Time
The time the error occurred (if an error occurred)
 
Start_Step_Last_Error
Return code from the step
 
Error_Number
Error number associated with the error (if an error occurred)
Stop_Step_End
 
Represents the data for a successful or unsuccessful Stop Step in a process log file
 
Stop_Step_Status
Status of the step after the run (complete, failed, etc)
 
Stop_Step_Begin_Time
Time at the start of the Stop step
 
Stop_Step_End_Time
Time at the end of the Stop step
 
Error_Description
Description of the error (if an error occurred)
 
Error_Time
Time the error occurred (if an error occurred)
 
Stop_Step_Last_Return
Return Code from the step
SQL_Step
 
Represents a SQL Step in the process starting and stopping in the log file
 
SQL_Step_Action
Begin (Start) or End (Stop) for the step
 
Error_Time
Time error occurred (if an error occurred)
 
SQL_Step_Name
Name of the step in the process
 
Error_Number
Error number associated with the error (if an error occurred)
 
SQL_Step_Time
Time the SQL_Step_Action occurred
 
Error_Description
Description of error (if an error occurred)
 
SQL_Step_Status
Failed, successful, etc.
 
SQL_Step_Last_Return
Last return code returned from the Step
DTS_Step_End
 
Represents a DTS step run in the process log file
 
DTS_Step_Status
Status of the step after the run (complete, failed, etc)
 
DTS_Step_Name
Name of the step in the process
 
DTS_Step_End_Time
Time at the end of the DTS step
 
DTS_Step_Begin_Time
Time at the start of the DTS step
 
DTS_Step_Last_Return
Last Return code from the step
Decision_Step_End
 
Represents a Decision Step completing in the process log file
 
Decision_Step_Result
True or False
 
Decision_Step_Name
Name of the Decision step
 
Decision_Step_End_Time
Step end time
 
Decision_Step_Begin_Time
Step begin time
Application_Step_End
 
Represents an Executable (Exe) Step running in the process log file
 
Application_Step_Status
Status of the step after the run (for example, complete, failed)
 
Application_Step_Name
Name of the step in the process
 
Application_Step_End_Time
Time at the end of the Application step
 
Application_Step_Begin_Time
Time at the start of the Application step
 
Application_Step_Last_Return
The last return code from the Step
SubProject_Step_End
 
Represents a Sub-Process step running in a process log file
 
SubProject_Step_Status
Status of the step after the run (for example, complete, failed)
 
SubProject_Step_Name
Name of the step in the process
 
SubProject_Step_End_Time
Time at the end of the step
 
SubProject_Step_Begin_Time
Time at the start of the step
 
SubProject_Step_Last_Return
Last return code from the step
Queue_Step_End
 
Represents the execution of a Queue Step in the process log file
 
Queue_Step_Status
Status of the step after the run (for example, complete, failed)
 
Queue_Step_Name
Name of the step in the process
 
Error_Description
Description of the error (if an error occurred)
 
Queue_Step_End_Time
Time at the end of the Queue step
 
Queue_Step_Start_Time
Time at the start of the Queue step
 
Error_Time
Time error occurred (if an error occurred)
 
Queue_Step_Last_Error
Last error code returned from the step
 
Error_Number
Error number associated with the error (if an error occurred)
Scripting_Step_End
 
Represents the execution of a Scripting step in a process log file
 
Scripting_Step_Status
Status of the step after the run (for example, complete, failed)
 
Scripting_Step_Name
Name of the step in the process
 
Error_Description
Description of the error (if an error occurred)
 
Scripting_Step_End_Time
Time at the end of the Scripting step
 
Scripting_Step_Begin_Time
Time at the start of the Scripting step
 
Error_Time
Time error occurred (if an error occurred)
 
Scripting_Step_Last_Error
Last error code returned from the step
 
Error_Number
Error number associated with the error (if an error occurred)
Invoker_Step_End
 
Represents running the Invoker Step in the process log file
 
Invoker_Step_Status
Status of the step after the run (for example, complete, failed)
 
Invoker_Step_Name
Name of the step in the process
 
Error_Time
Time the error occurred (if an error occurred)
 
Invoker_Step_End_Time
Time at the end of the Invoker step
 
Invoker_Step_Begin_Time
Time at the start of the Invoker step
 
Error_Number
Error number associated with the error (if an error occurred)
 
Error_Description
Description of the error (if an error occurred)
DocValidator_Step_End
 
Represents a Document validator Step running in the process log file
 
DocValidator_Step_Status
Status of the step after the run (for example, complete, failed)
 
Error_Time
Time the error occurred (if an error occurred)
 
DocValidator_Step_Info
Information about the step
 
DocValidator_Step_Error_Description
Error Description Text in the log about the Step
 
DocValidator_Step_Name
Name of the step in the process
 
DocValidator_Step_End_Time
Time at the end of the DocValidator step
 
Error_Number
Error number associated with the error (if an error occurred)
 
DocValidator_Step_Error_Number
Error number associated with the DocValidator step
 
DocValidator_Step_Begin_Time
Time the DocValidator step starts
 
Error_Description
Description of the error (if an error occurred); may include additional error detail
XSLTTransformer_Step_End
 
Represents an XSLT Step running in the process log file
 
XSLTTransformer_Step_Status
Status of the step after the run (complete, failed, Error Descriptions, etc)
 
XSLTTransformer_Step_Name
Name of the step in the process
 
XSLTTransformer_Step_End_Time
Time at the end of the XSLT Transformer step
 
XSLTTransformer_Step_Begin_Time
Time at the start of the XSLT Transformer step
 
XSLTTransformer_Step_Last_Error
Last Error code from the Step
Project_Execution_Stats
 
Represents the statistics from a process running in the log file. In a process, this is the last record type to be read
 
Project_Execution_Time
Amount of time it took to execute the process
 
Serial_Number
Serial Number for the Product
 
Project_File
Process file that was run
 
Project_Execution_Status
Status of the step after the run (complete, failed, etc)
 
Project_Execution_Begin_Time
Time at the start of the process execution
 
Product
Product that ran the process
 
Prepare_Map_Time
Time to prepare any maps that were executed in the process run
 
Owner
Owner of the Product
 
Organization
Company that owns the product
 
Build_Version
Build version of the product (for example, 8.12.1.14 2006061416)
 
Project_Execution_Finish_Time
Time at the end of the execution of the process
 
Project_Init_Start_Time
Initialization time
 
Project_Execution_End_Time
Time at the end of the execution of the process
Transformer_Step_End
 
Represents the running of a Transformer Step in the process log file
 
Transformer_Step_End_Time
Time at the end of the Transformer step
 
Transformer_Step_Last_Error
Last error code from the step
 
Transformer_Step_Status
Status of the step after the run (for example, complete, failed)
 
Transformer_Step_Name
Name of the transformer step in the process
 
Transformer_Step_Begin_Time
Time at the start of the Transformer step
Skipped
 
Represents any steps in the process that are disabled and are therefore skipped over during the run
 
Skipped_Time
Time to skip the step
 
Skipped_Step
Type of step skipped
 
Skipped_Step_Name
Name of the skipped step
Iterator_Step_End
 
Represents an Iterator Step run in the process log file
 
Iterator_Step_Status
Status of the step after the run (for example, complete, failed)
 
Iterator_Step_Name
Name of the step in the process
 
Error_Description
Description of the error (if an error occurred)
 
Iterator_Step_End_Time
Time at the end of the Iterator step
 
Iterator_Step_Begin_Time
Time at the start of the Iterator step
 
Error_Time
Time the error occurred (if an error occurred)
 
Iterator_Step_Last_Error
Last error code from the Step
 
Error_Number
Error number associated with the error (if an error occurred)
Aggregator_Step_End
 
Represents the running of an Aggregator Step in the process log file
 
Aggregator_Step_Status
Status of the step after the run (for example, complete, failed, etc)
 
Error_Description
Description of the error (if an error occurred)
 
Aggregator_Step_Name
Name of the step in the process
 
Aggregator_Step_End_Time
Time at the end of the Aggregator step
 
Error_Time
Time the error occurred (if an error occurred)
 
Aggregator_Step_Begin_Time
Step begin time
 
Aggregator_Step_Last_Error
Last error code from the Step
 
Error_Number
The error number associated with the error (if an error occurred)
Transformation_Step_End
 
Represents the running of a transformation step in the process log file
 
Transformation_Step_Name
Name of the step in the process
 
Transformation_Step_Begin_Time
Start Time
 
Transformation_Step_End_Status
Status of the step after the run (for example, complete, failed)
 
Transformation_Step_End_Time
Time at the end of the transformation step
 
Error_Description
Description of the error (if an error occurred)
 
Error_Time
Time the error occurred (if an error occurred)
 
Transformation_Step_Last_Error
Last error returned from the step
 
Error_Number
Error number associated with the error (if an error occurred)
Record Types for Process Sessions
These sessions are specific to running a process. They are not part of a transformation-only log file or of a process log file that does not use process sessions.
Record Type Name
Field Name
Description
Queue_Session
 
Represents a Queue session that was used in the process
 
Queue_Session_Action
Begin (Start) or End (Stop) for the session
 
Error_Number
Error number associated with the error (if an error occurred)
 
Queue_Session_Name
Name of the session in the process
 
Error_Description
Description of the error (if an error occurred)
 
Queue_Session_Time
Time the Queue_Session_Action occurred
 
Error_Time
Time the error occurred (if an error occurred)
 
Queue_Session_Last_Return
Last return code from the session.
SQL_Session
 
Represents a SQL Session used in the process
 
SQL_Session_Action
Begin (Start) or End (Stop) for the session
 
Error_Time
Time the error occurred (if an error occurred)
 
SQL_Session_Name
Name of the Session in the process
 
Error_Number
Error number associated with the error (if an error occurred)
 
SQL_Session_Time
Time the SQL_Session_Action occurred
 
Error_Description
Description of the error (if an error occurred)
 
SQL_Session_Last_Return
Last return code from the session
Invoker_Session
 
Represents an Invoker Session used in the process
 
Invoker_Session_Action
Begin (Start) or End (Stop) for the session
 
Invoker_Session_Name
Name of the session in the process
 
Invoker_Session_Time
Time the Invoker_Session_Action occurred
 
Invoker_Session_Last_Return
Last return code from the Session
Aggregator_Session
 
Represents an Aggregator session used in the process
 
Aggregator_Session_Action
Begin (Start) or End (Stop) for the session
 
Aggregator_Session_Name
Name of the session in the process
 
Aggregator_Session_Time
Time the Aggregator_Session_Action occurred
 
Aggregator_Session_Last_Return
Last return code from the session
Iterator_Session
 
Represents an Iterator session that was used in the process
 
Iterator_Session_Action
Begin (Start) or End (Stop) for the session
 
Iterator_Session_Name
Name of the session in the process
 
Iterator_Session_Time
Time the Iterator_Session_Action occurred
 
Iterator_Session_Last_Return
Last return code from the session
Transformer_Session
 
Represents a Transformer session used in the process
 
Transformer_Session_Action
Begin (Start) or End (Stop) for the session
 
Transformer_Session_Name
Name of the session in the process
 
Transformer_Session_Time
Time the Transformer_Session_Action occurred
 
Transformer_Session_Last_Return
Last return code from the session
Record Types for Process and Transformation Messages
Messages are collected separately from other steps and sessions because they may or may not correlate to the step that precedes or follows them.
Errors, in contrast, are collected with the step or session that caused them, because an error can reliably be associated with that step or session.
Record Type Name
Field Name
Description
Debug_Messages
 
Represents a Debug message in the log file
 
Debug_Message
Text of the debug message
 
Debug_Message_Time
Time the message was logged
Information_Messages
 
Represents an Informational message logged
 
Information_Time
Time the informational message was logged
 
Information_Message
Text of the message that was logged
Warning_Message
 
Represents a warning message written to the log file
 
Warning_Message
Text written to the log
 
Warning_Message_Last_Return
Last return code from the logged message
 
Warning_Message_Time
Time the message was logged
 
Warning_Message_Info
More information about the warning
Record Types for Transformation-Only Runs
This information is logged for a stand-alone transformation and for a transformation run in a process. It is the collected information about the transformation-specific run.
Record Type Name
Field Name
Description
Transformation_Totals
 
Represents the information about a transformation that was run stand-alone or as a transformation in a process step.
 
Total_records_Buffered
Total number of records buffered.
 
Execution_Time
Time to run the transformation.
 
Execution_Begin_Time
Time at the beginning of the transformation.
 
Execution_End_Status
Status at the end of the transformation.
 
Map
Map name and location of the transformation being run (for example, xmldb:ref:/// .tf.xml)
 
Error_Description
Description of the error (if an error occurred)
 
Prepare_Map_Time
Time to compile and prepare the map for running
 
Total_Error_Count
Total number of errors in the transformation
 
Total_records_Deleted
Number of records deleted in the transformation
 
Total_records_Discarded
Number of records discarded in the transformation
 
Total_Records_Inserted
Number of records inserted in the transformation
 
Records_Read
Number of records read in the transformation
 
Total_Rejected
Number of records rejected in the transformation
 
Total_records_Updated
Number of records updated in the transformation
 
Total_Records_Written
Number of records written in the transformation
 
Total_Records_Rejected
Number of records rejected in the transformation
 
Map_Version
Version number of the map
 
Execution_End_Time
Time at the end of the execution of the transformation.
 
Error_Time
Time the error occurred (if an error occurred)
 
Execution_Init_Time
Time spent in initialization of transformation
 
Transformation_Last_Error
Last error message
 
Error_Number
The error number associated with the error (if an error occurred)
 
Organization
Organization that owns the product
 
Owner
Owner of the product
 
Product
Product that ran the process
 
Serial_Number
Serial number of the product
 
Build_Version
Build version of the product (example, 8.12.1.29 2006020116)
Transformation_Last_Error_Final
 
Represents an error that has occurred in a stand-alone transformation (not a transformation called in a process step). See the related note below this table.
 
Transformation_Last_Error_Final_Time
Last time the error was logged
 
Last_Error_Code
Next to last error code that was logged
 
Transformation_Last_Error_Final_Info
Last error message
 
Transformation_Last_Error_Time
Time the next-to-last error occurred and was logged
 
Transformation_Last_Error_Info
Information about the next-to-last error message
Transformation_Last_Error_Final record type
The Transformation_Last_Error_Final record type picks up the last two errors that are logged for a transformation-only execution. This record type is logged only for transformations that contain errors.
If a transformation is run stand-alone and there is an error, two more error messages are logged at the end of the transformation run. If the transformation is run in a process, this information appears in the Transformation_Totals layout instead. To find out whether there is any data in this layout, do one of the following:
Check the Error_Number in the Transformation_Totals
Handle the layout as you would any other multirecord layout that may or may not have records of a particular type
Progress (ODBC 3.x)
Progress is a relational database management system (RDBMS) that you can connect to through ODBC.
Installing Drivers and Troubleshooting Connections
See the following topics:
Installing an ODBC Driver
Oracle Connectivity Guidelines
Note:  The integration platform connects to Progress tables with ODBC 3.x. For the procedure, and information about the property options, limitations, and source and target schemas, see ODBC 3.x.
RBase (ODBC 3.x)
RBase is a database application. With this connector, the integration platform can read from and write to RBase with ODBC 3.x.
To connect to an RBase database, you must have three data files in one directory. The file name of each of the three files is the database name, and the file extensions are RB1, RB2, and RB3. For example, if you have a database called TEST, the three files you must have are TEST.RB1, TEST.RB2, and TEST.RB3.
The integration platform supports RBase through the RBase ODBC 3.x drivers. If you have the RBase application installed on your system, you should already have the files needed to transform to or from RBase (either installed on your system or available on the RBase installation disks).
This connector sets field width in bytes. What actually varies is the number of characters that fit into a given field. For more details, see Determining Field Width in Characters or Bytes.
Connector-Specific Notes
See ODBC 3.x.
Property Options
See ODBC 3.x.
Data Types
The only data type available for RBase is text.
Red Brick
Red Brick is a high-end relational database management system. With this connector, you can read and write Red Brick data tables.
This connector specifies field width in characters, which means the width of a field is literally that number of characters. For more details, see Determining Field Width in Characters or Bytes.
Connectivity Pointers
If your database has many tables, selecting the table from a list may take more time than typing the UserID.Tablename value in the Source Table/View box. If you enter the table name without the user ID and then connect to the database, you can reselect the table in the source data browser to view it.
If possible, create query statements using the source database application, so that syntax and validity can be checked by the native application. You can then copy the query statement and paste it in the Query Statement Box.
Connector-Specific Notes
No known limitations.
Property Options
You can set the following source (S) and target (T) properties.
Property
S/T
Description
AutoCommit
T
Allows you to automatically commit changes as they are made by each SQL statement, instead of waiting until the end of the transaction. If AutoCommit is set to true, there is no way to roll back changes once they have been made. The default is false.
CommitFrequency
T
Controls how often data is committed to the database. Default is zero, meaning that data is committed at the end of the transformation, allowing rollback on error. This is the slowest setting. When doing large transformations, this is not practical as it may produce too many transaction log entries. Setting the CommitFrequency to some nonzero value tells the connector to do a database commit after inserting or updating the specified number of records. This keeps the transaction log from getting too large but limits the restartability of the transformation.
ConstraintDDL
T
Allows you to specify additional SQL data definition language statements to be executed after the target table is created. This is similar to the support provided for SQL pass-through in the SQL import connectors. Each line must be a valid ODBC DDL statement.
For example, you could have the statements:
CREATE UNIQUE INDEX index1 ON mytable (Field1 ASC)
CREATE INDEX index2 ON mytable (Field2, Field3)
These statements would create two indexes on the table "mytable". The first one does not allow duplicates, and the index values are stored in ascending order. The second index is a compound index on fields Field2 and Field3.
The ConstraintDDL is executed only if the replace mode is used for the target. If there are any errors, the errors are written to the error and event log file. An error during transformation brings up the transformation error dialog box. If you want to ignore the DDL errors, you may continue the transformation.
ConstraintDDL also supports an escaping mechanism that allows users to specify DDL in the native SQL of the DBMS. Any statement that is preceded by an "@" is sent straight to the DBMS.
The statement
@CREATE INDEX pk_mytable ON mytable (Field1, Field2) WITH PRIMARY
is a DDL statement for creating a primary key for the table mytable. Some ODBC drivers do not support the SQL extensions needed to create a primary key with the ODBC variant of the SQL CREATE statement. So to create primary keys, use native SQL.
CursorType
S
Specifies the type of cursor to use for fetching records from the source table. The choices available are Forward Only, Static and Dynamic. Default is Forward Only. For more information, see Cursors.
DSNType
ST
The integration platform allows you to specify data sources or drivers to connect to in the Data Source Name list. DSNs are listed in the ODBC Data Source Administrator of Windows Administrative Tools in the Control Panel. The options are User & System, System, User, Driver, and File. Default is User & System.
DriverCompletion
ST
Allows you to control whether or not the driver prompts you for information. The available options are Prompt, Complete (default), Complete Required, and No Prompt. Prompt asks the user for every piece of connection information. Complete prompts the user for any information they omitted. Complete Required prompts the user only for information that is essential to complete the connection. No Prompt does not prompt the user for any information.
DriverOptions
ST
Enter any valid ODBC connect string options.
Encoding
ST
Allows you to select the type of encoding used with source or target files.
Encoding Notes
Shift-JIS encoding is meaningful only in Japanese operating systems.
UCS-2 is no longer considered a valid encoding name, but you may use UCS2. In the data file, change UCS-2 to UCS2.
The Encoding property is not the encoding of the database that you connect to, but rather the encoding in which the connector expects to receive SQL query statements to be sent to the database.
IdentifierQuotes
ST
All databases have what are called quoted identifiers. You use quoted identifiers to make a SQL statement parseable and to distinguish between columns and character data in SQL statements. For example, Oracle uses double quotes for column and table names in SQL statements and single quotes for character data. In a SQL statement, enclose identifiers that contain special characters or match keywords in identifier quote characters (these are also known as delimited identifiers in SQL-92). For example, the Accounts Receivable identifier is quoted in the following SELECT statement:
SELECT * FROM "Accounts Receivable"
If you do not use identifier quotes, the parser assumes there are two tables, Accounts and Receivable, and returns a syntax error that they are not separated by a comma. IdentifierQuotes has four options: Default, None, ", and '.
MaxDataLength
ST
The maximum data length for long data types. Default is 1 MB. You can reset this value based on your available memory and target requirements.
Some ODBC drivers have limitations concerning the maximum data length they can handle. If you choose an Application (ODBC) Source or target connector and the default setting is not 1 MB, the integration platform sets the default for the capabilities of that particular ODBC driver. It is not recommended that the MaxDataLength property be set any higher under those conditions.
ModifyDriverOptions
ST
Allows you to store the ODBC connection information. The default is true. If you set this property to false, you are prompted for your connection information each time you run the transformation.
PrimaryKey
T
Allows you to specify a list of field names that are used to make the primary key. The field names are delimited by commas. If the PrimaryKey property contains one or more field names, these names are included in the SQL CREATE statement when the connector is in replace mode.
There is one additional requirement for using the PrimaryKey property. The ODBC driver must support integrity enhancement facility (IEF). Only the more advanced ODBC drivers support this.
Synonyms
ST
If set to true, this property allows you to see synonyms. The alias names appear in the table list along with the tables. Default is false.
SystemTables
ST
The SystemTables property is only applicable if the user is logged onto the database as the database administrator. Only the DBA has access to SystemTables. If set to true, this property allows you to see all the tables created by the DBA. The system table names appear in the table list. The default is false.
TransactionIsolation
ST
Allows you to specify any one of five different isolation levels when reading from or writing to a database table with ODBC. The default is Serializable.
The ANSI SQL 2 standard defines three specific ways in which serializability of a transaction may be violated: P1 (Dirty Read), P2 (Nonrepeatable Read) and P3 (Phantoms).
The following lists the five supported isolation levels.
READ_UNCOMMITTED - Permits P1, P2, and P3.
READ_COMMITTED - Permits P2 and P3. Does not permit P1.
REPEATABLE_READ - Permits P3. Does not permit P1 and P2.
SERIALIZABLE - Does not permit any of P1, P2, and P3.
VERSIONING - Provides SERIALIZABLE transactions, but does so without a significant impact on concurrency.
For further details about transaction isolation levels, see the Microsoft ODBC SDK documentation.
UseCursors
T
Allows you to turn cursor support on and off. Default is false. If you set the property to true and the selected ODBC driver does not support cursor inserts, the integration platform uses the SQL INSERT mode of adding records.
For exports, cursor support is intended to enhance the performance of inserting records. This appears to be the case for desktop databases. For database servers, there is no noticeable change in insert speed; they execute prepared queries about as quickly as they handle cursor inserts.
Another complication of cursor inserts is that some drivers require that the target table be indexed, otherwise positioned updates (cursors) are not allowed. Two additional properties in the ODBC export connector address this issue: PrimaryKey and ConstraintDDL.
Views
ST
If set to true, this property allows you to see views. The view names appear in the table list along with the tables. Default is true.
ArraySize
T
Determines the number of rows sent to the server at one time. The default value is 1, meaning each row is sent to the server individually. Larger values buffer multiple rows and send them together. This improves speed but affects error reporting: a server error is not detected or reported until the next batch of records is sent to the server.
The maximum value allowed for this property is 100000. While the connector allows the value to be set that high, many drivers have lower limits. The connector will log a message indicating if the driver is forcing a lower value for the array size. In addition, the connector does not support arrays when there is a LOB-type field in the table, or when the (maximum) length of a character-type field is longer than 32767 characters. In these cases, a message will be logged indicating the array size has been reduced to 1.
Due to the way the connector attempts to support older drivers, array support requires that either BulkOperations be true (which means UseCursors must also be true) or both be false. If BulkOperations is false and UseCursors is true, the array size is reset to 1, and a message is logged indicating that this condition occurred.
WhereStmt
S
Provides a pass-through mechanism for SQL connectors where advanced users can construct the Where clause of the SQL query themselves. It can be used as an alternative to writing a lengthy query statement. This might be used to get the SQL database server to filter the data based on a condition before it is sent to the integration platform. Omit Where when you enter the clause. No default.
Note:  When the source connection is a Select statement, do not apply the WhereStmt. Instead, include the Where clause in your Select statements. This property enables data filtering when you select a table.
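For example, assuming a source table with hypothetical LAST_NAME and STATE columns, you might enter only the condition itself:
LAST_NAME = 'Smith' AND STATE = 'TX'
The connector incorporates this condition into the Where clause of the query it sends to the database, so only matching rows are returned.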
Data Types
The following data types are available:
binary
bit
byte
character
counter
currency
date time
double
guid
integer
long binary
long char
real
small integer
variable binary
variable character
If you are appending data to an existing table, the data type of each field uses the data type in the selected table by default.
Rich Text Format
Rich Text Format (RTF) is one of the special data publishing file formats that can be used in many applications. With this connector, the integration platform can write Rich Text Format files.
This connector sets field width in bytes. What actually varies is the number of characters that fit into a given field. For more details, see Determining Field Width in Characters or Bytes.
Connector-Specific Notes
No known limitations.
Property Options
You can set the following source (S) and target (T) properties.
Property
S/T
Description
Mode
T
Allows you to select the format in which the integration platform writes out the RTF file. The following mode options are available:
Label - Writes out the field name (in bold) and the data for each record. A colon is inserted between the field name and the data. Each field is written to a new line in the document (long text wraps).
Table - Writes out the data in a two-dimensional matrix format. This format is particularly appropriate for data publishing projects involving data such as part numbers and price lists.
CodePage
T
This translation table determines which encoding to use for reading and writing data. The default is ANSI, the standard in the US.
Data Types
All data in a Rich Text Format file is Text.
RightNow CX
RightNow CX is a web-based CRM (Customer Relationship Management) software platform that enables users to access information using a Web Services Interface.
Supported Versions
RightNow CX May 10 release and later
Installation Requirements
The user must have a functional RightNow CX account. Also, the Web Service API for your organization must be enabled within the RightNow CX platform.
Property Options
You can set the following source (S) and target (T) properties.
Property
S/T
Description
URL
ST
URL for the RightNow CX web services as given in the example below.
http://<rightnowinstance>.rightnow.com/cgi-bin/<rightnowinstance>.cfg/services/soap
Username
ST
Username for the RightNow Account you are trying to access.
Password
ST
Password for the RightNow Account you are trying to access.
Batch Size
S
Number of source records the connector caches before processing them in a map. Default is zero.
Show Children
S
A Boolean variable to control whether sub-objects are shown in metadata and in data.
True – Shows all sub-objects.
False (default) – Shows no sub-objects except ID and Named ID type sub-objects.
Flush Frequency
T
Number of operations the connector caches before writing them to the target. Default is zero.
Batch Response
T
Sets the path name for a batch response file, which is a reporting mechanism for the connector. The file provides detailed results for each object in a batch where the batch size is greater than one. Obtaining detailed results is useful for doing the following:
Capturing system-generated object IDs for use in future updates.
Correlating an error with its object and having enough information about the error for exception handling and error diagnosis.
A batch response entry is generated for each top-level record only.
Supported Entities
The connector supports both RightNow defined entities and custom entities. It also supports read/write from custom fields in both entity types.
Supported Operations
The RightNow CX connector supports CRUD operation types on both primary and custom entities.
Data Type Support
The data type in both source and target fields should not be changed. Changing the data type in a target field may cause write failure.
The following data types are used in RightNow CX:
Boolean
Date
DateTime
Decimal
Integer
Long
String
Connector-Specific Notes
Object ID Ignored in Insert operation
When you insert a new object into RightNow, the ID field is ignored and RightNow automatically assigns a new ID to the object.
File Attachment type sub-objects are not supported
Currently, the File Attachment type sub-objects are not supported. This includes FileAttachment, FileAttachmentAnswer, FileAttachmentCommon, and FileAttachmentShared.
Working with custom objects
The RightNow CX connector supports custom objects. Working with custom objects is similar to working with primary objects. The only difference is the object name used. Primary objects use the name of the class such as "Contact." Custom objects use a combination of the package name and the custom object name to uniquely identify the custom object. The package name and object name are joined by a dot (.). For example, for a custom object package called CO and a custom object named PurchasedProduct, you would reference this object as CO.PurchasedProduct.
RightNow Object Query Language (ROQL)
In a source connection, you can use a query statement to query for objects or fields. The RightNow CX connector supports RightNow Object Query Language (ROQL) only.
The RightNow Object Query Language (ROQL, pronounced "Rockwell") provides a query subsystem that allows clients of the API to perform SQL-like queries against the RightNow CX platform. The query language is based on a subset of Object Query Language (OQL), a text-based SQL-like query language that has been extended to handle object notation.
Syntax
ROQL supports both object query and fields query.
Object query
SELECT primaryObjectReference FROM primaryObject [WHERE conditionExpression] [LIMIT number_of_rows] [OFFSET start_index]
SELECT <Parent Object>.<Parent to Child Relationship Name> FROM <Parent Object>
SELECT <Child Object>.<Child to Parent Relationship Name> FROM <Child Object>
Fields query:
SELECT fieldList from primaryObject [WHERE conditionExpression] [LIMIT number_of_rows] [OFFSET start_index]
Special usage
LIMIT
LIMIT restricts the number of rows returned by the query. For example, the following query returns the first 100 Contact objects that have the contact first name populated.
SELECT Contact.Name.First FROM Contact LIMIT 100;
OFFSET
OFFSET specifies the start index for the returned results. For example, the following query returns 100 contacts beginning at the 101st entry.
SELECT Contact.Name.First FROM Contact LIMIT 100 OFFSET 100;
Relationship Queries
Relationship queries are defined to help navigate between related objects, serving as a shortcut for performing two selects during query operations. Relationships can be chained by adding a dot (.) followed by the appropriate relationship name. For example, the query below starts at Incident, goes to Organization, and then goes to Contact to list the last names of all contacts for the organization to which the incident belongs.
SELECT Incident.ParentOrganization.Contacts.Name.Last FROM Incident;
Query Custom Objects
Custom objects can be queried through Connect Web Services for SOAP in a similar fashion to the primary RightNow objects. The syntax used to query custom object data is as follows: Package.ObjectName in the SELECT clause and Package.ObjectName in the FROM clause. For example, if there were a CO package and an RMA object, the query would be this:
SELECT CO.RMA from CO.RMA;
Examples
Search all Contact objects:
SELECT Contact FROM Contact;
Search an Incident object with ID=7:
SELECT Incident FROM Incident I WHERE I.ID = 7;
Search a field in Name sub-object in Contact object:
SELECT C.Name.First FROM Contact C;
Search a field in Name sub-object in Contact object with WHERE clause:
SELECT C.Name.First FROM Contact C WHERE C.Name.First like 'C%';
Salesforce
Salesforce, a product of Salesforce.com, is a web-based customer relations management (CRM) software platform that enables users to access sales-related information on a Web Services repository.
Tip:  This documentation covers Salesforce version 2.5 and later. To set the version to use, before establishing a connection, enter a number for the API Version property in both the source and target datasets.
The Salesforce connector supports endpoint API version 42.0.
Salesforce is an open XML-based online platform that provides Simple Object Access Protocol (SOAP) access. SOAP provides a way for different operating systems, such as Windows and Linux, to communicate with each other. SOAP specifies the envelope for a message, expressed in XML, so a program in one computer can pass information to another computer. In addition, it specifies how the called program can return a response.
The Salesforce connector connects Salesforce data to virtually any flat file or database application. Salesforce cannot be held responsible for any use of third-party products that results in loss of or undesirable changes to data. For this reason, Salesforce strongly recommends that you use a separate Salesforce organization for testing and development and that you use, retrieve, and validate a backup of your data by using the Salesforce weekly export service. Contact Salesforce for information on establishing a separate Salesforce organization.
Tip:  Salesforce offers a test environment, called a sandbox, that is a mirrored representation of your production data and that does not access your live data. You can use the UseSandbox property to access the sandbox server. See Property Options. You can also access the test environment by logging on at https://test.salesforce.com.
You may have setup tasks for which you need to contact Salesforce. In addition to using a test environment, you can back up data using the Salesforce weekly export service.
This connector sets field width in bytes. What varies is the number of characters that fit into a given field. For more details, see Determining Field Width in Characters or Bytes.
All entities exposed by the Salesforce API are supported. Entity names change with each version of the API. This connector also supports custom entities.
Salesforce provides a diagram of its sales objects and a list of standard objects in its API. If these links are no longer current, search for their updates in the Salesforce help. To use the integration platform with Salesforce, enable the API for your organization within Salesforce. Direct questions regarding Salesforce, including activating API access, to Salesforce. Be sure to mention that you are using the integration platform.
Note:  The Salesforce connector is SHA-256 compliant.
This section includes the following topics:
Connectivity Pointers
Upserts, Inserts, and Updates with Relationships
Connector-Specific Notes
Property Options
Additional Information
Connectivity Pointers
Two security connection options are available: User ID and Password and Session Token.
User ID and Password – Enter your Salesforce user ID and password to connect. The connection is limited to the access rights of the user ID. For most purposes, you should use a user ID with an Administrator profile. Use a separate user ID that is not used for normal logins.
Session Token – Enter a session token in the User ID field or pass the value to the field using a macro to connect. No value is needed for the Password field. For more information on session tokens, see Session Tokens.
Query statement syntax for this CRM connector is SOQL. For details on how to write query statements, see the application documentation.
If your company is using a proxy, enter the appropriate information into the ProxyServer field in the source or target properties. For more information about these settings, see Property Options.
Upserts, Inserts, and Updates with Relationships
Upserts with relationships are especially useful if you want to insert a parent record and then upsert a related child record in the same map.
Inserts and updates with relationships are supported.
You can use external IDs to set other ID fields on an object without having to retrieve Salesforce IDs.
A special syntax in the target field value of a reference field lets the data carry an entity name, an external ID field, and a value for the external ID field relationship. Outside of this special syntax, normal values are permitted.
Example
Creating a Custom Field for Parent or Child Entities:
In your Salesforce account, create a custom field for the desired parent and child entities with settings for the unique and externalID attributes.
To insert, upsert, or update an object that has the external ID attribute selected, but not the unique attribute selected, you must have permission to view all data.
See Also
For more information, see the Salesforce web site. For useful information on custom field names, relationship names, upserts, and foreign keys, see the Salesforce AppExchange Web Services API Developer's Guide.
Connector-Specific Notes
Long Connection Times. When you connect to a large source with many records, it may take a long time to retrieve the records for viewing. If you open the Windows Task Manager, the status may say "not responding". This is expected behavior. After the records are read, the status returns to "running" and you can view the records.
Daylight Savings Time. Dates and times are not automatically changed for daylight savings time. Make adjustments to your data as needed for your location.
Access Rights
Records. You are only able to access or modify records in Salesforce that are available to the user ID that you are using. Therefore, some records may only be read-only.
Tables. Some Salesforce tables are only visible to a user ID, or only modifiable by a user ID, depending on the user permissions or on the features that are enabled for your organization.
Fields. Some fields in certain tables are read-only, or can be set only during insert, not during update operations. For example, the Last Modified Date and Last Modified By fields are read-only; they are implicitly set each time a user modifies a record. The Owner ID field on each table can be set by the integration platform only during insert. To change an Owner ID on existing records, you must transfer ownership within the Salesforce application. Also, some fields in the Case table cannot be changed after the records have been inserted. Finally, some tables, such as User and Product, do not allow deletes. You can, however, mark records in those tables as inactive.
Recycle Bin. You cannot access the Salesforce recycle bin using the integration platform.
Restricted values. Some fields have restricted values. Most commonly, all cross-reference ID fields (fields that end with ID) must reference valid entities of the appropriate type that the user has access to. For more information on ID fields, see Cross-Reference IDs, Direct Updates and Deletes, and Lookups.
Because of these restrictions, some insert, update, or delete operations may be rejected if the operation is not allowed for that user, or if the information is invalid. Check the integration platform log to verify which records were sent properly.
Configuration Tables. Some of the Salesforce tables that are accessible, depending on your access rights, are configuration tables, such as User, Product, Role and Profile. The User table is a list of users defined in your Salesforce organization. You can determine user information and Owner IDs that are listed in another table by querying the User table. You can also provision new users, provided you have licenses available, or update existing users using the User table through the integration platform. Note that you cannot change the active status of a user with the integration platform.
Property Options
You can set the following source (S) and target (T) properties.
Property
ST
Version
Description
API Version
ST
User-defined
Salesforce API version to use. This property must be set before establishing a connection. Default is 23.0.
AssignmentRule
T
2.5 and later
Allows you to leave ID fields blank and let Salesforce create the ID with its own dynamic rules, or allows you to use a default assignment rule.
None is the default, which indicates that no assignment rule should be used. Using Default Assignment Rule applies the assignment rule that the account administrator has assigned as the default. This option applies for Lead or Case entities only.
BatchResponse
T
2.5 and later
Path to a batch response file, which captures selected messages, errors, and other information returned through the connection. The file can be written locally after connecting to a Salesforce server. It contains detailed results for each object in a batch where the batch size is greater than one. This is useful in the following cases:
You need to capture system-generated object IDs for use in future updates
You need to correlate an error with its object and have enough information about the error for exception handling and error diagnosis
For more information, see Batch Response File.
MaxBatch Setting
The MaxBatch property must be set to greater than 1, or no batch response file is written.
ClientSideValidate
T
2.5 and later
Allows you to download and check field IDs for matches when doing updates, deletes, or upserts with the action key as the field ID. Default is false.
The effect of this property can vary with the MaxBatch value.
MaxBatch
ST
2.5 and later
Source property: Controls how many records the server is asked to send at one time. The default is 200 and the maximum is 2000.
Target property: Controls how many records are sent in one group for each insert, update, or delete operation. The default is 200 and the maximum is also 200.
Note:  The maximum value for the MaxBatch source property in Salesforce API versions 5.0 and earlier is 500.
If you use a large number in this setting, you may see faster transformation run times. However, larger numbers may increase the need for memory. If you experience memory problems, lower the number.
For more information, see MaxBatch.
ProxyAuth
ST
16.0 and later
Contains your proxy user IDs and passwords. Separate the entries with a colon, as in userid:password.
ProxyServer
ST
16.0 and later
Contains your proxy server name. You can append a port number by separating it with a colon, such as name.of.proxyserver:80.
QualifiedRecordNames
S
8.0 and later
Displays the Salesforce records as either partial or fully qualified record names. The default is partial. The full option displays each child record with its full name separated by a period. For instance, if Contact is a child of the parent Account, the child record name is Account.Contact.
For duplicate record type names, set this property to full to see the hierarchy of parent-child relationships of all record types, including different child record types with the same names.
QueryAll
S
16.0 and later
Specifies that the 'queryAll' REST method should be used, rather than the 'query' method. This will retrieve all archived and deleted records, in addition to the records that would be retrieved with the 'query' method. If a query statement is entered that includes the ALL ROWS clause, the query will be sent using the 'queryAll' method, regardless of the setting of this property.
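For example, a query statement such as the following (the entity and fields are illustrative) includes the ALL ROWS clause and is therefore sent with the 'queryAll' method regardless of this property:
SELECT Id, Name FROM Account ALL ROWS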
ResendOnError
ST
2.5 and later
Determines whether to resend request message on error. To resend even if an error occurs, set to true. If the resent message also results in failure, no entries are added to the batch response file for the failed records.
When set to false (default), if a TCP error or timeout occurs when receiving a response, the request message is not resent. Instead, the HTTP session returns an error.
In addition, if batch response is enabled and an error occurs while sending an insert, update, delete or upsert request, each record in the failed batch is listed in the batch response file with this error message: "Error or timeout reading response from Salesforce".
SessionTimeout
ST
16.0 and later
Sets the connection session timeout value. Default is 2 hours. Options are none, 30 minutes, 1 hour, 2 hours, 4 hours, and 8 hours.
Set the session timeout property to match the value that appears in the Setup tab in Salesforce.
This property is supported for user ID and password connections only. If you are using a session token to connect, there is no session timeout, and this property is ignored.
TriggerAutoResponseEmail
T
8.0 and later
Triggers autoresponse e-mail rules for lead and case transactions. Default is false. For more information, see Automatic E-mail Properties.
TriggerOtherEmail
T
8.0 and later
Triggers an external e-mail message when a case contact is edited. Default is false. For more information, see Automatic E-mail Properties.
TriggerUserEmail
T
8.0 and later
Triggers an e-mail message to internal users in the organization. Default is false. For more information, see Automatic E-mail Properties.
UpdateNulls
T
2.5 and later
Allows you to decide whether nulls or empty strings are sent to the server. The following options are available:
Always (default) – Nulls and empty strings are always sent to the server to set fields to null.
NullOnly – Null values are sent to the server, but empty strings are not sent.
Never – Null values and empty strings are never sent to the server.
UseSandbox
ST
2.5 and later
Points to a mirror test database for client sandbox testing. Allows you to easily switch between a sandbox server and a production server. The default setting is false, which indicates that a sandbox should not be used. If set to true, the connector constructs a URL to connect to Salesforce test areas. For more information, see the Salesforce documentation.
WhereStmt
S
2.5 and later
Provides a pass-through mechanism so that you can construct the Where clause of the SOQL query. Use it as an alternative to writing a lengthy query statement in the Query Statement text box. Omit the keyword Where when you enter the clause. No default exists for this property.
Review the Salesforce documentation for details on the use of order by, limit, and other Where conditions.
Note:  Do not apply the WhereStmt when the source connection is a Select statement. If you are using a Select statement, include the Where clause in that statement. The intent of this property is to enable data filtering when selecting a table.
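For example, to retrieve only records modified after a given date, you might enter just the condition; the date shown is illustrative, and the date literal uses standard SOQL format:
LastModifiedDate > 2011-06-01T00:00:00Z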
Automatic E-mail Properties
When you insert, update, upsert or delete records on the Salesforce server, you can add an extra header to outgoing SOAP messages. If you set any of the target e-mail properties to true, the extra header is added to the SOAP message, indicating that automatic e-mail rules should be triggered. Any properties whose value is false are omitted in the EmailHeader. If all properties are set to false (default), the EmailHeader is omitted from the SOAP message. Configure automatic e-mail rules on Salesforce for your organization.
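As an illustration, if TriggerAutoResponseEmail is set to true and the other two e-mail properties are false, the header added to the SOAP message resembles the following sketch (the element name mirrors the property name; exact namespace declarations vary):
<EmailHeader>
   <triggerAutoResponseEmail>true</triggerAutoResponseEmail>
</EmailHeader>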
MaxBatch
You can set the MaxBatch property to determine what size batch to send to the server. The server also submits the maximum batch size it allows. The integration platform uses the smaller of these two values.
For example, if you set the MaxBatch property to 50 and the server tells the integration platform that 200 are allowed, the application uses 50 (because it is the smaller number). If you set the MaxBatch property to 200 and the server indicates 100, then 100 is used.
Tip:  If you use a large number in this setting, you may see faster transformation run times. However, larger numbers may increase the need for memory. If you experience memory problems, lower the number.
To retrieve the ID of a newly inserted record
If the ID field is present in your record layout, and you want Salesforce to retrieve IDs for newly inserted records, do the following:
Set the MaxBatch property to 1; the ID of the newly inserted record is then retrieved from Salesforce at run time and placed in the ID field. If MaxBatch is not set to 1, the ID field is ignored.
To obtain the correct number of errors
Set the MaxBatch property to the value of 1. This ensures the correct error count is returned.
To avoid interleaving of child record types
If the number of records returned from a query is greater than the batch size set by the MaxBatch property, the returned record set is divided into batches. If subqueries return multiple child record types, the first batch contains a representation of the entire hierarchy with a partial number of child records from each subquery interleaved in the result.
An interleaved result with three subqueries presents the parent record and then some of the first children, some of the second children, some of the third children, then the rest of the first children, the rest of the second children, and the rest of the third children in the following batches. All records are returned but not in sequential order.
If this interleaving is not wanted, try to set MaxBatch high enough so that the parent and all of its children can be delivered in one response without their needing to be split into batches. The maximum MaxBatch value is 2000.
Data Types
Salesforce offers the following data types:
Base64
Boolean
Combo
Currency
Date
Datetime
Double
Email
Id
int
Lookup
Multipicklist (see note below)
Percent
Phone
Picklist
Reference
String
Text Area
URL
The Multipicklist data type is the same as the standard Picklist data type field, except that you can select multiple values instead of only one.
Additional Information
This section includes the following:
Performance Notes
Session Tokens
Security Tokens
Importing Records
SOQL Parsing
Cross-Reference IDs
Direct Updates and Deletes
Updating Salesforce Account Ownership
Lookups
Date Formatting
Filtering Salesforce
Connecting to Alternate Servers
Connecting to Alternate Endpoints
Debugging
Unmapped Fields
Performance Notes
Time-consuming operations may cause the server to become slow for your organization. The effect on performance is related to the complexity and number of requests, as well as HTTP transmission rates, and loads on Salesforce servers. In rare cases, a server may become blocked and you may need to wait for it to clear.
You can improve the performance in these cases by reducing the impact of the transactions:
Only update records that are required. For instance, if you are updating 100 records with new information, do not include additional records in the update that have no effect.
Reduce the size of queries. Restrict queries by using filters to minimize the number of records returned. In particular, you may use the Last Modified Date to get only new records.
Use Direct Updates. "Normal" nondirect updates do an implicit query, usually of the whole table. Use direct updates when you already have the Salesforce ID. In cases where you are matching on other fields, you may want to manually do a more restricted query to obtain the necessary Salesforce IDs. Similarly, use direct deletes when possible.
If you are updating existing Salesforce records, querying them first, modifying them offline and then updating them with the integration platform, the best method is to query the Salesforce ID and use Direct Updates and Deletes to update the records.
Session Tokens
Session tokens enable you to connect using a session token instead of logging in using a user name and password. This allows you to log in once by an external method, and reuse the session connection information for multiple instances or components.
Reusing the session information is more convenient and provides better performance than repeatedly sending a user ID and password information for each connection.
Tip:  Session tokens expire after two hours by default. You can change the session token timeout in Salesforce under Security > Controls.
A session token consists of a sessionID and serverURL. The session token must be in the following format:
sid:sessionId:serverURL
where sessionID is a unique ID associated with your session and serverURL is the URL of the web service that processes subsequent API calls.
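For example, a session token might look like the following; the sessionID and server values shown are illustrative only:
sid:00D000000000001!AQEAQExampleSessionId:https://na1.salesforce.com/services/Soap/u/23.0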
If you are calling a map from within a process, create the session token as a macro. Enter the macro in the User ID field in the session window.
To obtain a sessionID and serverURL for a session token, use the Salesforce Login Invoker.
Security Tokens
For added security, Salesforce only permits access from a specified range of IP addresses within an organization. Access from a computer with an IP address outside the range requires an additional security token. This token can be e-mailed to the principal e-mail address associated with an organization and used from any computer. The token takes the form of a login name and uses the original password for the account. Do not share the token with other users.
For information on obtaining and using a security token, see the Salesforce documentation.
Importing Records
The following example discusses the proper way to import Account and Contact records into Salesforce using new data, as well as existing data from a legacy system or prior software, based on the information about Lookups.
For this example, assume that you have two tables in a local database, one for Accounts and one for Contacts. Further, the Contacts table indicates the Account that each Contact belongs to. The linkage of Contacts to Accounts is through some field, or combination of fields, in the Account and Contact tables.
That linkage could be through the Account name. The Contact table could have a column named "Account Name" that indicates the Account Name that each Contact is associated with. Note that you cannot directly insert the Account Name into the Contact table.
The linkage may be through some combination of fields, such as the Account name and address.
The linkage may be through some third-party ID field, such as your own internal Account Number. In those cases, you should define a custom field on the Account table for that external ID field. When you insert the Account records, make sure to map that field and insert those IDs into the custom field in Salesforce. You can delete the custom field later, when the import is complete.
Insert the Accounts into the Account table, including any columns needed to link the Account and Contact tables.
Query the Account table for the Account ID field, plus any columns needed to link the Account and Contact tables, such as the Account Name or external ID custom fields. Store that query in a temporary local DB.
Note:  You cannot perform direct lookups on the Salesforce database within a transformation. To use the return of a Salesforce query in a transformation, store the return in a local database and use this temporary database in your transformation.
Insert the Contact records using an integration platform lookup function for the Account ID field to cross-reference the Contact and Account tables. Use the incore lookup both for speed and to preserve case-sensitivity on the lookup data.
SOQL Parsing
The Salesforce Object Query Language (SOQL) is used to construct query strings for the queryString parameter in the Salesforce query call. According to Salesforce, "SOQL allows you to specify the source object (such as Account), a list of fields to retrieve, and conditions for selecting rows in the source object."
This connector supports multiple record types when connecting to a data source using a relationship SOQL query. The SOQL feature was added to support relationship join queries. When retrieving source records, the connector finds the record type related to the query and populates it. The results of relationship queries are displayed using partially-qualified record names.
The following code examples take advantage of the SOQL query grammar that allows relationship joins. You can type these queries in the Query Statement or Query File parameters in a session.
Examples of SOQL Relationship Queries
Child-to-Parent Relationships
SELECT Id, Name, Account.Name FROM Contact WHERE Account.Industry='Retail'
Parent-to-Child Relationships
SELECT Account.Name, Account.Id, (SELECT Contact.FirstName, Contact.LastName from Account.Contacts) From Account where Account.Name like 'DJ1%'
Alias Notation
SELECT a.Name, a.Id, (SELECT c.Id, c.FirstName, c.LastName, c.AccountId FROM a.Contacts c) FROM Account a where Name like 'DJ1%'
Field Name Notation
SELECT Name, (SELECT CreatedBy.Name FROM Notes) FROM Account where Name like 'DJ1%'
Where Clauses in a Subquery
SELECT Name, (SELECT LastName FROM Contacts WHERE Title like 'QE') FROM Account WHERE Industry='Retail'
ALL ROWS Clause
If the ALL ROWS clause appears in the SOQL query, it will be ignored while parsing the query statement, and will be removed when sending the query to Salesforce. In this case, the query will be sent using 'queryAll'.
More Information on SOQL
See the Salesforce API documentation for specifics on relationship join queries on the Salesforce web site.
Cross-Reference IDs
Fields ending with "ID" in Salesforce are cross-referencing IDs. The value in a cross-reference ID field must be a valid Salesforce entity ID that the user has access to and it must be of the appropriate type.
IDs in Salesforce are case-sensitive, alphanumeric values. Be careful when comparing or looking up Salesforce IDs in some other system to ensure that it is done in a case-sensitive manner. For more information, see Lookups.
You can determine the cross-reference entity ID by querying the appropriate Salesforce table using the integration platform, or from within Salesforce using reports.
Entity IDs can be generated through the Reports tab in Salesforce by selecting to display the particular column in either step 2 or 3 of the Report Wizard. The step number depends on whether you are running a tabular or summary report.
Some cross-reference fields in Salesforce allow for an empty cross-reference. In these cases, you can clear the cross-reference field by setting it to a string of 15 zeroes (000000000000000). For more information, see Direct Updates and Deletes and Lookups.
Direct Updates and Deletes
Records in Salesforce are referenced by unique entity IDs. Using the integration platform, you can update or delete records using other keys and the integration platform determines the appropriate entity IDs for you by doing an implicit query of the table and matching the keys.
When you already have the Salesforce entity IDs for the records that you want to update or delete, it is faster to use "Direct Updates" (or deletes).
To use direct updates or deletes
1. Map the ID of the entity that you want to update or delete.
2. Specify only the ID column as the target key for the update or delete.
3. Within Target Keys and Output Mode Options, select the following options:
Update Mode: Update FIRST matching record and drop non-matching records.
Update Fields: Update only mapped fields.
Note:  If you select the Insert ONLY Non-matching Records Update Mode, the Update Fields option changes to Update ALL Fields. This is the expected behavior.

This configuration generates an error message advising you that the Update All Fields option is not supported. Due to limitations or restrictions in the Salesforce API, Update ALL Fields cannot be implemented and is not a supported option. The error message is the only method available at this time to communicate this limitation.
Updating Salesforce Account Ownership
In connections to Salesforce 16.0, it is possible to update account ownership in the OwnerID field if you have the Transfer Record permission granted by an administrator login in Salesforce.
Lookups
The integration platform can automatically determine the appropriate Salesforce entity ID when updating or deleting records and matching on a field other than the ID field. When joining data across tables in Salesforce, though, you must either already have the proper cross-reference entity ID, or you must use an integration platform lookup function to obtain it.
The ILookup Function instructs the integration platform to look for specific data in a field in the source file and replace that data in the target file with values from an external table.
To perform lookups
1. Insert the Account records, or whichever records do not require any cross-reference IDs for insert.
2. Query the records you just inserted using the integration platform. Query the ID field, plus any fields that you need to uniquely identify the records, such as the name, address, or some external ID and store the result in a temporary table.
3. Insert the Contact records. For the Account ID field on the Contacts record, use an integration platform lookup function to look up the appropriate Account ID for each Contact record.
Salesforce entity IDs are case-sensitive. Take this into account when linking the lookup table within a map. You may need an expression to deal with case based on your source. Applications such as Excel are not case-sensitive and may cause import errors or bad data in the system. Remember also that incore lookups have better performance. See the mapping help topics for details on creating lookups in maps.
Date Formatting
Some fields in Salesforce are time stamp or date fields that need time stamp or date values when inserting or updating those fields. The integration platform can help you transform data as necessary.
Some other data sources have an explicit notion of a time stamp or date field also, while some deal with these as strings. For example, Oracle and SQL Server have an internal notion of a time stamp or a date, while Excel, ASCII, and Unicode delimited files do not. For text-based files, such as Excel or ASCII/Unicode text files, use the CDate() or DateValMask() functions to transform the dates to the internal time stamp or date type.
The DateValMask Function is used to convert formatted date strings into real date values based on a date edit mask.
Example
DateValMask(Fields("Field Name"), "mm/dd/YY1950")
The CDate Function coerces a date string into a Date data type.
Example
CDate(Fields("Field Name"))
Filtering Salesforce
If you are using Salesforce as a source connector, be aware that an expression filter in SourceFiltering works differently than it does for other connections. If an expression conforms to a specific syntax, the query is sent to Salesforce through the Salesforce API and the filtering is done there, before the records are retrieved across the network.
This can greatly improve the speed of a transformation.
If a query does not meet the limitations, but is still a valid filter for the integration platform, all the records are retrieved and the filtering is performed locally.
To be transmitted to Salesforce, the expression must be in the following format:
field <comparison> value
Only numbers, strings, field references, TRUE and FALSE, and the DATEVALUE() and DATEVALMASK() functions are permitted. These can be combined using comparison operators, such as greater than, less than, and equal to. The IF… THEN… ELSE statement can also be used, with the limitation that its result must be TRUE or FALSE. See the following example:
If Fields("Location") == "TX" And Fields("Type") <>"Rent" Then
 
TRUE
 
Else
 
FALSE
 
End If
The above is filtered at Salesforce and only the records that have a Location of "TX" and a Type other than "Rent" are retrieved.
A query that uses other expressions, such as the following, requires all the records to be retrieved over the network and then filtered locally:
Left(Trim(Fields("First Name")),1) == "J"
Note:  Local filtering is performed on all source records whether or not valid filtering takes place at Salesforce, but because the same condition is applied, this has no additional effect.
Connecting to Alternate Servers
To connect to a server besides Salesforce, use the environment variable DJ_SFDC_LOGINSERVER.
To set DJ_SFDC_LOGINSERVER
1. Before launching the integration platform, open the System Control Panel and select the Advanced tab and then Environment Variables.
2. In the System Variables section, click New.
3. Enter DJ_SFDC_LOGINSERVER in the Variable name field.
4. Enter the name of the server in the Variable value field.
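If you prefer to work from a command prompt, the standard Windows setx command sets the same variable; the /M switch writes it at the system level and requires an elevated prompt. The server value below is a placeholder:
setx /M DJ_SFDC_LOGINSERVER "login.example-server.com"
The same approach works for the other DJ_SFDC_* variables described in the following sections.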
Connecting to Alternate Endpoints
To connect to a different endpoint, use the environment variable DJ_SFDC_ENDPOINT.
To set DJ_SFDC_ENDPOINT
1. Before launching the integration platform, open the System Control Panel and select the Advanced tab and then Environment Variables.
2. In the System Variables section, click New.
3. Enter DJ_SFDC_ENDPOINT in the Variable name field.
4. Enter the endpoint in the Variable value field.
Connecting to Alternate Salesforce Instance URL
To connect to a different Salesforce Instance URL, use the environment variable DJ_SFDC_INSTANCE_URL.
To set DJ_SFDC_INSTANCE_URL
1. Before launching the integration platform, open the System Control Panel and select the Advanced tab and then Environment Variables.
2. In the System Variables section, click New.
3. Enter DJ_SFDC_INSTANCE_URL in the Variable name field.
4. Enter the URL of the Salesforce instance that you want to connect to in the Variable value field.
Debugging
For debugging Salesforce transformations on Windows systems, use the environment variable DJ_SFDC_DEBUG. It enables the writing of request and reply files of the XML data sent to and received from the Salesforce server. These files, named post.request.n and post.reply.n, are written to the current working directory.
The creation of these files has a performance cost, so enabling this variable in a production environment is not recommended. However, the files can be helpful for troubleshooting issues when working with technical support engineering.
To set DJ_SFDC_DEBUG
1. Before launching the integration platform, open the System Control Panel and select the Advanced tab and then Environment Variables.
2. In the System Variables section, click New.
3. Enter DJ_SFDC_DEBUG in the Variable name field.
4. Enter Yes in the Variable value field.
Unmapped Fields
If you run a map that includes unmapped fields and connects to Salesforce, the integration platform does one of the following:
If the map was created in version 9, all unmapped fields are marked as NULL.
If the map was created in version 10 and you did not specify a batch response file, all unmapped fields are ignored unless all of the following conditions apply:
The fields are creatable (metadata value is true).
The fields are not nillable (do not allow a nil or null value).
The fields do not have a default value.
If these three conditions exist for a field, the integration platform returns an error message stating that the field requires a value.
SAP Business One
SAP Business One (SAP B1) is an integrated Enterprise Resource Planning (ERP) system. This connector uses the SAP B1 DI API to integrate with SAP B1.
SAP B1 Versions Supported
This connector supports the following versions of SAP B1:
SAP Business One 8.8
SAP Business One 8.81
Installing the SAP B1 Connector
Note:
Before you install the SAP B1 connector, the SAP B1 client must be installed and running on the same machine where the connector will be used. The following must also be installed on this machine:
MS SQL Server Native client
the DI API for SAP B1
If these conditions are not met, the connector will be unusable and you will get an error message.
To install the connector
1. Go to the subdirectory B1DIAPI on the machine where the SAP B1 Server is installed. For example, C:\Program Files\SAP\SAP Business One Server\B1_SHR\B1DIAPI.
2. Copy the entire folder contents to the machine where SAP B1 Client is installed.
3. Run setup.exe to install the DI API on the SAP B1 Client machine.
After installation, you can find the following subdirectory with the DI API client libraries:
C:\Program Files\SAP\SAP Business One DI API\DI API 88\
This directory contains the SAPbobsCOM88.dll and four additional supporting DLLs.
To install the connector on other machines, you can copy the DI API DLLs manually and register them using this command:
regsvr32 SAPbobsCOM88.dll
For the SAP B1 connector to access the SBO client libraries, you must copy the JCO (Java Connector) library, sbojni.dll, to the C:\windows\system32 folder. This library is available in the connector package.
Connection Information
Note:  If, when selecting the SAP B1 connector in Map Designer, you get a message indicating that the connector may not be licensed, you may need an updated license file. Contact your Account Executive for this file.
Enter the following connection information:
Property
Description
ServerName
Enter one of the following:
Server Name
IP Address of the Server, or
Localhost (if the integration platform and the SAP B1 8.8 server are on the same machine)
Company/Database
Company Database Name.
SAP Username
Username for connecting to the SAP B1 Server.
SAP Password
Password for connecting to the SAP B1 Server.
Table
Name of the SAP B1 entity to connect to.
Properties
You can set the following source (S) and target (T) properties.
Property
S/T
Description
Server Name
ST
One of the following:
Server Name
IP Address of the Server, or
Local host (if the integration platform and SAP B1 servers are on the same machine)
Company/ Database
ST
Name of the company database.
SAP Username
ST
Username for connecting to the SAP B1 server.
SAP Password
ST
Password for connecting to the SAP B1 server.
Table
ST
Select the Table (SAP Business One Object). This is the source or target object that data is read from or written to.
Batch Size
S
Number of records the integration platform buffers before processing them in a map and transferring the data to the target.
Trusted
ST
Specifies the mode used to connect to the SAP B1 server:
True – Trusted mode; the Database Username and Password are not required.
False – Values for DB UserName and DB Password are mandatory.
DBServerType
ST
Database type, one of the following:
SQL Server 2008
SQL Server 2005
MS Sql
DB2
Sybase
MaxDB
DB Language
ST
Database server language (for example, English or Spanish).
DB UserName
ST
UserName for connecting to the SAP B1 database. Not needed if the Trusted option is set to True.
DB Password
ST
Password for connecting to the SAP B1 database. Not needed if the Trusted option is set to true.
LicenseServer:Port
ST
Required if the SAP B1 License Server is running on a different server from the DBServer or if the port is not 30000 (the default).
Batch Response
T
For delete operations, one batch response entry exists for each delete action.
For insert and update operations, one batch response entry exists for each insert or update action.
The returnCode for an individual operation shows the result of the operation execution. A successful execution returns 0.
For failed operations, the errorDescription tag shows a detailed error message. This includes both the SAP B1 error message and error messages generated by the integration platform. When the returnCode is 0 (success), this tag returns the entity’s unique ID number.
Supported Operations
The SAP B1 connector supports the following operations, although not all entities support all operations. See Entity Support for options available for each entity.
Create (C)
Read (R)
Update (U)
Delete (D)
Note:  The Delete operation invokes either the Delete, Remove, or Cancel method depending upon the supported operation of the particular entity.
Entity Support
The SAP Business One Connector supports the following entities and corresponding operations.
Note:  Although Map Designer may allow or require the use of action keys, they are not supported by this connector. The SAP B1 Connector only uses the keys specified in the table below.
 
Entity
Supported Operations
Key Field
BusinessPartnerGroups
CRU
Code
BusinessPartners
CRUD
CardCode
ChartOfAccounts
CRU
Code
Contacts
CRU
ContactCode
CreditNotes
CRU
DocEntry
DeliveryNotes
CRU
DocEntry
Drafts
CRUD
DocEntry
EmployeesInfo
CRUD
EmployeeID
Invoices
CRU
DocEntry
ItemGroups
CRUD
Number
Items
CRUD
ItemCode
LengthMeasures
R
-
Orders
CRUD
DocEntry
PriceLists
CRU
PriceListNo
ProductionOrders
CRUD
AbsoluteEntry
PurchaseCreditNotes
CRU
DocEntry
PurchaseDeliveryNotes
CRU
DocEntry
PurchaseInvoices
CRU
DocEntry
PurchaseOrders
CRUD
DocEntry
PurchaseReturns
CRU
DocEntry
Quotations
CRUD
DocEntry
Returns
CRU
DocEntry
SalesForecast
CRUD
Numerator
SalesOpportunities
CRUD
SequentialNo
SalesPersons
CRUD
SalesEmployeeCode
SalesStages
CRU
SequenceNo
SalesTaxCodes
R
-
Service Calls
CR
ServiceCallID
ServiceContracts
CRU
ContractID
ShippingTypes
CRU
Code
Users
CRUD
InternalKey
VatGroups
R
-
WarehouseLocations
R
-
Warehouses
R
-
WeightMeasures
R
-
WithholdingTaxCodes
R
-
UDO (User Defined Objects) - Type: Document
CRU
DocEntry
UDO (User Defined Objects) - Type: Master Data
CRUD
Code
Upsert with Relationships
Upsert is not supported at this time.
Limitations
The SAP B1 connector has the following limitations:
The Delete operation invokes either the delete, remove, or cancel method, depending on the supported operation of the particular entity. However, the business logic prevents the deletion of entities that are in use, to maintain the referential integrity of the data. In such cases, the Batch Response shows a detailed message returned by SAP B1.
Update and Delete operations for certain child entities are not supported due to limitations in the DI API.
The Time data type for a UserDefinedField is not supported. This is a limitation of the DI API.
There are reports of memory leakage when using JCO with the integration platform API. This leakage may result in increased consumption of resources on the server where the connector runs. In rare instances, when resources are limited and a large number of records are involved, the process may terminate. See the following SAP OSS Notes for details:
1034147 - JCO_JVM shuts down with large payload
1157304 - JCO_Failed connection to SBO produces memory leak
1079163 - DI_Memory Leak in Add/Update BP With Big Number of Contacts.
Data Type Support
The SAP B1 Connector supports these data types:
Boolean
Date
Double
Float
Integer
String
Debugging Tips
When building maps using SAP B1 as a target, you can use the REFDI.CHM reference help file distributed with the DI SDK to see relationships between entities and record types. This help file lists which records you can add as children to other records. It also lists the supported operations and key fields for each entity.
Additional Information
If the user does not have access to a child object of a particular entity, they will not see that child object in the list of objects available for viewing through the data browser.
SAP IDoc (eDoc)
Systems, Applications, and Products in Data Processing (SAP) is an enterprise-wide information system designed to coordinate all resources, information, and activities that are required to complete business processes such as order fulfillment and billing. Intermediate Document (IDoc) is an SAP format to transfer the data for a business transaction.
Schema File Required
You can obtain an SAP IDoc schema file (in W3C XML format) from SAP that contains the structure for your SAP IDoc file connections. If your SAP IDoc file structure is different, you may have to modify the template.
To make changes to an SAP IDoc template file
1. Import the file into DataConnect v11 as a text artifact.
2. Open the file and make the required changes.
3. Save the file with a different name in the required project folder.
To obtain a template file, contact your sales representative. These template files are standard schemas.
To connect to an SAP IDoc source or target file, you must set a schema in the IDocSchemaFile property.
Note:  You cannot connect directly to the schema. You must select a schema file from the current project that contains the schema you want to use.
Property Options
You can set the following source (S) and target (T) properties.
Property
S/T
Description
Encoding
ST
To set encoding for reading source and writing target data, select a code page translation table.
Default is OEM.
To use ANSI, select ISO8859-1.
For Windows CodePage 1252, select CP1252.
IDocSchemaFile
ST
Used for parsing data and generating a new schema during design time. Select a schema file that contains the schema you want to use.
RecordSeparator
ST
Sets the character that is used to mark the end of a record.
The default is CR-LF (carriage return-line feed). An SAP IDoc file is expected to use this default between records.
To use a different record separator, select the required option. For the list of options, see Element Separator.
If the required option is not available and is a printable character, then enter the character in the drop-down box.
If the record separator is not a printable character, replace CR-LF with a backslash (\), an X, and the hexadecimal value for the separator. For example, for a check mark, enter \XFB.
For a list of the 256 standard and extended ASCII characters, search for "hex values".
Version
ST
Select the required version. This connector supports SAP IDoc versions 3.x and 4.x.
Element Separator
This connector supports the following element separators:
CR-LF
SOH (0001)
STX (0002)
ETX (0003)
EOT (0004)
ENQ (0005)
ACK (0006)
BEL (0007)
BS (0008)
HT (0009)
LF (000A)
VT (000B)
FF (000C)
CR (000D)
SO (000E)
SI (000F)
DLE (0010)
DC1 (0011)
DC2 (0012)
DC3 (0013)
DC4 (0014)
NAK (0015)
SYN (0016)
ETB (0017)
CAN (0018)
EM (0019)
SUB (001A)
ESC (001B)
FS (001C)
GS (001D)
RS (001E)
US (001F)
SP (0020)
! (0021)
" (0022)
# (0023)
$ (0024)
% (0025)
& (0026)
' (0027)
( (0028)
) (0029)
* (002A)
+ (002B)
, (002C)
- (002D)
. (002E)
/ (002F)
: (003A)
; (003B)
< (003C)
= (003D)
> (003E)
? (003F)
@ (0040)
[ (005B)
\ (005C)
] (005D)
^ (005E)
_ (005F)
` (0060)
{ (007B)
| (007C)
} (007D)
~ (007E)
DEL (007F)
SAP IDoc
Systems, Applications, and Products in Data Processing (SAP) applications, built around the R/3 system, provide the capability to manage financial, asset, and cost accounting, production operations and materials, personnel, plants, and archived documents. The R/3 system runs on a number of platforms and uses the client/server model. An R/3 version is also provided for IBM's AS/400 platform. The latest version of R/3 includes a comprehensive Internet-enabled package.
SAP has recast its product offerings under a comprehensive Web interface, called mySAP.com, and added new e-business applications, including customer relationship management (CRM) and supply chain management. More information is available at www.sap.com.
This connector sets field width in bytes. What actually varies is the number of characters that fit into a given field. For more details, see Determining Field Width in Characters or Bytes.
The SAP (IDoc) connector can use either the relevant SAP IDoc definition file exported from the SAP applications or the XML IDoc schemas downloaded from SAP (at http://ifr.sap.com) to parse and build SAP (IDoc) files. Using a sample SAP (IDoc) data instance is optional.
SAP (DMI) with ODBC
The SAP Data Migration Interface (DMI) is a defined interface published by SAP; it defines how import files are to be laid out, so that the DMI can load them into the various SAP modules. The import files are flat files, with multiple structured schemas.
The Connect Premium for SAP ODBC driver and Connect ADO provider are designed to use the appropriate BAPIs and RFCs in a production SAP system. This provides full read/write transactional capabilities through standard open interfaces.
Map Editor supplies SAP DMI layout dictionaries that make mapping any source into the SAP DMI easy. Look in the Common directory for the Connections folder, then open the SAP folder. Many dictionary (.djd) files are included in the SAP folder for your convenience.
To access a SAP DMI file with ODBC, see ODBC 3.5.
Using the SAP (IDoc) connector
SAP IDocs are proprietary layout definitions corresponding to common business transactions such as purchase orders, invoices, material master information and more. They are published as XML schemas and are used as templates in Document Schema Designer. Documentation is located at the SAP Online Help site at help.sap.com. SAP IDoc XML metadata schemas are available for download directly from the SAP Interface Repository.
SAP IDoc internal metadata definitions are exported from SAP applications as text files.
SAP Interface Repository and IDocs
SAP XML IDocs are available from the Interface Repository located at ifr.sap.com. The site is organized by Module and SAP release number. Navigate to the appropriate Module and select the relevant release number to scroll through a list of available IDocs. SAP employs a naming convention that relates the IDoc to its business function and its release version. Download links and documentation are accessed from this list.
IDocs are configured and linked to programs designed to process the application events associated with a specific document in an SAP system. Follow the steps below to transfer the downloaded IDoc to your local default installation directory for use with Integration product software.
Transferring an IDoc to Your Local Default Folder
SAP IDocs should be stored in the installation directory in a SAP subdirectory. The path is InstallDir\Common\LegacyMetadata\SAP.
After you download and unzip the IDoc, save it to this directory.
IDoc Schemas
IDocs consist of segments and fields within those segments. Each IDoc must include one control record (Header Record - EDI_DC40). The Header Record contains transmission and processing information. Based on a profile set up in the ALE (Application Link Enabling) interface, the control segment informs SAP how to treat an incoming IDoc and what routine to follow to produce the expected result. In the case of outbound IDocs, the control record is generally not needed by the receiving system unless it is also SAP.
Segments that follow the Control Record make up the body of the business document. Six fields, called Data Control Segments, are found at the beginning of each segment. These six fields retain segment identification information and are not defined in the SAP IDoc schema. When you load the Source/target connector, these fields are added to the beginning of the Data Control Segment. The following table gives a description of these fields.
Segment ID
Description
Value
SEGNAM
Segment (external name)
30
MANDT
Client
3
DOCNUM
IDoc number
16
SEGNUM
Segment number
6
PSGNUM
Number of superior parent segment
6
HLEVEL
Hierarchy level of SAP segment
2
The fields that follow each Data Control Segment hold information that pertains to the specific business transaction.
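To make the layout concrete, the following sketch (Python, not part of the product) slices these six control fields from the front of a segment record, assuming the Value column above gives each field's width in bytes and that the record uses single-byte characters:

CONTROL_FIELDS = [
    ("SEGNAM", 30),  # Segment (external name)
    ("MANDT", 3),    # Client
    ("DOCNUM", 16),  # IDoc number
    ("SEGNUM", 6),   # Segment number
    ("PSGNUM", 6),   # Number of superior parent segment
    ("HLEVEL", 2),   # Hierarchy level of SAP segment
]

def parse_control_fields(record):
    # Slice the fixed-width control fields from the start of the record.
    fields, offset = {}, 0
    for name, width in CONTROL_FIELDS:
        fields[name] = record[offset:offset + width].rstrip()
        offset += width
    return fields  # business fields begin at offset 63 (30+3+16+6+6+2)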
In Map Editor, each segment of the IDoc is treated as an individual record type. This gives you complete control of each segment: you can decide where, when, and how a particular segment is read or written.
"Complete control" also means that you can decide if a record is read or written at all. Many IDocs contain looping or repeatable segments. A looping segment may be hierarchically dependent on preceding segments or distinguished by a qualifier value, a common feature in many EDI (Electronic Data Interchange) formats. The Partner Information segment, for example, might be the vendor, customer, sold-to-party, ship-to-party, or some other partner. An IDoc may configure the Partner Information segment to repeat several times. The partner type qualifier value for Customer is "KU", so if the partner is a Customer, the "KU" value can be used to control the segment looping to write only one iteration.
Mapping IDocs
 
Source/Target
Task
Source
If you are working with an IDoc as a source file, navigate to the Source Connection window in Map Designer to choose the SAP (IDoc) adapter and the .ds.xml file, then drag the fields from the Source segments to your target fields. In looping (repeatable) segments, you can use a conditional expression to examine the contents of the qualifier field and accept only those records or segments that contain the information you want in your target.
Target
If you are working with an IDoc as a target file, navigate to the Target Connection window in Map Editor to choose the SAP (IDoc) adapter and the .ds.xml file, and to name the target file.
The .ds.xml file provides part of the Control Record information (IDoc type, for example). Your Trading Partner, or the contact person who configured the ALE profile, can provide Message Type, Partner Profile names, port number, and other information.
The .ds.xml file fills in some of the information required in the Data Control segments. Some of the Control Field values are identical to the Control Record field values, but some are user-defined. Doc Num, for example, can be hard-coded as "0000000000000000". You have to set up a counter to increment Segment Count and add multiple ClearMapPutRecord actions to repeatable segments if you want to write those records to your target. Then navigate to the Map tab and drag each Source field to its counterpart in the IDoc Target layout.
Schema File Required
You can obtain an SAP IDoc schema file (in W3C XML format) from SAP that contains the structure for your SAP IDoc file connections. If your SAP IDoc file structure is different, you may need to modify the template.
To make changes to an SAP IDoc template file
1. Import the file into Project Explorer as a text artifact.
2. Open the file in Project Explorer using the Text Editor.
3. Make your changes.
4. Save the file with a different name.
To obtain a template file, contact your sales representative. These template files are standard schemas.
Property Options
You can set the following source (S) and target (T) properties.
Property
S/T
Description
Encoding
ST
To set encoding for reading source and writing target data, select a code page translation table. Default is OEM. To use ANSI, select ISO8859-1. For Windows CodePage 1252, select CP1252.
SchemaFile
ST
Used for parsing data and generating a new schema during design time. Select a schema file that contains the schema you want to use.
RecordSeparator
ST
Sets the character used to mark the end of a record.
The default is CR-LF (carriage return-line feed). An SAP IDoc file is expected to use this default between records. To use a different record separator, click the RecordSeparator property and select an option from the list. See Element Separator below for the list of options.
If the record separator is not one of the choices on the list and is a printable character, highlight CR-LF and then enter the correct character.
If the record separator is not a printable character, replace CR-LF with a backslash (\), an X, and the hexadecimal value for the separator. For example, for a check mark, enter \XFB. For a list of the 256 standard and extended ASCII characters, search for "hex values."
Validation
ST
This setting determines whether or not the IDoc file is checked for validity. The default is false. If you want to validate the IDoc file, change this setting to true.
Version
ST
This connector supports SAP IDoc versions 3.x and 4.x.
Element Separator
This connector supports the following element separators:
CR-LF
SOH (0001)
STX (0002)
ETX (0003)
EOT (0004)
ENQ (0005)
ACK (0006)
BEL (0007)
BS (0008)
HT (0009)
LF (000A)
VT (000B)
FF (000C)
CR (000D)
SO (000E)
SI (000F)
DLE (0010)
DC1 (0011)
DC2 (0012)
DC3 (0013)
DC4 (0014)
NAK (0015)
SYN (0016)
ETB (0017)
CAN (0018)
EM (0019)
SUB (001A)
ESC (001B)
FS (001C)
GS (001D)
RS (001E)
US (001F)
SP (0020)
! (0021)
" (0022)
# (0023)
$ (0024)
% (0025)
& (0026)
' (0027)
( (0028)
) (0029)
* (002A)
+ (002B)
, (002C)
- (002D)
. (002E)
/ (002F)
: (003A)
; (003B)
< (003C)
= (003D)
> (003E)
? (003F)
@ (0040)
[ (005B)
\ (005C)
] (005D)
^ (005E)
_ (005F)
` (0060)
{ (007B)
| (007C)
} (007D)
~ (007E)
DEL (007F)
Data Type
These are the data types of the fields in your target data table:
boolean
date
decimal
double
float
record
string
time
If you are writing to a new file, you may use any of these available data types from the list that appears when you click the arrow in the Type cell of a field.
If you are appending data to an existing file, the data type of each field becomes the data type in the file.
SAS Transport Format
With this connector, the integration platform reads and writes SAS transport data sets for mainframe and PC SAS versions 5 and later. Because the procedures differ slightly between versions 5.x and 6.x and later, the procedure section is divided into two parts, each specific to the import and export commands. Refer to the SAS documentation for specific information about your version of SAS.
This connector sets field width in bytes. What actually varies is the number of characters that fit into a given field. For more details, see Determining Field Width in Characters or Bytes.
ODBC and SAS
SAS data can also be read and written easily using ODBC drivers. For details, see ODBC 3.x.
SAS Versions 7.x and 8.x
The integration platform SAS connector supports creating Transport Data Sets using PROC COPY only (this includes all SAS versions, including 7.x and 8.x). The integration platform does not support the use of CPORT. You must use PROC COPY to create the SAS Transport Data Sets.
SAS versions 7.x and 8.x also include the ability to read/write other intermediate file formats, such as dBASE, Excel and others.
Large File Support
The integration platform can handle large files (2 gigabytes and larger); there is no maximum file size limitation.
Connectivity Pointers
To connect to SAS Transport Format v5 as your source
1. Log on to SAS.
2. In the program editor at the bottom of the screen, type the following commands starting on line 1:
libname XXX sasv5xpt 'c:\path\file.ext';
proc copy in=work out=XXX;
select YYY; run;
Where XXX is any library where you want a copy of the data set. YYY is the name of the data set you want to transport. (It must exist in the work library. If it is in another library, change in=work to in=name of the library it is in.)
Where c:\path\file.ext is the drive, directory path and file name of the transport file you want SAS to create. If you do not specify a path, it creates it in the current directory.
3. At the command line, type submit and press Enter.
4. After you create the transport data set, open the integration platform.
5. Create a new data set.
6. In the File box, select the file 'c:\path\file.ext' that you created in SAS.
7. At Member, click the arrow to select a member and click OK.
To connect to SAS Transport Format v5 as your target
1. Create a target data set in the integration platform with SAS as your target connector and 'c:\path\file.ext' as your target file. Enter a name for the table.
2. Log on to SAS.
3. In the program editor at the bottom of the screen, enter the following commands starting on line 1:
libname XXX sasv5xpt 'c:\path\file.ext';
proc copy in=XXX out=work;
proc print; run;
Where XXX is any library where you want a copy of the data set. 'c:\path\file.ext' is the drive, directory path and file name of the transport file you created with the integration platform. If you do not specify a path, SAS looks in the current directory.
4. At the command line, type submit and press Enter.
Now you have a dataset called 'work.XXX' available in your SAS session.
Note:  If you are using SAS version 8, you must use PROC COPY rather than the CPORT method of creating the SAS Transport data sets. The integration platform does not support CPORT. But, as SAS version 8 still supports the use of PROC COPY, you should experience no problems using the following procedure whether you are using version 6 or version 8.
To connect to SAS Transport Format v6.x and later as your source
1. Log on to SAS.
2. In the program editor at the bottom of the screen, type the following commands starting on line 1:
libname XXX XPORT 'c:\path\file.ext';
proc copy in=work out=XXX;
select YYY; run;
Where XXX is any library where you want a copy of the data set. YYY is the name of the data set you want to transport. (It must exist in the work library. If it is in another library, change in=work to in=name of the library it is in.)
Where c:\path\file.ext is the drive, directory path and file name of the transport file you want SAS to create. If you do not specify a path, it creates it in the current directory.
Note:  For VAX/VMS version 6.08+, you must include the option CC=NONE; at the end of the LIB statement. For versions prior to 6.08, the file must be modified by removing the CR-LF's. (Refer to your SAS documentation for details.)
3. At the command line, type submit and press Enter.
4. After the transport data set has been created, start the integration platform.
5. Create a source data set and select SAS as the connector. In the file list, select the file path name that you created in SAS. Select a table from the list.
To connect to SAS Transport Format v6.x and later as your target
1. Create a target data set and select SAS as your target connector and 'C:\path\file.ext' as your target file. Type a name for the table.
2. Log on to SAS.
3. In the program editor at the bottom of the screen, type the following commands starting on line 1:
libname XXX XPORT 'c:\path\file.ext';
proc copy in=XXX out=work;
proc print; run;
Where XXX is any library where you want a copy of the data set. 'C:\path\file.ext' is the drive, directory path and file name of the transport file you created with the integration platform. If you do not specify a path, SAS looks in the current directory.
4. At the command line, type submit and press Enter. Now you have a data set called 'work.XXX' available in your SAS session.
Connector-Specific Notes
This connector does not support truncation error trapping. If the target field size is too small for the data written to it, the offending record may be skipped or incomplete data may be written to the target. Unlike connectors that have truncation support, the transformation does not abort on a truncation error. To trap truncation errors, use ODBC as your connector instead.
Property Options
You can set the following source (S) and target (T) properties.
Property
S/T
Description
ScanTables
ST
Sets whether to scan the entire file for multiple tables or sets of data. Default is true. If you have a particularly large file, or a file with many tables, and you wish to save scan time, set this option to false.
CodePage
ST
Sets the translation table for the encoding to use to read and write data. Default is ANSI.
Special Notes about SAS Dates
To convert SAS date strings into valid date formats for other database applications, use the following target field expression:
DateAdd("d", Fields("FIELDNAME"), DateValue("01/01/60"))
where ("FIELDNAME") is the source field name in the SAS source file. The remainder of the expression should be written exactly as shown above.
Note:  The SAS date field is read as a Text data type. You must manually change the data type to Date.
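A SAS date is stored as the number of days since January 1, 1960, which is why the expression above adds the field value to that epoch. The same arithmetic in a standalone Python sketch (the sample value is illustrative):

from datetime import date, timedelta

SAS_EPOCH = date(1960, 1, 1)  # SAS dates count days from this date

def sas_date_to_date(sas_days):
    # Equivalent of DateAdd("d", Fields("FIELDNAME"), DateValue("01/01/60"))
    return SAS_EPOCH + timedelta(days=int(sas_days))

print(sas_date_to_date(0))      # 1960-01-01
print(sas_date_to_date(14245))  # 1999-01-01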
Data Types
The following data types are available:
Date
DateTime
Text
Variable length IBM float
SBT Acct (FoxPro)
SBT Accounting is an accounting application that uses an xBASE format. If the data file references a memo field file, the memo file must exist for the connection to occur. The primary data file usually has a .DBF extension and the memo file usually has a .FPT extension. SBT Acct (FoxPro) files are structured. For example, both the data and the file structure are stored inside the primary data file. With this connector, the integration platform uses FoxPro to connect to SBT Acct files.
This connector sets field width in bytes. What actually varies is the number of characters that fit into a given field. For more details, see Determining Field Width in Characters or Bytes.
Connector-Specific Notes
Field Names: Each field name must be unique. Field names must be all uppercase characters with an underscore between two words. Field names may contain up to 10 characters, but the first character must be a letter. Examples: ACCOUNT, LAST_NAME, FIRST_NAME, PHONE_NO
Size of Fields: Character fields can be no longer than 254 characters. If a field is longer than 254 characters, it must be defined as a Memo field.
Number of Fields: A maximum of 255 fields are allowed.
Record Width: The maximum combined width of all fields in one record is 4000 bytes, excluding memo fields.
Property Options
You can set the following source (S) and target (T) properties.
Property
S/T
Description
IgnoreMemoErr
ST
This option determines how memo files are handled. The following options are available:
Never - This is the default. The integration platform looks for and includes any memo file fields when the source data file is read.
Errors - The integration platform looks for and includes any memo file fields when a memo file is present. If present, the memo fields are included with the transformed data. If the memo file (.FPT) is not in the same directory as the data file (.DBF), the memo file is ignored. This means that the memo fields are not included with the transformed data.
Always - The integration platform ignores the memo file completely. This means that the memo fields are not included with the transformed data.
CodePage
ST
This translation table determines which encoding to use for reading and writing data. The default is ANSI, the standard in the US.
Data Types
The following data types are available:
Character – may contain alpha or numeric information and may have a field width of 1 to 254 bytes. Use a character field to store numbers that are not used in calculations, such as phone numbers, check numbers, account numbers and zip codes (number fields delete the leading zeros in some zip codes).
Date – may contain only a date, formatted as yyyymmdd: a four-digit year, a two-digit month, and a two-digit day. For example, the date January 1, 1999, would read 19990101.
Float – may contain only positive or negative numbers and may have a field width of 1 to 20 bytes, including the decimal point, minus sign (-), or plus sign (+). Float fields are used to store small and large numbers needed in scientific calculations, such as 9.1234e+12 or 9,123,400,000,000.
Logical – may contain only one byte and is formatted to contain t, f, T, or F, for true or false.
Memo – may contain alpha or numeric information and may have a field width of 1 to 16,000 bytes.
Numeric – may contain only positive or negative numbers and may have a field width of 1 to 20 bytes, including the decimal point, minus sign (-), or plus sign (+). A numeric field may contain decimal places up to 19, but the number of decimal places must be set at one byte less than the total width of the field. Numeric fields are used to store the exact value of a number with a fixed number of digits.
Sequential Binary
Sequential Binary connections are binary connections that work with sequential data streams. The linear data streams can be either a sequential file or a named pipe. With this connector, the integration platform cannot detect file length or end of file. It reads one byte at a time in a forward direction.
This connector sets field width in bytes. What actually varies is the number of characters that fit into a given field. For more details, see Determining Field Width in Characters or Bytes.
Connectivity Pointers
To connect to a named pipe, enter the path and pipe name in this format:
\\.\pipe\pathandname
You cannot browse to a pipe or list pipe names. You must already know the name and path to the pipe. It must be local or on the local network. The integration platform treats pipes like files.
A transformation that reads from a source pipe continues to run until the pipe is closed by the application that created it.
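As an illustration, a writer application can feed such a pipe using ordinary file I/O once the pipe exists. The following Python sketch is hypothetical: the pipe name is an example, and the pipe must already have been created by a server application (for example, with the pywin32 win32pipe module):

# Windows only: connect to an existing named pipe as a client and
# write records to it. The transformation reading the pipe finishes
# when this writer closes it.
PIPE_PATH = r"\\.\pipe\demo_feed"  # example name; must match the connection

with open(PIPE_PATH, "wb") as pipe:
    pipe.write(b"record one\r\nrecord two\r\n")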
Property Options
You can set the following source (S) and target (T) properties.
Property
S/T
Description
StartOffset
S
If your source data file starts with characters that need to be excluded from the transformation, set the StartOffset option to specify at which byte of the file to begin. The default value is zero. The correct value may be determined by using the Hex Browser.
Note:  This property is set in number of bytes, not characters.
RecordSeparator
ST
A Sequential Binary file is presumed to have no record separator between records. To use other characters for a record separator, click the RecordSeparator cell, then click the arrow to the right of the box and click the desired record separator in the list box. The choices are carriage return-line feed, line feed, carriage return, line feed-carriage return, form feed, empty line, and no record separator (default). To use a separator other than one from the list, you can type it here.
If the record separator is not one of the choices from the list and is a printable character, highlight the current value and then type the correct character. For example, if the separator is a pipe ( | ), type a pipe from the keyboard.
If the record separator is not a printable character, replace CR-LF with a backslash, an X, and the hexadecimal value for the separator. For example, if the separator is a check mark, then enter \XFB. For a list of the 256 standard and extended ASCII characters, see Hex Values.
CodePage
ST
This translation table determines which encoding to use for reading and writing data. The default is ANSI, the standard in the US.
Data Types
The following data types are available:
16-bit binary
16-bit logical
24-bit binary
32-bit binary
32-bit IEEE floating-point
32-bit TEC binary
32-bit VAX floating-point
64-bit binary
64-bit IEEE floating-point
64-bit VAX floating-point
8-bit binary
80-bit Intel floating-point
AccPac 41-bit binary
Binary
Boolean
Btrieve date
Btrieve time
Column binary alpha-numeric
Column binary multi-punch
Column binary numeric
Comp
Comp-1
Comp-2
Comp-3
Comp-5
Comp-X
Complex
Cray floating-point
Date
DateTime
dBASE Numeric
Display
Display Boolean
Display date
Display Date/Time
Display justified
Display sign leading
Display sign leading separate
Display sign trailing
Display sign trailing separate
Display time
Interval
Interval day
Interval day to hour
Interval day to minute
Interval day to second
Interval hour
Interval hour to minute
Interval hour to second
Interval minute
Interval minute to second
Interval second
Interval year
Interval year to month
Magic PC date
Magic PC extended
Magic PC number
Magic PC real
Magic PC time
Microsoft BASIC double
Microsoft BASIC float
Name
Null-terminated C string
Packed decimal
Pascal 48-bit real
Pascal string (1 byte)
Pascal string (2 bytes)
Sales Ally date
Sales Ally time-1
Text
Time
Time (minutes past midnight)
Variable length IBM float
Zoned decimal
SGML
SGML is one of the special data publishing file formats that can be used in many applications. With this connector, the integration platform can write SGML files.
This connector sets field width in bytes. What actually varies is the number of characters that fit into a given field. For more details, see Determining Field Width in Characters or Bytes.
Property Options
You can set the following target (T) properties.
Property
S/T
Description
CodePage
T
This translation table determines which encoding to use for reading and writing data. The default is ANSI, the standard in the US.
Data Types
All data in an SGML file is Character.
SPLUS
SPLUS is a statistical package. With this connector, you can read and write SPLUS data files.
This connector sets field width in bytes. What actually varies is the number of characters that fit into a given field. For more details, see Determining Field Width in Characters or Bytes.
Property Options
You can set the following source (S) and target (T) properties.
Property
S/T
Description
SubLabels
ST
This option determines if labels are substituted into value/label pairs. The default for this property is false.
CodePage
ST
This translation table determines which encoding to use for reading and writing data. The default is ANSI, the standard in the US.
Data Types
The following data types are supported:
32-bit IEEE floating point
Boolean
Complex
Float
Integer
Text
SPSS
SPSS is a statistical package. With this connector, you can read and write SPSS portable files created and used with the SPSS EXPORT commands.
This connector sets field width in bytes. What actually varies is the number of characters that fit into a given field. For more details, see Determining Field Width in Characters or Bytes.
Property Options
You can set the following source (S) and target (T) properties.
Property
S/T
Description
SubLabels
ST
This option determines if labels are substituted into value/label pairs. The default for this property is false.
CodePage
ST
This translation table determines which encoding to use for reading and writing data. The default is ANSI, the standard in the US.
Data Types
The following data types are available:
Date
Numeric
Text
SQL Script
This connector generates a SQL script text file to be used to manipulate the data within any SQL database. The SQL Script connector allows you to perform multiple operations (such as table drops and table inserts) on your target database.
SQL Script is a format that the integration platform can write to as a target type. It is not available as a source type.
Multimode connectors allow you to perform multiple operations (such as table drops, table inserts) directly on your target database.
This connector specifies field width in characters, which means the width of a field is literally that number of characters. For more details, see Determining Field Width in Characters or Bytes.
Note:  Most of the "Multimode" connectors run the transformation in your target database. However, in the case of SQL Script, the integration platform does not actually run the transformation. With SQL Script, the transformation can be completed on the SQL system, running the script file generated from the SQL Script connector. You can also create a process to call the SQL script file from a SQL step.
Connector-Specific Notes
Because the integration platform has no method to determine your target type with the SQL Script connector, there is no target table list in the Event Action Parameters window. You must enter the table name.
When using a SQL Script file, the integration platform attempts to pass a precision indicator if your data type is Number. If the precision indicator is not accepted, an error is returned. To prevent this, open your completed SQL script in an external text editor and remove the precision indicator next to each Number data type. Save your changes and run the script again.
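The manual edit can also be scripted. This Python sketch is one way to strip the precision indicator, assuming the generated script spells the type as Number(p) or Number(p,s); adjust the pattern if your generated script differs:

import re

def strip_number_precision(sql):
    # "Number(10)" or "Number(10,2)" -> "Number"
    return re.sub(r"\bNumber\s*\(\s*\d+(\s*,\s*\d+)?\s*\)",
                  "Number", sql, flags=re.IGNORECASE)

script = 'CREATE TABLE "T1" ("AMOUNT" Number(10,2), "ID" Number(10));'
print(strip_number_precision(script))
# CREATE TABLE "T1" ("AMOUNT" Number, "ID" Number);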
In multimode targets, modifications to column names, data types, and sizes are not permitted.
Property Options
Because the SQL Script connector is meant to connect to all SQL databases, you need to know your target formats to set the properties correctly. The integration platform includes this format information in a note beneath each property.
You can set the following source (S) and target (T) properties.
Property
S/T
Description
Binary Format
T
Specifies how to represent Binary data within your target SQL script. The default is SQL Hexadecimal Literal. The other available options are C/C++ Literal and Hexadecimal Number.
ByteOrder
T
Allows you to specify the byte order of Unicode (wide) characters. The default is Auto, which is determined by the architecture of your computer. The other list box options are Little Endian and Big Endian. Little Endian byte order, generally used by Intel machines and DEC Alphas, places the least significant byte at the lowest address of the memory used to store the value. Big Endian byte order, used by IBM 370 computers, Motorola microprocessors, and most RISC-based systems, stores the most significant byte first.
Date Format
T
Specifies how to format dates within your target SQL script. The default format is month/day/four-digit year (or mm/dd/yyyy). If you wish your date formats to appear in another format, highlight the default value and change it to the format you wish (for example, yyyy/mm/dd).
Date Time Format
T
Specifies how to format dates that include times within your target SQL script. The default format is mm/dd/yyyy H24:MM:SS. If you wish to use another format, highlight the default value and change it to the format you wish (for example, yyyy/mm/dd H12:MM:SS).
Encoding
T
This allows you to select the type of encoding used with your source and target files.
Shift-JIS encoding is meaningful only in Japanese operating systems.
UCS-2 is no longer considered a valid encoding name, but you may use UCS2. In the XML file, change UCS-2 to UCS2.
The Encoding property is not the encoding of the database that you connect to, but rather the encoding in which the connector expects to receive SQL query statements to be sent to the database.
Line Break
T
Specifies which character(s) to write to the target SQL script as line breaks. The default is CR-LF. The available options are CR-LF (default), LF, CR, LF-CR, Empty Line, and None.
Name Quote
T
Specifies how to delimit the table and column names within the target SQL script. The default is double quotes. The available options are double quotes, single quote, and None.
Order Mark
T
The Order Mark is a special character value that is sometimes written to a Unicode text file to indicate the byte order used for encoding each of the Unicode characters. You have the option of writing a byte order mark at the beginning of Unicode (wide) output. The default is false. To place the byte order mark at the beginning of the output, change this option to true.
UpdateNullFields
T
Null values are sent to the database when inserting or updating records. The default is true. If you select False, null values are not sent to the database when you insert or update records. When set to false, this property forces the connector to operate in unbound mode, which may cause slower performance.
Statement Separator
T
Specifies what characters to output as a statement separator to your target SQL script. The default value is a semicolon ( ; ). The available options are semicolon and None.
String Quote
T
Specifies how to delimit strings within your target SQL script. The default is a single quote ('). The available options are double quote ("), single quote ('), and None.
Time Format
T
Specifies how to format times within the target SQL script. The default format is H24:MM:SS. If you wish your time format to appear in another form, highlight the default value and change it to the format you wish.
Time Stamp Format
T
Specifies how to format time stamps within the target SQL script. The default format is H24:MM:SS. If you wish the time stamp to appear in another form, highlight the default value and change it to the format you wish. This format may not have a Null value.
Length
These are field lengths in your data. If you need to change field lengths, reset them in the schema.
Caution!  A field length shorter than the default may truncate data.
Data Types
The integration platform does not support ID field types. The default text data type is Char.
The following data types are available for SQL Script connections:
Binary
Bit
Bit Varying
Char
Char Varying
Character
Character Varying
Date
DateTime
Decimal
Double Precision
Float
Integer
Long Binary
Long Varbin
Long VarChar
National Character
National Character Varying
NChar
NChar Varying
Number
Number ( )
Numeric
NVarChar
Real
RowID
SmallInt
Text
Text ( )
Time
TimeStamp
VarBinary
VarChar
VarChar2
SQL Server 7 (OLEDB)
SQL Server 7 is a database application that you can connect to natively through an OLE DB interface or through ODBC. With this OLEDB connector, the integration platform reads and writes to SQL Server 7 tables.
This connector specifies field width in characters, which means the width of a field is literally that number of characters. For more details, see Determining Field Width in Characters or Bytes.
Connectivity Pointers
The SQL Server engine must be running in the background to connect to SQL Server 7.
Under Server Name, enter the name of your server. If SQL Server is installed locally, leave this line blank.
If your database has many tables, selecting the table from a list may take more time than typing the UserID.Tablename value in the Source Table/View box. If you enter the table name without the user ID and then connect to the database, you can reselect the table in the source data browser to view it.
If possible, create query statements using the source database application, so that syntax and validity can be checked by the native application. You can then copy the query statement and paste it in the Query Statement Box.
Connector-Specific Notes
When a SQL passthrough query is used in place of a table name, the integration platform does not correctly identify identity (for example int identity) fields, but can use the nonidentity version of the data type (for example a field of type int identity is considered an int field). The problem identifying identity is due to limitations in the SQL Server OLEDB provider. OLEDB has no way to directly identify whether a column is an identity type and the SQL Server OLEDB provider does not return the base column/table/owner information needed to determine this from the system catalog.
You can use a SQL passthrough query as a source, and the integration platform reads the data correctly. If this causes identity fields to be mapped to non-identity fields in the target, you can change the data type in the target to fix the fields.
No Millisecond Support - The integration platform does not support milliseconds in DateValMask conversions. Although a mask is provided for them, the integration platform returns zeros in those places.
Cursor Considerations - This limitation is due to OLE DB bugs in SQL Server 7 and SQL Server 2000. After connecting to views and changing cursor type from the default (static) to another cursor type, the integration platform may stop functioning. If this occurs, use the Task Manager to close the integration platform, then reopen the integration platform and the transformation, and choose SQL Server v. 6.5 as your source connector.
Client Cursors and Sessions - According to Microsoft, there is a limitation to keep in mind when using client cursors:
"…support only one active statement at a time on a connection. After a statement has been executed, no other statements can be executed on the connection until all of the results have been retrieved by the consumer, or the statement has been cancelled."
This limitation could cause problems if you are using SQL Server sessions in process designs. Sessions allow you to execute multiple SQL statements under the umbrella of a single connection/transaction.
Property Options
You can set the following source (S) and target (T) properties.
Property
S/T
Description
CommitFrequency
T
Controls how often data is committed to the database. Default is zero, meaning that data is committed at the end of the transformation, allowing rollback on error. This is the slowest setting. When doing large transformations, this is not practical as it may produce too many transaction log entries. Setting CommitFrequency to some nonzero value tells the connector to do a database commit after inserting or updating the specified number of records. This keeps the transaction log from getting too large but limits the restartability of the transformation.
ConstraintDDL
T
Pass-through mechanism that allows you to specify any additional data definition language (DDL) statements that need to be executed when a new table is created. DDL is used to define keys, indexes, constraints, and integrity rules for SQL databases. There is no default.
CursorType
S
Type of cursor to use for retrieving items from the source table. Available options are client (default), fast forward, static, keyset, dynamic. See CursorType for details.
Encoding
ST
Type of encoding to use with source and target files. Default is OEM. See Encoding Notes for details.
Synonyms
ST
If set to true, allows you to see synonyms. The alias names appear in the table list along with the tables. Default is false.
SystemTables
ST
Only applicable if the user is logged onto the database as the database administrator. Only the DBA has access to SystemTables. If set to true, this property allows you to see all the tables created by the DBA. The system table names appear in the table list. The default is false.
TransactionIsolation
T
Allows you to specify an isolation level when reading from or writing to a database table with ODBC. The default is Serializable.
The ANSI SQL 2 standard defines three ways in which serializability of a transaction may be violated: P1 (Dirty Read), P2 (Nonrepeatable Read) and P3 (Phantoms).
The isolation levels are as follows:
READ_UNCOMMITTED - Permits P1, P2, and P3.
READ_COMMITTED - Permits P2 and P3. Does not permit P1.
REPEATABLE_READ - Permits P3. Does not permit P1 and P2.
SERIALIZABLE (default) - Does not permit any of P1, P2, and P3.
None - Does not start a transaction.
For further details about TransactionIsolation levels, see the Microsoft ODBC SDK documentation.
Views
ST
If set to true (default), allows you to see views. The view names appear in the table list along with the tables.
WhereStmt
S
Provides a pass-through mechanism for SQL connectors so advanced users can construct the Where clause of the SQL query themselves. It can be used as an alternative to writing a lengthy query statement, for example, to instruct the SQL database server to filter data based on a condition before it is sent to the integration platform. Omit the Where keyword when you enter the clause; for example, enter amount > 1000, not Where amount > 1000. No default exists for this property.
When the source connection is a Select statement, do not apply WhereStmt. Instead, include the Where clause in your Select statement. This property enables data filtering when you select a table.
CursorType
client – This cursor is the default because it supports all Transact-SQL statements. The other cursor options (Static, Dynamic, Keyset) do not. Client cursors should be used only to alleviate the restriction that server cursors do not support all Transact-SQL statements or batches. One obvious benefit of the client-side cursor is quick response. After the result set has been downloaded to the client computer, browsing through the rows is fast. Your application is generally more scalable with client-side cursors because the cursor's resource requirements are placed on each separate client and not on the server.
fast forward – A cursor that cannot be scrolled; rows are read in sequence from the first row to the last row only. In SQL Server, this cursor is called forward-only, and is the default. Note that the default for this Connector is client.
The fast forward cursor can only move forward through the result set. To return to a previous row, the application must close and reopen the cursor and then read rows from the beginning of the result set until it reaches the required row. Forward-only cursors provide a fast mechanism for making a single pass through a result set. This cursor is less useful for screen-based applications, in which the user scrolls backward and forward through the data. Such applications can use a forward-only cursor by reading the result set once, caching the data locally and performing scrolling themselves. This works well only with small amounts of data.
static – A cursor that shows the result set exactly as it was at the time the cursor was opened. Sometimes called snapshot cursors.
Static cursors never detect other updates, deletes and inserts made to underlying data while the cursor is open. For example, suppose a static cursor fetches a row and another application then updates that row. If the application re-fetches the row from the static cursor, the values it sees are unchanged, despite the changes made by the other application. All types of scrolling are supported, but providers may or may not support bookmarks.
Keyset – A cursor that shows the effects of updates made to its member rows by other users while the cursor is open, but does not show the effects of inserts or deletes.
Keyset-Driven cursors take note of a key you may have in your database. Under normal circumstances, when you request records from a source, you get the entire package. However, when a Keyset cursor is selected, only the keys are returned, giving a performance boost. Then, when you scroll through the recordset, the data for the keys is retrieved. However, when you first invoke the cursor, both keys and data are returned so you can see something in your bound control. When you scroll and the data is not in the current batch of data, the Keyset fetches the next block. This way, it only has to manage small keys rather than large chunks of data. Keyset cursors are similar to dynamic cursors, except you cannot see records others add. If another user deletes a record, it is inaccessible from your recordset.
Dynamic – A cursor that can reflect data modifications made to the underlying data while the cursor is open. Updates, deletes and inserts made by users are reflected in the dynamic cursor.
The dynamic cursor can detect any changes made to the rows, order and values in the result set after the cursor is opened. Updates made outside the cursor are not visible until they are committed (unless the cursor transaction isolation level is set to "uncommitted").
Choosing a CursorType
Every cursor uses temporary resources to hold its data. These resources can be memory, a disk paging file, temporary disk files, or even temporary storage in the database. The cursor is called a client-side cursor when these resources are located on the client computer and a server-side cursor when these resources are on the server.
Your cursor choice depends upon whether you need to change, or simply view the data. The following are recommendations from Microsoft:
If you just need to scroll through a set of results, but not change data, use a fast forward (called forward-only in SQL Server) or use the static cursor, OR
If you have a large result set and need to select a few rows, use a keyset cursor, OR
If you want to synchronize a result set with recent adds, changes and deletes by all concurrent users, use a dynamic cursor.
Encoding Notes
Shift-JIS encoding is meaningful only in Japanese operating systems.
UCS-2 is no longer considered a valid encoding name, but you may use UCS2. In the data file, change UCS-2 to UCS2.
The Encoding property is not the encoding of the database that you connect to, but rather the encoding in which the connector expects to receive SQL query statements to be sent to the database.
Data Types
ID field types are not supported.
SQL Server 7 Mass Insert (OLEDB)
The SQL Server 7 Mass Insert (OLEDB) connector provides a rapid way of inserting records into a SQL Server database. It bypasses the transactional layer of the SQL Server database and adds information directly to the storage tables. SQL Server 7 Mass Insert is a useful option if you are dealing with large tables and performance is of paramount importance.
This connector specifies field width in characters, which means the width of a field is literally that number of characters. For more details, see Determining Field Width in Characters or Bytes.
Tip:  To connect to a data table in a SQL Server database, the SQL Server engine must be running in the background.
Connector-Specific Notes
SQL Server 7 Mass Insert does not transform to updateable views, only to tables.
The Update Mode is not an available option for this connector, since it is a Mass Insert application.
Process Limitation: This connector cannot be used in a multithreaded process. The mass insert feature causes problems when more than one thread runs in parallel in a process. Use a single-threaded process instead. Important: Your process can still include multiple transformations running in parallel to different tables.
The integration platform does not support milliseconds in DateValMask conversions. Although a mask is provided for them, the integration platform returns zeros in those places.
Property Options
You can set the following source (S) and target (T) properties.
Property
S/T
Description
CommitFrequency
T
Controls how often data is committed to the database. Default is zero, meaning that data is committed at the end of the transformation, allowing rollback on error. This is the slowest setting. When doing large transformations, this is not practical as it may produce too many transaction log entries. Setting the CommitFrequency to some nonzero value tells the connector to do a database commit after inserting or updating the specified number of records. This keeps the transaction log from getting too large but limits the restartability of the transformation.
ConstraintDDL
T
Pass-through mechanism that allows you to specify any additional data definition language (DDL) statements that need to be executed when a new table is created. DDL is used to define keys, indexes, constraints, and integrity rules for SQL databases. There is no default.
Encoding
T
Type of encoding used with your source and target files. Default is OEM.
IdentityInsert
T
By default, identity fields are not inserted into the database. To enable insertion of identity fields, set this property to true.
SystemTables
T
If set to true, allows you to see all tables created by the DBA in the database. The system table names appear in the table list. Default is false.
Note:  This property is applicable only if the user is logged onto the database as the database administrator. Only the DBA has access to system tables.
Tablock
T
Either true (default) or false.
Views
T
If set to true (default), this property allows you to see views. The view names appear in the table list along with the tables. (This property is available on all SQL connectors, but Mass Insert can only write to tables, not views, so setting this property to true has no effect here.)
Encoding Notes
Shift-JIS encoding is meaningful only in Japanese operating systems.
UCS-2 is no longer considered a valid encoding name, but you may use UCS2. In the XML file, change UCS-2 to UCS2.
The Encoding property is not the encoding of the database that you connect to, but rather the encoding in which the connector expects to receive SQL query statements to be sent to the database.
Length
These are field lengths in your data. If you need to change field lengths, reset them in the schema.
Caution!  A field length shorter than the default may truncate data.
Data Types
The only data type available for SQL Server 7 Mass Insert is Text.
SQL Server 7 Multimode (OLEDB)
SQL Server 7 is a database application that you can connect to directly, through an OLE DB interface, or through ODBC. With this connector, the integration platform can write to SQL Server 7 data tables; it is not available as a source. The integration platform supports concurrent writes to multiple target tables within a SQL RDBMS, such as with this SQL Server 7 multimode connector.
Multimode connectors allow you to perform multiple operations (change Source or Target, table drops, table inserts, etc.) directly on your target database.
This connector specifies field width in characters, which means the width of a field is literally that number of characters. For more details, see Determining Field Width in Characters or Bytes.
Tip:  To connect to SQL Server 7 Multimode (OLEDB), you must enter a supported User ID and Password. The connection is limited to the access rights of the User ID that you use. For most purposes, you should use a User ID with an Administrator profile. We also suggest that you use a separate User ID that is not used for normal logins.
Connector-Specific Notes
Target Schema modification: In multimode targets, modifications to column names, data types, and sizes are not permitted.
There is a SQL Server 7 provider bug that causes difficulty with inserting identity fields with IdentityInsert enabled. To get around this limitation, use the Mass Insert Target type.
Note:  The integration platform does not support milliseconds in DateValMask conversions. Although a mask is provided for them, the integration platform returns zeros in those places.
Property Options
You can set the following source (S) and target (T) properties.
Property
S/T
Description
SQL Log
T
The default is sql.log in the default installation directory. To use a different log, either browse to the file or enter the path and file name.
SQL Output
T
Allows you to select bound or unbound mode and whether or not to write SQL statements to a SQL log. Keep in mind that bound mode is faster as bind variables are used. Target Only is the default.
Target Only (default) - Uses bound mode, which uses bind variables. SQL statements are sent to the target and not to the SQL log specified in the SQL Log property.
Target Only (Unbound Mode) - Uses unbound mode, which does not use bind variables and sends the literal SQL statement to the database engine. SQL statements are sent to the target and not to the SQL log specified in the SQL Log property.
Target and SQL Log - Sends SQL statements to the target and to the SQL log specified in the SQL Log property.
SQL Log Only - Sends SQL statements only to the SQL log file specified in the SQL Log property.
Synonyms
T
If set to true, allows you to see synonyms. The alias names appear in the table list along with the tables. Default is false.
SystemTables
T
If set to true, allows you to see all tables created by the DBA in the database. The system table names appear in the table list. Default is false.
Note:  This property is applicable only if the user is logged onto the database as the database administrator. Only the DBA has access to system tables.
UpdateNullFields
T
Null values are sent to the database when inserting or updating records. The default is true. If set to False, null values are not sent to the database when inserting or updating records and the connector is forced to operate in unbound mode, which may cause slower performance.
Views
T
If set to true (default), allows you to see views. The view names appear in the table list along with the tables.
TransactionIsolation
T
Allows you to specify an isolation level when reading from or writing to a database table with ODBC. The default is Serializable.
The ANSI SQL 2 standard defines three ways in which serializability of a transaction may be violated: P1 (Dirty Read), P2 (Nonrepeatable Read) and P3 (Phantoms).
The isolation levels are as follows:
READ_UNCOMMITTED - Permits P1, P2, and P3.
READ_COMMITTED - Permits P2 and P3. Does not permit P1.
REPEATABLE_READ - Permits P3. Does not permit P1 and P2.
SERIALIZABLE - Does not permit any of P1, P2, and P3.
None - Does not start a transaction.
For further details about TransactionIsolation levels, see the Microsoft ODBC SDK documentation.
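In Transact-SQL terms, these levels correspond to the SET TRANSACTION ISOLATION LEVEL statement. The connector sets the level through the connection itself rather than by issuing the statement, so the following session-level equivalent of READ_COMMITTED is for illustration only:
SET TRANSACTION ISOLATION LEVEL READ COMMITTED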
Encoding
T
Type of encoding to use with source and target files. Default is OEM. For details, see Encoding Notes.
Encoding Notes
Shift-JIS encoding is meaningful only in Japanese operating systems.
UCS-2 is no longer considered a valid encoding name, but you may use UCS2. In the data file, change UCS-2 to UCS2.
The Encoding property is not the encoding of the database that you connect to, but rather the encoding in which the connector expects to receive SQL query statements to be sent to the database.
Data Types
The only data type available for SQL Server 7 (OLEDB) is Text.
SQL Server 2000
SQL Server 2000 is a database application that you can connect to natively through an OLE DB interface, or through ODBC. In this version of the SQL Server line, Microsoft enhanced the database platform for large-scale online transaction processing (OLTP), data warehousing, XML integration of relational data, and e-commerce applications. With this connector, the integration platform reads and writes to SQL Server 2000 tables.
This connector specifies field width in characters, which means the width of a field is literally that number of characters. For more details, see Determining Field Width in Characters or Bytes.
Connectivity Pointers
Set your server to allow NT authentication as the default method. Then when following the procedure below, leave the User ID and Password options blank.
Under Server Name, enter the name of your server. If SQL Server is installed locally, leave this line blank.
If your database has many tables, selecting the table from a list may take more time than typing the UserID.Tablename value in the Source Table/View box. If you enter the table name without the user ID and then connect to the database, you can reselect the table in the source data browser to view it.
If possible, create query statements using the source database application, so that syntax and validity can be checked by the native application. You can then copy the query statement and paste it in the Query Statement Box.
You can use an EZScript to auto-generate a GUID for a variable declared as the uniqueidentifier data type. See Generating a Random Unique Identifier.
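For comparison, SQL Server itself can generate a GUID with the built-in NEWID() function, as in this illustrative query:
SELECT NEWID() AS GeneratedGuid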
Connector-Specific Notes
Identity Columns with SQL Passthrough Query
When a SQL passthrough query is used in place of a table name, the integration platform does not correctly identify identity (for example int identity) fields, but can use the non-identity version of the data type (for example a field of type int identity is considered an int field). The problem identifying identity is due to limitations in the SQL Server OLEDB provider. OLEDB has no way to directly identify whether a column is an identity type and the SQL Server OLEDB provider does not return the base column/table/owner information needed to determine this from the system catalog.
You can use a SQL passthrough query as a source and the integration platform reads the data correctly. If this limitation causes identity fields to be mapped to non-identity fields in the target, you can change the data type in the target to correct them.
Property Options
You can set the following source (S) and target (T) properties.
Property
S/T
Description
CommitFrequency
T
Controls how often data is committed to the database. By default, CommitFrequency is zero. This means that data is committed at the end of the transformation. When doing large transformations, this is not practical as it may fill up the transaction logs. Setting the CommitFrequency to some nonzero value tells the connector to do a database commit after the specified number of records have been inserted or updated in the table. This keeps the transaction log from getting too full but limits the restartability of the transformation.
ConstraintDDL
T
Pass-through mechanism that allows you to specify any additional data definition language (DDL) statements that need to be executed when a new table is created. DDL is used to define keys, indexes, constraints, and integrity rules for SQL databases. There is no default.
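For example, a ConstraintDDL entry might create an index on the new table; the table and column names here are illustrative only:
CREATE UNIQUE INDEX idx_custid ON Customers (CustomerID)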
CursorType
S
Type of cursor to use for retrieving items from the source table. Available options are client (default), fast forward, static, keyset, and dynamic. See CursorType for details.
Encoding
ST
Type of encoding to use with source and target files. Default is OEM.
Synonyms
ST
If set to true, allows you to see synonyms. The alias names appear in the table list along with the tables. Default is false.
SystemTables
ST
If set to true, this property allows you to see all tables created by the DBA in the database. The system table names appear in the table list. Default is false.
Note:  This property is applicable only if the user is logged onto the database as the database administrator. Only the DBA has access to system tables.
Views
ST
If set to true (default), this property allows you to see views. The view names appear in the table list along with the tables.
WhereStmt
S
Provides a pass-through mechanism for SQL connectors so advanced users can construct the Where clause of the SQL query themselves. It can be used as an alternative to writing a lengthy query statement, for example, to instruct the SQL database server to filter the data based on a condition before it is sent to the integration platform. Omit Where when you enter the clause. No default exists for this property.
When the source connection is a Select statement, do not apply WhereStmt. Instead, include the Where clause in your Select statements. This property enables data filtering when you select a table.
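For example, to have the server return only recent, high-value records when a table is selected, you might enter a clause such as the following (the column names are illustrative, and the Where keyword is omitted):
Amount > 1000 AND OrderDate >= '2000-01-01'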
TransactionIsolation
T
The TransactionIsolation option allows you to specify any one of five different isolation levels when reading from or writing to a database table with ODBC. The default is Serializable.
The ANSI SQL 2 standard defines three specific ways in which serializability of a transaction may be violated: P1 (Dirty Read), P2 (Nonrepeatable Read) and P3 (Phantoms).
The five isolation levels are as follows:
READ_UNCOMMITTED – Permits P1, P2, and P3.
READ_COMMITTED – Permits P2 and P3. Does not permit P1.
REPEATABLE_READ – Permits P3. Does not permit P1 and P2.
SERIALIZABLE – Does not permit any of P1, P2, and P3.
VERSIONING – Provides SERIALIZABLE transactions, but does so without a significant impact on concurrency.
For further details about TransactionIsolation levels, see the Microsoft ODBC SDK documentation.
CursorType
client – This cursor is the default because it supports all Transact-SQL statements. The other cursor options (Static, Dynamic, Keyset) do not. Client cursors should be used only to alleviate the restriction that server cursors do not support all Transact-SQL statements or batches. One obvious benefit of the client-side cursor is quick response. After the result set has been downloaded to the client computer, browsing through the rows is fast. Your application is generally more scalable with client-side cursors because the cursor's resource requirements are placed on each separate client and not on the server.
fast forward – A cursor that cannot be scrolled; rows are read in sequence from the first row to the last row only. In SQL Server, this cursor is called forward-only, and is the default. Note that the default for this Connector is client.
This cursor can only move forward through the result set. To return to a previous row, the application must close and reopen the cursor and then read rows from the beginning of the result set until it reaches the required row. Forward-only cursors provide a fast mechanism for making a single pass through a result set. This cursor is less useful for screen-based applications, in which the user scrolls backward and forward through the data. Such applications can use a forward-only cursor by reading the result set once, caching the data locally and performing scrolling themselves. This works well only with small amounts of data.
static – A cursor that shows the result set exactly as it was at the time the cursor was opened. Sometimes called snapshot cursors.
Static cursors never detect other updates, deletes and inserts made to underlying data while the cursor is open. For example, suppose a static cursor fetches a row and another application then updates that row. If the application re-fetches the row from the static cursor, the values it sees are unchanged, despite the changes made by the other application. All types of scrolling are supported, but providers may or may not support bookmarks.
Keyset – A cursor that shows the effects of updates made to its member rows by other users while the cursor is open, but does not show the effects of inserts or deletes.
Keyset-Driven cursors take note of a key you may have in your database. Under normal circumstances, when you request records from a source, you get the entire package. However, when a Keyset cursor is selected, only the keys are returned, giving a performance boost. Then, when you scroll through the recordset, the data for the keys is retrieved. However, when you first invoke the cursor, both keys and data are returned so you can see something in your bound control. When you scroll and the data is not in the current batch of data, the Keyset fetches the next block. This way, it only has to manage small keys rather than large chunks of data. Keyset cursors are similar to dynamic cursors, except you cannot see records others add. If another user deletes a record, it is inaccessible from your recordset.
Dynamic – A cursor that can reflect data modifications made to the underlying data while the cursor is open. Updates, deletes and inserts made by users are reflected in the dynamic cursor.
The dynamic cursor can detect any changes made to the rows, order and values in the result set after the cursor is opened. Updates made outside the cursor are not visible until they are committed (unless the cursor transaction isolation level is set to "uncommitted").
Choosing a CursorType
Every cursor uses temporary resources to hold its data. These resources can be memory, a disk paging file, temporary disk files, or even temporary storage in the database. The cursor is called a client-side cursor when these resources are located on the client computer and a server-side cursor when these resources are on the server.
Your cursor choice depends upon whether you need to change, or simply view the data. The following are recommendations from Microsoft:
If you just need to scroll through a set of results, but not change data, use a fast forward (called forward-only in SQL Server) or use the static cursor, OR
If you have a large result set and need to select a few rows, use a keyset cursor, OR
If you want to synchronize a result set with recent adds, changes and deletes by all concurrent users, use a dynamic cursor.
Encoding Notes
Shift-JIS encoding is meaningful only in Japanese operating systems.
UCS-2 is no longer considered a valid encoding name, but you may use UCS2. In the data file, change UCS-2 to UCS2.
The Encoding property is not the encoding of the database that you connect to, but rather the encoding in which the connector expects to receive SQL query statements to be sent to the database.
Data Types
ID field types are not supported.
SQL Server 2000 Mass Insert
The SQL Server 2000 Mass Insert connector provides a rapid way of inserting records into a SQL Server 2000 database. It bypasses the transactional layer of the SQL Server database and adds information directly to the storage tables. SQL Server 2000 Mass Insert is a useful option if you are dealing with large tables and performance is of paramount importance.
This connector specifies field width in characters, which means the width of a field is literally that number of characters. For more details, see Determining Field Width in Characters or Bytes.
To connect to SQL Server 2000 with NT authentication, set your server to allow NT authentication as the default method. Then when following the procedure below, leave the User ID and Password options blank.
Note:  To connect to a data table in a SQL Server database, the SQL Server engine must be running in the background.
Connector-Specific Notes
SQL Server 2000 Mass Insert does not transform to updatable views, only to tables.
The Update Mode is not an available option for this connector, since it is a Mass Insert application.
Process Limitation: This connector cannot be used in a multithreaded process. The mass insert feature causes problems when more than one thread runs in parallel in a process. Use a single-threaded process instead. Important: Your process can include multiple transformations running in parallel to different tables.
Note:  The integration platform does not support milliseconds in DateValMask conversions. Although a mask is provided for them, the integration platform returns zeros in those places.
Property Options
You can set the following source (S) and target (T) properties.
Property
S/T
Description
CommitFrequency
T
Controls how often data is committed to the database. By default, its value is zero, which means that data is committed at the end of transformations. For large transformations, this is not practical as it may fill up the transaction logs. Setting the CommitFrequency to some nonzero value instructs the connector to do a database commit after the specified number of records have been inserted or updated in the table. This keeps the transaction log from getting too full but limits the restartability of the transformation.
ConstraintDDL
T
Pass-through mechanism that allows you to specify any additional data definition language (DDL) statements that need to be executed when a new table is created. DDL is used to define keys, indexes, constraints, and integrity rules for SQL databases. There is no default.
Encoding
T
Type of encoding to use with source and target files. Default is OEM. See Encoding Notes.
IdentityInsert
T
The integration platform does not automatically insert identity fields into the database. The default is false. To allow identity fields, you must change this property to true.
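Conceptually, setting this property to true corresponds to enabling the Transact-SQL option shown in this illustrative statement (the table name is hypothetical):
SET IDENTITY_INSERT dbo.Customers ON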
SystemTables
T
If set to true, this property allows you to see all tables created by the DBA in the database. The system table names appear in the table list. Default is false.
Note:  This property is applicable only if the user is logged onto the database as the database administrator. Only the DBA has access to system tables.
Tablock
T
Determines whether a table-level lock (the SQL Server TABLOCK hint) is held for the duration of the mass insert. Either true (default) or false.
Views
T
If set to true (default), allows you to see views. The view names appear in the table list along with the tables. To disallow views, set to false.
Encoding Notes
Shift-JIS encoding is meaningful only in Japanese operating systems.
UCS-2 is no longer considered a valid encoding name, but you may use UCS2. In the XML file, change UCS-2 to UCS2.
The Encoding property is not the encoding of the database that you connect to, but rather the encoding in which the connector expects to receive SQL query statements to be sent to the database.
Data Types
ID field types are not supported.
SQL Server 2000 Multimode
SQL Server 2000 is a database application that you can connect to either directly or through ODBC 3.x. This multimode connector is available as a target only. The integration platform supports concurrent writes to multiple target tables within a SQL RDBMS, such as with this SQL Server 2000 Multimode connector.
Multimode connectors allow you to perform multiple operations (change source or target, table drops, table inserts, etc.) directly on your target database within the same transformation.
This connector specifies field width in characters, which means the width of a field is literally that number of characters. For more details, see Determining Field Width in Characters or Bytes.
Connector-Specific Notes
Milliseconds: The integration platform does not support milliseconds in DateValMask conversions. Although a mask is provided for them, the integration platform returns zeros in those places.
Target Schema modification: In multimode targets, modifications to column names, data types, and sizes are not permitted.
Transaction Control: SQL Server 2000 does not allow mixing of API-level transaction control and Transact-SQL control. If you want explicit control of transactions, use Target Only (Unbound) mode, which does not use bind variables. The Target Only mode uses bind variables for speed (the side effect is less control of transactions).
Transaction Support: When a transformation starts, it is in AutoCommit mode (that is, every statement is considered a separate unit of work and is automatically committed). To operate in Explicit mode, issue a BEGIN TRANSACTION statement. For Implicit mode, use a SET IMPLICIT_TRANSACTIONS ON statement.
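For illustration, a minimal Transact-SQL sketch of each mode; the table and column names are hypothetical. Explicit mode brackets the work in an explicit transaction:
BEGIN TRANSACTION
UPDATE Customers SET Status = 'A' WHERE CustomerID = 1
COMMIT TRANSACTION
Implicit mode is switched on for the session with:
SET IMPLICIT_TRANSACTIONS ON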
Property Options
You can set the following source (S) and target (T) properties.
Property
S/T
Description
Encoding
T
Type of encoding to use with source and target files. The default is OEM.
SQL Log
T
The default is sql.log in the default installation directory. To use a different log, browse to the file, or enter the path and file name.
SQL Output
T
Allows you to select either bound or unbound mode and whether or not to write SQL statements to a SQL log. Keep in mind that bound mode is faster, as bind variables are used. Select from the following:
Target Only (default) - Uses bound mode, which uses bind variables. SQL statements are sent to the target and not to the SQL log specified in the SQL Log property.
Target Only (Unbound Mode) - Uses unbound mode, which does not use bind variables and sends the literal SQL statement to the database engine. SQL statements are sent to the target and not to the SQL log specified in the SQL Log property.
Target and SQL Log - Sends SQL statements to the target and to the SQL log specified in the SQL Log property.
SQL Log Only - Sends SQL statements only to the SQL log file specified in the SQL Log property.
Synonyms
T
If set to true, allows you to see synonyms. The alias names appear in the table list along with the tables. Default is false.
SystemTables
T
If set to true, this property allows you to see all tables created by the DBA in the database. The system table names appear in the table list. Default is false.
Note:  This property is applicable only if the user is logged onto the database as the database administrator. Only the DBA has access to system tables.
UpdateNullFields
T
Null values are sent to the database when inserting or updating records. The default is true. When set to false, null values are not sent to the database when you insert or update records, and the connector is forced to operate in unbound mode, which may cause slower performance.
Views
T
When set to true (the default), this property allows you to see views. To disallow Views, change the setting to false. The view names appear in the table list along with the tables.
TransactionIsolation
T
The TransactionIsolation option allows you to specify any one of five different isolation levels when reading from or writing to a database table with ODBC. The default is Serializable.
The ANSI SQL 2 standard defines three specific ways in which serializability of a transaction may be violated: P1 (Dirty Read), P2 (Nonrepeatable Read) and P3 (Phantoms).
The five isolation levels are as follows:
READ_UNCOMMITTED – Permits P1, P2, and P3.
READ_COMMITTED – Permits P2 and P3. Does not permit P1.
REPEATABLE_READ – Permits P3. Does not permit P1 and P2.
SERIALIZABLE – Does not permit any of P1, P2, and P3.
VERSIONING – Provides SERIALIZABLE transactions, but does so without a significant impact on concurrency.
For further details about TransactionIsolation levels, see Microsoft ODBC SDK documentation.
Encoding Notes
Shift-JIS encoding is meaningful only in Japanese operating systems.
UCS-2 is no longer considered a valid encoding name, but you may use UCS2. In the XML file, change UCS-2 to UCS2.
The Encoding property is not the encoding of the database that you connect to, but rather the encoding in which the connector expects to receive SQL query statements to be sent to the database.
Data Types
ID field types are not supported.
SQL Server 2005
SQL Server 2005 is a database application that you can connect to natively through an OLE DB interface, or through ODBC. In this version of the SQL Server line, Microsoft enhanced the database platform for large-scale online transaction processing (OLTP), data warehousing, XML integration of relational data, and e-commerce applications. With this connector, the integration platform reads and writes to SQL Server 2005 tables.
This connector specifies field width in characters, which means the width of a field is literally that number of characters. For more details, see Determining Field Width in Characters or Bytes.
Connectivity Pointers
Under Server Name, enter the name of your server. If SQL Server is installed locally, leave this line blank.
If your database has many tables, selecting the table from a list may take more time than typing the UserID.Tablename value in the Source Table/View box. If you enter the table name without the user ID and then connect to the database, you can reselect the table in the source data browser to view it.
If possible, create query statements using the source database application, so that syntax and validity can be checked by the native application. You can then copy the query statement and paste it in the Query Statement Box.
You can use an EZScript to auto-generate a GUID for a variable declared as the uniqueidentifier data type. See Generating a Random Unique Identifier.
Connector-Specific Notes
Identity Columns with SQL Passthrough Query
When a SQL passthrough query is used in place of a table name, the integration platform does not correctly identify identity (for example int identity) fields, but can use the non-identity version of the data type (for example a field of type int identity is considered an int field). The problem identifying identity is due to limitations in the SQL Server OLEDB provider. OLEDB has no way to directly identify whether a column is an identity type and the SQL Server OLEDB provider does not return the base column/table/owner information needed to determine this from the system catalog.
You can use a SQL passthrough query as a source and the integration platform reads the data correctly. If this limitation causes identity fields to be mapped to non-identity fields in the target, you can change the data type in the target to correct them.
Property Options
You can set the following source (S) and target (T) properties.
Property
S/T
Description
CommitFrequency
T
Controls how often data is committed to the database. Default is zero, meaning that data is committed at the end of the transformation, allowing rollback on error. This is the slowest setting. When doing large transformations, this is not practical as it may produce too many transaction log entries. Setting the CommitFrequency to some nonzero value tells the connector to do a database commit after inserting or updating the specified number of records. This keeps the transaction log from getting too large but limits the restartability of the transformation.
ConstraintDDL
T
Pass-through mechanism that allows you to specify any additional data definition language (DDL) statements that need to be executed when a new table is created. DDL is used to define keys, indexes, constraints and integrity rules for SQL databases. There is no default.
CursorType
S
Type of cursor to use for retrieving items from the source table. Available options are client (default), fast forward, static, keyset, and dynamic. See CursorType for details.
Encoding
ST
The SQL Server 2005 connector is Unicode-based. The encoding property specifies the encoding from which to translate user names and query statements to Unicode before passing them to SQL Server. Default is OEM.
Synonyms
ST
If set to true, this property allows you to see synonyms. The alias names appear in the table list along with the tables. Default is false.
SystemTables
ST
If set to true, allows you to see all tables created by the DBA in the database. The system table names appear in the table list. Default is false.
Note:  This property is applicable only if the user is logged onto the database as the database administrator. Only the DBA has access to system tables.
TransactionIsolation
T
Allows you to specify one of five different isolation levels when reading from or writing to a database table with ODBC. The default is Serializable.
The ANSI SQL 2 standard defines three ways in which serializability of a transaction may be violated: P1 (Dirty Read), P2 (Nonrepeatable Read) and P3 (Phantoms).
The isolation levels are as follows:
READ_UNCOMMITTED - Permits P1, P2, and P3.
READ_COMMITTED - Permits P2 and P3. Does not permit P1.
REPEATABLE_READ - Permits P3. Does not permit P1 and P2.
SERIALIZABLE - Does not permit any of P1, P2, and P3.
NONE - Does not start a transaction.
For further details about TransactionIsolation levels, see the Microsoft ODBC SDK documentation.
Views
ST
If set to true (default), allows you to see views. The view names appear in the table list along with the tables.
WhereStmt
S
Provides a pass-through mechanism for SQL connectors where advanced users can construct the Where clause of the SQL query themselves. It can be used as an alternative to writing a lengthy query statement, for example, to instruct the SQL database server to filter the data based on a condition before it is sent to the integration platform. Omit Where when you enter the clause. No default exists for this property.
When the source connection is a Select statement, do not apply the WhereStmt. Instead, include the Where clause in your Select statements. This property enables data filtering when you select a table.
CursorType
client – This cursor is the default because it supports all Transact-SQL statements. The other cursor options (Static, Dynamic, Keyset) do not. Client cursors should be used only to alleviate the restriction that server cursors do not support all Transact-SQL statements or batches. One obvious benefit of the client-side cursor is quick response. After the result set has been downloaded to the client computer, browsing through the rows is fast. Your application is generally more scalable with client-side cursors because the cursor's resource requirements are placed on each separate client and not on the server.
fast forward – A cursor that cannot be scrolled; rows are read in sequence from the first row to the last row only. In SQL Server, this cursor is called forward-only, and is the default. Note that the default for this Connector is client.
This cursor can only move forward through the result set. To return to a previous row, the application must close and reopen the cursor and then read rows from the beginning of the result set until it reaches the required row. Forward-only cursors provide a fast mechanism for making a single pass through a result set. This cursor is less useful for screen-based applications, in which the user scrolls backward and forward through the data. Such applications can use a forward-only cursor by reading the result set once, caching the data locally and performing scrolling themselves. This works well only with small amounts of data.
static – A cursor that shows the result set exactly as it was at the time the cursor was opened. Sometimes called snapshot cursors.
Static cursors never detect other updates, deletes and inserts made to underlying data while the cursor is open. For example, suppose a static cursor fetches a row and another application then updates that row. If the application re-fetches the row from the static cursor, the values it sees are unchanged, despite the changes made by the other application. All types of scrolling are supported, but providers may or may not support bookmarks.
Keyset – A cursor that shows the effects of updates made to its member rows by other users while the cursor is open, but does not show the effects of inserts or deletes.
Keyset-Driven cursors take note of a key you may have in your database. Under normal circumstances, when you request records from a source, you get the entire package. However, when a Keyset cursor is selected, only the keys are returned, giving a performance boost. Then, when you scroll through the recordset, the data for the keys is retrieved. However, when you first invoke the cursor, both keys and data are returned so you can see something in your bound control. When you scroll and the data is not in the current batch of data, the Keyset fetches the next block. This way, it only has to manage small keys rather than large chunks of data. Keyset cursors are similar to dynamic cursors, except you cannot see records others add. If another user deletes a record, it is inaccessible from your recordset.
Dynamic – A cursor that can reflect data modifications made to the underlying data while the cursor is open. Updates, deletes and inserts made by users are reflected in the dynamic cursor.
The dynamic cursor can detect any changes made to the rows, order and values in the result set after the cursor is opened. Updates made outside the cursor are not visible until they are committed (unless the cursor transaction isolation level is set to "uncommitted").
Choosing a CursorType
Every cursor uses temporary resources to hold its data. These resources can be memory, a disk paging file, temporary disk files, or even temporary storage in the database. The cursor is called a client-side cursor when these resources are located on the client computer and a server-side cursor when these resources are on the server.
Your cursor choice depends upon whether you need to change, or simply view the data. The following are recommendations from Microsoft:
If you just need to scroll through a set of results, but not change data, use a fast forward (called forward-only in SQL Server) or use the static cursor, OR
If you have a large result set and need to select a few rows, use a keyset cursor, OR
If you want to synchronize a result set with recent adds, changes and deletes by all concurrent users, use a dynamic cursor.
Encoding Notes
Shift-JIS encoding is meaningful only in Japanese operating systems.
UCS-2 is no longer considered a valid encoding name, but you may use UCS2. In the XML file, change UCS-2 to UCS2.
The Encoding property is not the encoding of the database that you connect to, but rather the encoding in which the connector expects to receive SQL query statements to be sent to the database.
Data Types
ID field types are not supported.
Note:  When your target is SQL Server 2005, you can set the maximum length of variable length data types (such as varchar, nvarchar, varbinary) beyond the standard maximum of 8,000 bytes. The maximum can be up to 2³¹–1 bytes (2 GB – 1). If the length supplied is more than 8000, the engine automatically sets the MAX argument.
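For example, a target column whose length is supplied as greater than 8000 would be created with the MAX argument, as in this illustrative sketch:
CREATE TABLE Notes (NoteID int NOT NULL, NoteText varchar(MAX))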
SQL Server 2005 Mass Insert
The SQL Server 2005 Mass Insert connector provides a rapid way of inserting records into a SQL Server 2005 database. It bypasses the transactional layer of the SQL Server database and adds information directly to the storage tables. This connector is a useful option if you are dealing with large tables and performance is of paramount importance.
Tip:  Under Server Name, enter the name of your server. If SQL Server is installed locally, leave this line blank.
This connector specifies field width in characters, which means the width of a field is literally that number of characters. For more details, see Determining Field Width in Characters or Bytes.
Note:  To connect to a data table in a SQL Server 2005 database, the SQL Server engine must be running in the background.
Connector-Specific Notes
SQL Server 2005 Mass Insert does not transform to updateable views, only to tables.
The Update Mode is not an available option for this connector, since it is a mass insert application.
Process Limitation: This connector cannot be used in a multithreaded process. The mass insert feature causes problems when more than one thread runs in parallel in a process. Use a single-threaded process instead. Important: Your process can include multiple transformations running in parallel to different tables.
Property Options
You can set the following source (S) and target (T) properties.
Property
S/T
Description
CommitFrequency
T
Controls how often data is committed to the database. Default is zero, meaning that data is committed at the end of the transformation, allowing rollback on error. This is the slowest setting. When doing large transformations, this is not practical as it may produce too many transaction log entries. Setting the CommitFrequency to some nonzero value tells the connector to do a database commit after inserting or updating the specified number of records. This keeps the transaction log from getting too large but limits the restartability of the transformation.
ConstraintDDL
T
This is a pass-through mechanism that allows you to specify any additional data definition language (DDL) statements that need to be executed when a new table is created. DDL is used to define keys, indexes, constraints and integrity rules for SQL databases. There is no default.
Encoding
T
Type of encoding to use with source and target files. Default is OEM.
IdentityInsert
T
The integration platform does not automatically insert Identity fields into your database. The default is false. If you want to allow Identity fields, you must change this property to true.
SystemTables
T
If set to true, this property allows you to see all tables created by the DBA in the database. The system table names appear in the table list. Default is false.
Note:  This property is applicable only if the user is logged onto the database as the database administrator. Only the DBA has access to system tables.
Tablock
T
Determines whether a table-level lock (the SQL Server TABLOCK hint) is held for the duration of the mass insert. Either true (default) or false.
Views
T
When set to true (default), this property allows you to see views. To disallow Views, change the setting to false. The view names appear in the table list along with the tables.
Encoding Notes
Shift-JIS encoding is meaningful only in Japanese operating systems.
UCS-2 is no longer considered a valid encoding name, but you may use UCS2. In the XML file, change UCS-2 to UCS2.
The Encoding property is not the encoding of the database that you connect to, but rather the encoding in which the connector expects to receive SQL query statements to be sent to the database.
Data Types
ID field types are not supported.
Note:  When your target is SQL Server 2005 Mass Insert, you can set the maximum length of variable length data types (such as varchar, nvarchar, varbinary) beyond the standard maximum of 8,000 bytes. The maximum can be up to 2³¹–1 bytes (2 GB – 1). If the length supplied is more than 8000, the engine automatically sets the MAX argument.
SQL Server 2005 Multimode
SQL Server 2005 is a database application that you can connect to directly or through ODBC 3.x. The integration platform supports concurrent writes to multiple target tables within a SQL RDBMS, such as with this SQL Server 2005 Multimode connector.
Multimode connectors allow you to perform multiple operations (Change Source or Target, table drops, table inserts, etc.) directly on your target database within the same transformation.
This connector specifies field width in characters, which means the width of a field is literally that number of characters. For more details, see Determining Field Width in Characters or Bytes.
Tip:  Under Server Name, enter the name of your server. If SQL Server is installed locally, leave this line blank.
Connector-Specific Notes
Target Schema modification: In multimode targets, modifications to column names, data types, and sizes are not permitted.
Transaction Control: SQL Server 2005 does not allow mixing of API-level transaction control and Transact-SQL control. If you want explicit control of transactions, use Target Only (Unbound) mode, which does not use bind variables. The Target Only mode uses bind variables for speed (the side effect is less control of transactions).
Transaction Support: When a transformation starts, it is in AutoCommit mode (that is, every statement is considered a separate unit of work and is automatically committed). To operate in Explicit mode, issue a BEGIN TRANSACTION statement. For Implicit mode, use a SET IMPLICIT_TRANSACTIONS ON statement.
Property Options
You can set the following source (S) and target (T) properties.
Property
S/T
Description
Encoding
T
The connector uses encoding to translate user names and query statements to Unicode before passing them on to SQL Server. The default is OEM.
SQL Log
T
The default is sql.log in the default installation directory. To use a different log, browse to the file, or enter the path and file name.
SQL Output
T
Allows you to select bound or unbound mode and whether or not to write SQL statements to a SQL log. Keep in mind that bound mode is faster as bind variables are used. Select from the following options:
Target Only (default) - Uses bound mode, which uses bind variables. SQL statements are sent to the target and not to the SQL log specified in the SQL Log property.
Target Only (Unbound Mode) - Uses unbound mode, which does not use bind variables and sends the literal SQL statement to the database engine. SQL statements are sent to the target and not to the SQL log specified in the SQL Log property.
Target and SQL Log - Sends SQL statements to the target and to the SQL log specified in the SQL Log property.
SQL Log Only - Sends SQL statements only to the SQL log file specified in the SQL Log property.
Synonyms
T
If set to true, allows you to see synonyms. The alias names appear in the table list along with the tables. Default is false.
SystemTables
T
If set to true, allows you to see all tables created by the DBA in the database. The system table names appear in the table list. Default is false.
Note:  This property is applicable only if the user is logged onto the database as the database administrator. Only the DBA has access to system tables.
TransactionIsolation
T
Allows you to specify isolation level when reading from or writing to a database table with ODBC. The default is Serializable.
The ANSI SQL 2 standard defines three ways in which serializability of a transaction may be violated: P1 (Dirty Read), P2 (Nonrepeatable Read), and P3 (Phantoms).
The isolation levels are as follows:
READ_UNCOMMITTED - Permits P1, P2, and P3.
READ_COMMITTED - Permits P2 and P3. Does not permit P1.
REPEATABLE_READ - Permits P3. Does not permit P1 and P2.
SERIALIZABLE - Does not permit any of P1, P2, and P3.
NONE - Does not start a transaction.
For further details about TransactionIsolation levels, see the Microsoft ODBC SDK documentation.
UpdateNullFields
T
Null values are sent to the database when inserting or updating records. The default is true. If you select false, null values are not sent to the database when inserting or updating records, and the connector is forced to operate in unbound mode, which may cause slower performance.
Views
T
When set to true (default), this property allows you to see views. The view names appear in the table list along with the tables. Set to false to disallow views.
Encoding Notes
Shift-JIS encoding is meaningful only in Japanese operating systems.
UCS-2 is no longer considered a valid encoding name, but you may use UCS2. In the XML file, change UCS-2 to UCS2.
The Encoding property is not the encoding of the database that you connect to, but rather the encoding in which the connector expects to receive SQL query statements to be sent to the database.
Data Types
ID field types are not supported.
Note:  When your target is SQL Server 2005, you can set the maximum length of variable length data types (such as varchar, nvarchar, varbinary) beyond the standard maximum of 8,000 bytes. The maximum can be up to 2³¹–1 bytes (2 GB – 1). If the length supplied is more than 8000, the engine automatically sets the MAX argument.
SQL Server 2008
The SQL Server 2008 connector functionality is the same as the SQL Server 2005 connector. For information on using this connector, see SQL Server 2005.
SQL Server 2008 Mass Insert
The SQL Server 2008 Mass Insert connector functionality is the same as the SQL Server 2005 Mass Insert connector. For information on using this connector, see SQL Server 2005 Mass Insert.
SQL Server 2008 Multimode
The SQL Server 2008 Multimode connector functionality is the same as the SQL Server 2005 Multimode connector. For information on using this connector, see SQL Server 2005 Multimode.
SQL Server 2012
SQL Server 2012 is a database application that you can connect natively to through an OLE DB interface, or through ODBC. With this connector, the integration platform reads and writes to SQL Server 2012 tables.
This connector specifies field width in characters, which means the width of a field is literally that number of characters. For more details, see Determining Field Width in Characters or Bytes.
The connector uses the SQL Server Native Client 11.0 ODBC driver, which must be installed on your system.
Connectivity Pointers
In the Server field, enter the name of your server.
If your database has many tables, selecting the table from a list may take more time than typing the SchemaName.Tablename value in the Source Table/View box. If you enter the table name without the schema name and then connect to the database, you can reselect the table in the source data browser to view it.
If possible, create query statements using the source database application, so that syntax and validity can be checked by the native application. You can then copy the query statement and paste it in the Query statement field.
You can use an EZScript to auto-generate a GUID for a variable declared as the uniqueidentifier data type. See Generating a Random Unique Identifier.
Property Options
You can set the following source (S) and target (T) properties.
Property
S/T
Description
AutoCommit
T
Allows you to automatically commit changes as they are made by each SQL statement, instead of waiting until the end of a transaction. If AutoCommit is set to true, there is no way to roll back changes once they have been made. The default is false.
BulkOperations
T
Use bulk operations for faster insert. This property determines if an insert statement is executed for each record or a bulk add is executed for each record. The default is false, the slower setting. If you want to maximize speed and instruct Map Designer to use a bulk add, change this setting to true.
CommitFrequency
T
Controls how often data is committed to the database. Default is zero, meaning that data is committed at the end of the transformation, allowing rollback on error. This is the slowest setting. When doing large transformations, this is not practical as it may produce too many transaction log entries. Setting the CommitFrequency to some nonzero value tells the connector to do a database commit after inserting or updating the specified number of records. This keeps the transaction log from getting too large but limits the restartability of the transformation.
ConstraintDDL
T
Pass-through mechanism that allows you to specify any additional data definition language (DDL) statements that need to be executed when a new table is created. DDL is used to define keys, indexes, constraints and integrity rules for SQL databases. There is no default.
CursorType
S
Type of cursor to use for retrieving records from the source table. Available options are forward only (default), dynamic, and static. See CursorType for details.
Encoding
ST
The SQL Server 2012 connector is Unicode-based. The encoding property specifies the encoding from which to translate user names and query statements to Unicode before passing them to SQL Server. Default is OEM.
For choices, see Encoding Reference.
IdentifierQuotes
ST
All databases have what are called quoted identifiers. You use quoted identifiers to make the SQL statement parseable and to distinguish between columns and character data in SQL statements. For example, Oracle uses double quotes for column and table names in SQL statements and uses single quotes for character data. In a SQL statement, enclose identifiers that contain special characters or that match keywords in identifier quote characters (also known as delimited identifiers in SQL-92). For example, the Accounts Receivable identifier is quoted in the following SELECT statement:
SELECT * FROM "Accounts Receivable"
If you do not use identifier quotes, the parser assumes there are two tables, Accounts and Receivable and returns a syntax error that they are not separated by a comma. There are four identifier quote options: Default, None, an apostrophe ('), and double quotes (").
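For contrast, the unquoted form of the same statement is parsed as two table names and fails with a syntax error:
SELECT * FROM Accounts Receivable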
MaxDataLength
ST
Refers to the maximum data length for long data types. The default is 1 MB. You can reset this number as you choose based on your available memory capabilities and data requirements.
Some ODBC drivers have limitations concerning the maximum data length they can handle. If you choose an application (ODBC) source or target connector and the default setting is not 1 MB, Map Designer presets the default according to the capabilities of that particular ODBC driver. As a best practice, do not set this property any higher under those conditions.
SystemTables
ST
If set to true, allows you to see all tables created by the DBA in the database. The system table names appear in the table list. Default is false.
TransactionIsolation
ST
Allows you to specify an isolation level when reading from or writing to a database table with ODBC. The default is Serializable.
The ANSI SQL 2 standard defines three ways in which serializability of a transaction may be violated: P1 (Dirty Read), P2 (Nonrepeatable Read) and P3 (Phantoms).
The isolation levels are as follows:
READ_UNCOMMITTED - Permits P1, P2, and P3.
READ_COMMITTED - Permits P2 and P3. Does not permit P1.
REPEATABLE_READ - Permits P3. Does not permit P1 and P2.
SERIALIZABLE - Does not permit any of P1, P2, and P3.
For further details about TransactionIsolation levels, see the Microsoft ODBC SDK documentation.
Views
ST
If set to true (default), allows you to see views. The view names appear in the table list along with the tables.
WhereStmt
S
Provides a pass-through mechanism for SQL connectors that allows advanced users to construct the Where clause of the SQL query themselves. It can be used as an alternative to writing a lengthy query statement. This might be used to get the SQL database server to filter the data based on a condition before it is sent to Map Designer. Omit "WHERE" when you enter the clause. No default exists for this property.
When the source connection is a Select statement, do not apply the WhereStmt. Instead, include the Where clause in your Select statements. This property enables data filtering when you select a table.
PrimaryKey
T
The PrimaryKey property allows you to specify a list of field names that are used to make the primary key. The field names are delimited by commas. If the PrimaryKey property contains one or more field names, these names are included in the SQL CREATE statement when the connector is in replace mode.
There is one additional requirement for using the PrimaryKey property. The ODBC driver must support integrity enhancement facility (IEF). Only the more advanced ODBC drivers support this.
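For example, if the PrimaryKey property is set to the hypothetical field list CustomerID, OrderID, the CREATE statement generated in replace mode might resemble the following sketch:
CREATE TABLE Orders (CustomerID INTEGER NOT NULL, OrderID INTEGER NOT NULL, Amount NUMERIC(10,2), PRIMARY KEY (CustomerID, OrderID))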
UseCursors
T
The UseCursors property allows you to turn cursor support on and off. The default is false. If you set the property to true and the specified ODBC driver does not support cursor inserts, Map Designer falls back on the SQL INSERT mode of adding records.
For exports, cursor support is supposed to enhance the performance of inserting records. This appears to be the case for desktop databases such as Access. For database servers, there is no noticeable change in insert speed; they seem to execute prepared queries about as quickly as they handle cursor inserts.
Another complication of cursor inserts is that some drivers require that the target table be indexed, otherwise positioned updates (cursors) are not allowed. Two additional properties in the ODBC export connector address this issue: PrimaryKey and ConstraintDDL (see details above).
CursorType
The following CursorType options are available for a SQL Server 2012 connection:
forward only – A cursor that cannot be scrolled; rows are read in sequence from the first row to the last row only.
This cursor can only move forward through the result set. To return to a previous row, the application must close and reopen the cursor and then read rows from the beginning of the result set until it reaches the required row. Forward-only cursors provide a fast mechanism for making a single pass through a result set. This cursor is less useful for screen-based applications, in which the user scrolls backward and forward through the data. Such applications can use a forward-only cursor by reading the result set once, caching the data locally and performing scrolling themselves. This works well only with small amounts of data.
static – A cursor that shows the result set exactly as it was at the time the cursor was opened. Sometimes called snapshot cursors.
Static cursors never detect other updates, deletes and inserts made to underlying data while the cursor is open. For example, suppose a static cursor fetches a row and another application then updates that row. If the application re-fetches the row from the static cursor, the values it sees are unchanged, despite the changes made by the other application. All types of scrolling are supported, but providers may or may not support bookmarks.
dynamic – A cursor that can reflect data modifications made to the underlying data while the cursor is open. Updates, deletes and inserts made by users are reflected in the dynamic cursor. A dynamic cursor can detect any changes made to rows, order and values in the result set after the cursor is opened. Updates made outside the cursor are not visible until they are committed, unless the cursor transaction isolation level is set to "uncommitted".
Choosing a CursorType
Every cursor uses temporary resources to hold its data. These resources can be memory, a disk paging file, temporary disk files, or even temporary storage in the database. The cursor is called a client-side cursor when these resources are located on the client computer and a server-side cursor when these resources are on the server.
Your cursor choice depends on whether you need to change or simply view the data. The following are recommendations from Microsoft:
If you just need to scroll through a set of results, but not change data, use a forward only or static cursor.
If you want to synchronize a result set with recent adds, changes, and deletes by all concurrent users, use a dynamic cursor.
Encoding Notes
Shift-JIS encoding is meaningful only in Japanese operating systems.
The Encoding property is not the encoding of the database that you connect to, but rather the encoding in which the connector expects to receive SQL query statements to be sent to the database.
Data Types
ID field types are not supported.
SQL Server 2012 Mass Insert
The SQL Server 2012 Mass Insert connector provides a rapid way of inserting records into a SQL Server 2012 database. It bypasses the transactional layer of the SQL Server database and adds information directly to the storage tables. This connector is a useful option if you are dealing with large tables and performance is of paramount importance.
This connector specifies field width in characters, which means the width of a field is literally that number of characters. For more details, see Determining Field Width in Characters or Bytes.
Connector-Specific Notes
The SQL Server 2012 Mass Insert connector does not transform to updateable views, only to tables.
The Update Mode is not an available option for this connector, since it is a mass insert application.
Process Limitation: This connector cannot be used in a multithreaded process. The mass insert feature causes problems when more than one thread runs in parallel in a process. Use a single-threaded process instead. Important: Your process can include multiple transformations running in parallel to different tables.
Property Options
The following properties are supported for SQL Server 2012 Mass Insert target connections:
Property
Use
CommitFrequency
Controls how often data is written to the database. The default is zero, which means that data is not written until the transformation is complete.
ConstraintDDL
This is a pass-through mechanism that allows you to specify any additional data definition language (DDL) statements that need to be executed when a new table is created. DDL is used to define keys, indexes, constraints and integrity rules for SQL databases. There is no default.
Encoding
Type of encoding to use with source and target files. Default is OEM. For choices, see Encoding Reference.
Notes:
Shift-JIS encoding is meaningful only in Japanese operating systems.
The Encoding property is not the encoding of the database that you connect to, but rather the encoding in which the connector expects to receive SQL query statements to be sent to the database.
IdentityInsert
Determines if the integration platform automatically inserts Identity fields into your database. The default is false. If you want to allow Identity fields, you must change this property to true.
SystemTables
If set to true, this property allows you to see all tables created by the DBA in the database. The system table names appear in the table list. Default is false.
Data Types
ID field types are not supported.
SQL Server 2012 Multimode
SQL Server 2012 is a database application that you can connect to directly or through ODBC. This connector uses the SQL Server Native Client 11.0 ODBC driver, which must be installed on your system.
This connector allows you to perform multiple operations (Change Source or Target, table drops, table inserts, etc.) directly on your target database within the same transformation.
This connector specifies field width in characters, which means the width of a field is literally that number of characters. For more details, see Determining Field Width in Characters or Bytes.
Connector-Specific Notes
Target Schema modification: In multimode targets, modifications to column names, data types, and sizes are not permitted.
Transaction Control: SQL Server 2012 does not allow mixing of API-level transaction control and Transact-SQL control. If you want explicit control of transactions, use Target Only (Unbound) mode, which does not use bind variables. The Target Only mode uses bind variables for speed, which limits your ability to explicitly control transactions.
Transaction Support: When a transformation starts, it is in AutoCommit mode (that is, every statement is considered a separate unit of work and is automatically committed). To operate in Explicit mode, issue a BEGIN TRANSACTION statement. For Implicit mode, use a SET IMPLICIT_TRANSACTIONS ON statement.
Property Options
You can set the following source (S) and target (T) properties.
Property
S/T
Description
AutoCommit
T
Allows you to automatically commit changes as they are made by each SQL statement, instead of waiting until the end of the transaction. If AutoCommit is set to true, there is no way to roll back changes once they have been made. The default is false.
Encoding
T
Translates user names and query statements to Unicode before passing them on to SQL Server. The default is OEM. For choices, see Encoding Reference.
Notes:
Shift-JIS encoding is meaningful only in Japanese operating systems.
This property is not the encoding of the database that you connect to, but rather the encoding in which the connector expects to receive SQL query statements to be sent to the database.
IdentifierQuotes
T
All databases have what are called quoted identifiers. You use these identifiers to make a SQL statement parseable and to distinguish between columns and character data in SQL statements. For example, Oracle uses double quotes for column and table names in SQL statements and single quotes for character data. In a SQL statement, enclose identifiers that contain special characters or that match keywords in identifier quote characters (also known as delimited identifiers in SQL-92). For example, the Accounts Receivable identifier is quoted in the following SELECT statement:
SELECT * FROM "Accounts Receivable"
If you do not use identifier quotes, the parser assumes there are two tables, Accounts and Receivable, and returns a syntax error because they are not separated by a comma.
The SQL Server 2012 Multimode connector supports the following identifier quotes: Default, None, double quotes ("), and a single quote (').
MaxDataLength
T
Refers to the maximum data length for long data types. The default is 1 MB. You can reset this number as you choose based on your available memory capabilities and target data requirements.
Some ODBC drivers limit the maximum data length they can handle. If you choose an application (ODBC) target connector and the default setting is not 1 MB, Map Designer presets the default to respect the capabilities of that particular ODBC driver. Setting this property higher is not recommended under these conditions.
SQL Log
T
The default is sql.log in the default installation directory. To use a different log, browse to the file or enter the path and file name.
SQL Output
T
Allows you to select bound or unbound mode and whether to write SQL statements to a SQL log. Keep in mind that bound mode is faster because bind variables are used.
Available options:
Target Only (default) - Uses bound mode, which uses bind variables. SQL statements are sent to the target and not to the SQL log specified in the SQL Log property.
Target Only (Unbound mode) - Uses unbound mode, which does not use bind variables and sends the literal SQL statement to the database engine. SQL statements are sent to the target and not to the SQL log specified in the SQL Log property.
Target and SQL Log - Sends SQL statements to the target and to the SQL log specified in the SQL Log property.
SQL Log Only - Sends SQL statements only to the SQL log file specified in the SQL Log property.
SystemTables
T
If set to true, allows you to see all tables created by the DBA in the database. The system table names appear in the table list. Default is false.
TransactionIsolation
T
Allows you to specify isolation level when reading from or writing to a database table with ODBC. The default is serializable.
The ANSI SQL 2 standard defines three ways in which serializability of a transaction may be violated: P1 (Dirty Read), P2 (Nonrepeatable Read), and P3 (Phantoms).
The supported isolation levels are as follows:
read uncommitted - Permits P1, P2, and P3.
read committed - Permits P2 and P3. Does not permit P1.
repeatable read - Permits P3. Does not permit P1 and P2.
serializable - Does not permit any of P1, P2, and P3.
For further details about TransactionIsolation levels, see the Microsoft ODBC SDK documentation.
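Although this property is set through ODBC, its semantics match the Transact-SQL isolation statement. For example, the read committed setting corresponds to:
SET TRANSACTION ISOLATION LEVEL READ COMMITTED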
UpdateNullFields
T
Null values are sent to the database when inserting or updating records. The default is true. If you select false, null values are not sent to the database when inserting or updating records, and the connector is forced to operate in unbound mode, which may cause slower performance.
Views
T
When set to true (default), this property allows you to see views. The view names appear in the table list along with the tables. Set to false to disallow views.
Data Types
ID field types are not supported.
SQL Server 2016
SQL Server 2016 is a database application that you can connect to through ODBC. With this connector, the integration platform reads and writes to SQL Server 2016 tables.
This connector specifies field width in characters, which means the width of a field is literally that number of characters. For more details, see Determining Field Width in Characters or Bytes.
The connector uses the SQL Server Native Client 11.0 ODBC driver, which must be installed on your system.
Connectivity Pointers
In the Server field, enter the name of your server.
If your database has many tables, selecting the table from a list may take more time than typing the SchemaName.Tablename value in the Source Table/View box. If you enter the table name without the schema name and then connect to the database, you can reselect the table in the source data browser to view it.
If possible, create query statements using the source database application, so that syntax and validity can be checked by the native application. You can then copy the query statement and paste it in the Query statement field.
You can use an EZScript to auto-generate a GUID for a variable declared as the uniqueidentifier data type. See Generating a Random Unique Identifier.
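If you prefer to generate the value on the database side instead, SQL Server's built-in NEWID() function returns a new uniqueidentifier. A minimal illustration (the alias is arbitrary):
SELECT NEWID() AS GeneratedGuid
The EZScript approach described in Generating a Random Unique Identifier remains the documented route.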
Property Options
You can set the following source (S) and target (T) properties.
Property
S/T
Description
AutoCommit
T
Allows you to automatically commit changes as they are made by each SQL statement, instead of waiting until the end of a transaction. If AutoCommit is set to true, there is no way to roll back changes once they have been made. The default is false.
BulkOperations
T
Use bulk operations for faster inserts. This property determines whether an insert statement is executed for each record or records are added with bulk operations. The default is false, the slower setting. To maximize speed and instruct Map Designer to use a bulk add, change this setting to true.
CommitFrequency
T
Controls how often data is committed to the database. Default is zero, meaning that data is committed at the end of the transformation, which allows rollback on error. This is the slowest setting. For large transformations, it is not practical because it may produce too many transaction log entries. Setting CommitFrequency to a nonzero value tells the connector to do a database commit after inserting or updating the specified number of records. This keeps the transaction log from getting too large, but limits the restartability of the transformation.
ConstraintDDL
T
Pass-through mechanism that allows you to specify any additional data definition language (DDL) statements that need to be executed when a new table is created. DDL is used to define keys, indexes, constraints and integrity rules for SQL databases. There is no default.
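For example, the following entry would create a unique index when the connector creates the target table in replace mode (table and field names are hypothetical):
CREATE UNIQUE INDEX idx1 ON mytable (Field1)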
CursorType
S
Type of cursor to use for retrieving records from the source table. Available options are forward only (default), dynamic, and static. See CursorType for details.
Encoding
ST
The SQL Server 2016 connector is Unicode-based. The Encoding property specifies the encoding from which user names and query statements are translated to Unicode before they are passed to SQL Server. Default is OEM.
For choices, see Encoding Reference.
IdentifierQuotes
ST
All databases have what are called quoted identifiers. You use quoted identifiers to make a SQL statement parseable and to distinguish between columns and character data in SQL statements. For example, Oracle uses double quotes for column and table names in SQL statements and single quotes for character data. In a SQL statement, enclose identifiers that contain special characters or that match keywords in identifier quote characters (known as delimited identifiers in SQL-92). For example, the Accounts Receivable identifier is quoted in the following SELECT statement:
SELECT * FROM "Accounts Receivable"
If you do not use identifier quotes, the parser assumes there are two tables, Accounts and Receivable, and returns a syntax error because they are not separated by a comma. There are four identifier quote options: Default, None, an apostrophe ('), and double quotes (").
MaxDataLength
ST
Refers to the maximum data length for long data types. The default is 1 MB. You can reset this number as you choose based on your available memory capabilities and data requirements.
Some ODBC drivers limit the maximum data length they can handle. If you choose an application (ODBC) source or target connector and the default setting is not 1 MB, Map Designer presets the default to respect the capabilities of that particular ODBC driver. As a best practice, do not set this property any higher under those conditions.
SystemTables
ST
If set to true, allows you to see all tables created by the DBA in the database. The system table names appear in the table list. Default is false.
TransactionIsolation
ST
Allows you to specify an isolation level when reading from or writing to a database table with ODBC. The default is Serializable.
The ANSI SQL 2 standard defines three ways in which serializability of a transaction may be violated: P1 (Dirty Read), P2 (Nonrepeatable Read) and P3 (Phantoms).
The isolation levels are as follows:
READ_UNCOMMITTED - Permits P1, P2, and P3.
READ_COMMITTED - Permits P2 and P3. Does not permit P1.
REPEATABLE_READ - Permits P3. Does not permit P1 and P2.
SERIALIZABLE - Does not permit any of P1, P2, and P3.
For further details about TransactionIsolation levels, see the Microsoft ODBC SDK documentation.
Views
ST
If set to true (default), allows you to see views. The view names appear in the table list along with the tables.
WhereStmt
S
Provides a pass-through mechanism for SQL connectors that allows advanced users to construct the Where clause of the SQL query themselves. It can be used as an alternative to writing a lengthy query statement. This might be used to get the SQL database server to filter the data based on a condition before it is sent to Map Designer. Omit "WHERE" when you enter the clause. No default exists for this property.
When the source connection is a Select statement, do not apply the WhereStmt. Instead, include the Where clause in your Select statements. This property enables data filtering when you select a table.
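For example, to have the server filter a hypothetical Orders table before the data reaches Map Designer, you could enter the following clause; note that the WHERE keyword is omitted and all names are illustrative:
State = 'TX' AND OrderDate >= '2016-01-01'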
PrimaryKey
T
The PrimaryKey property allows you to specify a list of field names that are used to make the primary key. The field names are delimited by commas. If the PrimaryKey property contains one or more field names, these names are included in the SQL CREATE statement when the connector is in replace mode.
There is one additional requirement for using the PrimaryKey property. The ODBC driver must support integrity enhancement facility (IEF). Only the more advanced ODBC drivers support this.
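For illustration only: if PrimaryKey were set to the hypothetical value AccountID, RegionID, the DDL generated in replace mode would take a form such as the following (the exact statement depends on the driver):
CREATE TABLE Accounts (AccountID INTEGER NOT NULL, RegionID INTEGER NOT NULL, Balance FLOAT, PRIMARY KEY (AccountID, RegionID))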
UseCursors
T
The UseCursors property allows you to turn cursor support on and off. The default is false. If you set the property to true and the specified ODBC driver does not support cursor inserts, Map Designer falls back on the SQL INSERT mode of adding records.
For exports, cursor support is intended to enhance the performance of inserting records. This appears to be the case for desktop databases such as Access. For database servers, there is no noticeable change in insert speed; they execute prepared queries about as quickly as they handle cursor inserts.
Another complication of cursor inserts is that some drivers require the target table to be indexed; otherwise, positioned updates (cursors) are not allowed. Two additional properties in the ODBC export connector address this issue: PrimaryKey and ConstraintDDL (see details above).
CursorType
The following CursorType options are available for a SQL Server 2016 connection:
forward only – A cursor that cannot be scrolled; rows are read in sequence from the first row to the last row only.
This cursor can only move forward through the result set. To return to a previous row, the application must close and reopen the cursor and then read rows from the beginning of the result set until it reaches the required row. Forward-only cursors provide a fast mechanism for making a single pass through a result set. This cursor is less useful for screen-based applications, in which the user scrolls backward and forward through the data. Such applications can use a forward-only cursor by reading the result set once, caching the data locally and performing scrolling themselves. This works well only with small amounts of data.
static – A cursor that shows the result set exactly as it was at the time the cursor was opened. Sometimes called snapshot cursors.
Static cursors never detect other updates, deletes and inserts made to underlying data while the cursor is open. For example, suppose a static cursor fetches a row and another application then updates that row. If the application re-fetches the row from the static cursor, the values it sees are unchanged, despite the changes made by the other application. All types of scrolling are supported, but providers may or may not support bookmarks.
dynamic – A cursor that can reflect data modifications made to the underlying data while the cursor is open. Updates, deletes and inserts made by users are reflected in the dynamic cursor. A dynamic cursor can detect any changes made to rows, order and values in the result set after the cursor is opened. Updates made outside the cursor are not visible until they are committed, unless the cursor transaction isolation level is set to "uncommitted".
Choosing a CursorType
Every cursor uses temporary resources to hold its data. These resources can be memory, a disk paging file, temporary disk files, or even temporary storage in the database. The cursor is called a client-side cursor when these resources are located on the client computer and a server-side cursor when these resources are on the server.
Your cursor choice depends on whether you need to change or simply view the data. The following are recommendations from Microsoft:
If you just need to scroll through a set of results, but not change data, use a forward only cursor (called forward-only in SQL Server) or a static cursor.
If you want to synchronize a result set with recent adds, changes, and deletes by all concurrent users, use a dynamic cursor.
Encoding Notes
Shift-JIS encoding is meaningful only in Japanese operating systems.
The Encoding property is not the encoding of the database that you connect to, but rather the encoding in which the connector expects to receive SQL query statements to be sent to the database.
Data Types
ID field types are not supported.
SQL Server 2016 Mass Insert
The SQL Server 2016 Mass Insert connector provides a rapid way of inserting records into a SQL Server 2016 database. It bypasses the transactional layer of the SQL Server database and adds information directly to the storage tables. This connector is a useful option if you are dealing with large tables and performance is of paramount importance.
This connector specifies field width in characters, which means the width of a field is literally that number of characters. For more details, see Determining Field Width in Characters or Bytes.
Connector-Specific Notes
The SQL Server 2016 Mass Insert connector does not transform to updateable views, only to tables.
The Update Mode is not an available option for this connector, since it is a mass insert application.
Process Limitation: This connector cannot be used in a multi-threaded process. The mass insert feature causes problems when more than one thread runs in parallel in a process, so use a single-threaded process instead. Important: Your process can still include multiple transformations running in parallel to different tables.
Property Options
The following properties are supported for SQL Server 2016 Mass Insert target connections:
Property
Use
CommitFrequency
Controls how often data is written to the database. The default is zero, which means that data is not written until the transformation is complete.
ConstraintDDL
This is a pass-through mechanism that allows you to specify any additional data definition language (DDL) statements that need to be executed when a new table is created. DDL is used to define keys, indexes, constraints and integrity rules for SQL databases. There is no default.
Encoding
Type of encoding to use with source and target files. Default is OEM. For choices, see Encoding Reference.
Notes:
Shift-JIS encoding is meaningful only in Japanese operating systems.
The Encoding property is not the encoding of the database that you connect to, but rather the encoding in which the connector expects to receive SQL query statements to be sent to the database.
IdentityInsert
Determines if the integration platform automatically inserts Identity fields into your database. The default is false. If you want to allow Identity fields, you must change this property to true.
SystemTables
If set to true, this property allows you to see all tables created by the DBA in the database. The system table names appear in the table list. Default is false.
Data Types
ID field types are not supported.
SQL Server 2016 Multimode
SQL Server 2016 is a database application that you can connect to directly or through ODBC. This connector uses the SQL Server Native Client 11.0 ODBC driver, which must be installed on your system.
This connector allows you to perform multiple operations (Change Source or Target, table drops, table inserts, etc.) directly on your target database within the same transformation.
This connector specifies field width in characters, which means the width of a field is literally that number of characters. For more details, see Determining Field Width in Characters or Bytes.
Connector-Specific Notes
Target Schema modification: In multimode targets, modifications to column names, data types, and sizes are not permitted.
Transaction Control: SQL Server 2016 does not allow mixing of API-level transaction control and Transact-SQL transaction control. If you want explicit control of transactions, use Target Only (Unbound) mode, which does not use bind variables. The default Target Only mode uses bind variables for speed, which limits your ability to control transactions explicitly.
Transaction Support: When a transformation starts, it is in AutoCommit mode (that is, every statement is treated as a separate unit of work and is automatically committed). To operate in Explicit mode, issue a BEGIN TRANSACTION statement. For Implicit mode, issue a SET IMPLICIT_TRANSACTIONS ON statement.
Property Options
You can set the following source (S) and target (T) properties.
Property
S/T
Description
AutoCommit
T
Allows you to automatically commit changes as they are made by each SQL statement, instead of waiting until the end of the transaction. If AutoCommit is set to true, there is no way to roll back changes once they have been made. The default is false.
Encoding
T
Translates user names and query statements to Unicode before passing them on to SQL Server. The default is OEM. For choices, see Encoding Reference.
Notes:
Shift-JIS encoding is meaningful only in Japanese operating systems.
This property is not the encoding of the database that you connect to, but rather the encoding in which the connector expects to receive SQL query statements to be sent to the database.
IdentifierQuotes
T
All databases have what are called quoted identifiers. You use these identifiers to make a SQL statement parseable and to distinguish between columns and character data in SQL statements. For example, Oracle uses double quotes for column and table names in SQL statements and single quotes for character data. In a SQL statement, enclose identifiers that contain special characters or that match keywords in identifier quote characters (such quoted names are known as delimited identifiers in SQL-92). For example, the Accounts Receivable identifier is quoted in the following SELECT statement.
SELECT * FROM "Accounts Receivable"
If you do not use identifier quotes, the parser assumes there are two tables, Accounts and Receivable, and returns a syntax error because they are not separated by a comma.
The SQL Server 2016 Multimode connector supports the following identifier quotes: Default, None, double quotes ("), and a single quote (').
MaxDataLength
T
Refers to the maximum data length for long data types. The default is 1 MB. You can reset this number as you choose based on your available memory capabilities and target data requirements.
Some ODBC drivers limit the maximum data length they can handle. If you choose an application (ODBC) target connector and the default setting is not 1 MB, Map Designer presets the default to respect the capabilities of that particular ODBC driver. Setting this property higher is not recommended under these conditions.
SQL Log
T
The default is sql.log in the default installation directory. To use a different log, browse to the file or enter the path and file name.
SQL Output
T
Allows you to select bound or unbound mode and whether to write SQL statements to a SQL log. Keep in mind that bound mode is faster because bind variables are used.
Available options:
Target Only (default) - Uses bound mode, which uses bind variables. SQL statements are sent to the target and not to the SQL log specified in the SQL Log property.
Target Only (Unbound mode) - Uses unbound mode, which does not use bind variables and sends the literal SQL statement to the database engine. SQL statements are sent to the target and not to the SQL log specified in the SQL Log property.
Target and SQL Log - Sends SQL statements to the target and to the SQL log specified in the SQL Log property.
SQL Log Only - Sends SQL statements only to the SQL log file specified in the SQL Log property.
SystemTables
T
If set to true, allows you to see all tables created by the DBA in the database. The system table names appear in the table list. Default is false.
TransactionIsolation
T
Allows you to specify isolation level when reading from or writing to a database table with ODBC. The default is serializable.
The ANSI SQL 2 standard defines three ways in which serializability of a transaction may be violated: P1 (Dirty Read), P2 (Nonrepeatable Read), and P3 (Phantoms).
The supported isolation levels are as follows:
read uncommitted - Permits P1, P2, and P3.
read committed - Permits P2 and P3. Does not permit P1.
repeatable read - Permits P3. Does not permit P1 and P2.
serializable - Does not permit any of P1, P2, and P3.
For further details about TransactionIsolation levels, see the Microsoft ODBC SDK documentation.
UpdateNullFields
T
Null values are sent to the database when inserting or updating records. The default is true. If you select false, null values are not sent to the database when inserting or updating records, and the connector is forced to operate in unbound mode, which may cause slower performance.
Views
T
When set to true (default), this property allows you to see views. The view names appear in the table list along with the tables. Set to false to disallow views.
Data Types
ID field types are not supported.
SQL Server BCP
A SQL Server BCP (Bulk Copy Program) file is a special bulk loader format. Using BCP boosts performance when writing large amounts of data to a SQL Server table. SQL Server BCP provides the best performance in simple transformations that do not contain such features as lookups, many expressions, and event actions. With this connector, you can read and write bulk copy program files to SQL Server tables.
You may notice that SQL Server BCP appears on both the source and the target connection lists. It appears on the source connection primarily for reading and verifying loader files created by the integration platform, when the user does not have the application in which to check the files.
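Loader files written by this connector are typically loaded with the bcp command-line utility that ships with SQL Server. As a purely hypothetical illustration, a data file and its companion format file might be loaded with a command such as bcp mydb.dbo.mytable in data.bcp -f format.fmt -S myserver -U myuser -P mypassword; consult the bcp documentation for the options your version supports.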
Note:  Target Properties must be adjusted if SQL Server BCP is the target connector. See property options below.
This connector sets field width in bytes. What actually varies is the number of characters that fit into a given field. For more details, see Determining Field Width in Characters or Bytes.
Connector-Specific Notes
The integration platform does not support milliseconds in DateValMask conversions. Although a mask is provided for them, the integration platform returns zeros in those places.
Property Options
You can set the following source (S) and target (T) properties.
Note:  The properties FormatSeparator and RecordSeparator must be different values for the BCP file to be recognized by a loader utility. The default values for these properties are identical and therefore at least one property must be changed when SQL Server BCP is the connector. Check the documentation on your utility to determine preferred separators and alter the properties accordingly.
 
Property
S/T
Description
FieldSeparator
T
By default, the integration platform assumes that SQL Server BCP files use tabs to separate fields. To specify a different field separator, click the FieldSeparator value and then click the arrow to select either comma (,), space, carriage return-line feed (CR-LF), line feed (LF), carriage return (CR), line feed-carriage return (LF-CR), or ctrl-R. To use a field separator that is not on the list, type it here.
If the field separator is not one of the choices from the list and is a printable character, highlight the default value and then type the correct character. For example, if the separator is a pipe (|), type a pipe from the keyboard.
If the field separator is not a printable character, replace CR-LF with a backslash, an X, and the hexadecimal value for the separator. For example, if the separator is a check mark, then enter \XFB. For a list of the 256 standard and extended ASCII characters, see Hex Values.
FormatSeparator
T
Sets the line separator in the format file. The default is carriage return-line feed (CR-LF). To use a different line separator, click the FormatSeparator value and then click the arrow to select either line feed (LF), carriage return (CR), or line feed-carriage return (LF-CR). To use a separator that is not on the list, type it here.
If the format separator is not one of the choices from the list and is a printable character, highlight the CR-LF and then type the correct character. For example, if the separator is a pipe (|), type a pipe from the keyboard.
If the format separator is not a printable character, replace CR-LF with a backslash, an X, and the hexadecimal value for the separator. For example, if the separator is a check mark, then enter \XFB. For a list of the 256 standard and extended ASCII characters, see Hex Values.
Caution!  The FormatSeparator cannot be the same as the RecordSeparator.
RecordSeparator
T
By default, the integration platform assumes that SQL Server BCP files use carriage return-line feeds (CR-LF) between records. To use a different character as a record separator, click the RecordSeparator value cell and then click the arrow to select line feed (LF), carriage return (CR), line feed-carriage return (LF-CR), form feed (FF), empty line, or none. To use a separator that is not on the list, type it here.
If the record separator is not on the list and is a printable character, highlight the CR-LF and then type the correct character. For example, if the separator is a pipe (|), type a pipe from the keyboard.
If the record separator is not a printable character, replace CR-LF with a backslash, an X, and the hexadecimal value for the separator. For example, if the separator is a check mark, then enter \XFB. For a list of the 256 standard and extended ASCII characters, see Hex Values.
Caution!  The FormatSeparator cannot be the same as the RecordSeparator.
StripLeadingBlanks
ST
By default, the integration platform does not strip leading blanks from SQL Server BCP data. To remove them, set StripLeadingBlanks to true.
StripTrailingBlanks
ST
By default, the integration platform does not strip trailing blanks from SQL Server BCP data. To remove them, set StripTrailingBlanks to true.
StyleSampleSize
S
Sets the number of records that are analyzed to set a width for each source field. The default is 5,000. You can change the value to any number between 1 and the total number of records in your source file. Larger values increase the time required to analyze the file, and it may be necessary to analyze every record to ensure that no data is truncated.
To change the value, click StyleSampleSize, highlight the default value, and type a new value.
Version
T
Allows you to select the same version as the SQL Server BCP source file. The default setting is 6.0. To select 4.2 or another version, click the Version value and type the version number you want.
Data Types
The only data type available is Text.
SWIFT
The Society for Worldwide Interbank Financial Telecommunication (SWIFT) operates a global financial messaging network that exchanges messages between banks and other financial institutions. Messages formatted to SWIFT standards can be read and processed by financial processing systems, whether or not the message traveled over the SWIFT network.
Schema File Required
You are provided a template file that contains the structure for your SWIFT file connections. If your SWIFT file structure is different, you may need to modify the template.
To make changes to a SWIFT template file
1. Import the file into Project Explorer.
2. Open the file in the Schema Editor.
3. Make your changes.
4. Save the file with a different name.
To obtain a template file, contact your sales representative. These template files are standard schemas.
Property Options
You can set the following source (S) and target (T) properties.
Property
S/T
Description
SchemaFile
ST
Used for parsing data and generating a new schema during design time. Select a schema file that contains the schema you want to use.
Validation
ST
False is the default setting. When Validation is true, the adapter validates:
1. Field requirement
2. Field default value, if this value is defined
3. The CheckSum
Validation works differently on source and target files. Validation on the source side occurs when you apply the source property options; the first error is recorded in the log file, and the transformation aborts before any other errors are recorded. On the target side, validation occurs at run time; all errors are recorded in the log file. If validation is successful, this is indicated in the log file.
Unicode
ST
This property is important if your SWIFT file is Unicode encoded. The default is false, so if your target file is Unicode encoded, change this setting to true.
SWIFT (eDoc)
The Society for Worldwide Interbank Financial Telecommunication (SWIFT) operates a global financial messaging network that exchanges messages between banks and other financial institutions. Messages formatted to SWIFT standards can be read and processed by financial processing systems, whether or not the message traveled over the SWIFT network.
Schema File Required
You are provided a template file that contains the structure for your SWIFT file connections. If your SWIFT file structure is different, you may need to modify the template.
To make changes to a SWIFT template file
1. Import the file into Project Explorer.
2. Open the file in the Schema Editor.
3. Make your changes.
4. Save the file with a different name.
To obtain a template file, contact your sales representative. These template files are standard schemas.
To connect to a SWIFT source or target file, you must set a schema in the SchemaFile property.
Note:  You are not able to connect directly to the schema (Step 3-Select Schema is read only). You must select a schema file from your current project that contains the schema you want to use.
Property Options
You can set the following source (S) and target (T) properties for SWIFT files.
Property
S/T
Use
Encoding
ST
To specify encoding for reading source and writing target data, select a code page translation table. The default value is OEM. To use ANSI, select ISO8859-1. For Windows CodePage 1252, select CP1252.
SchemaFile
S
Used for parsing data and generating a new schema during design time. Select a schema file that contains the schema you want to use.
Sybase Adaptive Server 12.x and 15.x
Sybase Adaptive Server is a relational database management system. The application runs both on Windows platforms and on most Unix systems. You must have the Sybase Adaptive Server 12.x or 15.x client installed to connect to Sybase Adaptive Server databases. With this connector, Map Editor can read and write to Sybase Adaptive Server tables.
Connector-Specific Notes
Setting up the Sybase Server
We recommend that you use the Sybase DSEdit utility to set up the Sybase server.
Note:  When you install the Sybase client on your integration platform system, check that the path environment variable has both the default installation directory and the Sybase directory. For a 64-bit Sybase 15 client installation, also verify that the path contains C:\Sybase_Client\OCS-15_0\lib.
Simulated Database Connections
If you try to make a live connection to a database to which you do not have direct access, the integration platform displays a message indicating that the connection is simulated. In simulated mode, it may not be possible to view all property options. To view all property options, install and configure the database client before connecting on the integration platform.
Sybase Settings
Perform the following Sybase settings:
In Windows, set the environment variable LC_ALL or LANG to a valid locale defined in the Sybase_Install_Directory\locales\locales.dat file. It is recommended to add Set.LC_ALL=default to the com.pervasive.di.service.wrapper.properties file.
In Linux:
1. Set the environment variable SYBASE=Sybase_Install_Directory. It is recommended to add Set.SYBASE=Sybase_Install_Directory to the com.pervasive.di.service.wrapper.properties file.
2. Add the Sybase libraries to the com.pervasive.di.service.wrapper.properties file:
wrapper.java.library.path.n=Sybase_Install_Directory/ASE-15_0/lib:Sybase_Install_Directory/OCS-15_0/lib:Sybase_Install_Directory/OCS-15_0/lib3p64:Sybase_Install_Directory/OCS-15_0/lib3p
Set n based on the wrapper.java.library.path numbering order.
Field Width
This connector sets field width in bytes. What actually varies is the number of characters that fit into a given field. For additional details, see Determining Field Width in Characters or Bytes.
Update/Delete Modes
Unless there is a unique index in the table that is being updated or deleted, the Update and Delete output modes do not function properly with this connector. You must have a unique index field to make a comparison with an existing record. If the index field matches an existing record, then the record can either be updated or deleted. If the index field is not unique, Map Editor cannot determine which record needs to be updated or deleted. You can have more than one index field, but there must be a unique index field for Update/Delete modes to work.
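For example, if the transformation creates the table, you can add the required unique index through the ConstraintDDL property (table and field names are hypothetical):
CREATE UNIQUE INDEX idx_account ON accounts (AccountID)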
Note:  Map Editor now supports SQL stored procedures in Sybase Adaptive Server.
Map Editor can read accurate data types on existing tables and views in Sybase Adaptive Server, but not from SQL statements. Consequently, if you are connecting with a SQL statement, the data types that show in Map Editor are either synonyms or generic versions of the data types used in the database schema. This applies especially to user-defined data types.
Property Options
You can set the following source (S) and target (T) properties.
Property
S/T
Version
Description
CodePage
ST
12.x, 15.x
Type of encoding to use for reading and writing data. The default is ANSI, which is the standard in the US.
CommitFrequency
T
12.x, 15.x
Frequency at which data is committed to the database. The default is zero, which means that data is committed at the end of the transformation, allowing rollback on error. This is the slowest setting.
For large transformations, this is not practical because it can fill up the transaction logs. Set CommitFrequency to a nonzero value to commit to the database after the specified number of records have been inserted or updated in the table. This prevents the transaction log from getting too full, but limits the ability to restart the transformation.
ConstraintDDL
T
12.x, 15.x
Pass-through mechanism for setting additional data definition language (DDL) statements to be executed when a new table is created. DDL is used to define keys, indexes, constraints, and integrity rules for SQL databases. There is no default. For more information, see ConstraintDDL.
IdentityInsert
T
12.x, 15.x
Enables insertion of user-defined identity values in identity fields. Default is false.
SystemTables
ST
12.x, 15.x
When set to True, enables viewing of all tables created by the DBA. Default is false.
Only applicable if you are logged onto the database as the database administrator. Only the database administrator has access to SystemTables.
Views
ST
12.x, 15.x
When set to true (the default), allows you to see views. The view names appear in the table list along with the tables.
WhereStmt
S
12.x, 15.x
Pass-through mechanism for SQL connectors so that you can construct the Where clause of a SQL query. This can be used as an alternative to writing a lengthy query statement in the Query Statement text box. For example, you might use WhereStmt to instruct the SQL database server to filter data based on a condition before it is available to the integration platform. Omit WHERE when you enter the clause. No default exists for this property.
Note: Do not apply the WhereStmt when the source connection is a Select statement. If you are using a Select statement, include the Where clause in that statement. The intent of this property is to enable data filtering when selecting a table.
PacketSize
The packet size is automatically set using the following algorithm:
The initial packet size is 16384 (32 * 512) bytes.
If the connection fails, the outcome depends upon the size, as follows:
If size > 1024, the packet size is reduced to one-quarter of its value until the connection succeeds.
If size <= 1024, the packet size is reduced to one-half of its value until the connection succeeds.
When the connection succeeds, the value of PacketSize is logged.
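For example, a connection that initially fails at 16384 bytes is retried at 4096, then 1024; if it still fails, the size is halved to 512, then 256, and so on, until the connection succeeds.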
Working with Large Packet Sizes
To determine how large to set the packet size, you can first set PacketSize to 0 (auto) and establish a connection. Then open the log file to get the actual size of the packet, and use that value in the PacketSize property setting.
The advantage of the auto default setting is that a connection can accept a large packet size. The disadvantage is that Sybase may log an error if the connection fails.
Sybase documentation describes PacketSize as follows:
"Increase the packet size for the connection: On most platforms, the packet size default is 512 bytes. This default is provided for backward compatibility. Larger packet sizes, 2048 to 4096 bytes, for example, almost always yield better performance. This is generally true for applications that fetch rows with large columns, rows with lots of columns, or lots of rows with a few small columns. It is especially true for applications that do bulk copy data transfers or that handle text and image data.
To increase a connection's packet size, set the CS_PACKETSIZE connection property, via ct_con_props, prior to opening the connection."
ConstraintDDL
Constraint DDL is similar to the support provided for SQL pass-through with the SQL import connectors. Each line must be a valid ODBC DDL statement.
For example, consider the following statements, which would create two indexes on the table mytable:
CREATE UNIQUE INDEX index1 ON mytable (Field1 ASC)
CREATE INDEX index2 ON mytable (Field2, Field3)
The first index does not allow duplicates and the index values are stored in ascending order. The second index is a compound index on Field2 and Field3.
ConstraintDDL is executed only if the replace mode is used for the target. If there are any errors, they are written to the error and event log file. During transformation, the transformation error dialog box shows the errors. If desired, you can ignore the DDL errors and continue the transformation.
ConstraintDDL also supports an escape mechanism that allows users to specify DDL in the native SQL of the database system. Any statement that is preceded by an "@" is sent directly to the database system.
Consider the following statement, which is an Access DDL statement for creating a primary key for the table mytable:
@CREATE INDEX pk_mytable ON mytable (Field1, Field2) WITH PRIMARY
The Access ODBC driver does not support the SQL extensions needed to create a primary key with the ODBC variant of the SQL CREATE statement. The only way you can create the primary key for an Access table is with the native SQL.
Note:  This option works only in Replace mode.
Sybase Adaptive Server 12.x and 15.x Multimode
Sybase Adaptive Server is a relational database management system. The application runs both on Windows platforms and on most Unix systems. You must have the Sybase Adaptive Server 12.x or 15.x client installed to connect to Sybase Adaptive Server databases. With this connector, Map Editor can read and write to Sybase Adaptive Server tables in multimode.
The main distinction between single-mode and multimode connectors is in the use of event actions. With single-mode connectors, you can access only one file or table at a time with event actions. With multimode connectors, you can access multiple tables in the same transformation, each time that transformation runs. This is done with Multimode event actions. For more information and property options, see Sybase Adaptive Server 12.x and 15.x.
SYSTAT
SYSTAT is a statistical package. With this connector, you can read and write SYSTAT for DOS version 5.03 data files.
This connector sets field width in bytes. What actually varies is the number of characters that fit into a given field. For more details, see Determining Field Width in Characters or Bytes.
Connector-Specific Notes
Field Names
Field names for Text fields are limited to 8 characters.
Field Length
The byte length of all Text fields is 12.
The byte length of all Float fields is 8.
Property Options
You can set the following source (S) and target (T) properties.
Property
S/T
Description
CodePage
ST
This translation table determines which encoding to use for reading and writing data. The default is ANSI, the standard in the US.
Data Types
Data is transformed to SYSTAT as Text or Float. Fields that contain nonnumeric data should be defined as Text. Numeric data should be defined as Float.
Length
These are field lengths in your data. All Text fields default to a length of 12 bytes, and the value cannot be changed. All Float fields default to a length of 8 bytes, which also cannot be changed.