Source and Target Connectors T-Z
This section provides information about source and target connectors from T to Z.
Tape Drive Sequential
The Tape Drive Sequential connector uses a binary connection that works with sequential data streams. The linear data streams can either be a sequential file or a named pipe. The integration platform cannot detect file length or end of file with this connection. It can only read one byte at a time in a forward direction.
This connector sets field width in bytes. What actually varies is the number of characters that fit into a given field. For more details, see Determining Field Width in Characters or Bytes.
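Because field width is counted in bytes, the number of characters a field holds depends on how many bytes each character needs. This hedged sketch in Python (the encoding and width values are illustrative assumptions, not connector defaults) shows the effect:

```python
# Sketch: why a field measured in bytes holds a varying number of
# characters depending on the encoding. Values here are illustrative.
def chars_that_fit(text: str, field_width_bytes: int, encoding: str = "utf-8") -> str:
    """Return the longest prefix of `text` whose encoded form fits the field."""
    out = []
    used = 0
    for ch in text:
        size = len(ch.encode(encoding))  # bytes this character occupies
        if used + size > field_width_bytes:
            break
        out.append(ch)
        used += size
    return "".join(out)

print(chars_that_fit("abcdef", 4))   # ASCII: four 1-byte characters fit
print(chars_that_fit("日本語です", 4))  # 3-byte characters: only one fits
```

A 4-byte field thus holds four ASCII characters but only one 3-byte multibyte character.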
Connectivity Pointers
To connect to a named pipe, enter the path and name of the pipe in this format:
\\.\pipe\pathandname
You cannot browse to a pipe, or list pipe names in any way. You must already know the name and path to the pipe. It must be local or on the local network. The integration platform treats pipes like files.
A source connection with a pipe continues to run until the pipe is closed by the application that created it.
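The forward-only, one-byte-at-a-time behavior described above can be sketched in Python. An in-memory stream stands in for the pipe so the example is self-contained; on Windows the path would take the \\.\pipe\name form shown above:

```python
import io

# Sketch of forward-only, byte-at-a-time reading, as this connector does.
# A BytesIO object stands in for a named pipe; the connector cannot seek
# or detect file length, so it reads until the writer closes the stream.
def read_forward(stream) -> bytes:
    data = bytearray()
    while True:
        b = stream.read(1)   # one byte at a time, forward direction only
        if not b:            # empty read: pipe closed / end of stream
            break
        data.extend(b)
    return bytes(data)

print(read_forward(io.BytesIO(b"record1\nrecord2\n")))
```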
Property Options
You can set the following source (S) and target (T) properties.
Property
S/T
Description
StartOffset
S
If your source data file starts with characters that need to be excluded from the transformation, set the StartOffset option to specify at which byte of the file to begin. The default value is zero. The correct value may be determined by using the Hex Browser. For a list of the 256 standard and extended ASCII characters, search for "hex values" in the documentation.
Note:  This property is set in number of bytes, not characters.
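The effect of StartOffset can be sketched as skipping a fixed number of bytes (not characters) before reading begins. The 4-byte prefix here is an illustrative assumption:

```python
import io

# Sketch of what StartOffset does: begin reading at byte N of the file.
# The "HDR!" prefix and offset of 4 are illustrative assumptions.
def read_with_offset(stream, start_offset: int) -> bytes:
    stream.seek(start_offset)  # skip the leading bytes to exclude
    return stream.read()

src = io.BytesIO(b"HDR!actual,data")
print(read_with_offset(src, 4))  # skips the 4-byte "HDR!" prefix
```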
RecordSeparator
ST
When a binary sequential file is your connector and you are using a 01 copybook to define the fields, you may have a record separator at the end of each record. If so, you can specify the record separator here, and the integration platform automatically ignores it when it reads the source data. The available separators are None (default), carriage return-line feed, line feed, carriage return, line feed-carriage return, form feed, and empty line.
CodePage
ST
This translation table determines which encoding to use for reading and writing data. The default is ANSI, the standard in the US.
Data Types
The following data types are available:
16-bit binary
16-bit logical
24-bit binary
32-bit binary
32-bit IEEE floating-point
32-bit TEC binary
32-bit VAX floating-point
64-bit binary
64-bit IEEE floating-point
64-bit VAX floating-point
8-bit binary
80-bit Intel floating-point
AccPac 41-bit binary
Binary
Boolean
Btrieve date
Btrieve time
Column binary alpha-numeric
Column binary multi-punch
Column binary numeric
Comp
Comp-1
Comp-2
Comp-3
Comp-5
Comp-X
Complex
Cray floating-point
Date
DateTime
dBASE Numeric
Display
Display Boolean
Display date
Display Date/Time
Display justified
Display sign leading
Display sign leading separate
Display sign trailing
Display sign trailing separate
Display time
Interval
Interval day
Interval day to hour
Interval day to minute
Interval day to second
Interval hour
Interval hour to minute
Interval hour to second
Interval minute
Interval minute to second
Interval second
Interval year
Interval year to month
Magic PC date
Magic PC extended
Magic PC number
Magic PC real
Magic PC time
Microsoft BASIC double
Microsoft BASIC float
Name
Null-terminated C string
Packed decimal
Pascal 48-bit real
Pascal string (1 byte)
Pascal string (2 bytes)
Sales Ally date
Sales Ally time-1
Text
Time
Time (minutes past midnight)
Variable length IBM float
Zoned decimal
Teradata (Fastload)
The integration platform reads and writes Teradata Fast Load files. This connector produces unique Teradata Bulk Load format files. Teradata is an NCR data warehouse management product.
Teradata FastLoad Utility
The Teradata FastLoad Utility moves data into empty tables in the Teradata RDBMS. This batch mode product does a high-speed initial load of tables into the database. The source for FastLoad can be from a flat file or an access module provided by NCR or an end user.
See the NCR website for more information on Teradata.
This connector sets field width in bytes. What actually varies is the number of characters that fit into a given field. For more details, see Determining Field Width in Characters or Bytes.
Connector-Specific Notes
Teradata decimal data types do not carry precision and decimal places; the integration platform displays only the size (4 bytes). The decimal data type maps to the text data type because the Teradata decimal type has a variable length (1, 2, 4, 6, or 8 bytes) and cannot be mapped to a fixed-length (4-byte) decimal type. Therefore, Teradata (Fastload) creates a fixed decimal with two decimal places in both Source and Target.
Property Options
You can set the following source (S) and target (T) properties.
Property
S/T
Description
RecordSeparator
ST
The record separator to use. The choices are LF (000A), the default, and CR (000D).
JobScript
T
Optional. Enter a text file name to create a flat file that can be uploaded to a Teradata data warehouse database. The integration platform automatically creates a text file that contains a script that the Teradata FastLoad Utility can understand, which is faster than typing in the script by hand. You must modify the Database ID, User name, and Password to execute the script.
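The generated script resembles a standard FastLoad job. The following fragment is a hand-written sketch, not the exact output of the JobScript property; the database, table, field, file, and credential names are all placeholders you must replace:

```
LOGON mytdpid/myuser,mypassword;
BEGIN LOADING mydb.Table1
    ERRORFILES mydb.Table1_err1, mydb.Table1_err2;
DEFINE Field1 (VARCHAR(20)),
       Field2 (VARCHAR(10))
    FILE = mydata.txt;
INSERT INTO mydb.Table1 VALUES (:Field1, :Field2);
END LOADING;
LOGOFF;
```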
FastLoadTable
T
Enter a table name here. If you do not, a default name is provided, called Table1.
Data Types
If you want to set different record delimiters for your target records, see RecordSeparator under Property Options. For Teradata (Fastload) data, the following types are available:
Byte
ByteInt
Char
Date
Decimal
Float
Integer
Long VarChar
Numeric
Real
SmallInt
Time
Timestamp
Varbyte
VarChar
Length
These are field lengths in your data. If you need to change field lengths, reset them in the schema.
Note:  A field length shorter than the default may truncate data.
Text (Delimited - EDI)
EDI, or Electronic Data Interchange, uses standard formats to pass data between disparate business systems. Third parties provide EDI services that enable organizations with different equipment to connect. Although interactive access may be a part of it, EDI implies direct computer-to-computer transactions into vendors' databases and ordering systems. The EDI standard is ANSI X12, developed by the Data Interchange Standards Association (DISA).
This connector specifies field width in characters, which means the width of a field is literally that number of characters. For more details, see Determining Field Width in Characters or Bytes.
Connector-Specific Notes
Target connectors
If any Null characters (hex 00) exist in a Fixed ASCII data file, the integration platform assumes this is the end of the file and stops reading the data at the first occurrence of a Null character. If you experience this, change the source connector to "Binary".
Property Options
You can set the following source (S) and target (T) properties.
Property
S/T
Description
ByteOrder
S
Allows you to specify the byte order of Unicode (wide) characters. The default is Auto and is determined by the architecture of your computer. The list box options are Auto (default), Little Endian, and Big Endian. Little Endian byte order is generally used by Intel machines and DEC Alphas and places the least significant portion of a byte value in the left portion of the memory used to store the value. Big Endian byte order is used by IBM 370 computers, Motorola microprocessors and most RISC-based systems and stores the values in the same order as the binary representation.
ElementSeparator
S
You can choose any of the following element separators from the list:
CR-LF
SOH (0001)
STX (0002)
ETX (0003)
EOT (0004)
ENQ (0005)
ACK (0006)
BEL (0007)
BS (0008)
HT (0009)
LF (000A)
VT (000B)
FF (000C)
CR (000D)
SO (000E)
SI (000F)
DLE (0010)
DC1 (0011)
DC2 (0012)
DC3 (0013)
DC4 (0014)
NAK (0015)
SYN (0016)
ETB (0017)
CAN (0018)
EM (0019)
SUB (001A)
ESC (001B)
FS/IS4 (001C)
GS/IS3 (001D)
RS/IS2 (001E)
US/IS1 (001F)
SP (0020)
! (0021)
" (0022)
# (0023)
$ (0024)
% (0025)
& (0026)
' (0027)
( (0028)
) (0029)
* (002A)
+ (002B) - Default
, (002C)
- (002D)
. (002E)
/ (002F)
: (003A)
; (003B)
< (003C)
= (003D)
> (003E)
? (003F)
@ (0040)
[ (005B)
\ (005C)
] (005D)
^ (005E)
_ (005F)
` (0060)
{ (007B)
| (007C)
} (007D)
~ (007E)
DEL (007F)
SegmentTerminator
S
In this property, you may select a Segment Terminator from the list. (See the ElementSeparator list, above, for your options).
SourceScriptFile
S
The script file used for the source structure. To change the script file, enter the new file and click OK.
Encoding
S
This allows you to select the type of encoding used with your source and target files.
Encoding Notes
Shift-JIS encoding is meaningful only in Japanese operating systems.
UCS-2 is no longer considered a valid encoding name, but you may use UCS2. Open the data file with a text editor and change UCS-2 to UCS2.
DatatypeSet
T
Allows you to choose between standard and COBOL data types in your fixed ASCII data file. Standard is the default and means that all the data in the file is readable (lower) ASCII data.
If your fixed ASCII file contains (or needs, for a target file) COBOL display type fields and you are using a COBOL 01 copybook (fd) to define the fields, you MUST change this property option to "COBOL" before connecting to the COBOL copybook in the External Structured Schema window.
FieldSeparator
T
Allows you to choose a field separator character for your target file. The default is None. The other choices are comma (,), tab, space, carriage return-line feed (CR-LF), line feed (LF), carriage return (CR), line feed-carriage return (LF-CR), control-R, and pipe ( | ).
If the field separator you need is not one of the choices from the list and is a printable character, highlight None and then type the correct character. For example, if the separator is an asterisk ( * ), type an asterisk from the keyboard.
If the field separator is not a printable character, replace None with a backslash, an X, and the hexadecimal value for the separator. For example, if the separator is a check mark, enter \XFB. For a list of the 256 standard and extended ASCII characters, search for "hex values" in the documentation.
FillFields
T
Writes an ASCII data file where every field is variable length. If this property is set to false, all trailing spaces are removed from each field when the data is written. The default is true. The true setting pads all fields with spaces to the end of the field length to maintain the fixed length of the records.
RaggedRight
T
Writes an ASCII data file where the last field in each record is variable length when set to true. The default is false. The false setting pads the last field with spaces to the end of the record length to maintain the fixed length of the records.
RecordSeparator
T
A fixed ASCII file is presumed to have a carriage return-line feed (CR-LF) between records. To use other characters as the record separator, or no record separator at all, click once in the RecordSeparator cell, then click the down arrow to the right of the box and select the desired record separator from the list. The choices are carriage return-line feed (default), line feed, carriage return, line feed-carriage return, form feed, empty line, and no record separator. To use a separator other than one from the list, you can type it here.
If the record separator is not one of the choices from the list and is a printable character, highlight the CR-LF and then type the correct character. For example, if the separator is a pipe ( | ), type a pipe from the keyboard.
If the record separator is not a printable character, replace CR-LF with a backslash, an X, and the hexadecimal value for the separator. For example, if the separator is a check mark, then enter \XFB. For a list of the 256 standard and extended ASCII characters, search for "hex values" in the documentation.
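The \X escape described above is simply a backslash, an X, and the two-digit hexadecimal value of the separator byte. A small Python sketch (the helper name is ours, and the check-mark value hex FB comes from the example above) shows how to build it:

```python
# Sketch: building the \X.. escape this property expects for a
# non-printable separator byte. to_hex_escape is a hypothetical helper;
# the check-mark example (hex FB) is from the text above.
def to_hex_escape(byte_value: int) -> str:
    return "\\X" + format(byte_value, "02X")

print(to_hex_escape(0xFB))  # \XFB, the check-mark example
print(to_hex_escape(0x1E))  # \X1E, the RS control character
```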
TabSize
T
If your source fixed ASCII file has embedded tab characters representing white space, you can expand those tabs to set a number of spaces. The default value is zero. To change it, highlight the zero and type a new value.
CodePage
T
This translation table determines which encoding to use for reading and writing data. The default is ANSI, the standard in the US.
Data Types
All data in Text (Delimited – EDI) is Text, but you may use other data types. The following data types are available:
Boolean
Name
Number
Text
Alignment
Data in each field of your target file or table has an alignment property of general, left, center, or right.
Pad Character
This option is valid only for character fields. When data does not fill a field completely, the remainder of the field may be filled with some character.
Text (Delimited - EDIFACT)
EDIFACT stands for Electronic Data Interchange for Administration, Commerce and Transport. It uses standard formats to pass data between disparate business systems. Third parties provide EDIFACT services that enable organizations with different equipment to connect. Although interactive access may be a part of it, EDIFACT implies direct computer-to-computer transactions into vendors' databases and ordering systems.
This connector specifies field width in characters, which means the width of a field is literally that number of characters. For more details, see Determining Field Width in Characters or Bytes.
Connector-Specific Notes
Target connectors
If any Null characters (hex 00) exist in a Fixed ASCII data file, the integration platform assumes this is the end of the file and stops reading the data at the first occurrence of a Null character. If you experience this, change the source connector to "Binary".
Property Options
You can set the following source (S) and target (T) properties.
Property
S/T
Description
ByteOrder
S
Allows you to specify the byte order of Unicode (wide) characters. The default is Auto and is determined by the architecture of your computer. The list box options are Auto (default), Little Endian, and Big Endian. Little Endian byte order is generally used by Intel machines and DEC Alphas and places the least significant portion of a byte value in the left portion of the memory used to store the value. Big Endian byte order is used by IBM 370 computers, Motorola microprocessors and most RISC-based systems and stores the values in the same order as the binary representation.
ElementSeparator
S
The following element separators are available:
CR-LF
SOH (0001)
STX (0002)
ETX (0003)
EOT (0004)
ENQ (0005)
ACK (0006)
BEL (0007)
BS (0008)
HT (0009)
LF (000A)
VT (000B)
FF (000C)
CR (000D)
SO (000E)
SI (000F)
DLE (0010)
DC1 (0011)
DC2 (0012)
DC3 (0013)
DC4 (0014)
NAK (0015)
SYN (0016)
ETB (0017)
CAN (0018)
EM (0019)
SUB (001A)
ESC (001B)
FS/IS4 (001C)
GS/IS3 (001D)
RS/IS2 (001E)
US/IS1 (001F)
SP (0020)
! (0021)
" (0022)
# (0023)
$ (0024)
% (0025)
& (0026)
' (0027)
( (0028)
) (0029)
* (002A)
+ (002B) - Default
, (002C)
- (002D)
. (002E)
/ (002F)
: (003A)
; (003B)
< (003C)
= (003D)
> (003E)
? (003F)
@ (0040)
[ (005B)
\ (005C)
] (005D)
^ (005E)
_ (005F)
` (0060)
{ (007B)
| (007C)
} (007D)
~ (007E)
DEL (007F)
SegmentTerminator
S
In this property, you may select a Segment Terminator from the list. (See the ElementSeparator list, above, for your options).
SourceScriptFile
S
The script file used for the source structure. To change the script file, enter the new file name and click OK.
Encoding
S
This allows you to select the type of encoding used with your source and target files.
Encoding Notes
Shift-JIS encoding is meaningful only in Japanese operating systems.
UCS-2 is no longer considered a valid encoding name, but you may use UCS2. Open the data file with a text editor and change UCS-2 to UCS2.
DatatypeSet
T
Allows you to choose between standard and COBOL data types in your fixed ASCII data file. Standard is the default and means that all the data in the file is readable (lower) ASCII data.
If your fixed ASCII file contains (or needs, for a target file) COBOL display type fields and you are using a COBOL 01 copybook (fd) to define the fields, you MUST change this property option to "COBOL" before connecting to the COBOL copybook in the External Structured Schema window.
FieldSeparator
T
Allows you to choose a field separator character for your target file. The default is None. The other choices are comma (,), tab, space, carriage return-line feed (CR-LF), line feed (LF), carriage return (CR), line feed-carriage return (LF-CR), control-R, and pipe ( | ).
If the field separator you need is not one of the choices from the list and is a printable character, highlight None and then type the correct character. For example, if the separator is an asterisk ( * ), type an asterisk from the keyboard.
If the field separator is not a printable character, replace None with a backslash, an X, and the hexadecimal value for the separator. For example, if the separator is a check mark, enter \XFB. For a list of the 256 standard and extended ASCII characters, search for "hex values" in the documentation.
FillFields
T
Writes an ASCII data file where every field is variable length. If this property is set to false, all trailing spaces are removed from each field when the data is written. The default is true. The true setting pads all fields with spaces to the end of the field length to maintain the fixed length of the records.
RaggedRight
T
Writes an ASCII data file where the last field in each record is variable length when set to true. The default is false. The false setting pads the last field with spaces to the end of the record length to maintain the fixed length of the records.
RecordSeparator
T
A fixed ASCII file is presumed to have a carriage return-line feed (CR-LF) between records. To use other characters as the record separator, or no record separator at all, click once in the RecordSeparator cell, then click the down arrow to the right of the box and select the desired record separator from the list. The choices are carriage return-line feed (default), line feed, carriage return, line feed-carriage return, form feed, empty line, and no record separator. To use a separator other than one from the list, you can type it here.
If the record separator is not one of the choices from the list and is a printable character, highlight the CR-LF and then type the correct character. For example, if the separator is a pipe ( | ), type a pipe from the keyboard.
If the record separator is not a printable character, replace CR-LF with a backslash, an X, and the hexadecimal value for the separator. For example, if the separator is a check mark, then enter \XFB. For a list of the 256 standard and extended ASCII characters, search for "hex values" in the documentation.
TabSize
T
If your source fixed ASCII file has embedded tab characters representing white space, you can expand those tabs to set a number of spaces. The default value is zero. To change it, highlight the zero and then type a new value.
CodePage
T
This translation table determines which encoding to use for reading and writing data. The default is ANSI, the standard in the US.
Data Types
All data in Text (Delimited – EDIFACT) is Text, but you may wish to use other data types. The following data types are available:
Boolean
Name
Number
Text
Length
These are field lengths in your data. If you need to change field lengths, reset them in the schema.
Caution!  A field length shorter than the default may truncate data.
Alignment
Data in each field of your target file or table has an alignment property of general, left, center, or right.
Pad Character
This option is valid only for character fields. When data does not fill a field completely, the remainder of the field may be filled with some character.
Unicode (Delimited)
Unicode is a character set that uses 16 bits (two bytes) for each character and is able to include more characters than ASCII. Unicode can have 65,536 characters and therefore can be used to encode almost all the languages of the world. Unicode includes the ASCII character set within it. With this delimited text connector, you can read and write Unicode files.
This connector sets field width in bytes. What actually varies is the number of characters that fit into a given field. For more details, see Determining Field Width in Characters or Bytes.
Connector-Specific Notes
Source files containing null characters (0x00) embedded in a text string are not supported. All information following the null characters is stripped from the file.
Using an External Schema to Override Source Structure
When Unicode (Delimited) is the source connector, the data structure is normally set by field delimiters and the header record of the source file. However, after connecting to a file, you can override this structure by applying an external schema, for example to change field names, change their size, or even add additional fields for multiple record layouts.
Using an External Schema to Override Target Structure
When Unicode (Delimited) is the target connector, the data structure is normally set by field delimiters and the header record of the target file. However, after connecting to a file, you can override this structure by applying an external schema, for example to change field names, change their size, or even add additional fields for multiple record layouts.
Delimiter Characters Occurring as Data within a Field
Characters that delimit the start and end of a field may also appear as data within the field. To ensure that a data character is not interpreted as a delimiter, the integration platform creates an escape sequence by doubling the character when it is assumed to be data.
Quotation marks are a common example of this escape sequence. As shown below, the quotation marks enclose quoted words in an Excel source field. In the mapping to a delimited Unicode target with a quotation mark selected as field delimiter, the quotation marks are doubled for the data but not for the delimiters enclosing the field.
Excel source
The customer said, "A penny saved is a penny earned."
Delimited Unicode target, with field delimiters
"The customer said, ""A penny saved is a penny earned."""
HeaderRecord Property in the Source
If the HeaderRecord property is set to true, a single header record is skipped at the beginning of the file. If later header records exist for additional record types, they appear as data and can cause errors.
All records are read using the same properties.
Unless truncation handling is turned off (set to Ignore), each record is read twice. Reading a single source record requires first reading the discriminator record; if the discriminator indicates a different record type, the record must be reread using that record type. Because truncation handling must be momentarily turned off while the discriminator record is read, the record must be reread even when the discriminator indicates its own record type, unless truncation handling is set to Ignore.
Simply connecting to a source file produces only one record type. If the file has multiple record types, the user must create the record type structure in a schema.
HeaderRecord Property in the Target
If the HeaderRecord property is set to true, a header is written only for the first record type in a multiple record layout.
All records are written using the same properties.
Supported Encoding
CP935 IBM EBCDIC Simplified Chinese, Combined (836 + 837) is supported.
Property Options
You can set the following source (S) and target (T) properties.
Property
S/T
Description
AlternateFieldSeparator
S
Most data files have only one field separator between all the fields; however, it is possible to have more than one field separator. If your source file has one field separator between some fields and a different separator between other fields, you can specify the second field separator here. Otherwise, you should leave this set to None (the default).
The alternate field separators available from the list are none (default), comma, tab, space, carriage return-line feed, line feed, carriage return, line feed-carriage return, ctrl-R, and pipe (|). To select a separator, click AlternateFieldSeparator. Then click the arrow to the right of the box to choose from the list of available separators. If you have an alternate field separator other than one from the list, you can type it here.
If the field separator is not a printable character, replace the current value with a backslash, an X, and the hexadecimal value for the separator.
The Unicode connectors read the data from the file as Unicode and look for the Unicode characters specified as the separators to break up the data into fields or records. Then, the actual Unicode data is assigned to fields or records.
AutomaticStyling
S
AutomaticStyling changes the way Unicode data is read or written. By default, AutomaticStyling is set to false, causing all data to be read or written as Text. When set to true, it determines and formats (automatically) particular data types, such as numeric and date fields.
AutomaticStyling ensures, for example, that a date field in a Unicode source file is formatted as a date field in the target file, and not as character or text data.
Note:  If a source file contains zip codes, you may want to leave AutomaticStyling to false so that the leading zeros in some zip codes in the eastern United States are not deleted.
Note:  For a Unicode target file, if you set FieldDelimitStyle to Text, you must also set AutomaticStyling to true so that delimiters are placed around only the nonnumeric fields.
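The zip code caveat can be seen in a short Python sketch: once a value is treated as numeric rather than text, its leading zero is gone.

```python
# Sketch of why AutomaticStyling can be risky for zip codes: a value
# typed as numeric loses its leading zero. "02134" is an illustrative
# eastern-US zip code.
zip_as_text = "02134"          # read as Text (AutomaticStyling = false)
zip_as_number = int("02134")   # read as a number (AutomaticStyling = true)

print(zip_as_text)    # 02134 - leading zero preserved
print(zip_as_number)  # 2134  - leading zero lost
```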
ByteOrder
ST
Allows you to specify the byte order of Unicode (wide) characters. The default is Auto and is determined by the architecture of your computer. The list box options are Auto (default), Little Endian and Big Endian. Little Endian byte order is generally used by Intel machines and DEC Alphas and places the least significant portion of a byte value in the left portion of the memory used to store the value. Big Endian byte order is used by IBM 370 computers, Motorola microprocessors and most RISC-based systems and stores the values in the same order as the binary representation.
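What the ByteOrder setting controls can be shown with Python's built-in UTF-16 codecs: the same character is stored least-significant byte first (Little Endian) or most-significant byte first (Big Endian).

```python
# Sketch of Little Endian vs Big Endian storage of one Unicode
# character, using Python's standard UTF-16 codecs.
ch = "A"  # code point U+0041

print(ch.encode("utf-16-le").hex())  # 4100 - least significant byte first
print(ch.encode("utf-16-be").hex())  # 0041 - same order as the binary value
```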
EmptyFieldsNull
S
Allows you to treat all empty fields as null.
Encoding
ST
Type of encoding to use with source and target files.
Field1IsRecTypeId
S
If the first field of each record in your source file contains the Record Type ID, you can select true for this property and the integration platform treats each record as a separate record type. Within each record, field names derived from the Record Type ID are automatically generated for each field. For details, see Field1IsRecordType.
FieldDelimitStyle
T
When Unicode (Delimited) is your connector, this option determines whether the specified FieldStartDelimiter and the FieldEndDelimiter is used for all fields, only for fields containing a separator, or only for text fields, as follows:
All – Places the delimiters specified in FieldStartDelimiter and FieldEndDelimiter before and after every field. Default setting is All. For example: "Smith","12345","Houston".
Partial – Places the specified delimiters before and after fields only where necessary. A field that contains a character that is the same as the field separator would have the field delimiters placed around it. A common example is a memo field that contains quotes within the data: "Customer responded with "No thank you" to my offer"
Text – Places delimiters before and after text and name fields (non-numeric fields). Numeric and date fields have no FieldStartDelimiter or FieldEndDelimiter. For example: "Smith", 12345,"Houston", 11/13/04
Non-numeric – Places delimiters before and after all nonnumeric types, such as date fields. An important difference between non-numeric and text is that non-numeric delimits date fields, while text does not.
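The styles above can be sketched with a small Python function. This is a simplified illustration of the All, Partial, and Text behaviors (using the example record from the descriptions), not the connector's actual implementation:

```python
# Simplified sketch of three FieldDelimitStyle settings. The record
# values mirror the examples above; `render` is a hypothetical helper.
def render(fields, style, delim='"', sep=","):
    def wrap(v):
        if style == "All":        # delimit every field
            return delim + v + delim
        if style == "Partial":    # delimit only where needed
            return delim + v + delim if sep in v or delim in v else v
        if style == "Text":       # delimit non-numeric fields only
            return v if v.replace(".", "").isdigit() else delim + v + delim
        raise ValueError(style)
    return sep.join(wrap(f) for f in fields)

rec = ["Smith", "12345", "Houston"]
print(render(rec, "All"))    # "Smith","12345","Houston"
print(render(rec, "Text"))   # "Smith",12345,"Houston"
```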
FieldEndDelimiter
ST
Delimited Unicode files are presumed to have beginning-of-field and end-of-field delimiters. The default delimiter is a quotation mark because it is the most common. However, some files do not contain field delimiters, so this option is available for both source files and target files. To read from or write to a file with no delimiters, set FieldEndDelimiter to none.
FieldSeparator
ST
A delimited Unicode file is presumed to have a comma between each field. To specify some other field separator, click once in the FieldSeparator Current Value box. Then click the down arrow to the right of the box to display the list of options. The list box options are comma (default), tab, space, carriage return-line feed, linesep, line feed, carriage return, line feed-carriage return, a pipe (|), and no field separator. If you have or need an alternate field separator other than one from the list, you can type it here.
If the field separator is not a printable character, replace the current value with a backslash, an X, and the hexadecimal value for the separator.
The Unicode connectors read the data from the file as Unicode and look for the Unicode characters specified as the separators to break the data up into fields or records. Then the actual Unicode data is assigned to fields or records.
FieldStartDelimiter
ST
Delimited Unicode files are presumed to have beginning-of-field and end-of-field delimiters. The default delimiter is a quotation mark because it is the most common. However, some files do not contain field delimiters, so this option is available for both your source files and your target files. To read from or write to a file with no delimiters, set FieldStartDelimiter to none.
Header
ST
In some files, the first record is a header record. For source data, you can remove it from the input data and cause the header titles to be used automatically as field names. For target data, you can cause the field names in your source data to automatically create a header record in your target file. To identify a header record, set Header to true. The default is false.
Note:  If your target connector is Unicode (Delimited) and you are appending data to an existing file, leave Header set to false.
MaxDataLen
T
When Unicode (Delimited) is your target connector, this option allows you to specify the maximum number of characters to write to a field. If this value is set to 0 (the default), the number of characters written to a field is determined by the field length. If you set this value to a number other than zero, data may be truncated.
NullIndicator
ST
This property allows you to enter a special string used to represent null values. You can select predefined values or type any other string.
Target – When writing a null value, the contents of the null indicator string are written.
Source – A check is made to see if the null indicator is set. If it is set, the data is compared to the null indicator. If the data and the null indicator match, the field is set to null.
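The source and target behaviors can be sketched as a pair of small functions. The "NULL" indicator string here is an illustrative assumption; any string you configure would work the same way:

```python
# Sketch of NullIndicator handling. "NULL" is an assumed indicator
# string; read_field and write_field are hypothetical helpers.
NULL_INDICATOR = "NULL"

def read_field(raw: str):
    """Source side: a field that matches the indicator becomes a real null."""
    return None if raw == NULL_INDICATOR else raw

def write_field(value) -> str:
    """Target side: a null value is written as the indicator string."""
    return NULL_INDICATOR if value is None else value

print(read_field("NULL"))   # None
print(write_field(None))    # NULL
```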
NumericFormatNormalization
S
Setting this property to true handles thousands-separators according to usage for locale when numeric strings are converted to numeric type. This property overrides any individual field settings. Supported in 9.2.2 and later. Default is false.
OrderMark
T
The byte order mark is a special character value sometimes written at the beginning of a Unicode text file to indicate the byte order used for encoding each of the Unicode characters. In the integration platform, you have the option of writing a byte order mark at the beginning of Unicode (wide) output. The default is false. If you want the byte order mark placed at the beginning of your output, change this option to true.
RecordFieldCount
S
If your source data file has field separators but no record separator, or if it has the same separator for both the fields and the records, you should specify the RecordSeparator (most likely a blank line), leave the AlternateFieldSeparator option blank and enter the exact number of fields per record in this box. The default value is zero.
RecordSeparator
ST
A delimited Unicode file is presumed to have a carriage return-line feed (CR-LF) between records. To use other characters for a record separator, click RecordSeparator for a list of choices, including system default, carriage return-line feed (default), line feed, carriage return, line feed-carriage return, form feed, empty line, ctrl-E, and no record separator. To use a separator other than one from the list, enter it here. The SystemDefault setting enables the same transformation to run with CR-LF on Windows systems and LF on Unix systems without having to change this property.
If the record separator is not a printable character, replace CR-LF with a backslash, an X, and the hexadecimal value for the separator.
The Unicode connectors read the data from the file as Unicode and look for the Unicode characters specified as the separators to break the data up into fields or records. Then the actual Unicode data is assigned to fields or records.
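As an illustration of the backslash-X-hex notation for a non-printable separator (the separator value below is hypothetical):

```python
# "\x1e" is the ASCII Record Separator character written in
# backslash-X-hexadecimal form.
sep_spec = r"\x1e"
sep_char = chr(int(sep_spec[2:], 16))  # parse the hex digits -> "\x1e"

stream = "rec1\x1erec2\x1erec3"
records = stream.split(sep_char)
```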
StartOffset
S
If your source data file starts with characters that need to be excluded from the transformation, set the StartOffset option to specify at which byte of the file to begin. The default value is zero. The correct value may be determined by using the Hex Browser.
Note:  This property is set in number of bytes, not characters.
StripLeadingBlanks
ST
For a Unicode source file, by default the integration platform leaves leading blanks in delimited Unicode data. If you want to delete the leading blanks, set StripLeadingBlanks to true.
For a Unicode target file, by default, the integration platform strips leading blanks in delimited Unicode data. If you want to leave the leading blanks, set StripLeadingBlanks to false.
StripTrailingBlanks
ST
For a Unicode source file, by default the integration platform keeps trailing blanks in the data. If you want to delete the trailing blanks, set StripTrailingBlanks to true.
For a Unicode target file, by default the integration platform strips trailing blanks in the data. If you want to leave the trailing blanks, set StripTrailingBlanks to false.
The field options that you may change are listed below.
StyleSampleSize
S
Sets the number of records (starting with record 1) that are analyzed to set a default width for each source field. The default value for this option is 5000. You can change the value to any number between 1 and the total number of records in your source file. As the number gets larger, more time is required to analyze the file, and it may be necessary to analyze every record to ensure that no data is truncated.
To change the value, click the StyleSampleSize Current Value box, highlight the default value and type a new value.
TransliterationIn
T
Allows you to specify a character, or a set of characters, to be filtered out of the source data. For any character in TransliterationIn, the corresponding character from the TransliterationOut property is substituted. If there is no corresponding character, the source character is filtered out completely. TransliterationIn supports C-style escape sequences such as \n (new line), \r (carriage return), and \t (tab).
TransliterationOut
T
Allows you to specify a character to be substituted for another character from the source data. For any character in TransliterationIn, the corresponding character from the TransliterationOut property is substituted. To filter the source character out completely, or if there are no characters to be transliterated, leave this field blank. The TransliterationOut property supports C-style escape sequences such as \n (new line), \r (carriage return), and \t (tab).
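The TransliterationIn/TransliterationOut pairing can be sketched with Python's str.translate; the character sets below are hypothetical, and this illustrates the pairing rule rather than the platform's code:

```python
translit_in = "\t;"   # characters to look for in the source
translit_out = " "    # "\t" maps to " "; ";" has no counterpart

table = {}
for i, ch in enumerate(translit_in):
    # Characters past the end of the "out" string are deleted entirely.
    table[ord(ch)] = translit_out[i] if i < len(translit_out) else None

result = "a\tb;c".translate(table)
```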
Field1IsRecordType
If Field1IsRecordType is set to true and your first record consists of the following:
"Names", "Arnold", "Benton", "Cassidy", "Denton", "Exley", "Fenton"
Then the integration platform assigns these field names to the values in the record:
Names_01 (Names)
Names_02 (Arnold)
Names_03 (Benton)
Names_04 (Cassidy)
Names_05 (Denton)
Names_06 (Exley)
Names_07 (Fenton)
See the Field1IsRecordType entry in the table of property options for delimited Unicode connectors.
Additional Information about Encoding
You should be aware of the following regarding the Encoding property option:
Shift-JIS encoding is meaningful only in Japanese operating systems.
UCS-2 is no longer a valid encoding name, but you may use UCS2. Open the data file with a text editor and change UCS-2 to UCS2.
To display Chinese-Japanese-Korean-Vietnamese (CJKV) data in Data Browser
1. Verify that your operating system has at least one font available that corresponds to the specific character set and code page you want to use.
2. Select the Unicode connector and your encoding method in the source or target properties.
3. Go to the main menu and select View > Preferences > Fonts.
4. Choose a font that corresponds to your character set and encoding method.
Data Types
By default, all Unicode data is read as Text. Fields containing dates or numbers may be changed to a different data type.
Note:  Because of the variety of date formats in data files, we suggest you change the type of any field that contains a date to Date in the schema.
Length
These are field lengths in your data. If you need to change field lengths, reset them in the schema.
The maximum supported source field length is 2 GB.
For example, there is a field that contains numbers. The numbers are a dollar value with two decimal places, such as 7122.50. The field type default is Text and (because of values in other records) the field Size default is 10. You are transforming the data to a database application in which you want the data in this field to be numeric. If you change the source field type to Float, the field length becomes blank, the precision default is 15, and the decimal changes to 2. This field automatically appears as an appropriate numeric data type in your target schema and is a numeric field in your target data file.
If you want to set different field delimiters for fields that contain numeric data, see FieldDelimitStyle in the property options. The following data types are available:
Boolean
Date
Date/Time
Decimal
Float
Integer
Name (parses and displays a proper name into its parts, such as honorific, title, last name, middle initial, first name)
Text
Time
Unicode (Fixed)
Unicode is a character set that uses 16 bits (two bytes) for each character and therefore is able to include more characters than ASCII. Unicode can have 65,536 characters and therefore can be used to encode almost all the languages of the world. Unicode includes the ASCII character set within it. With this fixed text connector, you can read and write Unicode files.
Unicode (Fixed) data can be described as any Unicode file in which no characters separate the fields, records may or may not be separated, and each record in the file occupies the same number of bytes.
This connector specifies field width in characters, which means the width of a field is literally that number of characters. For more details, see Determining Field Width in Characters or Bytes.
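A minimal sketch of reading such a layout, assuming UTF-16LE data (two bytes per character) and a hypothetical record length:

```python
RECORD_BYTES = 8  # hypothetical: 4 UTF-16LE characters per record

data = ("JOHN" + "MARY").encode("utf-16-le")  # two records, no separators
records = [data[i:i + RECORD_BYTES].decode("utf-16-le")
           for i in range(0, len(data), RECORD_BYTES)]
```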
Supported Encoding
CP935 IBM EBCDIC Simplified Chinese, Combined (836 + 837) is supported.
 
Property Options
You can set the following source (S) and target (T) properties.
Property
S/T
Description
ByteOrder
ST
Allows you to specify the byte order of Unicode (wide) characters. The default is Auto and is determined by the architecture of your computer. The list box options are Auto (default), Little Endian and Big Endian. Little Endian byte order is generally used by Intel machines and DEC Alphas and places the least significant portion of a byte value in the left portion of the memory used to store the value. Big Endian byte order is used by IBM 370 computers, Motorola microprocessors and most RISC-based systems and stores the values in the same order as the binary representation.
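The difference between the two byte orders can be seen on a single character:

```python
# U+0041 ("A") under each byte order the ByteOrder property names.
little = "A".encode("utf-16-le")  # least significant byte first
big = "A".encode("utf-16-be")     # most significant byte first
```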
CharFieldWidths
ST
Allows you to set field width by number of characters or number of bytes. With MBCS characters, a character may take more than one byte; files may have columns fixed by number of characters regardless of the number of bytes, or columns fixed by number of bytes (a variable number of characters). If truncation occurs in a column, the last double-byte character is replaced by a single-byte padding character. The default is false, which sets the field width by number of bytes; true sets the field width by number of characters.
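Why the two width settings diverge becomes visible once multibyte characters appear; the Shift-JIS sample below is hypothetical:

```python
text = "日本語"                      # 3 characters
encoded = text.encode("shift_jis")  # 2 bytes per character here -> 6 bytes
chars = len(text)
nbytes = len(encoded)
# A column fixed at 4 *bytes* holds only 2 of the 3 characters.
truncated = encoded[:4].decode("shift_jis")
```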
DatatypeSet
ST
Allows you to choose standard or COBOL data types in a Unicode (Fixed) data file. Standard is the default and means that all data in the file is readable.
If your Unicode (Fixed) file contains (or needs, for target file) COBOL display type fields and you are using a COBOL 01 copybook (fd) to define the fields, you must change this property option to COBOL before connecting to the COBOL copybook in the External Structured Schema window.
Encoding
ST
Type of encoding to use with source and target files. For details, see Additional Information About Encoding.
FieldSeparator
T
Allows you to choose a field separator character for your target file. The default is None. The choices are None (default), comma, tab, space, carriage return-line feed, line feed, carriage return, line feed-carriage return, control-R, and pipe (|). If the alternate field separator is not one of the listed choices and is a printable character, see Alternate Tip on FieldSeparator Property.
Fill Fields
T
Allows writing a Unicode (Fixed) data file where every field is variable length. If this property is set to false, all trailing spaces are removed from each field when the data is written. The default is true. The true setting pads all fields with spaces to the end of the field length to maintain the fixed length of the records.
InsertEOFRecSep
S
This option inserts a record separator on the last record of the file, if it is missing. The default is false. If set to true, this property captures the last record (with no record separator) instead of discarding it.
NumericFormatNormalization
S
When set to true, thousands separators in numeric strings are handled according to the locale's conventions when the strings are converted to a numeric type. This property overrides any individual field settings. Default is false.
Order Mark
T
The Order Mark is a special character value that is sometimes written to a Unicode text file to indicate the byte order used for encoding each of the Unicode characters. In the integration platform, you have the option of writing byte order mark at the beginning of Unicode (wide) output or not. The default is false. If you wish to have the byte order mark placed at the beginning of your output, change this option to true.
Ragged Right
T
Writes a data file where the last field in each record is variable length when set to true. The default is false. The false setting pads the last field with spaces to the end of the record length to maintain the fixed length of the records. You must set FillFields to false for the RaggedRight property to take effect; it has no effect when FillFields is true. When FillFields is false, the RaggedRight property determines whether blank fields, and fields containing only spaces, appear at the end of the record.
RecordSeparator
ST
A Unicode (Fixed) file is presumed to have a carriage return-line feed (CR-LF) between records. To specify other characters to separate records, click RecordSeparator for a list of choices, including system default, carriage return-line feed (default), line feed, carriage return, line feed-carriage return, form feed, empty line, ctrl-E, and no record separator. To use a separator other than one from the list, enter it here. The SystemDefault setting enables the same transformation to run with CR-LF on Windows systems and LF on Unix systems without having to change this property.
If your field or record separator is not listed, highlight the default separator. Enter the characters you wish to use as a separator.
The Unicode connectors read the data from the file as Unicode and look for the Unicode characters specified as the separators to break the data up into fields or records. Then the actual Unicode data is assigned to fields or records.
Sample Size
S
Set the number of records (starting with record 1) that are analyzed to set a default width for each source field. The default is 1000. You can change the value to any number between 1 and the total number of records in your source file. As the number gets larger, more time is required to analyze the file, and it may be necessary to analyze every record to ensure no data is truncated. To change the value, click the Sample Size Current Value box, highlight the default value and type a new value.
StartOffset
S
If your source data file starts with characters that need to be excluded from the transformation, set the StartOffset option to specify at which byte of the file to begin. The default value is zero. The correct value may be determined by using the Hex Browser. For a list of the 256 standard and extended ASCII characters, search for "hex values" in the documentation. This property is set in number of bytes, not characters, regardless of the CharFieldWidths property setting.
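Because the offset is counted in bytes, it is applied to the raw bytes before any decoding; the three-byte prefix below is hypothetical:

```python
raw = b"\xef\xbb\xbfname,city"  # three prefix bytes before the real data
START_OFFSET = 3                # hypothetical StartOffset value

payload = raw[START_OFFSET:].decode("ascii")
```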
StripLeadingBlanks
S
For a Unicode source file, by default, leading blanks are left in Unicode (Fixed) data. To delete leading blanks, click the StripLeadingBlanks Current Value box, then click the down arrow to the right of the box and select true.
StripTrailingBlanks
S
By default, trailing blanks are left in Unicode (Fixed) data. To delete trailing blanks, click the StripTrailingBlanks Current Value box, then click the down arrow to the right of the box and select true.
Tab Size
ST
If your source or target Unicode (Fixed) file has embedded tab characters representing white space, you can expand those tabs to a set number of spaces. The default value is zero.
Alternate Tip on FieldSeparator Property
If the alternate field or record separator is not listed
1. Highlight the default separator.
2. Enter the characters you wish to use as a separator.
The Unicode connectors read the data from the file as Unicode and look for the Unicode characters specified as the separators to break the data up into fields or records. Then the actual Unicode data is assigned to fields or records.
Additional Information About Encoding
You must be aware of the following regarding the Encoding property option:
Shift-JIS encoding is meaningful only in Japanese operating systems.
To display Chinese-Japanese-Korean-Vietnamese (CJKV) data in Data Browser
1. Verify that your operating system has at least one font available that corresponds to the specific character set and code page you want to use.
2. Select the Unicode connector and your encoding method in the source or target properties.
3. Go to the main menu and select View > Preferences > Fonts.
4. Choose a font that corresponds to your character set and encoding method.
For details on how to manually define the structure of fields and records, search for "data parser" in the documentation. If you have a COBOL 01 copybook file with which to define your fields, use Binary as your source connector.
Data Types
All data in Unicode files is Text, but you may wish to use other data types. The following data types are available:
Boolean (parses and displays true or false values)
Name (parses and displays proper name into name parts, such as honorifics, titles, last name, middle initial, first name)
Number (parses and displays numeric and floating values)
Text (parses and displays alphanumeric values)
Decimal (parses and displays a proper fraction whose denominator is a power of 10)
Note:  Use the Name data type if you want to parse a name into its component pieces. Some examples are honorifics (Mr., Dr.), names (first, middle, last) and titles (PhD., Jr.). You can also display the component pieces according to an edit mask. For example, you can set the mask to display a name in a field as "lastname, firstname" (Jones, Joan).
USMARC
MARC is the format of the Library of Congress information files. With this connector, the integration platform can only read the USMARC data; it cannot write data to this format.
This connector sets field width in bytes. What actually varies is the number of characters that fit into a given field. For more details, see Determining Field Width in Characters or Bytes.
Property Options
You can set the following source (S) and target (T) properties.
Property
S/T
Description
CodePage
S
This translation table determines which encoding to use for reading and writing data. The default is ANSI, the standard in the US.
Variable Sequential (MVS)
Variable Sequential (MVS) is a storage format for data downloaded from a mainframe. Variable Sequential (MVS) has a Record Descriptor Word (RDW) in the form of a four-byte signed integer field at the beginning of each record, and records of various lengths within the file. With this connector, the integration platform pads all the records to the maximum record length. Information in a Variable Sequential (MVS) file can be stored in EBCDIC or ASCII format.
Note:  This data is not the same as an MVS Variable Blocked Sequential file (which is not supported by this software).
This connector sets field width in bytes. What actually varies is the number of characters that fit into a given field. For more details, see Determining Field Width in Characters or Bytes.
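The record walk described above can be sketched as follows. This sketch assumes the four-byte length is big-endian and excludes the RDW itself; both are illustrative assumptions, not documented guarantees of the format:

```python
import struct

def read_mvs_records(buf, max_len):
    """Walk RDW-prefixed records, padding each to the maximum length."""
    pos, records = 0, []
    while pos < len(buf):
        # Four-byte signed big-endian record length (assumed exclusive).
        (length,) = struct.unpack(">i", buf[pos:pos + 4])
        records.append(buf[pos + 4:pos + 4 + length].ljust(max_len, b" "))
        pos += 4 + length
    return records

buf = struct.pack(">i", 3) + b"ABC" + struct.pack(">i", 5) + b"HELLO"
recs = read_mvs_records(buf, 5)
```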
Connector-Specific Notes
Truncation Error Trapping
This connector does not support truncation error trapping. If the target field size is too small for the data written to it, the offending record may be skipped or incomplete data may be written to the target. The transformation does not abort due to a truncation error.
Property Options
EBCDIC
If Variable Sequential (MVS) is your source, the CodePage property is already set to US (EBCDIC). If your source file is written in ordinary ASCII characters, you must change the CodePage.
You can set the following source (S) and target (T) properties.
Property
S/T
Description
MaxRecordLength
S
This specifies the maximum record length of the data. The default is 32700 bytes.
ShortLastRecord
S
If set to true, short reads are ignored on the last record of the file. In other words, the last record is processed even if the End of File (EOF) is reached before reading the end of the record. The default is false.
StartOffset
S
If your source data file starts with characters that need to be excluded from the transformation, set the StartOffset option to specify at which byte of the file to begin. The default value is zero. The correct value may be determined by using the Hex Browser. For a list of the 256 standard and extended ASCII characters, search for "hex values" in the documentation.
Note:  This property is set in number of bytes, not characters.
OccursPad
S
When using COBOL files, you may have fields of variable length. If so, you may specify how to fill the field with pads to a fixed length. The default is None.
The following options are available:
None (which leaves the fields uneven) – Default
End of Record (which fills the remainder of the record with your specified pad character)
Within Group (which fills the field with your specified pad character)
CodePage
ST
This translation table determines which encoding to use for reading and writing data. The default is ANSI, the standard in the US.
Data Types
The following data types are available:
16-bit binary
16-bit logical
24-bit binary
32-bit binary
32-bit IEEE floating-point
32-bit TEC binary
32-bit VAX floating-point
64-bit binary
64-bit IEEE floating-point
64-bit VAX floating-point
8-bit binary
80-bit Intel floating-point
AccPac 41-bit binary
Binary
Boolean
Btrieve date
Btrieve time
Column binary alpha-numeric
Column binary multi-punch
Column binary numeric
Comp
Comp-1
Comp-2
Comp-3
Comp-5
Comp-6
Comp-X
Complex
Cray floating-point
Date
DateTime
dBASE Numeric
Display
Display boolean
Display date
Display Date/Time
Display justified
Display sign leading
Display sign leading separate
Display sign trailing
Display sign trailing separate
Display time
Magic PC date
Magic PC extended
Magic PC number
Magic PC real
Magic PC time
Microsoft BASIC double
Microsoft BASIC float
Name
Null-terminated C string
Packed decimal
Pascal 48-bit real
Pascal string (1 byte)
Pascal string (2 bytes)
Sales Ally date
Sales Ally time-1
Text
Time
Time (minutes past midnight)
Union
Variable length IBM float
Zoned decimal
Variable Sequential (Record-V UniKix)
Variable Sequential (Record-V UniKix) is a storage format for data downloaded from a mainframe. Variable Sequential (Record-V UniKix) has a Record Descriptor Word (RDW) in the form of a four-byte signed integer field at the beginning of each record, and records of various lengths within the file. With this connector, the integration platform pads all the records to the maximum record length. Information in a Variable Sequential (Record-V UniKix) file can be stored in EBCDIC or ASCII format. Variable Sequential (Record-V UniKix) files have no headers.
This connector sets field width in bytes. What actually varies is the number of characters that fit into a given field. For more details, see Determining Field Width in Characters or Bytes.
Connector-Specific Notes
Truncation Error Trapping
This connector does not support truncation error trapping. If the target field size is too small for the data written to it, the offending record may be skipped or incomplete data may be written to the target. The transformation does not abort due to a truncation error.
Property Options
You can set the following source (S) and target (T) properties.
Property
S/T
Description
MaxRecordLength
S
This specifies the maximum record length of the data. The default is 32700 bytes.
ShortLastRecord
S
If set to true, short reads are ignored on the last record of the file. In other words, the last record is processed even if the End of File (EOF) is reached before reading the end of the record. The default is false.
StartOffset
S
If your source data file starts with characters that need to be excluded from the transformation, set the StartOffset option to specify at which byte of the file to begin. The default value is zero. The correct value may be determined by using the Hex Browser. For a list of the 256 standard and extended ASCII characters, search for "hex values" in the documentation.
Note:  This property is set in number of bytes, not characters.
OccursPad
S
When using COBOL files, you may have fields of variable length. If so, you may specify how to fill the field with pads to a fixed length. The default is None.
The following options are available:
None (which leaves the fields uneven) – Default
End of Record (which fills the remainder of the record with your specified pad character)
Within Group (which fills the field with your specified pad character)
CodePage
ST
This translation table determines which encoding to use for reading and writing data. The default is ANSI, the standard in the US.
Data Types
The following data types are available:
16-bit binary
16-bit logical
24-bit binary
32-bit binary
32-bit IEEE floating-point
32-bit TEC binary
32-bit VAX floating-point
64-bit binary
64-bit IEEE floating-point
64-bit VAX floating-point
8-bit binary
80-bit Intel floating-point
AccPac 41-bit binary
Binary
Boolean
Btrieve date
Btrieve time
Column binary alpha-numeric
Column binary multi-punch
Column binary numeric
Comp
Comp-1
Comp-2
Comp-3
Comp-5
Comp-6
Comp-X
Complex
Cray floating-point
Date
DateTime
dBASE numeric
Display
Display boolean
Display date
Display Date/Time
Display justified
Display sign leading
Display sign leading separate
Display sign trailing
Display sign trailing separate
Display time
Magic PC date
Magic PC extended
Magic PC number
Magic PC real
Magic PC time
Microsoft BASIC double
Microsoft BASIC float
Name
Null-terminated C string
Packed decimal
Pascal 48-bit real
Pascal string (1 byte)
Pascal string (2 bytes)
Sales Ally date
Sales Ally time-1
Text
Time
Time (minutes past midnight)
Union
Variable length IBM float
Zoned decimal
Variable Sequential (SyncSort)
Variable Sequential (SyncSort) is a storage format for data downloaded from a mainframe. It has a Record Descriptor Word (RDW) in the form of a two-byte signed integer field at the beginning of each record, and records of various lengths within the file. With this connector, the integration platform pads all the records to the maximum record length. Information in a Variable Sequential (SyncSort) file can be stored in EBCDIC or ASCII format.
This connector sets field width in bytes. What actually varies is the number of characters that fit into a given field. For more details, see Determining Field Width in Characters or Bytes.
Connector-Specific Notes
Truncation Error Trapping
This connector does not support truncation error trapping. If the target field size is too small for the data written to it, the offending record may be skipped or incomplete data may be written to the target. The transformation does not abort due to a truncation error.
Property Options
You can set the following source (S) and target (T) properties.
Property
S/T
Description
MaxRecordLength
S
This specifies the maximum record length of the data. The default is 32700 bytes.
ShortLastRecord
S
If set to true, short reads are ignored on the last record of the file. In other words, the last record is processed even if the End of File (EOF) is reached before reading the end of the record. The default is false.
StartOffset
S
If your source data file starts with characters that need to be excluded from the transformation, set the StartOffset option to specify at which byte of the file to begin. The default value is zero. The correct value may be determined by using the Hex Browser. For a list of the 256 standard and extended ASCII characters, search for "hex values" in the documentation.
Note:  This property is set in number of bytes, not characters.
OccursPad
S
When using COBOL files, you may have fields of variable length. If so, you may specify how to fill the field with pads to a fixed length. The default is None.
The following options are available:
None (which leaves the fields uneven) – Default
End of Record (which fills the remainder of the record with your specified pad character)
Within Group (which fills the field with your specified pad character)
Record Length Inclusive
ST
When true, this setting indicates that the record length indicator includes the bytes of the indicator itself. The default is false, meaning that the record length indicated does not include the bytes of the indicator itself.
RLF MSB first
ST
This setting adjusts the byte order of the Record Length Field, also called the Record Descriptor Word (RDW). The default is true, which means the Most Significant Byte of the record length field is first.
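The interaction of the Record Length Inclusive and RLF MSB first settings on a two-byte length field can be sketched as follows (the sample values are hypothetical):

```python
import struct

def payload_length(rlf_bytes, inclusive=False, msb_first=True):
    # msb_first=True reads the field big-endian; False, little-endian.
    fmt = ">h" if msb_first else "<h"
    (length,) = struct.unpack(fmt, rlf_bytes)
    # An inclusive length counts the two bytes of the length field itself.
    return length - 2 if inclusive else length

default_len = payload_length(b"\x00\x05")                    # defaults
inclusive_len = payload_length(b"\x00\x07", inclusive=True)
lsb_first_len = payload_length(b"\x05\x00", msb_first=False)
```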
Word Align Record
ST
Tells the integration platform to align records on a word (16-bit) boundary when true, which is the default.
CodePage
ST
This translation table determines which encoding to use for reading and writing data. The default is ANSI, the standard in the US.
Data Types
The following data types are available:
16-bit binary
16-bit logical
24-bit binary
32-bit binary
32-bit IEEE floating-point
32-bit TEC binary
32-bit VAX floating-point
64-bit binary
64-bit IEEE floating-point
64-bit VAX floating-point
8-bit binary
80-bit Intel floating-point
AccPac 41-bit binary
Binary
Boolean
Btrieve date
Btrieve time
Column binary alpha-numeric
Column binary multi-punch
Column binary numeric
Comp
Comp-1
Comp-2
Comp-3
Comp-5
Comp-6
Comp-X
Complex
Cray floating-point
Date
DateTime
dBASE numeric
Display
Display boolean
Display date
Display Date/Time
Display justified
Display sign leading
Display sign leading separate
Display sign trailing
Display sign trailing separate
Display time
Magic PC date
Magic PC extended
Magic PC number
Magic PC real
Magic PC time
Microsoft BASIC double
Microsoft BASIC float
Name
Null-terminated C string
Packed decimal
Pascal 48-bit real
Pascal string (1 byte)
Pascal string (2 bytes)
Sales Ally date
Sales Ally time-1
Text
Time
Time (minutes past midnight)
Union
Variable length IBM float
Zoned decimal
Variable Sequential (User Defined)
Variable Sequential (User Defined) is a storage format for data downloaded from a mainframe. It has a Record Descriptor Word (RDW) in the form of a two-byte signed integer field at the beginning of each record, and records of various lengths within the file. With this connector, the integration platform pads all the records to the maximum record length. Information in a Variable Sequential (User Defined) file can be stored in EBCDIC or ASCII format.
This connector sets field width in bytes. What actually varies is the number of characters that fit into a given field. For more details, see Determining Field Width in Characters or Bytes.
Connector-Specific Notes
Truncation Error Trapping
This connector does not support truncation error trapping. If the target field size is too small for the data written to it, the offending record may be skipped or incomplete data may be written to the target. The transformation does not abort due to a truncation error.
Property Options
You can set the following source (S) and target (T) properties.
Property
S/T
Description
MaxRecordLength
S
This specifies the maximum record length of the data. The default is 32700 bytes.
ShortLastRecord
S
If set to true, short reads are ignored on the last record of the file. In other words, the last record is processed even if the End of File (EOF) is reached before reading the end of the record. The default is false.
StartOffset
S
If your source data file starts with characters that need to be excluded from the transformation, set the StartOffset option to specify at which byte of the file to begin. The default value is zero. The correct value may be determined by using the Hex Browser. For a list of the 256 standard and extended ASCII characters, search for "hex values" in the documentation.
Note:  This property is set in number of bytes, not characters.
RecordSeparator
ST
This specifies what sort of character is used to mark the end of a record. The default record separator is carriage return-line feed. To use other characters for a record separator, click the RecordSeparator cell, click the arrow and select a record separator. Choices are carriage return-line feed (default), line feed, carriage return, line feed-carriage return, form feed, empty line and no record separator.
OccursPad
S
When using COBOL files, you may have fields of variable length. If so, you may specify how to fill the field with pads to a fixed length. The default is None.
The following options are available:
None (which leaves the fields uneven) – Default
End of Record (which fills the remainder of the record with your specified pad character)
Within Group (which fills the field with your specified pad character)
CodePage
ST
This translation table determines which encoding to use for reading and writing data. The default is ANSI, the standard in the US.
Data Types
The following data types are available:
16-bit binary
16-bit logical
24-bit binary
32-bit binary
32-bit IEEE floating-point
32-bit TEC binary
32-bit VAX floating-point
64-bit binary
64-bit IEEE floating-point
64-bit VAX floating-point
8-bit binary
80-bit Intel floating-point
AccPac 41-bit binary
Binary
Boolean
Btrieve date
Btrieve time
Column binary alpha-numeric
Column binary multi-punch
Column binary numeric
Comp
Comp-1
Comp-2
Comp-3
Comp-5
Comp-6
Comp-X
Complex
Cray floating-point
Date
DateTime
dBASE Numeric
Display
Display boolean
Display date
Display Date/Time
Display justified
Display sign leading
Display sign leading separate
Display sign trailing
Display sign trailing separate
Display time
Magic PC date
Magic PC extended
Magic PC number
Magic PC real
Magic PC time
Microsoft BASIC double
Microsoft BASIC float
Name
Null-terminated C string
Packed decimal
Pascal 48-bit real
Pascal string (1 byte)
Pascal string (2 bytes)
Sales Ally date
Sales Ally time-1
Text
Time
Time (minutes past midnight)
Union
Variable length IBM float
Zoned decimal
Velocis (ODBC 3.x)
Velocis – formerly a product of Centura Software – is now called Birdstep RDM Server, but is labeled Velocis (ODBC 3.x) in the source and target connection lists in the integration platform. The name change occurred when Birdstep Technology of Norway acquired all Centura/Raima database product lines (late 2001). With this ODBC 3.x connector, you can read and write Velocis files.
Velocis (Birdstep RDM Server) is an application-oriented relational database management system (RDBMS) and is compatible with Windows, LINUX and Unix operating systems. Velocis is a cross-platform database server for e-Business, Web applications, industrial servers and other applications. The Velocis application-server architecture supports ANSI SQL, Unicode, RPC (Remote Procedure Calls), JDBC and ODBC.
The integration platform supports Velocis version 3.5 (now named Birdstep RDM Server version 3.5). Instead of connecting to a communications layer as in previous versions of Velocis, in Birdstep's version 3.5, you can connect directly to a Web server. Because of this enhancement, users can develop multithreaded applications that provide faster access to their database information.
This connector specifies field width in characters, which means the width of a field is literally that number of characters. For more details, see Determining Field Width in Characters or Bytes.
Installing Drivers and Troubleshooting Connections
Installing an ODBC Driver
ODBC Connectivity Tips
Connector-Specific Notes
Source
The Velocis ODBC driver (version 3.5) on the Source does not support Dynamic cursor calls or SetEnvAttribute calls.
Note:  The integration platform connects to Velocis tables with ODBC 3.x. For the procedure, and information about the property options, and source and target schemas, see ODBC 3.x.
Visual dBASE 5.5
Visual dBASE 5.5 is a database application. With this connector, you can read and write Visual dBASE 5.5 files. If the data file references a memo field file, the memo file must exist for the connection to occur. The primary data file usually has a .DBF extension and the memo file usually has a .DBT extension. Visual dBASE 5.5 files are structured; that is, both the data and the file structure are stored inside the primary data file.
This connector sets field width in bytes. What actually varies is the number of characters that fit into a given field. For more details, see Determining Field Width in Characters or Bytes.
Connector-Specific Notes
Field Names – Each field name must be unique. Field names must be all uppercase characters with an underscore between two words. Field names may contain up to 10 characters, but the first character must be a letter.
Size of Fields – Character fields can be no longer than 254 characters. If a field is longer than 254 characters, it must be defined as a Memo field.
Number of Fields – A maximum of 255 fields are allowed.
Record Width – The maximum combined width of all fields in one record is 4000 bytes, excluding memo fields.
Property Options
You can set the following source (S) and target (T) properties.
Property
S/T
Description
IgnoreMemoErr
ST
The following options determine how Visual dBASE memo files are handled:
Never - this is the default. This option causes the integration platform to look for and include any memo file fields when the source data file is read.
Errors - Selecting this option causes the integration platform to look for and include any memo file fields when a memo file is present. If present, the memo fields are included with the transformed data.
If the memo file (.DBT) is not in the same directory as the data file (.DBF), the memo file is ignored. This means that the memo fields are not included with the transformed data.
Always - Selecting this option causes the integration platform to ignore the memo file completely. This means that the memo fields are not included with the transformed data.
CodePage
ST
This translation table determines which encoding to use for reading and writing data. The default is ANSI, the standard in the US.
Data Types
The following data types are available:
Character – may contain alpha or numeric information and may have a field width of 1 to 254 bytes. Use a character field to store numbers that are not used in calculations, such as phone numbers, check numbers, account numbers and zip codes (number fields delete the leading zeros in some zip codes).
Date – may contain only a date and the date is formatted as yyyymmdd, for a four-digit year, a two-digit month and a two-digit day. Example: The date January 1, 1999 would read 19990101.
Float – may contain only positive or negative numbers and may have a field width of 1 to 20 bytes, including the decimal point, minus sign (-), or plus sign (+). Float fields are used to store small and large numbers needed in scientific calculations, such as 9.1234e+12 or 9,123,400,000,000.
Logical – may contain only one byte and is formatted to contain a t, f, T, or F, for true or false.
Memo – may contain alpha or numeric information and may have a field width of 1 to 16,000 bytes.
Numeric – may contain only positive or negative numbers and may have a field width of 1 to 20 bytes, including the decimal point, minus sign (-), or plus sign (+). A numeric field may contain decimal places up to 19, but the number of decimal places must be set at one byte less than the total width of the field. Numeric fields are used to store the exact value of a number with a fixed number of digits.
Ole – embedded object
Visual FoxPro
Visual FoxPro is a database application that uses an xBASE file format. If the source data file references a memo field file, the memo file must exist for the connection to occur. The primary data file usually has a .DBF extension and the memo file usually has a .FPT extension. FoxPro files are structured; that is, both the data and the file structure are stored inside the primary data file.
With this dBASE IV connector, the integration platform can read Visual FoxPro files. For more information about connector options, see dBASE IV.
This connector sets field width in bytes. What actually varies is the number of characters that fit into a given field. For more details, see Determining Field Width in Characters or Bytes.
Connector-Specific Notes
Field Names - Each field name must be unique. Field names must be all uppercase characters with an underscore between two words. Field names may contain up to 10 characters, but the first character must be a letter. Examples are ACCOUNT, LAST_NAME, FIRST_NAME, PHONE_NO.
Size of Fields - Character fields can be no longer than 254 characters. If a field is longer than 254 characters, it must be defined as a Memo field.
Number of Fields - A maximum of 255 fields are allowed.
Record Width - The maximum combined width of all fields in one record is 4000 bytes, excluding memo fields.
Visual FoxPro (ODBC)
With this ODBC 3.x connector, Map Editor connects to Visual FoxPro tables. For the procedure, and information about the property options, limitations, and source and target schemas, see ODBC 3.x.
XDB (ODBC 3.x)
XDB is an XML document repository that supports structured storage of XML data. It uses a relational database management system (RDBMS) mapping over PostgreSQL. The application provides an XML database framework, and supports the following:
XML Path Language (XPath)
XML Query Language (XQL)
XML document and SAX API storage
The integration platform connects to XDB tables with ODBC 3.x. For the procedure, and information about the property options, limitations, and source and target schemas, see ODBC 3.x.
See Also
Installing an ODBC Driver
ODBC Connectivity Tips
XML
XML stands for eXtensible Markup Language. It is a form of non-platform specific text presentation, particularly useful for Internet applications. It is a subset of SGML and compatible with HTML. This XML connector uses a DOM parser. With this connector, Map Designer can read and write XML files.
This connector specifies field width in characters, which means the width of a field is literally that number of characters. For more details, see Determining Field Width in Characters or Bytes.
This connector writes multimode output, so replace/append target mode does not apply.
XML Terminology
XML terms may be unfamiliar and often are unique to XML. Read about how elements interact with Map Designer structures and find definitions to common terms under the entry “XML Glossary” in Glossary Reference.
Limitations
The following are the limitations for XML connector:
Case Sensitivity in Tags - XML tagged fields are case sensitive, so it is legal to define record types with the same name as long as the case is different. (For example, addr, aDDr, ADDR and Addr would be considered four different tags.)
Reserved Characters Not Supported in XML data fields:
< use: &lt;
> use: &gt;
& use: &amp;
' use: &apos;
" use: &quot;
Data vs. Presentation Format - Map Designer is a data transformation tool. If you try to use Map Designer to read an XML file that has been written with presentation in mind, rather than data content, the results may be undesirable.
Validating DTD files - The DTD external source for the target file is used to create the output structure only. There is no validation of the target file against the DTD on the Target side, therefore you need to use another means to validate the file.
Truncation Error Trapping - This connector does not support truncation error trapping. If the target field size is too small for the data written to it, the offending record may be skipped or incomplete data may be written to the target. The transformation does not abort due to a truncation error, as do connectors that have truncation support. The connectors with truncation support are Access 2.x/95, ASCII (Delimited), ASCII (Fixed), EDI, IBM DB2, MySQL, Oracle 7/8/Direct Path/SQL Loader, ODBC 3.x, and SQL Server 7/2000/BCP (Replace and Append modes only).
Unions Not Supported - Map Designer does not read schemas with unions. If your schema contains unions, you must modify the schema to not include unions.
Specifying DTD Requirements - In DTDs, it is not currently possible to specify the existence of an element as 1, 1 to many, 0 to 1, 0 to many, or to specify records as a "sequence of" or "choice of", etc. If you want to create a more "robust" DTD with these specifications, you must use a DTD authoring tool.
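As an illustration of the reserved-character limitation above, the required escaping can be sketched with Python's standard library (the substitutions are the same entities listed above; the connector performs this internally):

```python
from xml.sax.saxutils import escape

# Escape the five XML reserved characters in a data field value.
# escape() handles <, >, and & by default; the quotes need explicit entities.
value = 'Fields with <, >, &, \' and " must be escaped'
escaped = escape(value, {"'": "&apos;", '"': "&quot;"})
print(escaped)
# Fields with &lt;, &gt;, &amp;, &apos; and &quot; must be escaped
```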
Property Options
The following table provides the properties that you may need to specify for your source or target data.
Property Name
S/T
Description
AttrFields
S
XML Attributes are considered as fields. The default is true.
DoctypeRecsOnly
S
Build records that exist below the DOCTYPE (root). The default is false. If you want to define only the records that actually appear in the root, change this option to true. Otherwise, Map Editor defines all possible records, including those below the root.
IgnoredAttributes
S
A list of attributes that must be ignored when determining file structure. The default is e-dtype a-dtype.
IgnoreWhite
S
Ignores "ignorable" whitespace. The default is true.
MaxQualifications
ST
Allows you to control how much recursion is allowed when using QualifiedRecordNames with nested objects. Default value is 0 to indicate no limit.
QualifiedRecordNames
S
Qualifies record names with higher-level records. The default is false.
Tip:  Set this property to true to see the hierarchy of parent-child relationships of all record types, including different child record types with the same names.
This property qualifies all the records with the hierarchical path from which the record is used. In other words, it concatenates the parent record names as it goes down the hierarchy, thereby creating unique record names. This allows you to get separate events for what is the same record, but under a different parent or set of parents.
This is a source property only. However on the Target side, if you have a DTD, you can specify XML (DTD) Qualified Records in the Target Schema and select the DTD you want to use as a structure.
RootDataElements
S
Reads all the data elements contained within the doctype element without creating records based on these elements.
Note:  When you set the RootDataElements property to true, the entire document is read into memory before any records are parsed.
Sample document:
<root>
  <data>Here is some data</data>
  <record>
    <data1>Here is some data</data1>
    <data2>Here is some data</data2>
  </record>
</root>
The preferred method is to read the data element as a record containing one field that is also named data. This way, the data can be read normally, without setting the RootDataElements property and without reading the entire document into memory. However, if you use a schema to read from an external DTD subset, the transformation XML schema may not be able to set the records up this way. In this case, setting the RootDataElements property causes the entire data set to be read into memory before it is parsed. This also allows the root record to contain a data field, which is populated with data.
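As a sketch (using Python's standard library, not the connector), the sample document above parses as a root-level data element alongside a nested record:

```python
import xml.etree.ElementTree as ET

# The sample document from above: a data element directly under the root,
# plus a record element containing data1/data2.
doc = """<root>
  <data>Here is some data</data>
  <record>
    <data1>Here is some data</data1>
    <data2>Here is some data</data2>
  </record>
</root>"""

root = ET.fromstring(doc)
# The root-level data element that RootDataElements controls:
print(root.find("data").text)            # Here is some data
# The nested record and its fields:
record = root.find("record")
print([child.tag for child in record])   # ['data1', 'data2']
```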
Schema
S
XML Schema that must be used.
StripLeadingBlanks
S
For an XML source file, by default, Map Editor strips leading blanks in XML data. If you do not want to delete the leading blanks, select False.
StripTrailingBlanks
S
For an XML source file, by default, Map Editor strips trailing blanks in XML data. If you do not want to delete the trailing blanks, select False.
StyleSampleSize
S
Allows you to set the number of records (starting with record 1) that Map Editor analyzes to set a default width for each field in your source file. The default value for this option is 5000. You can change the value to any number between 1 and the total number of records in your source file. As the number gets larger, Map Editor requires more time to analyze the file, but it may be necessary to analyze every record to ensure no data is truncated.
ElementRecords
S
All elements are considered as records.
ByteOrder
T
Allows you to specify the byte order of Unicode (wide) characters. The options are:
Auto (default) - Determined by the architecture of your computer.
Little Endian - Generally used by Intel machines and DEC Alphas. Stores the least significant byte of a value at the lowest memory address.
Big Endian - Used by IBM 370 computers, Motorola microprocessors, and most RISC-based systems. Stores the most significant byte of a value at the lowest memory address, so the bytes appear in the same order as the binary representation.
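The difference between the two byte orders can be seen by encoding the same Unicode character both ways (a Python sketch, independent of the connector):

```python
# The same Unicode character, encoded with each byte order.
ch = "A"  # code point U+0041
little = ch.encode("utf-16-le")  # least significant byte first
big = ch.encode("utf-16-be")     # most significant byte first
print(little.hex())  # 4100
print(big.hex())     # 0041
```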
DoctypeName
T
Doctype that Map Editor uses to create the target file. The default is recordset.
DTDFile
T
Allows you to specify the name of the DTD file that Map Editor writes to or references when the target file is created. If this is left blank, the default file name is DOCNAME.DTD.
Encoding
T
Allows you to select the type of encoding used with your source and target files. The default encoding is ISO-8859-1.
Shift-JIS: This encoding is meaningful only to users with Japanese operating systems.
Formatted
T
Select output with or without line breaks and indenting. If set to true (default), the output is written with the current line breaks and indenting. If set to false, the output is written with no line breaks or indenting in the body of the XML document.
Tip:  Large XML messages that include hierarchical formatting can greatly slow down near real-time processing. Use this property to write data strings without formatting to alleviate the formatting overhead issue.
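The effect of this property can be approximated with Python's standard library (a sketch only; the connector handles its own formatting internally):

```python
import xml.dom.minidom as minidom

compact = "<order><id>1</id><qty>2</qty></order>"
doc = minidom.parseString(compact)
# Formatted=true analogue: line breaks and indenting added.
pretty = doc.documentElement.toprettyxml(indent="  ")
print(pretty)
# Formatted=false analogue: body written without breaks or indenting.
print(doc.documentElement.toxml())
```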
InternalSubset
T
Specify the internal DTD subset to write when referencing an external subset. To enter an internal DTD subset, click the cell and type the text of the internal DTD subset, without the '[' and ']' characters that enclose the internal subset in the DTD.
ProcessingInstructions
T
Type a description of the processing instructions. These instructions are similar to the DTD declaration, in that they instruct the XML parser about the methodology used to produce this XML instance. This is optional. These instructions are written after the XMLDecl (if it exists) and before the DTD file (if it exists). See the following example for the syntax. In the example, line 1 is the XMLDecl, line 2 is the processing instructions and line 3 is the DTD file:
Example:
With processing instructions:
<?xml version="1.0" encoding="ISO-8859-1"?>
<?processing instructions?>
<!DOCTYPE PurchaseOrder SYSTEM "x:\xml\orders\filename.dtd">
Without processing instructions:
<?xml version="1.0" encoding="ISO-8859-1"?>
<!DOCTYPE PurchaseOrder SYSTEM "x:\xml\orders\filename.dtd">
WriteDTD
T
Allows you to specify the type of DTD file to write when creating the target file. The available options are None, Internal, External, and Reference. The default is None.
WriteEmpty
T
Indicates whether or not to write elements as empty fields. The default is true to write the elements as empty. Select False to suppress writing of elements as empty fields.
WriteEmptyAttributes
T
Indicates whether or not to write attributes as empty fields. The default is true to write the attribute information in a tag. Select False to suppress writing the tag when it contains no data.
WriteXMLDecl
T
The XMLDecl code lists the XML version number and the type of Encoding used in the XML file. To turn off writing the XMLDecl, select False. The default is true.
RetainRecordOrder
T
The default is false. If set to true, record-type elements appear within their parent element in the order in which they were written. This has side effects: when this property is true, within a given element all child elements of record data type appear after all child elements of any other data type. If this ordering is not acceptable, transform the non-record elements into record-type elements with a single field that uses the same name as the record; the order of all elements can then be controlled. Also, when this property is true, it is impossible to write an empty element of record data type.
Footer
T
Specifies text to be written at the end of the document.
UseEmptyTag
T
Specifies whether an empty element is written instead of a start element/end element tag pair.
IsChildTextNode
T
Specifies whether to use a child node as text node if name of the field is the same as name of the record. If you have a record named R1, and it has fields R1 and R2, it specifies whether to write
<R1>Text for R1 field
   <R2> Text for R2 field</R2>
</R1>
or
<R1>
   <R1>Text for R1 field</R1>
   <R2>Text for R2 field</R2>
</R1>
XML Support
Map Editor can read and write all dialects of XML, whether or not you have DTDs or schemas to define the file structure. You can use XML as your source or target connector and optionally specify any DTD or schema files in the Source or Target Schema box.
The following list does not include every XML flavor that Map Editor supports. You may be familiar with an industry standard that is not listed here. If the markup language uses the basic elements of standard XML markup, it is probably compatible with the XML metadata connector.
To find standards and specification information on any of the following markup languages, use a Web search engine:
ACORD XML – Association for Cooperative Operations Research and Development
Adex – Document type for newspaper classified ads
ADML – Astronomical Dataset Markup Language
AdXML – Advertising for XML
AIML – Astronomical Instrument Markup Language
AnthroGlobe Schema – Social Sciences metadata format
AppML – Application Markup Language
BoleroXML – Bolero.net’s cross-industry XML standard
BSML – Bioinformatic Sequence Markup Language
CDF – Channel Definition Format
CIDS – Component Information Dictionary Standard
CIDX – Chemical Industry Data Exchange
CML – Chemical Markup Language
cXML – Commerce XML
CoopML – Cooperation Markup Language
CWMI – Common Warehouse Metamodel Interchange
DAML+OIL – DARPA Markup Language
DITA – Darwin Information Typing Architecture
DocBook XML – DTD for books, computer documentation
ebXML – Electronic Business Markup Language
Eda XML – Electronic Design Automation XML
esohXML – Environmental, Safety and Occupational Health XML
FpML – Financial products Markup Language
GDML – Geometry Description Mark-up Language
GEML – Gene Expression Markup Language
HumanML – Human Markup Language
JSML – Jspeech Markup Language
LMML – Learning Material Markup Language
LOGML – Log Markup Language
MathML – Mathematical Markup Language
MCF – Meta Content Framework
MoDL – Molecular Dynamics Language
MusicXML – Music XML
NetBeans XML Project – Open source XML editor for NetBeans Integrated Development Environment
NewsML – NetFederation Showcase
NITF – News Industry Text Format
OMF – Weather Observation Definition Format
OAGI – Open Applications Group
OpenOffice.org XML Project – Sun Microsystems' XML file format used in the StarOffice suite
OSD – Open Software Description Format
PetroXML – Oil and Gas Field Operations DTD
P3P – Platform for Privacy Preferences
PMML – Predictive Model Markup Language
QEL – Quotation Exchange Language
rezML – XML DTD and style sheets for Resume and Job Listing structures
SMBXML – Small and Medium Business XML
SML – Spacecraft Markup Language
TranXML – XML Transportation & Logistics
UBL – Universal Business Language
UGML – Unification Grammar Markup Language
VCML – Value Chain Markup Language
VIML – Virtual Instruments Markup Language
VocML – Vocabulary Markup Language
WSCI – Web Service Choreography Interface
X3D – Extensible 3D
XBEL – XML Bookmark Exchange Language
XBRL – eXtensible Business Reporting Language
XFRML – Extensible Financial Reporting Markup Language
XGMML – Extensible Graph Markup and Modeling Language
XLF – Extensible Logfile Format
XML/EDI Group – XML and Electronic Data Interchange (EDI) e‑business initiative
XVRL – Extensible Value Resolution Language
XML DOM Support
XML DOM trees provide a way to determine and manipulate an XML file structure. Two XML data types are available in EZscript called DOMDocument and DOMNode. They can be used to create XML documents that are stored as a DOM tree. You can then access any node on the DOM tree, from the root element on up through the parent and child nodes.
For more information, see DOMNode Object Type and DOMDocument Object Type.
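DOMDocument and DOMNode are EZscript types and are not available outside the integration platform; as a rough analogue, the same DOM-style navigation (root element, parent and child nodes) can be sketched with Python's xml.dom.minidom:

```python
from xml.dom.minidom import parseString

# Build a DOM tree from an XML string, then walk it node by node.
doc = parseString("<customer><name>John</name><phone>555-1234</phone></customer>")
root = doc.documentElement                   # the root element node
first = root.firstChild                      # the first child element, <name>
print(root.tagName)                          # customer
print(first.tagName, first.firstChild.data)  # name John
print(first.parentNode is root)              # True
```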
XML-DMS
This connector is used to read and write XML files. Reading and writing is done using a streaming methodology, attempting to keep as little data in memory at one time as possible to improve performance.
This connector specifies field width in characters, which means the width of a field is literally that number of characters. For more details, see Determining Field Width in Characters or Bytes.
This connector writes multimode output. Therefore, replace and append target modes do not apply.
Note:  Normally, schemas for target multimode connectors are not provided. However, for XML-DMS connector, if the target file exists, then the UI connects using a source XML-DMS Connection and then the schema is copied to the target.
Table Operations
This connector supports CreateTable and DropTable table operations. These operations create and delete the file that is provided in the connection information. Attempting to create a file that already exists is an error; therefore, if these operations are used, DropTable must always precede CreateTable. The table name normally associated with these operations is not required, because the operations delete and create the file named in the connection information. If a table name is provided, it must match the file name or an error occurs.
It is not required to use CreateTable and DropTable. If they are not used, the first Insert record operation implicitly performs the drop and create operations.
Note:  Clear Table and Create Index table operations are not supported.
Record Operations
The Insert record operation is supported. It adds the record at the appropriate place in the hierarchy. If it cannot find an appropriate place, an error occurs. This connector does not require associating a table with a record or record operation, though the GUI may require this, and any table association provided is ignored.
Note:  Delete, update, and upsert record operations are not supported.
Namespace Support
Field names include the namespace prefix (if one is provided). When reading, namespace declarations (xmlns attributes) are ignored. When writing, no check is performed; therefore, the schema may provide namespace declarations as appropriately named or typed fields.
Property Options
These are properties that you may need to specify for your source or target data.
 
Property
Type
ST
Use
Batch Size
Number
S
Sets the number of records to be read at one time. A value of 0 means to read all. If a value greater than 0 is set, due to document structure constraints, more records may actually have to be read. Default is 1000.
Namespace Prefixes
Boolean
S
Determines whether to include namespace prefixes in generated field names. Default is true. Use of namespace prefixes requires that the schemaLocation and noNamespaceSchemaLocation properties are not set. If either of those properties is set, then set this property to false.
FlushFrequency
Number
T
Number of record inserts to buffer before sending a batch to the connector. Default is zero. If you are inserting many records, change the default to a higher value to improve performance.
Batch Response
Filename
T
Sets the path name for a batch response file.
Caution!  Do not use this property. It is included only for backward compatibility.
Formatted
Boolean
T
Sets whether the output should be formatted. If true, each element starts on a new line, with the hierarchy shown by indentation of two spaces per level. If false, output is written in the XML document body without line breaks or indentation. Default is true.
Use Empty Tag
Boolean
T
For empty elements, write either an empty tag or a start element/end element tag pair. If true, empty elements are written as an empty tag, as in <element/>. If false, empty elements are written as a start element/end element pair, as in <element></element>. Default is true.
Write Empty Fields
Boolean
T
Write elements as empty fields. If true, empty fields are written; empty means they contain an empty string, are null, or are not set. If false, no elements are written for these fields. Default is true.
Write Empty Attributes
Boolean
T
Write attributes as empty fields. If true, empty attribute fields are written as attributes; empty means they contain an empty string, are null, or are not set. If false, no attribute is written for these fields. Default is true.
HTTP Write Method
-
T
To write to an HTTP target, select POST or PUT. Default is POST.
Write Namespaces
Boolean
T
Enable writing of namespace attributes or prefixes. Default is true.
Namespace Map
Text
T
Enter the namespace mappings of the form pfx:namespace. Separate multiple values with newlines or spaces.
Default Namespace
Text
T
Optional. Sets the default namespace for the xmlns attribute.
Schema Location
Text
T
Optional. Sets the xsi:schemaLocation attribute on the root element.
No Namespace Schema Location
Text
T
Optional. Sets the xsi:noNamespaceSchemaLocation attribute on the root element.
Generate Prefixes
Boolean
T
Generate namespace prefixes or redefine the default namespace as needed.
Prefer Type Substitution
Boolean
T
Use type substitution or element substitution for polymorphism if both are usable.
Write XML Decl
Boolean
T
Write the first line of XML encoding declaration. Default is true.
Data Types
These are the data types supported for fields in the XML connector. For each type, there is a corresponding "attribute" type to indicate the field is an attribute rather than an element. An example of this is a Boolean attribute.
Boolean
Byte
Bytes (base64-encoded)
Char
Date
DateTime
Decimal
Double
Float
Int
Long
Short
String
Time
Error Conditions, Handling, and Recovery
The following are the most common errors when working with XML as a source or target.
Source
Schema mismatch: The dataset schema does not match the map schema.
Field type errors: The data type for a field was read incorrectly.
Target
Sequencing issue: Records are inserted in an incorrect order.
Cardinality issue: Multiple instances of a record type are written when only one record type is allowed.
Using a DJMessage Object as a Source or Target
You can use DJMessage objects as source and target files for the XML-DMS connector. When you create the dataset, use the DJMessage:///[name of message object] URI scheme in the File/URI field for the dataset. You cannot establish a connection to the data source because the integration platform engine is unable to connect directly to the message object.
You can save the dataset with the DJMessage URI reference or specify the DJMessage URL with a macro that points to the DJMessage object file. If you use a macro, you must change the macro value to the DJMessage URL in the run-time configuration. See DJMessage Object Type for more information.
Multiple Element Instances
If a source XML file contains multiple values for a single element, all instances of that element are mapped to the target if the target supports multiple values. If you map to a target using a connector that does not support multiple values, only the first value is mapped.
The data browser within the integration platform will only show the first value for each element within a record. You can use EZscript to access the other values within the file.
For example, the following source XML file contains two name elements for a single customer element:
<customer>
<name>John Smith</name>
<name>Jane Smith</name>
<phone>(512) 555-1234</phone>
</customer>
Only the first name value will be written to a target that does not support multiple values. The first name value element will be visible in the data browser.
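A sketch of the behavior described above, using Python's standard library rather than the integration platform: all instances of the repeated element can be retrieved, while a single-value target would receive only the first:

```python
import xml.etree.ElementTree as ET

doc = """<customer>
<name>John Smith</name>
<name>Jane Smith</name>
<phone>(512) 555-1234</phone>
</customer>"""

customer = ET.fromstring(doc)
names = [n.text for n in customer.findall("name")]
print(names[0])   # the only value a single-value target receives
print(names)      # all instances: ['John Smith', 'Jane Smith']
```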