Source and Target Connectors A-G
This section provides information for source and target connectors from A to G.
AccountMate
AccountMate (formerly known as SourceMate Accounting) is an accounting application that uses an xBASE format. If the data file references a memo field file, the memo file must exist for the connection to occur. The primary data file usually has a .DBF extension and the memo file usually has an .FPT extension. AccountMate files are structured, meaning that both the data and the file structure are stored inside the primary data file. AccountMate is available only as a source connector.
This connector sets field width in bytes. What actually varies is the number of characters that fit into a given field. For more details, see Determining Field Width in Characters or Bytes.
Connector-Specific Notes
See dBASE IV Connector-Specific Notes.
Property Options
See dBASE IV Property Options.
Actian Ingres (ODBC 3.x)
The integration platform can connect to Actian Ingres using ODBC 3.x.
This connector sets field width in bytes. What actually varies is the number of characters that fit into a given field. For more details, see Determining Field Width in Characters or Bytes.
See Also
Installing an ODBC Driver
ODBC Connectivity Tips
Note:  The integration platform connects to Actian Ingres tables with ODBC 3.x. For the procedure, and information about the property options, limitations, and source and target schemas, see ODBC 3.x.
Actian PSQL v11
Actian PSQL v11 is a relational database management system. You can use the Actian PSQL v11 connector to read and write PSQL data tables.
Note:  To connect to a data table in a PSQL v11 database, you must have the PSQL v11 ODBC driver installed on your system and the PSQL v11 engine must be running.
Connector-Specific Notes
To connect to a PSQL v11 database, enter the name of the server, your user ID, password (if needed), and database name.
You can write data to an existing PSQL v11 target either in Replace or Append mode. The Replace output mode removes and recreates the target file or table and destroys any data contained in that file or table. The Append output mode adds data to your tables.
You can use an EZScript to auto-generate a GUID for a variable declared as the unique identifier data type. See Generating a Random Unique Identifier.
The WhereStmt property, which allows you to filter table data, is supported only with the Table/View option.
Note:  The Actian PSQL v11 connector connects to the PSQL v11 ODBC driver directly. No DSN is required.
Property Options
You can set the following source (S) and target (T) properties.
Property
S/T
Description
AutoCommit
T
Allows you to automatically commit changes as they are made by each SQL statement, instead of waiting until the end of the transaction. If AutoCommit is set to true, you cannot roll back changes once they have been made. Default is false.
BulkOperations
T
Use bulk operations for faster insert. This property determines if an insert statement is executed for each record or a bulk add is executed for each record. The default is false, the slower setting. If you want to maximize speed and instruct the integration platform to use a bulk add, change this setting to true.
CommitFrequency
T
Controls how often data is committed to the database. Default is zero, meaning that data is committed at the end of the transformation, allowing rollback on error. This is the slowest setting. When doing large transformations, this is not practical as it may produce too many transaction log entries. Setting the CommitFrequency to some nonzero value tells the connector to do a database commit after inserting or updating the specified number of records. This keeps the transaction log from getting too large but limits the restartability of the transformation.
ConstraintDDL
T
Allows you to specify additional SQL data definition language statements to be executed after their target table is created. This is similar to the support we provide for SQL pass-through in the SQL import connectors. Each line must be a valid ODBC DDL statement. No default exists for this property.
For example, you could have the statements:
CREATE UNIQUE INDEX index1 ON mytable (Field1 ASC)
CREATE INDEX index2 ON mytable (Field2, Field3)
These statements would create two indices on the table mytable. The first one does not allow duplicates and the index values are stored in ascending order. The second index is a compound index on fields Field2 and Field3.
The ConstraintDDL is executed only if the replace mode is used for the target. If there are any errors, the errors are written to the error and event log file. An error during transformation brings up the transformation error dialog box. If you want to ignore the DDL errors, you may continue the transformation.
ConstraintDDL also supports an escaping mechanism that allows users to specify DDL in the native SQL of the DBMS. Any statement preceded by an at sign (@) is sent straight to the DBMS.
The statement
@CREATE INDEX pk_mytable ON mytable (Field1, Field2) WITH PRIMARY
is a DDL statement for creating a primary key for the table mytable. Some ODBC drivers do not support the SQL extensions needed to create a primary key with the ODBC variant of the SQL CREATE statement. So to create primary keys in these cases, use native SQL. No default exists for this property.
CursorType
S
Type of cursor to use for retrieving records from the source table. The choices available are Forward Only, Static and Dynamic. The default setting is Forward Only.
Encoding
ST
Type of encoding to use with source and target files. Default is OEM.
IdentifierQuotes
ST
All databases have what are called quoted identifiers. You use quoted identifiers to make the SQL statement parseable and to distinguish between columns and character data in SQL statements. For example, Oracle uses double quotes for column and table names in SQL statements and uses single quotes for character data. In a SQL statement, you should enclose identifiers that contain special characters or that match keywords in identifier quote characters (also known as delimited identifiers in SQL-92). For example, the Accounts Receivable identifier is quoted in the following SELECT statement:
SELECT * FROM "Accounts Receivable"
If you do not use identifier quotes, the parser assumes there are two tables, Accounts and Receivable, and returns a syntax error that they are not separated by a comma. IdentifierQuotes has three options: Default, None, and " (the double quotation mark).
MaxDataLength
ST
Specifies the maximum number of characters to write to a field, that is, the maximum data length for long data types. Default is 1 MB. Reset this number based upon your available memory and target requirements.
When this connector requests the column field size for these data types, it checks for a returned value greater than the MaxDataLength value. If the value is greater, the MaxDataLength value is used.
Some ODBC drivers have limits on the maximum data length. If you choose an ODBC source or target connector and the default setting is not 1 MB, the integration platform sets the value for that particular ODBC driver. Under those conditions, do not set the MaxDataLength property to a higher value.
PrimaryKey
T
The PrimaryKey property allows you to specify a list of field names that are used to make the primary key. Field names are delimited by commas. If the PrimaryKey property contains one or more field names, these names are included in the SQL CREATE statement when the connector is in replace mode. No default exists for this property.
There is one additional requirement for using the PrimaryKey property. The ODBC driver must support integrity enhancement facility (IEF). Only the more advanced ODBC drivers support this.
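For example, if the PrimaryKey property were set to Field1,Field2, the CREATE statement generated in replace mode might resemble the following sketch (the table name, column names, and data types here are illustrative; the exact DDL depends on your map and ODBC driver):
CREATE TABLE mytable (Field1 INTEGER NOT NULL, Field2 CHAR(10) NOT NULL, Field3 VARCHAR(50), PRIMARY KEY (Field1, Field2))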
SystemTables
ST
If set to true, this property allows you to see all tables created by the DBA in the database. The system table names appear in the table list. Default is false.
Note that this property is applicable only if the user is logged on to the database as the database administrator. Only the DBA has access to system tables.
TransactionIsolation
ST
Allows you to specify any one of four isolation levels when reading from or writing to a database table with ODBC. The default is serializable.
The ANSI SQL 2 standard defines three specific ways in which serializability of a transaction may be violated: P1 (Dirty Read), P2 (Nonrepeatable Read), and P3 (Phantoms).
The four isolation levels are as follows:
Read uncommitted – Permits P1, P2, and P3.
Read committed – Permits P2 and P3. Does not permit P1.
Repeatable read – Permits P3. Does not permit P1 and P2.
Serializable – Does not permit P1, P2 or P3.
For further details about TransactionIsolation levels, refer to IBM DB2 Universal Database ODBC documentation.
UseCursors
T
The UseCursors property allows you to turn cursor support on or off. The default is false. If set to true and the specified ODBC driver does not support cursor inserts, the integration platform falls back on the SQL INSERT mode of adding records.
For exports, cursor support is meant to enhance the performance of inserting records. This appears to be the case for desktop databases. For database servers, there is no noticeable change in insert speed. Another complication of cursor inserts is that some drivers require that the target table be indexed; otherwise, positioned updates (cursors) are not allowed. Two additional properties in the ODBC export connector address this issue: PrimaryKey and ConstraintDDL.
Views
ST
If set to true (default), allows you to see views. View names appear in the table list along with table names.
WhereStmt
S
Provides a pass-through mechanism for SQL connectors where advanced users can construct the Where clause of the SQL query. It is also used as an alternative to writing lengthy query statements in the Query Statement box. Consider using this statement to instruct the SQL database server to filter data based upon a condition before it is sent to the integration platform. Omit the keyword Where when you enter the clause. This property has no default.
When the source connection is a Select statement, do not apply the WhereStmt. Instead, include the Where clause in your Select statement. This property enables data filtering when you select a table.
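For example, to read only the Texas rows of a hypothetical Accounts table, you might enter the following in the WhereStmt property (the table and column names here are illustrative):
State = 'TX' AND Balance > 0
The connector then issues a query equivalent to:
SELECT * FROM Accounts WHERE State = 'TX' AND Balance > 0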
Data Types
The following data types are supported:
BFLOAT4
BFLOAT8
BIGINT
BINARY
BIT
CHAR
CURRENCY
DATE
DATETIME
DECIMAL
DOUBLE
FLOAT
IDENTITY
INTEGER
LONGVARBINARY
LONGVARCHAR
MONEY
NUMERIC
NUMERICSA
NUMERICSLB
NUMERICSTB
NUMERICSTS
REAL
SMALLINT
SMALLIDENTITY
TIME
TIMESTAMP
TINYINT
UBIGINT
UINTEGER
USMALLINT
UNIQUEIDENTIFIER
UTINYINT
VARCHAR
Actian PSQL v11 Multimode
PSQL is a relational database management system that uses the ODBC 3.5 standard. You can use the Actian PSQL v11 Multimode connector to write PSQL v11 data tables. It is not available as a source connector.
This connector specifies field width in characters, which means the width of a field is literally that number of characters. For more details, see Determining Field Width in Characters or Bytes.
Note:  To connect to a data table in a PSQL v11 database, you must have PSQL v11 installed on your system and the PSQL v11 engine must be running. The Actian PSQL v11 Multimode connector connects to the PSQL v11 ODBC driver directly. No DSN is required.
Connector-Specific Notes
Target schema modification - Modifications to column names, data types, and sizes are not permitted.
Query statement support - The target format must support query statements, such as Oracle and SQL Server statements, for transformations to work with this connector.
Auto-increment fields - If the target table contains an auto-increment column, you may get an error at run time. If this occurs, delete the field that is causing the problem.
Auto-generated GUID - You can use an EZScript to auto-generate a GUID for a variable declared as the unique identifier data type. See Generating a Random Unique Identifier.
Property Options
You can set the following source (S) and target (T) properties.
Property
S/T
Description
AutoCommit
T
Allows you to automatically commit changes as they are made by each SQL statement, instead of waiting until the end of the transaction. If AutoCommit is set to true, you cannot roll back changes once they have been made. Default is false.
Encoding
T
Type of encoding to use with source and target files. Default is OEM.
IdentifierQuotes
T
All databases have what are called quoted identifiers. You use quoted identifiers to make the SQL statement parseable and to distinguish between columns and character data in SQL statements. In a SQL statement, you should enclose identifiers that contain special characters or that match keywords in identifier quote characters (also known as delimited identifiers in SQL-92). For example, the Accounts Receivable identifier is quoted in the following SELECT statement:
SELECT * FROM "Accounts Receivable"
If you do not use identifier quotes, the parser assumes there are two tables, Accounts and Receivable, and returns a syntax error that they are not separated by a comma.
IdentifierQuotes has three options: Default, None, and ".
MaxDataLength
T
Specifies the maximum number of characters to write to a field, that is, the maximum data length for long data types. Default is 1 MB. Reset this number based upon your available memory and target requirements.
When this connector requests the column field size for these data types, it checks for a returned value greater than the MaxDataLength value. If the value is greater, the MaxDataLength value is used.
Some ODBC drivers have maximum data length limitations. If you choose an ODBC source or target connector and the default setting is not 1 MB, the integration platform sets the value for that particular ODBC driver. Under those conditions, do not set the MaxDataLength property to a higher value.
SQL Output
T
Allows you to select bound or unbound mode and specify whether you want to write SQL statements to a SQL log or not. Keep in mind that bound mode is faster, since bind variables are used.
There are four output modes:
Target Only - Uses bound mode, which uses bind variables. SQL statements are sent to the Target and not to the SQL log specified in the SQL Log property. Default.
Target Only (Unbound mode) - Uses unbound mode, which does not use bind variables and sends the literal SQL statement to the database engine. SQL statements are sent to the Target and not to the SQL log specified in the SQL Log property.
Target and SQL Log - Sends SQL statements to the Target and to the SQL log specified in the SQL Log property.
SQL Log Only - Sends SQL statements only to the SQL log file specified in the SQL Log property.
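To illustrate the difference between bound and unbound modes, a bound-mode insert sends a parameterized statement and supplies the values separately through bind variables, while unbound mode sends the literal SQL statement. Assuming a hypothetical table mytable, the two forms might look like this:
INSERT INTO mytable (Field1, Field2) VALUES (?, ?)
INSERT INTO mytable (Field1, Field2) VALUES ('Smith', 100)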
SQL Log
T
The default is sql.log in the default installation directory. To use a different log, browse to the file or enter the path and file name.
SystemTables
T
If set to true, this property allows you to see all tables created by the DBA in the database. The system table names appear in the table list. Default is false.
This property is applicable only if the user is logged on to the database as the database administrator. Only the DBA has access to system tables.
TransactionIsolation
T
Allows you to specify any one of four isolation levels when reading from or writing to a database table with ODBC. The default is serializable.
The ANSI SQL 2 standard defines three specific ways in which serializability of a transaction may be violated: P1 (Dirty Read), P2 (Nonrepeatable Read), and P3 (Phantoms).
The four isolation levels are as follows:
Read uncommitted – Permits P1, P2, and P3.
Read committed – Permits P2 and P3. Does not permit P1.
Repeatable read – Permits P3. Does not permit P1 and P2.
Serializable – Does not permit P1, P2 or P3.
For further details about TransactionIsolation levels, refer to IBM DB2 Universal Database ODBC documentation.
UpdateNullfields
T
Null values are sent to the database when inserting or updating records. The default is true. If you select false, null values are not sent to the database when you insert or update records. When set to false, this property forces the connector to operate in unbound mode, which may cause slower performance.
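For illustration only, assume a hypothetical record in which Field2 is null. With UpdateNullfields set to true, the null value is sent explicitly; with false, the null field is omitted from the statement, roughly as in this sketch:
INSERT INTO mytable (Field1, Field2) VALUES ('Smith', NULL)
INSERT INTO mytable (Field1) VALUES ('Smith')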
Views
T
If set to true (default), allows you to see views. View names appear in the table list along with table names.
Data Types
The following data types are supported:
BFLOAT4
BFLOAT8
BIGINT
BINARY
BIT
CHAR
CURRENCY
DATE
DATETIME
DECIMAL
DOUBLE
FLOAT
IDENTITY
INTEGER
LONGVARBINARY
LONGVARCHAR
MONEY
NUMERIC
NUMERICSA
NUMERICSLB
NUMERICSTB
NUMERICSTS
REAL
SMALLINT
SMALLIDENTITY
TIME
TIMESTAMP
TINYINT
UBIGINT
UINTEGER
USMALLINT
UNIQUEIDENTIFIER
UTINYINT
VARCHAR
Actian Vector
Actian Vector is a database application that you can connect to natively through ODBC 3.5. With this connector, the integration platform reads and writes to Actian Vector tables.
This connector specifies field width in characters, which means the width of a field is literally that number of characters. For more details, see Determining Field Width in Characters or Bytes.
Before using this connector, you must install Actian Vector Database client — Ingres Drivers ODBC 3.5 on your system.
Note:  The integration platform connects to Actian Vector tables with ODBC 3.5. For the procedure, and information about the property options, limitations, and source and target schemas, see ODBC 3.5.
See Also
Installing an ODBC Driver
ODBC Connectivity Tips
Actian Vector Multimode
The Actian Vector Multimode connector allows you to perform multiple operations (such as table drops and table inserts) directly on your target database.
This connector specifies field width in characters, which means the width of a field is literally that number of characters. For more details, see Determining Field Width in Characters or Bytes.
Before using this connector, you must install Actian Vector Database client — Ingres Drivers ODBC 3.5 on your system.
Note:  The integration platform connects to Actian Vector tables with ODBC 3.5. For the procedure, and information about the property options, limitations, and source and target schemas, see ODBC 3.5 Multimode.
Actian Vectorwise (Delimited)
The integration platform can connect to Actian Vectorwise (Delimited) data using a Unicode (Delimited) connection.
This connector sets field width in bytes. What actually varies is the number of characters that fit into a given field. For more details, see Determining Field Width in Characters or Bytes.
This is an add-on connector and is not included in the default installation of Actian DataConnect Studio IDE. The connector is available at esd.actian.com.
Note:  The integration platform connects to Actian Vectorwise tables with Unicode (Delimited). For the procedure, and information about the property options, limitations, and source and target schemas, see Unicode (Delimited).
Actian Vectorwise (ODBC 3.x)
The integration platform can connect to Actian Vectorwise (ODBC 3.x) data using an ODBC 3.x connection.
This connector sets field width in bytes. What actually varies is the number of characters that fit into a given field. For more details, see Determining Field Width in Characters or Bytes.
See Also
Installing an ODBC Driver
ODBC Connectivity Tips
Note:  The integration platform connects to Actian Vectorwise tables with ODBC 3.x. For the procedure, and information about the property options, limitations, and source and target schemas, see ODBC 3.x.
Actian Vectorwise (ODBC 3.x Multimode)
The integration platform can connect to Actian Vectorwise (ODBC 3.x Multimode) data using an ODBC 3.x Multimode connection.
This connector sets field width in bytes. What actually varies is the number of characters that fit into a given field. For more details, see Determining Field Width in Characters or Bytes.
See Also
Installing an ODBC Driver
ODBC Connectivity Tips
Note:  The integration platform connects to Actian Vectorwise tables with ODBC 3.x Multimode. For the procedure, and information about the property options, limitations, and source and target schemas, see ODBC 3.x Multimode.
Acucobol (ODBC 3.x)
Acucobol is a record manager that many applications utilize for data storage. This Acucobol connector uses ODBC 3.x to connect to Acucobol files.
Tip:  You can connect to Acucobol tables with ODBC 3.x. For the procedure, and information about the property options, limitations, and source and target schemas, see ODBC 3.x.
If you use Vision version 4, use ODBC to connect to applications that use newer versions of Acucobol for backend data storage. If you use Vision version 2 or 3, refer to the ODBC Driver Information on our web site for a list of vendors (to determine whether the AcuODBC Vision Driver supports these versions).
This connector sets field width in bytes. What actually varies is the number of characters that fit into a given field. For more details, see Determining Field Width in Characters or Bytes.
See Also
Installing an ODBC Driver
ODBC Connectivity Tips
ACUCOBOL-GT Version 7
COBOL (COmmon Business Oriented Language) is a second-generation programming language. This ACUCOBOL-GT connector uses the native Acucobol v7.0 API, in contrast to the Acucobol (ODBC 3.x) connector, which uses ODBC for connectivity.
The ACUCOBOL-GT Version 7 connector is supported on Windows and Linux platforms.
Connector-Specific Notes
Append Mode and Sequential Files: The Append mode is not available for sequential file types, because the Vision file system API does not support this mode. So for sequential files, the only available mode is Replace. Append mode is supported for indexed and relative files.
Property Options
You can set the following source (S) and target (T) properties.
Specifying source or target properties is important for you if:
Your connector type requires you to define properties.
Upon viewing your source or target data, you find that the field names and data appear incorrectly: in the wrong record or field, or containing the wrong data.
Source and Target Properties
 
Property
S/T
Description
CodePage
ST
This translation table determines which encoding to use for reading and writing data. The default is ANSI, the standard in the US.
Allow duplicate keys
T
Allows duplicate records in an indexed type target file. If true, duplicate records are allowed; if set to false, a "duplicate record found" error is returned for all duplicate records.
Although this property is visible for relative and sequential file types, it is NOT supported (the default value is true for these types).
FileType
ST
Specifies the file type of the source or target file. The setting is not optional, and the default is indexed.
From the list, choose one of the following file types:
indexed (default) – Contains records that have one or more key fields. Records in an indexed file are ordered by ascending values in these key fields. Each key of an indexed file represents an ordered sequence by which the records can be accessed. One of the key fields, known as the primary key, must contain a unique value for each record and is used to identify records uniquely.
relative – Contains records that are identified by their record number, where the first record in the file is record number one. Relative files are ordered by ascending record numbers.
sequential – Ordered by the historical order in which records are written to the file. For an important note on this file type, see Connector-Specific Notes.
If you leave the file type set to the default and then select a relative or sequential file, an error displays stating that the data source cannot be opened. Select the FileType setting first and then select your source or target file.
To append indexed data to an existing target file, that file must also be indexed.
Data Types
The following data types are supported:
Alphabetic
Alphanumeric
Comp
Comp-1
Comp-2
Comp-3
Comp-4
Comp-5
Comp-6
Comp-N
Comp-X
Display
Display sign leading
Display sign trailing
Double (8 bytes)
Float (4 bytes)
Adabas (NatQuery)
The Adabas (NatQuery) connector uses the ASCII Fixed format as its underlying structure. Fixed ASCII data is any ASCII file in which no characters separate the fields, records may or may not be separated, and every record in the file occupies the same number of bytes.
Because there are so many variations of fixed ASCII data files, you may have to modify the source or target properties. Details are discussed in the property options section.
With fixed ASCII source files, you must use a schema. Since the data has no field delimiters and possibly no record separators, it appears continuous until you specify a record separator or the record length, set up the field sizes and data types, and assign optional field names.
If the fixed ASCII file contains nonprintable characters, you may find it easier to import the data as binary. See Binary.
If the fixed ASCII file contains null characters or any form of packed data, you must use the Binary connector. For details, see the Connector-Specific Notes and Binary.
This connector sets field width in bytes. What actually varies is the number of characters that fit into a given field. For more details, see Determining Field Width in Characters or Bytes.
Multiple Record Structured Schemas Possible
Adabas (NatQuery) files can contain multiple record structured schemas. For information on defining multiple record structured schemas, search for "multiple record types" in the help topics.
Connector-Specific Notes
If there are null characters (hex 00) in a fixed ASCII data file, the integration platform stops reading the data at the first occurrence of a null character as it assumes that this is the end of the file. If you experience this, change the source connector to Binary.
If the alternate field or record separator is not listed
1. Highlight the default separator.
2. Enter the separator as an ASCII character.
Example: If the separator is a line feed, enter:
LF
To use a hex value instead, go to step 3.
3. Enter a backslash, an "x", and the hex value for the separator.
Example: If the separator is a line feed, enter:
\x0a
For a list of the 256 standard and extended ASCII characters, search for "hex values" in the documentation.
Property Options
Single record type data: Fixed ASCII has only one certainty – every record in the file has exactly the same number of bytes. If the records are not all the same length, the file is a variable-length file and cannot be read as Fixed ASCII. For other possibilities, see Content Extractor.
Multirecord Type Files: To work with variable-length multirecord files, with or without a record separator, search for "multiple record" in the documentation.
You can set the following source (S) and target (T) properties.
Property
S/T
Description
NumericFormatNormalization
S
When set to true, handles thousands-separators according to usage for locale when numeric strings are converted to numeric type. This property overrides any individual field settings. Default is false.
InsertEOFRecSep
S
This option inserts a record separator on the last record of the file, if it is missing. The default is false. If set to true, this property captures the last record (with no record separator) instead of discarding it.
If the property options define a specific separator (such as CR-LF, LF), the specified separator must exist at the end of all records, including the last record in the file. Any trailing data without that separator is ignored. To avoid losing your last line of data if it does not contain the appropriate record separator, we suggest either manually editing (for one or two affected files) or creating a program (for several affected files) that adds a record separator at the end of the last record.
If a terminating record separator already exists, a blank line is read at the end of the file. Depending on your target or export type, you may need to use a script to filter out blank lines and avoid errors in exported data.
Sample Size
S
Allows you to set the number of records (starting with record 1) that are analyzed to set a default width for each source field. The default value is 1000. You can change the value to any number between 1 and the total number of records in your source file. As the number gets larger, more time is required to analyze the file, but it may be necessary to analyze every record to ensure no data is truncated.
To change the value, click StyleSampleSize, highlight the default value, and type a new value.
StartOffset
S
If your source data file starts with characters that need to be excluded from the transformation, set the StartOffset option to specify at which byte of the file to begin. The default value is zero. The correct value may be determined by using the Hex Browser.
This property is set in number of bytes, not characters.
StripLeadingBlanks
S
By default, leading blanks are left in fixed ASCII source data. To delete leading blanks, set StripLeadingBlanks to true.
StripTrailingBlanks
S
By default, trailing blanks are left in fixed ASCII source data. To delete trailing blanks, set StripTrailingBlanks to true.
FieldSeparator
T
Allows you to choose a field separator character for your target file. The default is None. The other choices are comma (,), tab, space, carriage return-line feed (CR-LF), line feed (LF), carriage return (CR), line feed-carriage return (LF-CR), control-R, and pipe (|).
If the field separator is not one of the choices from the list and is a printable character, highlight None and then type the correct character. For example, if the separator is an asterisk (*), type an asterisk from the keyboard.
If the field separator is not a printable character, replace None with a backslash, an X, and the hexadecimal value for the separator. For example, if the separator is a check mark, then enter \XFB. For a list of the 256 standard and extended ASCII characters, search for "hex values" in the documentation.
Fill Fields
T
Writes an ASCII data file where every field is variable length when this property is set to false: all trailing spaces are removed from each field when the data is written. The default is true. The true setting pads all fields with spaces to the end of the field length to maintain the fixed length of the records.
Ragged Right
T
Writes an ASCII data file where the last field in each record is variable length when set to true. The default is false. The false setting pads the last field with spaces to the end of the record length to maintain the fixed length of the records.
You must set Fill Fields to false for the Ragged Right property to work properly. The Ragged Right property has no effect if you set Fill Fields to true. If you set Fill Fields to false, then the Ragged Right property determines whether blank fields and fields with only spaces as their data still appear at the end of the record.
DatatypeSet
S/T
Allows you to choose between standard and COBOL data types in your fixed ASCII data file. Standard is the default and means that all the data in the file is readable (lower) ASCII data.
If your fixed ASCII file contains (or needs, for target file) COBOL display type fields and you are using a COBOL 01 copybook (fd) to define the fields, you MUST change this property option to "COBOL" before connecting to the COBOL copybook in the External Structured Schema window.
RecordSeparator
S/T
A fixed ASCII file is presumed to have a carriage return-line feed (CR-LF) between records. To use other characters as the record separator or no record separator, click RecordSeparator for a list of choices, including system default, carriage return-line feed (default), line feed, carriage return, line feed-carriage return, form feed, empty line, ctrl-E, and no record separator. To use a separator other than one from the list, enter it here. The SystemDefault setting enables the same transformation to run with CR-LF on Windows systems and LF on Linux systems without having to change this property.
If the record separator is not one of the choices from the list and is a printable character, highlight the CR-LF and then type the correct character. For example, if the separator is a pipe (|), type a pipe from the keyboard.
If the record separator is not a printable character, replace CR-LF with a backslash, an X, and the hexadecimal value for the separator. For example, if the separator is a check mark, then enter \XFB. For a list of the 256 standard and extended ASCII characters, search for "hex values" in the documentation.
Tab Size
S/T
If your fixed ASCII source file has embedded tab characters representing white space, you can expand those tabs to set a number of spaces. The default value is zero. To change it, highlight the zero and then type a new value.
CodePage
S/T
This translation table determines which encoding to use for reading and writing data. The default is ANSI, the standard in the US.
Data Types
All data in ASCII files is Text, but you may wish to select other data types. The following data types are supported:
Boolean (parses and displays true or false values)
Name (parses and displays proper name into name parts, such as honorifics, titles, last name, middle initial, first name)
Number (parses and displays numeric and floating values)
Text (parses and displays alphanumeric values)
Decimal (parses and displays a proper fraction whose denominator is a power of 10)
Tip:  Specify the Name data type if you want to parse a name into its component pieces. Some examples are honorifics (such as Mr., Dr.), names (first, middle, last), and titles (PhD., Jr.). You can also display the component pieces according to an edit mask. For example, you can set the mask to display a name in a field as "lastname, firstname" (Jones, Joan).
Length
Size of field in the target.
Caution!  A field length shorter than the default may truncate data.
Alignment
Data in each field of your target file or table has an alignment property of general, left, center, or right.
Pad Character
This option is valid only for character fields. When data does not fill a field completely, the remainder of the field may be filled with some character.
Alpha Four
Alpha Four is a database application that uses a dBASE format. The primary data file usually has a .DBF extension and the memo file usually has a .DBT extension. Alpha Four files are structured, meaning that both the data and the file structure are stored inside the primary data file. This Alpha Four connector sets field width in bytes. What actually varies is the number of characters that fit into a given field. For more details, see Determining Field Width in Characters or Bytes.
Connector-Specific Notes
The Float data type is available in dBASE IV but is not available for Alpha Four data.
See also dBASE IV Connector-Specific Notes.
Property Options
See dBASE IV Property Options.
Apache Common Logfile Format
The Apache Software Foundation develops open source software for Windows and Linux operating systems. One of the Foundation's most popular projects is the Apache HTTP Server Project, which provides HTTP services in accordance with current HTTP standards. The Apache HTTP Server produces several different log files, one of which is the Common Log Format (CLF) file. Map Editor can read Apache Common Logfile Format files using the ASCII (Delimited) connector but cannot write to these files as targets.
This connector sets field width in bytes. What actually varies is the number of characters that fit into a given field. For more details, see Determining Field Width in Characters or Bytes.
See Also
Common Logfile Format Webserver
Apache Common Logfile Format Parameters
The Apache Common Logfile contains a separate line for each request (LF or CR/LF record separators depending on platform). A line is composed of several tokens separated by spaces:
remotehost ident authuser date request status bytes
If a token does not have a value, then it is represented by a hyphen (-). The meanings and values of these tokens are as follows:
remotehost
The fully-qualified domain name of the client, or its IP number if the name is not available.
ident
If IdentityCheck is enabled and the client machine runs identd, then this is the identity information reported by the client.
authuser
If the request was for a password-protected document, then this is the user ID used in the request. May be bogus if status is 401.
date
The date and time of the request, encapsulated in square brackets, in the following format:
date = [day/month/year:hour:minute:second zone]
day = 2*digit
month = 3*letter
year = 4*digit
hour = 2*digit
minute = 2*digit
second = 2*digit
zone = ('+' | '-') 4*digit
request
The request line from the client, enclosed in double quotes (").
status
The three-digit status code returned to the client.
bytes
The number of bytes in the object returned to the client, not including any headers.
Example
http://www.interse.com/ - bob [08/Aug/1995:06:00:00 -0800]
"GET /analyst/ HTTP/1.0" 200 1067
Property Options
You can set the following source (S) and target (T) properties.
Property
S/T
Description
AlternateFieldSeparator
S
Most data files have only one field separator between all the fields; however, it is possible to have more than one field separator. If your source file has one field separator between some of the fields and a different separator between other fields, you can specify the second field separator here. Otherwise, you should leave the setting at None (the default).
The alternate field separators available from the list are none (default), comma, tab, space, carriage return-line feed, line feed, carriage return, line feed-carriage return, ctrl-R, and pipe (|). To select a separator from the list, click AlternateFieldSeparator. If you have an alternate field separator other than one from the list, you can type it here.
If the alternate field separator is not one of the listed choices and is a printable character, highlight the default value and then type the correct character. For example, if the separator is an asterisk, type an asterisk from the keyboard.
If the field separator is not a printable character, replace the default value with a backslash, an X, and the hexadecimal value for the separator. For example, if the separator is a check mark, then enter \XFB. For a list of the 256 standard and extended ASCII characters, search for "hex values" in the documentation.
AutomaticStyling
S
By default, AutomaticStyling is set to false, causing all data to be read or written as Text. When set to true, it reads and formats particular data types, such as numeric and date fields, automatically.
During the transformation process, autostyling ensures that a date field in the source file is formatted as a date field in the target file, as opposed to character or text data. If your source file contains zip code data, you may want to leave AutomaticStyling as false, so leading zeros in some zip codes in the eastern United States are not deleted.
FieldEndDelimiter
S
All Apache Common Logfile Format files are presumed to have beginning-of-field and end-of-field delimiters. The default delimiter is a quote (") because it is the most common. However, some files do not contain field delimiters, so this option is available for both source files and target files. To read from or write to a file with no delimiters, set FieldEndDelimiter to None.
FieldSeparator
S
An Apache Common Logfile Format file is presumed to have a space between each field. To specify some other field separator, click FieldSeparator to display the list of options. The options are comma (default), tab, space, carriage return-line feed, line feed, carriage return, line feed-carriage return, ctrl-R, a pipe (|), and no field separator. If you have or need an alternate field separator other than one from the list, you can type it here.
If the field separator is not one of the choices from the list and is a printable character, highlight the default value and then type the correct character. For example, if the separator is an asterisk (*), type an asterisk from the keyboard.
If the field separator is not a printable character, replace the default value with a backslash, an X, and the hexadecimal value for the separator. For example, if the separator is a check mark, then enter \XFB. For a list of the 256 standard and extended ASCII characters, search for "hex values" in the documentation.
FieldStartDelimiter
S
All Apache Common Logfile Format files are presumed to have beginning-of-field and end-of-field delimiters. The default delimiter is a quote (") because it is the most common. However, some files do not contain field delimiters, so this option is available for both your source files and your target files. To read from or write to a file with no delimiters, set FieldStartDelimiter to None.
Header
S
In some files, the first record is a header record. For source data, you can remove it from the input data and cause the header titles to be used automatically as field names. For target data, you can cause the field names in your source data to automatically create a header record in your target file. To identify a header record, set Header to true. Default is false.
RecordFieldCount
S
If your Apache Common Logfile Format data file has field separators, but no record separator, or if it has the same separator for both the fields and the records, you should specify the RecordSeparator (most likely a blank line), leave the AlternateFieldSeparator option blank and enter the exact number of fields per record in this box. The default value is zero.
RecordSeparator
S
An Apache Common Logfile Format file is presumed to have a carriage return-line feed (CR-LF) between records. To use other characters for a record separator, click the RecordSeparator cell, click the arrow and select a record separator. The list box choices are carriage return-line feed (default), line feed, carriage return, line feed-carriage return, form feed, empty line, ctrl-E, and no record separator.
If the record separator is not one of the choices from the list and is a printable character, highlight the CR-LF and then type the correct character. For example, if the separator is a pipe (|), type a pipe from the keyboard.
If the record separator is not a printable character, replace CR-LF with a backslash, an X, and the hexadecimal value for the separator. For example, if the separator is a check mark, then enter \XFB. For a list of the 256 standard and extended ASCII characters, search for "hex values" in the documentation.
StartOffset
S
If your source data file starts with characters that need to be excluded from the transformation, set the StartOffset option to specify at which byte of the file to begin. The default value is zero. The correct value may be determined by using the Hex Browser.
Note:  This property is set in number of bytes, not characters.
StripLeadingBlanks
S
Strips the leading blanks in all data fields if set to true. Default is false.
StripTrailingBlanks
S
Strips the trailing blanks in all data fields if set to true. Default is false.
StyleSampleSize
S
Allows you to set the number of records (starting with record 1) that are analyzed to set a default width for each source field. The default value for this option is 5000. You can change the value to any number between 1 and the total number of records in your source file. As the number gets larger, more time is required to analyze the file, and it may be necessary to analyze every record to ensure no data is truncated.
To change the value, click StyleSampleSize, highlight the default value, and type a new one.
CodePage
S
This translation table determines which encoding to use for reading and writing data. The default is ANSI, the standard in the US.
Field1IsRecTypeId
S
If the first field of each record in your source file contains the Record Type ID, you can select true for this property and Map Editor treats each record as a separate record type. Within each record, field names derived from the Record Type ID are automatically generated for each field.
For example, if your first record consisted of the following:
"Names", "Arnold", "Benton", "Cassidy", "Denton", "Exley", "Fenton"
Map Editor assigns these field names as follows:
Names_01: Names
Names_02: Arnold
Names_03: Benton
Names_04: Cassidy
Names_05: Denton
Names_06: Exley
Names_07: Fenton
NullIndicator
S
This property allows you to enter a special string used to represent null values. You can select predefined values or type any other string.
Target – When writing a null value, the contents of the null indicator string are always written.
Source – A check is made to see if the null indicator is set. If it is set, the data is compared to the null indicator. If the data and the null indicator match, the field is set to null.
EmptyFieldsNull
S
This property allows you to treat all empty fields as null.
NumericFormatNormalization
S
Setting this property to true handles thousands-separators according to usage for locale when numeric strings are converted to numeric type. This property overrides any individual field settings. Supported in 9.2.2 and later. Default is false.
Archive Queue
This is an add-on connector and is not included in the default installation of Actian DataConnect Studio IDE. The connector is available at esd.actian.com.
Archive Queue is a messaging component that enables you to access a compressed or uncompressed archive that is stored on disk or in a DJMessage object within a workflow. Archive Queue treats the archive like a read-only message queue. You can use the GetMessage action to retrieve files within an archive into a message object.
Archive Queue supports the following archive/compression formats:
ZIP
JAR
TAR
TAR.GZ
TAR.BZ2
TAR.XZ
Step Actions
This component supports the following step actions:
Action
Description
GetMessage
Reads a file that matches the given pattern into the supplied DJMessage.
Disconnect
Explicitly disconnects from the archive file, releasing all file handles.
Tip:  Use the Disconnect action before attempting to remove or overwrite an archive file.
Properties
Process Properties
This invoker uses the following process properties.
Property
Description
Archive Type
The archive/compression type. The supported types are ZIP/JAR, TAR, TAR.GZ, TAR.BZ2, and TAR.XZ.
Source
Indicates whether the source archive is a file on disk or contained within a DJMessage.
Note
When the source is contained in a DJMessage, you must first Base64 encode the message body. See B64Encode Function.
Source Message/File
The path to the source file or the name of the source message.
GetMessage Pattern
The pattern to use to detect files to be read from the archive file. An asterisk (*) represents a wildcard. This pattern will be matched only against the name of the file, excluding extensions. The queue will traverse subdirectories within the archive implicitly.
Examples: File* or *12
To retrieve a specific file, type the full name of the file, excluding extension.
The pattern is displayed in clear text. If you prefer to conceal the value, have your Administrator configure it as an encrypted macro.
GetMessage Extension
The pattern to use to detect files to be read from the archive file. An asterisk (*) represents a wildcard. This pattern will be matched against the file extension. The default is csv.
Binary
Indicates whether the files that are read from the archive are binary rather than text files. If set to True, a Base64-encoded message is returned.
Step Properties
This component uses the following step properties. When you use the GetMessage action, Archive Queue populates the supplied message object with the contents of the next file that matches the given file name and extension patterns. In addition to returning the contents of the file, the original FilePath and FileName values are returned as message properties.
Property
Description
FileName
Contains the original name of the file within the archive.
FilePath
Contains the original full path of the file within the archive.
Examples
logMessage("INFO", "The original name of my file:" & myMessage.properties("FileName"))
logMessage("INFO", "The original path of my file:" & myMessage.properties("FilePath"))
ASCII (Delimited)
ASCII stands for American Standard Code for Information Interchange. Delimited ASCII is a data format in which fields and records are separated by selected characters called delimiters. Field and record delimiters are distinct. You can set the delimiters in the source or target properties for this connector.
This connector sets field width in bytes. What actually varies is the number of characters that fit into a given field. For more details, see Determining Field Width in Characters or Bytes.
Multiple Structured Schemas Possible
Delimited ASCII files can contain multiple structured schemas. The procedure used for setting up a delimited ASCII multiple structured schema file is the same as for a fixed ASCII multiple structured schema. For more information, search for the keywords "multiple record types."
Delimited ASCII as Alternate Source or Target Connector
Delimited ASCII files are among the most universal types of data files used to transfer data between software applications. Many applications have the ability to import data from or export data to a delimited ASCII file. Even if the application is not directly supported, you can transform its exported delimited ASCII source data. On the target side, if an unsupported application can import delimited ASCII, use the integration platform to output delimited ASCII files for importing.
Using an External Schema to Override Source Structure
When ASCII (Delimited) is the source connector, the data structure is normally set by field delimiters and the header record of the source file. However, after connecting to a file, you can override this structure by selecting a schema, for example, to change field names, change their size, or even add additional fields for multiple record layouts.
Using an External Schema to Override Target Structure
When ASCII (Delimited) is the target connector, the data structure is normally set by field delimiters and the header record of the target file. However, after connecting to a file, you can override this structure by selecting a schema, for example, to change field names, change their size, or even add additional fields for multiple record layouts.
Note that applying an external schema with fewer fields than the target file prevents writing of fields beyond the range of the schema.
Delimiter Characters Occurring as Data within a Field
Characters that delimit the start and end of a field may also appear as data within the field. To ensure that a data character is not interpreted as a delimiter, the integration platform creates an escape sequence by doubling the character when it is assumed to be data.
Quotation marks are a common example of this escape sequence. As shown below, the quotation marks enclose quoted words in an Excel source field. In the mapping to a delimited ASCII target with a quotation mark selected as field delimiter, the quotation marks are doubled for the data but not for the delimiters enclosing the field.
Excel source
The customer said, "A penny saved is a penny earned."
Delimited ASCII target, with field delimiters
"The customer said, ""A penny saved is a penny earned."""
Connector-Specific Notes
HeaderRecord Property in the Target
If the HeaderRecord property is set to true, a header is written only for the first record type in a multiple record layout.
All records are written using the same properties.
HeaderRecord Property in the Source
If the HeaderRecord property is set to true, then a single header record is skipped at the beginning of the file. If there are later header records for the additional record types, they appear as data and can possibly cause errors.
All records are read using the same properties.
Unless truncation handling is turned off (set to Ignore), each record is read twice. Reading a single source record requires first reading the discriminator record; if the discriminator indicates a different record type, the record must be reread with the new record type. While the discriminator record is read, truncation handling is momentarily turned off, so even if the discriminator record indicates itself as the record type, when truncation is not set to Ignore, truncation handling must be turned back on and the record reread.
Simply connecting to a source file produces only one structured schema. If the file has multiple record types, you must use Structured Schema Designer to manually design the record types.
Property Options
These are properties that you may need to specify for your source or target data. After you have selected all needed settings for these options, apply them to the connection.
Delimited ASCII data has special characters between fields and records. Another name for delimited ASCII files is CSV (comma separated value) data. Excel spreadsheets, for instance, can be saved as .csv files.
When delimited ASCII is the source, the fields are assumed to be separated by commas and delimited by quotation marks. Records are assumed to be separated by a carriage return-line feed (CR-LF). If your source file contains different separators or delimiters, you must specify them by clicking the Source Properties icon in the toolbar and choosing the correct separators and delimiters for your source data file. If you do not know what the separators or delimiters are in your source file, right-click the ASCII (Delimited) Properties grid in the right panel and select Show Delimiters to open the Hex Browser.
When delimited ASCII is your target connector, the default field separators are commas (,) and the delimiters are quotes ("). The default record separator is a carriage return-line feed (CR-LF). If different separators or delimiters are needed, you can specify them by clicking the Target Properties icon in the toolbar and choosing the desired separators and delimiters for your target file. When you are creating a delimited ASCII file that is imported into another application, you should set each of the target property options as required by the other application before creating the file.
If the field or record separator is not listed
1. Highlight the default separator.
2. Enter the separator as an ASCII character.
Example: If the separator is a line feed, enter:
LF
To use a hex value instead, go to step 3.
3. Enter a backslash, an "x", and the hex value for the separator.
Example: If the separator is a line feed, enter:
\x0a
For a list of the 256 standard and extended ASCII characters, search for "hex values" in the documentation.
Source and Target Properties
You can set the following source (S) and target (T) properties.
Property
S/T
Description
AlternateFieldSeparator
S
Most data files have only one field separator between all the fields. However, it is possible to have more than one field separator. If your source file has one field separator between some fields and a different separator between others, you can specify the second field separator here. Otherwise, leave this set to None (the default).
To select an option other than the default, click AlternateFieldSeparator. Then click the arrow to the right of the box to choose from the list of available separators. To specify a field separator that isn't on the list, type it here.
AutomaticStyling
S
Changes the way the integration platform reads or writes ASCII data. By default, AutomaticStyling is set to false, causing all data to be read or written as Text. When you change the setting to true, the integration platform formats particular data types, such as numeric and date fields, automatically.
During transformation of ASCII source files, autostyling ensures that a date field in the source file is formatted as a date field in the target file, and not as character or text data. If your source file contains zip code data, you may want to leave AutomaticStyling set to false so that leading zeros in some zip codes in the eastern US are not deleted.
For an ASCII target file, if you set FieldDelimitStyle to Text, you must also set AutomaticStyling to true so that delimiters are placed around only the nonnumeric fields.
CodePage
ST
Translation table that specifies which encoding to use for reading and writing data. The default is ANSI, the standard in the US.
EmptyFieldsNull
S
Treats all empty fields as null.
Field1IsRecTypeId
S
If the first field of each record in your source file contains the Record Type ID, you can select true for this property and each record is treated as a separate record type. Within each record, field names derived from the Record Type ID are automatically generated for each field. For details, see Field1IsRecTypeId.
FieldDelimitStyle
T
For delimited ASCII target connectors, this option determines whether the specified FieldStartDelimiter and FieldEndDelimiter are used for all fields, only for fields containing a separator, or only for text fields:
All – Places the delimiters specified in FieldStartDelimiter and FieldEndDelimiter before and after every field. Default setting is All. For example: "Smith","12345","Houston"
Partial – Places the specified delimiters before and after fields only where necessary. A field that contains a character that is the same as the field separator will have the field delimiters placed around it. A common example is a memo field that contains quotes within the data: "Customer responded with "No thank you" to my offer".
Text – Places delimiters before and after text and name fields (non-numeric fields). Numeric and date fields have no FieldStartDelimiter or FieldEndDelimiter. For example: "Smith", 12345,"Houston", 11/13/04
Non-numeric – Places delimiters before and after all nonnumeric types, such as date fields. Non-numeric delimits date fields, while text does not.
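These styles roughly parallel the quoting modes of Python's standard csv module. The following sketch is an analogy for comparison only, not the connector itself; Partial corresponds to quoting only where needed:
import csv, io

row = ["Smith", 12345, "Houston"]
styles = [("All", csv.QUOTE_ALL),
          ("Partial", csv.QUOTE_MINIMAL),
          ("Text", csv.QUOTE_NONNUMERIC)]
for name, quoting in styles:
    buf = io.StringIO()
    csv.writer(buf, quoting=quoting).writerow(row)
    print(name, "->", buf.getvalue().rstrip())
# All -> "Smith","12345","Houston"
# Partial -> Smith,12345,Houston
# Text -> "Smith",12345,"Houston"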
FieldEndDelimiter
ST
Most delimited ASCII files have beginning-of-field and end-of-field delimiters. The default delimiter is a quotation mark. Some files do not contain delimiters, so this option is available for both source and target files. To read from or write to a file with no delimiters, set FieldEndDelimiter to none.
FieldSeparator
ST
A delimited ASCII file is assumed to have a comma between each field. To specify another field separator, click FieldSeparator to select one from the list.
To specify a field separator that is not on the list and is a printable character, highlight the current separator and then type the character. If the field separator is not a printable character, replace the current value with a backslash, an X, and the hexadecimal value for the separator.
FieldStartDelimiter
ST
Delimited ASCII files often have beginning-of-field and end-of-field delimiters. The default delimiter is a quotation mark. To read from or write to a file with no delimiters, set FieldStartDelimiter to none.
Header
ST
In some files, the first record is a header record. For source data, you can remove it from the input data and cause the header titles to be used automatically as field names. For target data, you can cause field names in your source data to automatically create a header record in your target file. To identify a header record, set Header to true. The default is false.
Note:  If your target connector is ASCII (Delimited) and you are appending data to an existing file, leave Header set to false.
MaxDataLen
T
Specifies the maximum number of characters to write to a field. When set to zero (the default), the number of characters written to a field is determined by the length of the field. If you set this value to a number other than zero, data may be truncated.
NullIndicator
ST
Special string used to represent null values. Select a predefined value or type any other string.
Target – When writing a null value, the contents of the null indicator string are written.
Source – A check is made to see if the null indicator is set. If it is set, the data is compared to the null indicator. If the data and the null indicator match, the field is set to null.
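The read and write behavior amounts to a simple string comparison. A minimal sketch of the logic, assuming a hypothetical indicator string of NULL (illustration only, not the connector code):
NULL_INDICATOR = "NULL"  # hypothetical indicator string

def read_field(raw):
    # Source side: a field matching the indicator becomes null.
    return None if raw == NULL_INDICATOR else raw

def write_field(value):
    # Target side: a null value is written as the indicator string.
    return NULL_INDICATOR if value is None else value

assert read_field("NULL") is None
assert write_field(None) == "NULL"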
NumericFormatNormalization
S
When set to true, this property handles thousands separators according to the conventions of the locale when numeric strings are converted to numeric types. This property overrides any individual field settings. Supported in version 9.2.2 and later. Default is false.
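Locale-aware handling of thousands separators can be pictured with Python's locale module. This is an analogy only; the connector performs the normalization internally:
import locale

# Assumes an en_US locale is installed on the system.
locale.setlocale(locale.LC_NUMERIC, "en_US.UTF-8")
print(locale.atof("1,234,567.89"))  # 1234567.89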
RecordFieldCount
S
If your source file has field separators but no record separator, or uses the same separator for both fields and records, follow these steps:
1. Set the RecordSeparator (most likely a blank line).
2. Leave the AlternateFieldSeparator option blank.
3. Enter the number of fields per record for RecordFieldCount. The default value is zero.
RecordSeparator
ST
Most delimited ASCII files have a carriage return-line feed (CR-LF) between records. To use a different character, click the RecordSeparator cell, then click the arrow and select one from the list. The SystemDefault setting enables the same transformation to run with CR-LF on Windows and with LF on Linux systems without having to change this property.
To use a record separator that is not listed and is a printable character, highlight CR-LF and enter the character. For example, to use a pipe (|) character, enter a pipe from the keyboard.
If the record separator is not a printable character, replace CR-LF with a backslash, an X, and the hexadecimal value for the separator.
StartOffset
S
If your source data file starts with characters that need to be excluded from the transformation, set the StartOffset option to specify at which byte of the file to begin. The default value is zero. The correct value may be determined by using the Hex Browser. Note: This property is set in number of bytes, not characters.
StripLeadingBlanks
ST
Leading blanks occur in ASCII source files by default. To delete them, set StripLeadingBlanks to true.
Leading blanks are removed from ASCII target files by default. To retain them, set StripLeadingBlanks to false.
StripTrailingBlanks
ST
Trailing blanks occur in ASCII source files by default. To delete them, set StripTrailingBlanks to true.
Trailing blanks are removed from ASCII target files by default. To retain them, set StripTrailingBlanks to false.
StyleSampleSize
S
Allows you to set the number of records (starting with record 1) that are analyzed to set a default width for each source field. The default value for this option is 5000. You can change the value to any number between one and the total number of records in your source file. As the number gets larger, more time is required to analyze the file, and it may be necessary to analyze every record to ensure that no data is truncated. To change the value, click StyleSampleSize, highlight the default value, and type a new one.
TransliterateIn
T
Specifies a character or set of characters to be filtered out of the source data. For any character in TransliterateIn, the corresponding character from TransliterateOut is substituted. If there is no corresponding character, the source character is filtered out completely. TransliterateIn supports C-style escape sequences such as \n (new line), \r (carriage return), and \t (tab).
TransliterateOut
T
Specifies a character to be substituted for another character from the source data. For any character in TransliterateIn, the corresponding character from TransliterateOut is substituted. If you want the source character to be filtered out completely, or there are no characters to transliterate, leave this field blank. TransliterateOut supports C-style escape sequences such as \n (new line), \r (carriage return), and \t (tab).
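The TransliterateIn/TransliterateOut pairing behaves like a character map with deletion, which Python's str.translate expresses directly. An analogy only, not the connector implementation:
# Map tab to space; map carriage return to nothing (filtered out).
table = str.maketrans({"\t": " ", "\r": None})
print("a\tb\rc".translate(table))  # prints: a bc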
Field1IsRecTypeId
If your first record consists of the following:
"Names", "Arnold", "Benton", "Cassidy", "Denton", "Exley", "Fenton"
The following field names are assigned, with the value each field holds:
Names_01 – Names
Names_02 – Arnold
Names_03 – Benton
Names_04 – Cassidy
Names_05 – Denton
Names_06 – Exley
Names_07 – Fenton
The default for this property is false.
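The naming rule for the example above can be sketched as follows (illustration only):
record = ["Names", "Arnold", "Benton", "Cassidy", "Denton", "Exley", "Fenton"]
rec_type = record[0]  # the Record Type ID from the first field
names = [f"{rec_type}_{i:02d}" for i in range(1, len(record) + 1)]
print(names)  # ['Names_01', 'Names_02', ..., 'Names_07']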
Length
These are field lengths in your data. If you need to change field lengths, reset them in the schema.
For example, a field contains numbers and dollar values with two decimal places, such as 7122.50. The field type default is Text and because of values in other records, the field length default is 10. You are transforming the data to a database application in which you want the data in this field to be numeric. If you change the source field type to Float, the field length becomes blank, the precision default is 15, and the decimal changes to 2. This field automatically appears as an appropriate numeric data type in your target schema and is a numeric field in your target data file.
Data Types
The following data types are available:
Boolean
Date
Date/Time
Decimal
Float
Integer
Name (parses and displays a name into its parts, such as title, last name, middle initial)
Text
Time
By default, ASCII data is read as Text. Fields containing dates or numbers may be changed to another appropriate data type.
If you want to set different field delimiters for your fields that contain numeric data, see FieldDelimitStyle in the property options.
ASCII (Fixed)
Fixed ASCII data is any ASCII file in which no characters separate the fields, records may or may not be separated, and each record in the file occupies the same number of bytes. Because there are so many variations of fixed ASCII data files, you may have to modify the source or target properties for the ASCII (Fixed) connector. Details are discussed in the property options section.
With fixed ASCII source files, you must use a schema. Since the data has no field delimiters and possibly no record separators, it appears continuous until you specify a record separator or the record length, set up the field sizes and data types, and assign optional field names.
If the fixed ASCII file contains nonprintable characters, you may find it easier to import the data using the Binary format. See Binary.
If the fixed ASCII file contains Null characters or any form of packed data, you must use the Binary connector.
This connector sets field width in bytes. What actually varies is the number of characters that fit into a given field. For more details, see Determining Field Width in Characters or Bytes.
Multiple Structured Schemas Possible
Fixed ASCII files can contain multiple structured schemas. If your source or target fixed ASCII file has multiple structured schemas, you can find information by searching in the help for "multiple record types."
Connector-Specific Notes
If null characters (hex 00) are encountered in a fixed ASCII data file, the end of file is assumed and data reading stops. If your files contain null values, change the source connector to Binary.
If the alternate field or record separator is not listed
1. Highlight the default separator.
2. Enter the separator as an ASCII character.
Example: If the separator is a line feed, enter:
LF
To use a hex value instead, go to the next step.
3. Enter a backslash, an X, and the hexadecimal value for the separator.
Example: If the separator is a line feed, enter:
\x0a
For a list of the 256 standard and extended ASCII characters, search for "hex values" in the documentation.
Single record type data: Fixed ASCII has only one certainty: every record in the file has exactly the same number of bytes. If the records are not all exactly the same length, the file is a variable-length file and cannot be read as fixed ASCII. See Content Extractor for other possibilities.
Multirecord type files: To work with variable-length multirecord files (with or without a record separator), search for "multiple record" in the documentation.
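Because every record occupies the same number of bytes, a fixed ASCII file can be read by plain slicing. A minimal Python sketch, assuming a hypothetical layout of a 10-byte name field and a 6-byte amount field with no record separator; the file name is also hypothetical:
RECORD_LEN = 16  # 10-byte name + 6-byte amount

with open("data.fix", "rb") as f:
    while rec := f.read(RECORD_LEN):
        if len(rec) < RECORD_LEN:
            break  # incomplete last record; compare the ShortLastRecord property
        name = rec[0:10].decode("ascii").rstrip()
        amount = rec[10:16].decode("ascii").strip()
        print(name, amount)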
Property Options
You can set the following source (S) and target (T) properties.
Property
S/T
Description
CodePage
ST
Translation table that specifies which encoding to use for reading and writing data. The default is ANSI, the standard in the US.
DatatypeSet
ST
Allows you to choose between standard and COBOL data types in fixed ASCII data files. Standard (the default) specifies that all the data in the file is readable (lower) ASCII data. If your fixed ASCII file contains (or needs, for target file) COBOL display type fields and you are using a COBOL 01 copybook (fd) to define the fields, you MUST change this property option to "COBOL" before connecting to the COBOL copybook in the External Structured Schema window.
FieldSeparator
T
Allows you to choose a field separator character for your target file. The default is None. To select a field separator that is unlisted and is a printable character, highlight None and then type the correct character. For example, to select an asterisk ( * ), enter an asterisk from the keyboard.
If the field separator is not a printable character, replace None with a backslash, an X, and the hexadecimal value for the separator. For example, if the separator is a check mark, then enter \XFB. For a list of the 256 standard and extended ASCII characters, search for "hex values" in the documentation.
FillFields
T
Writes an ASCII data file in which every field is of variable length. When set to false, all trailing spaces are removed from each field when the data is written. When set to true (the default), the integration platform pads all fields with spaces to the end of the field length to maintain the fixed length of the records.
InsertEOFRecSep
S
Inserts a record separator on the last record of the file, if it is missing. The default is false. If set to true, this property captures the last record (with no record separator) instead of discarding it.
Note:  If property options define a specific separator (CR-LF, LF), the specified separator must exist at the end of all records, including the last record in the file. Any trailing data without that separator is ignored. Therefore, to avoid losing your last line of data (if it does not contain the appropriate record separator), we suggest either manually editing (for one or two affected files) or creating a program (for several affected files) that adds a record separator at the end of the last record.
Caution!  If a terminating record separator already exists, a blank line is read at the end of the file. Depending on your target or export type, you may need to filter out these blank lines to avoid errors in exported data.
NumericFormatNormalization
S
When set to true, handles thousands-separators according to usage for locale when numeric strings are converted to numeric type. This property overrides any individual field settings. Default is false.
RaggedRight
T
Writes an ASCII data file where the last field in each record is variable length when set to true. The default is false. The false setting pads the last field with spaces to the end of the record length to maintain the fixed length of the records.
Note:  You must set FillFields to false for the RaggedRight property to work properly. The RaggedRight property has no effect if you set FillFields to true. If you set FillFields to false, the RaggedRight property determines whether blank fields, and fields containing only spaces, still appear at the end of the record.
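A simplified sketch of the RaggedRight behavior, ignoring the FillFields interaction described in the note (illustration only):
def write_record(fields, widths, ragged_right=False):
    # Pad every field with spaces to its fixed width.
    padded = [value.ljust(width) for value, width in zip(fields, widths)]
    if ragged_right:
        # RaggedRight=true: leave the last field variable length.
        padded[-1] = padded[-1].rstrip()
    return "".join(padded)

print(repr(write_record(["Smith", "TX"], [10, 8])))
# 'Smith     TX      '
print(repr(write_record(["Smith", "TX"], [10, 8], ragged_right=True)))
# 'Smith     TX'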
RecordSeparator
ST
Most fixed ASCII files have a carriage return-line feed (CR-LF) between records. To use other characters as the record separator or no record separator, click the RecordSeparator cell for a list of choices. To use another separator, enter it here. The SystemDefault setting enables the same transformation to run with CR-LF on Windows systems and LF on Linux systems without having to change this property.
To use a record separator that is not listed and is a printable character, highlight the CR-LF and then type the correct character. For example, if the separator is a pipe (|), enter a pipe from the keyboard.
If the record separator is not a printable character, replace CR-LF with a backslash, an X, and the hexadecimal value for the separator. For example, if the separator is a check mark, then enter \XFB. For a list of the 256 standard and extended ASCII characters, search for "hex values" in the documentation.
SampleSize
S
Sets the number of records (starting with record one) that are analyzed to set a default width for each source field. The default value is 5000. You can change the value to any number between one and the total number of records in your source file. As the number gets larger, more time is required to analyze the file, and it may be necessary to analyze every record to ensure that no data is truncated. To change the value, click SampleSize, highlight the default value, and type a new one.
StartOffset
S
If your source data file starts with characters that need to be excluded from the transformation, set the StartOffset option to specify at which byte of the file to begin. The default value is zero. The correct value may be determined by using the Hex Browser.
Note:  This property is set in number of bytes, not characters.
StripLeadingBlanks
S
Leading blanks occur in fixed ASCII files by default. To delete them, set StripLeadingBlanks to true.
StripTrailingBlanks
S
Trailing blanks occur in fixed ASCII files by default. To delete them, set StripTrailingBlanks to true.
Tab Size
ST
If your fixed ASCII file has embedded tab characters representing white space, you can expand those tabs to a set number of spaces. The default value is zero. To change it, highlight the zero and enter a new value.
Data Types
All data in ASCII files is Text, but you may wish to select other data types. The following data types are available:
Boolean (parses and displays true or false values)
Name (parses and displays proper name into name parts, such as honorifics, titles, last name, middle initial, first name)
Number (parses and displays numeric and floating values)
Text (parses and displays alphanumeric values)
Decimal (parses and displays a proper fraction whose denominator is a power of 10)
Use the Name data type to parse a name into its component pieces. Some examples are honorifics (Mr., Dr.), names (first, middle, last) and titles (PhD., Jr.). You can also display the component pieces according to an edit mask. For example, you can set the mask to display a name in a field as "lastname, firstname" (Jones, Joan).
Length
These are field lengths in your data. If you need to change field lengths, reset them in the schema.
Caution!  A field length shorter than the default may truncate data.
Alignment
Data in each field of your target file or table has an alignment property of general, left, center, or right.
Pad Character
This option is valid ONLY for character fields. When data does not fill a field completely, the remainder of the field may be filled with some character.
Avro
This is an add-on connector and is not included in the default installation of Actian DataConnect Studio IDE. The connector is available at esd.actian.com.
The connector supports Apache Avro protocol for fast data extract/load operations.
Versions Supported
This connector has been tested with the following versions of the Avro protocol:
1.7.2
Operations
The following operations are supported by the Avro connector.
Read
Write
Data Types
The following data types are supported: INT, LONG, DOUBLE, FLOAT, BOOLEAN, BYTES, STRING.
Connect Information
To establish a connection to Avro source or target, the Avro file must be supplied in the Avro File field.
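Outside the platform, such a file can be inspected with, for example, the third-party fastavro Python library. This is only an illustration of the file format; the file name is hypothetical:
from fastavro import reader

with open("example.avro", "rb") as fo:
    for record in reader(fo):
        print(record)  # each flat record arrives as a dict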
Property Options
You can set the following source (S) and target (T) properties.
Property
S/T
Description
Avro File
S/T
Full path of source or target Avro file.
BatchSize
S/T
Maximum number of records to be processed at a time. Defaults to 1000.
Flush Frequency
T
The number of record inserts to buffer before sending a batch to the connector. Default is zero. If you are inserting many records, change the default to a higher value to improve performance.
Batch Response
T
Batch response entries are generated for each load performed to an Avro file.
Limitations
Currently, the connector supports only non-hierarchical (flat) Avro records.
Multimode Support
The connector does not support multiple targets.
BAF
The BAF connector views Binary files using an underlying Binary connector. Binary files can contain multiple structured schemas. If your source Binary file has multiple structured schemas, the BAF connector can read the multiple record types in the file.
For additional information on multiple record types, search in the help for "multiple record types."
This connector sets field width in bytes, where the number of characters that fit into a given field varies. For more details, see Determining Field Width in Characters or Bytes.
Connector-Specific Notes
This connector does not support truncation error trapping. If the target field size is too small for the data written to it, the offending record may be skipped or incomplete data may be written to the target. The transformation does not abort due to a truncation error.
Property Options
If you have EBCDIC source data, you must set the proper EBCDIC code page before setting the field widths.
You can set the following source (S) and target (T) properties.
Property
S/T
Description
StartOffset
S
If your source data file starts with characters that need to be excluded from the transformation, set the StartOffset option to specify at which byte of the file to begin. The default value is zero. The correct value may be determined by using the Hex Browser.
Note:  This property is set in number of bytes, not characters.
MaxRecordLength
S
Specifies the maximum record length of the data. The default is 32700 bytes.
RLF MSB first
S/T
This setting adjusts the byte order of the Record Length Field, also called the Record Descriptor Word (RDW). The default is true, which means the Most Significant Byte of the record length field is first.
ShortLastRecord
S
If set to true, short reads are ignored on the last record of the file. In other words, the last record is processed even if the End of File (EOF) is reached before reading the end of the record. The default is false.
WordAlignRecord
S/T
When set to false (the default), records are aligned on a word (16-bit) boundary.
CodePage
S/T
This translation table determines which encoding to use for reading and writing data. The default is ANSI, the standard in the US.
RecordLengthInclusive
S/T
When true, this setting indicates that the record length indicator includes the bytes of the indicator itself. The default is false, meaning that the record length indicated does not include the bytes of the indicator itself.
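A hedged sketch of how such a length-prefixed record might be decoded, honoring both RLF MSB first and RecordLengthInclusive. It assumes a 2-byte length field and is an illustration only, not the connector code:
import struct

def read_records(path, msb_first=True, length_inclusive=False, rlf_size=2):
    # Byte order of the Record Length Field (RDW).
    fmt = ">H" if msb_first else "<H"
    with open(path, "rb") as f:
        while rlf := f.read(rlf_size):
            (length,) = struct.unpack(fmt, rlf)
            if length_inclusive:
                length -= rlf_size  # the indicator counts its own bytes
            yield f.read(length)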
OccursPad
S
When using COBOL files, you may have fields of variable length. If so, you may specify how to fill the field with pads to a fixed length. The default is None.
The following options are available:
None (which leaves the fields uneven) – Default
End of Record (which fills the remainder of the record with your specified pad character)
Within Group (which fills the field with your specified pad character).
Data Types
The following data types are available:
16-bit binary
16-bit logical
24-bit binary
32-bit binary
32-bit IEEE floating-point
32-bit TEC binary
32-bit VAX floating-point
64-bit binary
64-bit IEEE floating point
64-bit VAX floating-point
8-bit binary
80-bit Intel floating-point
AccPac 41-bit binary
Binary
Boolean
Btrieve date
Btrieve time
Column binary alpha-numeric
Column binary multi-punch
Column binary numeric
Comp
Comp-1
Comp-2
Comp-3
Comp-5
Comp-6
Comp-X
Complex
Cray floating-point
Date
DateTime
dBASE Numeric
Display
Display Boolean
Display Date
Display Date/Time
Display justified
Display sign leading
Display sign leading separate
Display sign trailing
Display sign trailing separate
Display Time
Magic PC Date
Magic PC Extended
Magic PC Number
Magic PC Real
Magic PC Time
Microsoft BASIC double
Microsoft BASIC float
Name
Null-terminated C string
Packed decimal
Pascal 48-bit real
Pascal string (1 byte)
Pascal string (2 bytes)
Sales Ally date
Sales Ally time-1
Text
Time
Time (minutes past midnight)
Union
Variable length IBM float
Zoned decimal
Binary
The Binary connector can be used to read or write almost any fixed-length single record type file. Whether the data is straight ASCII, some combination of EBCDIC and packed data, flat COBOL files, C, or BASIC files, you can define the record structure and transform the data to another format or application.
Because the integration platform is not a communication package, data that resides on a mainframe or mini-computer must first be downloaded to a local or network drive before the file can be read. When writing out a file to be used in an application on another platform, you write the file to a local or network drive. Then use a communication package to "move" that data file to the mainframe or mini system.
This connector sets field width in bytes. What actually varies is the number of characters that fit into a given field. For more details, see Determining Field Width in Characters or Bytes.
Multiple Record Types Possible
Binary files can contain multiple structured schemas. If your source Binary file has multiple structured schemas, you can find information by searching in the help for "multiple record types."
Connector-Specific Notes
This connector does not support truncation error trapping. If the target field size is too small for the data written to it, the offending record may be skipped or incomplete data may be written to the target. The transformation does not abort due to a truncation error.
If the alternate field or record separator is not listed
1. Highlight the default separator.
2. Enter the separator as an ASCII character.
Example: If the separator is a line feed, enter LF.
To use a hex value instead, go to the next step.
3. Enter a backslash, an "x", and the hex value for the separator.
Example: If the separator is a line feed, enter \x0a.
For a list of the 256 standard and extended ASCII characters, search for "hex values" in the help.
Property Options
Note:  If you have EBCDIC source data, it is necessary to set the proper EBCDIC code page before setting the field widths.
You can set the following source (S) and target (T) properties.
Property
S/T
Description
CodePage
ST
This translation table determines which encoding to use for reading and writing data. The default is ANSI, the standard in the US.
OccursPad
S
When using COBOL files, you may have fields of variable length. If so, you may specify how to fill the field with pads to a fixed length by selecting one of the following:
None (the default) – leaves the fields uneven.
End of Record – fills the remainder of the record with your specified pad character.
Within Group – fills the field with your specified pad character.
PageSize
ST
When data records are arranged in blocks and the last record in each block is padded to the end of the block, it is necessary to set Page Size. This causes the pad characters to be stripped from the file during the data transfer. To set page size, click Page Size, highlight the default value (zero), and type the correct page size value for your data.
RecordSeparator
ST
When a COBOL file is your source connector and you are using a 01 copybook to define the fields, you might have a record separator at the end of each record. If so, you may specify the record separator here, which causes the map to ignore the record separator when it reads the source data. The default is None.
The separators are carriage return-line feed (CR-LF), line feed (LF), carriage return (CR), line feed-carriage return (LF-CR), form feed (FF), Empty Line, and None.
When writing a binary file, you may want to place a record separator at the end of each record (similar to a Fixed ASCII record separator). You may select a record separator from the list, or highlight the current value and type your own.
ShortLastRecord
S
If set to true, short reads are ignored on the last record of the file. In other words, the last record is processed even if the End of File (EOF) is reached before reading the end of the record. The default is false.
StartOffset
S
If your source data file starts with characters that need to be excluded from the transformation, set the StartOffset option to specify at which byte of the file to begin. The default value is zero. The correct value may be determined by using the Hex Browser.
Note:  This property is set in number of bytes, not characters.
Data Types
The following data types are available:
16-bit binary
16-bit logical
24-bit binary
32-bit binary
32-bit IEEE floating-point
32-bit TEC binary
32-bit VAX floating-point
64-bit binary
64-bit IEEE floating point
64-bit VAX floating-point
8-bit binary
80-bit Intel floating-point
AccPac 41-bit binary
BCD
Binary
Boolean
Btrieve date
Btrieve time
Column binary alpha-numeric
Column binary multi-punch
Column binary numeric
Comp
Comp-1
Comp-2
Comp-3
Comp-5
Comp-6
Comp-X
Complex
Cray floating-point
Date
DateTime
dBASE Numeric
Display
Display Boolean
Display Date
Display Date/Time
Display justified
Display sign leading
Display sign leading separate
Display sign trailing
Display sign trailing separate
Display Time
Magic PC Date
Magic PC Extended
Magic PC Number
Magic PC Real
Magic PC Time
Microsoft BASIC double
Microsoft BASIC float
Name
Null-terminated C string
Packed decimal
Pascal 48-bit real
Pascal string (1 byte)
Pascal string (2 bytes)
Sales Ally date
Sales Ally time-1
Text
Time
Time (minutes past midnight)
Union
Variable length IBM float
Zoned decimal
Binary (International)
The Binary (International) connector can be used to read or write almost any fixed-length data file. Whether the data is straight ASCII, some combination of EBCDIC and packed data, C, or BASIC files, the integration platform allows you to define the record structure and transform the data to another format or application.
Note:  This connector is not a Unicode-enabled connector. Because of this, to get the desired results, you must set your computer for your particular language type. For example, if you have traditional Chinese or Japanese EBCDIC data, you must set your computer system locale to recognize traditional Chinese or Japanese character sets.
Because the integration platform is not a communication package, data that resides on a mainframe or mini-computer must first be downloaded to a local or network drive before it can read the file. When writing a file to be used in an application on another platform, you write the file to a local or network drive. Then use a communication package to move that data file to the mainframe or mini system.
This connector sets field width in bytes. What actually varies is the number of characters that fit into a given field. For more details, see Determining Field Width in Characters or Bytes.
Multiple Record Types Possible
Binary files can contain multiple structured schemas. If your source file has multiple structured schemas, you can find information by searching in the help for "multiple record types."
Connector-Specific Notes
This connector does not support truncation error trapping. If the target field size is too small for the data written to it, the offending record may be skipped or incomplete data may be written to the target. The transformation does not abort due to a truncation error.
If the alternate field or record separator is not listed
1. Highlight the default separator.
2. Enter the separator in binary or hex.
ASCII: Enter the appropriate character code.
If the separator is a line feed, enter LF.
Hex: Enter a backslash, an "x", and the hex value for the separator.
If the separator is a line feed, enter \x0a.
Binary (International) Unicode Support
The Binary (International) connector is not a Unicode connector, but depending on your operating system, it might support double-byte encoding. The operating system must match the data: for example, if you are running an English operating system, you cannot process Japanese data.
If you select Unicode encoding and you need to set up a record separator, set the RecordSeparator property to None. Then, create an additional field to hold the Unicode bytes.
Supported Binary (International) Encoding
Name – Description
CP930 – EBCDIC host mixed Katakana-Kanji
CP933 – EBCDIC mixed Korea
CP935 – IBM EBCDIC Simplified Chinese, Combined (836 + 837)
CP937 – EBCDIC Traditional Chinese
CP939 – EBCDIC mixed Latin-Kanji
DEC – Digital VAX and Unix, DEC Kanji code
IBM78+EBCDIC – IBM mainframe and AS/400, 1978 Kanji and non-kana support
IBM78+EBCDIK – IBM mainframe and AS/400, 1978 Kanji and half-width kana
IBM83+EBCDIC – IBM mainframe and AS/400, 1983 Kanji and non-kana support
IBM83+EBCDIK – IBM mainframe and AS/400, 1983 Kanji and half-width kana
IBM937+EBCDIC – Traditional Chinese IBM mainframe and AS/400
ISO-2022-CN – Chinese ISO-2022
ISO-2022-CN-EXT – Chinese ISO-2022-EXT
ISO-2022-JP – Japanese ISO-2022
ISO-2022-JP-1 – Japanese ISO-2022-1
ISO-2022-JP-2 – Japanese ISO-2022-2
ISO-2022-KR – Korean ISO-2022
JEF78+EBCDIC – Fujitsu FACOM, 1978 Kanji and non-kana support
JEF78+EBCDIK – Fujitsu FACOM, 1978 Kanji and half-width kana
JEF83+EBCDIC – Fujitsu FACOM, 1983 Kanji and non-kana support
JEF83+EBCDIK – Fujitsu FACOM, 1983 Kanji and half-width kana
JIS78 – Japanese Industrial Standard 1978
JIS83 – Japanese Industrial Standard 1983
KEIS78+EBCDIC – Hitachi HITACH, 1978 Kanji and non-kana support
KEIS78+EBCDIK – Hitachi HITACH, 1978 Kanji and half-width kana
KEIS83+EBCDIC – Hitachi HITACH, 1983 Kanji and non-kana support
KEIS83+EBCDIK – Hitachi HITACH, 1983 Kanji and half-width kana
MELCOM – Mitsubishi MELCOM, MELCOM Kanji
NEC JIPS-E – NEC ACOS, NEC JIPS-E code
NEC JIPS-E (Int) – NEC ACOS, NEC JIPS-E internal code
NEC JIPS-J – NEC ACOS, NEC JIPS-J
NEC JIPS-J (Int) – NEC ACOS, NEC JIPS-J internal code
Unisys LETS-J – Unisys UNIVAC, UNIVAC LETS-J Kanji
Property Options
You can set the following source (S) and target (T) properties.
Property
S/T
Description
OccursPad
S
When using COBOL files, you may have fields of variable length. If so, you may specify how to fill the field with pads to a fixed length. The default is None.
The following options are available:
None (which leaves the fields uneven) – Default
End of Record (which fills the remainder of the record with your specified pad character)
Within Group (which fills the field with your specified pad character)
StartOffset
S
If your source data file starts with characters that need to be excluded from the transformation, set the StartOffset option to specify at which byte of the file to begin. The default value is zero. The correct value may be determined by using the Hex Browser.
Note:  This property is set in number of bytes, not characters.
ShortLastRecord
S
If set to true, short reads are ignored on the last record of the file. In other words, the last record is processed even if the End of File (EOF) is reached before reading the end of the record. The default is false.
Encoding
ST
Select the type of encoding used with your Binary files. The default encoding is OEM. To change the encoding to a different selection, click the arrow and select an encoding from the list. Available options are OEM (default), Shift-JIS, Unisys LETS-J, UCS-2, UTF-8, UTF-16, and US ASCII.
Page Size
ST
When data records are arranged in blocks and the last record in each block is padded to the end of the block, it is necessary to set Page Size. This causes the pad characters to be stripped from the file during the data transfer. To set page size, click Page Size, highlight the default value (zero), and type the correct page size value for your data.
RecordSeparator
ST
When a COBOL file is your source connector and you are using a 01 copybook to define the fields, you might have a record separator at the end of each record. If so, you may specify the record separator here, which causes the map to ignore the record separator when it reads the source data. The default is None.
The separators are carriage return-line feed (CR-LF), line feed (LF), carriage return (CR), line feed-carriage return (LF-CR), form feed (FF), Empty Line, and none.
When writing out a binary file, you may want to place a record separator at the end of each record (similar to a Fixed ASCII record separator). You may select a record separator from the list, or highlight the current value and type your own.
Data Types
The following data types are available:
16-bit binary
16-bit logical
24-bit binary
32-bit binary
32-bit IEEE floating-point
32-bit TEC binary
32-bit VAX floating-point
64-bit binary
64-bit IEEE floating point
64-bit VAX floating-point
8-bit binary
80-bit Intel floating-point
AccPac 41-bit binary
BCD
Binary
Boolean
Btrieve date
Btrieve time
Column binary alpha-numeric
Column binary multi-punch
Column binary numeric
Comp
Comp-1
Comp-2
Comp-3
Comp-5
Comp-6
Comp-X
Complex
Cray floating-point
Date
DateTime
dBASE Numeric
Display
Display Boolean
Display Date
Display Date/Time
Display justified
Display sign leading
Display sign leading separate
Display sign trailing
Display sign trailing separate
Display Time
Magic PC Date
Magic PC Extended
Magic PC Number
Magic PC Real
Magic PC Time
Microsoft BASIC double
Microsoft BASIC float
Name
Null-terminated C string
Packed decimal
Pascal 48-bit real
Pascal string (1 byte)
Pascal string (2 bytes)
Sales Ally date
Sales Ally time-1
Text
Time
Time (minutes past midnight)
Union
Variable length IBM float
Zoned decimal
Binary Line Sequential
The Binary Line Sequential connector connects to binary files with variable-length records. The integration platform determines the record length by finding the record separator, and can both read from and write to Binary Line Sequential files.
This connector sets field width in bytes. What actually varies is the number of characters that fit into a given field. For more details, see Determining Field Width in Characters or Bytes.
Connector-Specific Notes
If the alternate field or record separator is not listed
1. Highlight the default separator.
2. Enter the separator in binary or hex.
ASCII: Enter the appropriate character code.
For example, if the separator is a line feed, enter LF.
Hex: Enter a backslash, an "x", and the hex value for the separator.
For example, if the separator is a line feed, enter \x0a.
Property Options
You can set the following source (S) and target (T) properties.
Property
S/T
Description
CodePage
ST
This translation table determines which encoding to use for reading and writing data. The default is ANSI, the standard in the US.
MaxRecordLength
S
Maximum record length of the data. The default is 32700 bytes.
OccursPad
S
When using COBOL files, you may have fields of variable lengths. If so, you may select one of the following to specify how to fill the field with pads to a fixed length:
None (the default) – leaves the fields uneven.
End of Record – fills the remainder of the record with your specified pad character.
Within Group – fills the field with your specified pad character.
RecordSeparator
ST
Specifies the character used to mark the end of a record. The default record separator is carriage return-line feed (CR-LF). To use another character, click the RecordSeparator cell, then click the arrow to the right of the box and select the desired record separator from the list. The choices are carriage return-line feed (default), line feed, carriage return, line feed-carriage return, form feed, empty line, and no record separator.
ShortLastRecord
S
If set to true, short reads are ignored on the last record of the file. In other words, the last record is processed even if the End of File (EOF) is reached before reading the end of the record. The default is false.
StartOffset
S
If your source data file starts with characters that need to be excluded from the transformation, set the StartOffset option to specify at which byte of the file to begin. The default value is zero. The correct value may be determined by using the Hex Browser.
Note:  This property is set in number of bytes, not characters.
Data Types
The following data types are available:
16-bit binary
16-bit logical
24-bit binary
32-bit binary
32-bit IEEE floating-point
32-bit TEC binary
32-bit VAX floating-point
64-bit binary
64-bit IEEE floating point
64-bit VAX floating-point
8-bit binary
80-bit Intel floating-point
AccPac 41-bit binary
Binary
Boolean
Btrieve date
Btrieve time
Column binary alpha-numeric
Column binary multi-punch
Column binary numeric
Comp
Comp-1
Comp-2
Comp-3
Comp-5
Comp-6
Comp-X
Complex
Cray floating-point
Date
DateTime
dBASE Numeric
Display
Display Boolean
Display Date
Display Date/Time
Display justified
Display sign leading
Display sign leading separate
Display sign trailing
Display sign trailing separate
Display Time
Magic PC Date
Magic PC Extended
Magic PC Number
Magic PC Real
Magic PC Time
Microsoft BASIC double
Microsoft BASIC float
Name
Null-terminated C string
Packed decimal
Pascal 48-bit real
Pascal string (1 byte)
Pascal string (2 bytes)
Sales Ally date
Sales Ally time-1
Text
Time
Time (minutes past midnight)
Union
Variable length IBM float
Zoned decimal
Btrieve 10
Btrieve is a transactional database interface. To use the Btrieve 10 connector, you must install Btrieve on both the system where the integration platform is used and on the system where transformations are run.
The following table shows read-write capabilities for the Btrieve connectors and Btrieve files.
File – Capability (v11 connector; R = read, RW = read/write)
Btrieve v9.5 file – RW
Btrieve v9 file – RW
Btrieve v8 file – RW
Btrieve v7 file – RW
Btrieve v6 file – RW
Btrieve v5 file – R
Btrieve v4.11 file – R
Btrieve Data Dictionary Files
Btrieve is a transactional database interface. To use any of the Btrieve connectors, you must install the Btrieve version on both the system where Map Editor is used and on the system where transformations are run.
Btrieve can use data dictionary files (DDFs). If your Btrieve files have no DDFs, use Pervasive DDF Builder to create them before connecting to Btrieve source or target data.
This connector sets field width in bytes. What varies is the number of characters that fit into a given field. For more details, see Determining Field Width in Characters or Bytes.
Connector-Specific Notes
Single record type files
In general, Btrieve files must be fixed, sequential files. That is, each record must be the same number of bytes and each field must occupy the same set of bytes in each record. For example, if record 1 is 236 bytes long, every record must be 236 bytes long. If the first field in record 1 occupies 4 bytes and is a 32-bit binary data type, then the first field in every record must do the same.
The integration platform does not support Btrieve files with records of variable length except when the variable records are at the end of the file after the fixed records. If so, the integration platform reads only the fixed records.
After writing to a Btrieve target, transformations do not update the Btrieve DDF index file. You must do this in Btrieve.
Multirecord type files
It is possible to work with variable-length multirecord files, with and without a record separator, using certain guidelines. For more information, search for "multiple record" in the documentation.
Truncation Error Trapping
This connector does not support truncation error trapping. If the target field size is too small for the data written to it, the offending record may be skipped or incomplete data may be written to the target. The transformation does not abort due to a truncation error, as it does with connectors that support truncation error trapping.
Property Options
These are properties that you may need to specify for your source or target data.
Property Name
S/T
Description
Page Size
ST
This option sets the page size for Btrieve data. The default value is 4096. To change the page size, enter a new value in increments of 512. See the Btrieve documentation for your correct page size.
CodePage
ST
This translation table determines which encoding to use for reading and writing data. The default is ANSI, the standard in the US.
Data Types
The following data types are supported for a Btrieve data file:
32-bit IEEE floating-point
64-bit IEEE floating-point
Autoinc (2 bytes)
Autoinc (4 bytes)
Bfloat (4 bytes)
Bfloat (8 bytes)
Bit
BLOB
Character (default)
CLOB
Comp
Comp-1
Comp-3
Comp-5
Comp-X
Currency
Date
Decimal
Float (4 bytes)
Float (8 bytes)
GUID (Btrieve 10 only)
Integer (1 byte)
Integer (2 bytes)
Integer (4 bytes)
Integer (8 bytes)
Logical
Logical (2 bytes)
Lstring
Lvar
Magic PC Date
Magic PC Extended
Magic PC Number
Magic PC Real
Magic PC Time
Microsoft BASIC double
Microsoft BASIC float
Money
Note
Numeric
NumericSA
NumericSTS
Packed decimal
Sales Ally time-2
Sales Ally date
Sales Ally time-1
Time
Timestamp
Unsigned (1 byte)
Unsigned (2 bytes)
Unsigned (4 bytes)
Unsigned (8 bytes)
Wstring
Wzstring
Zoned decimal
Zstring
Clarion
Clarion is a database application that has its own proprietary format. With the Clarion connector, Map Editor reads and writes Clarion versions 2.1 and 3.0.
You can connect to TopSpeed databases with ODBC 3.x. For details, see the SoftVelocity Web site.
Note:  Map Editor cannot read encrypted or password-protected Clarion data files. The files must be unencrypted and have the security removed before connecting with Map Editor.
See the special note about Clarion dates under Source Schemas.
This connector sets field width in bytes. What actually varies is the number of characters that fit into a given field. For more details, see Determining Field Width in Characters or Bytes.
Connector-Specific Notes
Clarion stores dates as integers. To convert these integers to a more common date format, such as mm/dd/yyyy, after you select your target connector and target file, you need to write an expression for each date field at the Map tab. The expression you write may look similar to the following example:
DateAdd("d", Fields("FIELDNAME"), DateValue("12/28/1800"))
where FIELDNAME is the source field name in the Clarion source data file. The remainder of the expression should be written exactly as shown.
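The same conversion can be expressed outside the platform, for example in Python; the epoch date comes from the expression above (a sketch only):
from datetime import date, timedelta

CLARION_EPOCH = date(1800, 12, 28)

def clarion_to_date(days):
    # Clarion stores a date as an integer count of days past the epoch.
    return CLARION_EPOCH + timedelta(days=days)

print(clarion_to_date(1))  # 1800-12-29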
Limitations
The limitations are:
Field Names - Each field name must be unique. Field names may contain up to 10 characters, but the first character must be a letter.
Encrypted Passwords - Map Designer cannot read encrypted or password-protected Clarion data files. The files must be unencrypted and have the security removed before connecting with Map Designer.
Property Options
 
Property
S/T
Description
IgnoreMemoErr
S
This option determines how Clarion memo files are handled. Choose your selection from the list that appears. The following options are available:
Never – Default. This option causes Map Designer to always look for and include any memo file fields when the source data file is read.
Errors – Selecting this option causes Map Designer to look for and include any memo file fields when a memo file is present. If present, the memo fields are included with the transformed data. If the memo file is not in the same directory as the data file, the memo file is ignored. This means that the memo fields cannot be included with the transformed data.
Always – Selecting this option causes Map Designer to always ignore the memo file completely. This means that the memo fields cannot be included with the transformed data.
CodePage
ST
The code page translation table tells Map Designer which encoding to use for reading and writing data. The default is ANSI, which is the standard in the US. The following code pages are available:
ANSI
OEM
0037 US(EBCDIC)
0273 Germany(EBCDIC)
0277 Norway(EBCDIC)
0278 Sweden(EBCDIC)
0280 Italy(EBCDIC)
0284 Spain (EBCDIC)
0285 UK(EBCDIC)
0297 France (EBCDIC)
0437 MSDOS United States
0500 Belgium(EBCDIC)
0850 MSDOS Multilingual (Latin 1)
0860 MSDOS Portuguese
0861 MSDOS Icelandic
0863 MSDOS Canadian French
0865 MSDOS Nordic
1051 Roman-8
Data Type
These are the data types of the fields in your target data table. The following data types are available:
Byte
Date
Decimal
Group
Long
Memo
Picture
Real
Short
String (Default)
Time
Output Modes
If you are writing to a new table (Replace mode), you may use any of the available data types from the list for the Type cell of a field.
If you are appending data to an existing table (Append mode), the data type of each field uses the data type in the selected table by default.
Updating Clarion Data
Although Clarion does not have Update as an output mode, you can still update your Clarion target file. If you have an ODBC 3.x driver installed, you can select ODBC as the target connector and perform an update. Or, you can put the data into an Access database, use it as a staging table to perform the update in a second transformation, and then insert the results into Clarion. The transformations can then be tied together in a project.
Clipper
Clipper is a database application that uses a dBASE format. The Clipper connector primary data file usually has a .DBF extension and the memo file has a .DBT extension. Clipper files are structured; for example, both the data and the file structure are stored inside the primary data file.
The connector sets field width in bytes. What actually varies is the number of characters that fit into a given field. For more details, see Determining Field Width in Characters or Bytes.
Connector-Specific Notes
No limitations specific to Clipper files are known at this time. For general dBASE limitation details, see dBASE IV.
Property Options
You can set the following source (S) and target (T) properties.
Property
S/T
Description
IgnoreMemoErr
S/T
This option determines how dBASE memo files are handled. Choose your selection from the picklist. The following options are available:
Never – Default. This option causes the integration platform to look for and include any memo file fields when the source data file is read.
Errors – Selecting this option causes the integration platform to look for and include any memo file fields when a memo file is present. If present, the memo fields are included with the transformed data.
If the memo file (.DBT) is not in the same directory as the data file (.DBF), the memo file is ignored. This means that the memo fields are not included with the transformed data.
Always – Selecting this option causes the integration platform to ignore the memo file completely. This means memo fields are not included with the transformed data.
CodePage
S/T
This translation table determines which encoding to use for reading and writing data. The default is ANSI, the standard in the US.
Data Types
The following data types are available:
Character – may contain alpha or numeric information and may have a field width of 1 to 254 bytes. Use a character field to store numbers that are not used in calculations, such as phone numbers, check numbers, account numbers and zip codes (number fields delete the leading zeros in some zip codes). (default)
Date – may contain only a date and the date is formatted as yyyymmdd, for a four-digit year, a two-digit month and a two-digit day. Example: The date January 1, 1999 would read 19990101.
Logical – may contain only one byte and is formatted to contain t, f, T, or F, for true or false.
Memo – may contain alpha or numeric information and may have a field width of 1 to 16,000 bytes.
Numeric – may contain only positive or negative numbers and may have a field width of 1 to 20 bytes, including the decimal point, minus sign (–), or plus sign (+). A numeric field may contain decimal places up to 19, but the number of decimal places must be set at one byte less than the total width of the field. Numeric fields are used to store the exact value of a number with a fixed number of digits.
If you are writing to a new table, you may use any of these available data types from the list that appears when you click the arrow in the Type cell of a field.
If you are appending data to an existing table, the data type of each field uses the data type in the selected table by default.
Connectivity Pointers
As a general rule, when connecting to a database with ODBC, most require the database engine to be running when connection is attempted.
Tab to Server Name and, if applicable, type the name of your remote database server. If the database that you want to connect to resides on your LAN or local PC, it is not necessary to enter the Server Name.
Tab to Data Source and select the desired source as it appears in the User DSN tab in the ODBC administrator, but without the information in parentheses. An example is My Data Source 1.
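The same DSN-based pattern can be illustrated from Python with the third-party pyodbc library; the DSN name matches the example above, and the user ID and password are hypothetical. This shows the general ODBC pattern, not the integration platform's own mechanism:
import pyodbc

conn = pyodbc.connect("DSN=My Data Source 1;UID=myuser;PWD=mypassword")
cursor = conn.cursor()
cursor.execute("SELECT 1")
print(cursor.fetchone())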
Cloudscape (ODBC)
The Cloudscape connector uses ODBC to connect to Cloudscape tables.
This connector sets field width in characters, so the width of a field is its number of characters. For more details, see Determining Field Width in Characters or Bytes.
See Also
Installing an ODBC Driver
ODBC Connectivity Tips
Tip:  You can connect to Cloudscape tables with ODBC 3.x. For the procedure, and information about the property options, limitations, and source and target schemas, see ODBC 3.x.
COBOL Flat File
This COBOL Flat File connector sets field width in bytes. What actually varies is the number of characters that fit into a given field. For more details, see Determining Field Width in Characters or Bytes.
Property Options
Specifying source properties is important if:
Your source connector requires you to define source properties.
When you view your source data, the field names or data appear incorrectly, such as in the wrong record or field, or containing the wrong data.
Property
S/T
Description
StartOffset
S
If your source data file starts with characters that need to be excluded from the transformation, set the StartOffset option to specify at which byte of the file to begin. The default value is zero. The correct value may be determined by using the Hex Browser.
Note:  This property is set in number of bytes, not characters.
PageSize
ST
When data records are arranged in blocks and the last record in each block is padded to the end of the block, it is necessary to set Page Size. This causes the pad characters to be stripped from the file during the data transfer. To set page size, click Page Size, highlight the default value (zero), and type the correct page size value for your data.
ShortLastRecord
S
If set to true, short reads are ignored on the last record of the file. In other words, the last record is processed even if the End of File (EOF) is reached before reading the end of the record. The default is false.
OccursPad
S
When using COBOL files, you may have fields of variable length. If so, you may specify how to fill the field with pads to a fixed length. The default is None.
The following options are available:
None (which leaves the fields uneven) – Default
End of Record (which fills the remainder of the record with your specified pad character)
Within Group (which fills the field with your specified pad character)
RecordSeparator
ST
When a COBOL file is your source connector and you are using a 01 copybook to define the fields, you might have a record separator at the end of each record. If so, you may specify the record separator here. This causes the integration platform to automatically ignore the record separator when it reads the source data. The default is None.
The separators are carriage return-line feed (CR-LF), line feed (LF), carriage return (CR), line feed-carriage return (LF-CR), form feed (FF), Empty Line, and none.
When writing out a binary file, you may want to place a record separator at the end of each record (similar to a Fixed ASCII record separator). You may select a record separator from the list, or highlight the current value and type your own.
CodePage
ST
This translation table determines which encoding to use for reading and writing data. The default is ANSI, the standard in the US.
Data Types
The following data types are available:
16-bit binary
16-bit logical
24-bit binary
32-bit binary
32-bit IEEE floating-point
32-bit TEC binary
32-bit VAX floating-point
64-bit binary
64-bit IEEE floating point
64-bit VAX floating-point
8-bit binary
80-bit Intel floating-point
AccPac 41-bit binary
Binary
Boolean
Btrieve date
Btrieve time
Column binary alpha-numeric
Column binary multi-punch
Column binary numeric
Comp
Comp-1
Comp-2
Comp-3
Comp-5
Comp-6
Comp-X
Complex
Cray floating-point
Date
DateTime
dBASE Numeric
Display
Display Boolean
Display Date
Display Date/Time
Display justified
Display sign leading
Display sign leading separate
Display sign trailing
Display sign trailing separate
Display Time
Magic PC Date
Magic PC Extended
Magic PC Number
Magic PC Real
Magic PC Time
Microsoft BASIC double
Microsoft BASIC float
Name
Null-terminated C string
Packed decimal
Pascal 48-bit real
Pascal string (1 byte)
Pascal string (2 bytes)
Sales Ally date
Sales Ally time-1
Text
Time
Time (minutes past midnight)
Union
Variable length IBM float
Zoned decimal
See Binary for general binary file type information.
Common Logfile Format Webserver
All user access on Internet web sites can be registered by the webserver and be stored as a log file. The most common format is the Common Logfile Format (CLF), which uses an underlying ASCII (Delimited) connector.
This connector sets field width in bytes. What actually varies is the number of characters that fit into a given field. For more details, see Determining Field Width in Characters or Bytes.
See Also
Apache Common Logfile Format
Connector-Specific Notes
The Common Logfile Format Webserver connector is not available as a target.
Property Options
You can set the following source (S) properties.
Property
S/T
Description
AlternateFieldSeparator
S
Most data files have only one field separator between all the fields; however, it is possible to have more than one field separator. If your source file has one field separator between some of the fields and a different separator between other fields, you can specify the second field separator here. Otherwise, you should leave the setting at None (the default).
The alternate field separators available from the list are none (default), comma, tab, space, carriage return-line feed, line feed, carriage return, line feed-carriage return, Ctrl-R, and pipe ( | ). To select a separator from a list, click AlternateFieldSeparator. If you have an alternate field separator other than one from the list, you can type it here.
If the alternate field separator is not one of the choices from the list and is a printable character, highlight the CR-LF and then type the correct character. For example, if the separator is an asterisk ( * ), type an asterisk from the keyboard.
If the field separator is not a printable character, replace CR-LF with a backslash, an X, and the hexadecimal value for the separator. For example, if the separator is a check mark, then enter \XFB.
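As another example, a tab separator would be entered as \X09, since 09 is the hexadecimal value of the tab character.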
AutomaticStyling
S
Automatic styling changes the way Common Logfile Format Webserver data is read or written. By default, AutomaticStyling is set to false, causing all data to be read or written as Text. When set to true, data types such as numeric and date fields are formatted automatically.
During the transformation process, autostyling ensures that a date field in the source file is formatted as a date field in the target file, as opposed to character or text data. If your source file contains zip code data, you may want to leave AutomaticStyling set to false, so that leading zeros in some zip codes in the eastern United States are not deleted.
FieldEndDelimiter
S
All Common Logfile Format Webserver files are presumed to have beginning-of-field and end-of-field delimiters. The default delimiter is a quote ( " ) because it is the most common. However, some files do not contain field delimiters, so this option is available for both your source files and your target files. To read from or write to a file with no delimiters, set FieldEndDelimiter to None.
FieldSeparator
S
The integration platform assumes that a Common Logfile Format Webserver file has a space between each field. To specify some other field separator, click FieldSeparator to display the options: comma, tab, space (the default for this connector), carriage return-line feed, line feed, carriage return, line feed-carriage return, Ctrl-R, pipe ( | ), and no field separator. To use a field separator that is not on the list, type it here.
If the field separator you want to use is not on the list and is a printable character, highlight the CR-LF and then type the correct character. For example, if the separator is an asterisk (*), type an asterisk from the keyboard.
If the field separator is not a printable character, replace CR-LF with a backslash, an X, and the hexadecimal value for the separator. For example, if the separator is a check mark, then enter \XFB.
FieldStartDelimiter
S
All Common Logfile Format Webserver files are presumed to have beginning-of-field and end-of-field delimiters. The default delimiter is a quote ( " ) because it is the most common. However, some files do not contain field delimiters, so this option is available for both your source files and your target files. To read from or write to a file with no delimiters, set FieldStartDelimiter to None.
Header
S
In some files, the first record is a header record. For source data, you can remove it from the input data and cause the header titles to be used automatically as field names. For target data, you can cause the field names in your source data to automatically create a header record in your target file. To identify a header record, set Header to true. The default is false.
RecordFieldCount
S
If your Common Logfile Format Webserver data file has field separators, but no record separator, or if it has the same separator for both the fields and the records, you should specify the RecordSeparator (most likely a blank line), leave the AlternateFieldSeparator option blank and enter the exact number of fields per record in this box. The default value is zero.
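For example, if each record contains exactly seven space-separated fields and records are separated by blank lines (a hypothetical layout), set RecordSeparator to Empty Line and enter 7 here.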
RecordSeparator
S
A Common Logfile Format Webserver file is presumed to have a carriage return-line feed (CR-LF) between records. To use other characters for a record separator, click the RecordSeparator cell, then click the arrow to the right of the box and click the desired record separator in the list. The choices are carriage return-line feed (default), line feed, carriage return, line feed-carriage return, form feed, empty line, ctrl-E, and no record separator. To use a separator other than one from the list, you can type it here.
If the record separator is not one of the choices from the list and is a printable character, highlight the CR-LF and then type the correct character. For example, if the separator is a pipe ( | ), type a pipe from the keyboard.
If the record separator is not a printable character, replace CR-LF with a backslash, an X, and the hexadecimal value for the separator. For example, if the separator is a check mark, then enter \XFB.
StartOffset
S
If your source data file starts with characters that need to be excluded from the transformation, set the StartOffset option to specify at which byte of the file to begin. The default value is zero. The correct value may be determined by using the Hex Browser.
Note:  This property is set in number of bytes, not characters.
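For example, if the first 100 bytes of the file are a proprietary header (a hypothetical layout), set StartOffset to 100 so that reading begins at the first data record.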
StripLeadingBlanks
S
Allows you to determine if leading blanks are stripped from all data fields.
The default is false; leading blanks are not stripped from Common Logfile Format Webserver data. To remove them, set StripLeadingBlanks to true.
StripTrailingBlanks
S
Allows you to determine whether or not trailing blanks are stripped from the data fields.
The default is false; trailing blanks are not stripped from Common Logfile Format Webserver data. Set StripTrailingBlanks to true to remove trailing blanks.
StyleSampleSize
S
Set the number of records (starting with record 1) that are analyzed to set a default width for each source field. The default value for this option is 5000. You can change the value to any number between 1 and the total number of records in your source file. As the number gets larger, more time is required to analyze the file, and it may be necessary to analyze every record to ensure no data is truncated.
To change the value, click StyleSampleSize, highlight the default value, and type a new one.
CodePage
S
This translation table determines which encoding to use for reading and writing data. The default is ANSI, the standard in the US.
Content Extractor
This connector uses extract scripts created in the Extract Schema Designer as source files. For more information about the Extract Schema Designer, ask your sales representative.
This connector specifies field width in characters. The width of a field is the number of characters. For more details, see Determining Field Width in Characters or Bytes.
You can apply extract scripts in one of two ways. You can use Extract Schema Designer or CXL to generate a script file outside of the integration platform, or you can use the Extractor connector in the platform. The steps for each method are given in the procedure section.
Connectivity Pointers
If you need to go back into Extract Schema Designer and make changes to the script, save the changes in Extract Schema Designer, then press Refresh Script so that the integration platform rereads the script.
Extract Schema Designer and the integration platform browsers show the exported record types in correct order. Remember that the grid displays the schema in the order that Accept Records were defined in Extract Schema Designer.
Property Options
You can set the following source (S) properties.
Property
S/T
Description
ByteOrder
S
Allows you to specify the byte order of Unicode (wide) characters. The default is Auto and is determined by the architecture of your computer. The list box options are Auto (default), Little Endian and Big Endian. Little Endian byte order is generally used by Intel machines and DEC Alphas and places the least significant portion of a byte value in the left portion of the memory used to store the value. Big Endian byte order is used by IBM 370 computers, Motorola microprocessors and most RISC-based systems and stores the values in the same order as the binary representation.
ProgramVariables
S
Allows you to set or override program variable values. There is no default. To specify the value to override, click the cell next to ProgramVariables, enter the program variables and any override values, then click OK.
ReportReadingScriptFile
S
An extraction script file with a .cxl extension. To create and use a .cxl file with this connector, follow these steps:
1. In Data Integrator 9, use Extract Schema Designer to create an extraction script file for the source file that the v10 source dataset will read.
2. Back in v10, import the .cxl file as a Text artifact into the project that contains the source dataset.
3. Open the dataset, start the session, and select the .cxl file from the list for the ReportReadingScriptFile connection property.
Note:  You can use a .cxl file only with the source data file that was used to create it. It will not work with other data files.
StartOffset
S
If your source data file starts with characters that need to be excluded from the transformation, set the StartOffset option to specify at which byte of the file to begin. The default value is zero. The correct value may be determined by using the Hex Browser. For a list of the 256 standard and extended ASCII characters, search for "hex values" in the documentation.
Note:  This property is set in number of bytes, not characters.
Encoding
S
This allows you to select the type of encoding used with source and target files.
Encoding Notes
Shift-JIS encoding is meaningful only in Japanese operating systems.
UCS-2 is no longer considered a valid encoding name, but you may use UCS2. In your source or target file, change UCS-2 to UCS2.
Data Junction Log File
The Data Junction Log File connector gives you the ability to read the log file generated by the Map Editor. A script file created in the Extract Schema Designer is used for the source file structure. The log file information can then be written to a specified target file or table.
This connector is also called Integration Log File.
If you are interested in parsing log files that contain process errors and messages, see Process and Transformation LogFile Parser. That connector offers more schema data types and can parse process logs as well as transformation logs.
This connector specifies field width in characters. The width of a field is the number of characters. For more details, see Determining Field Width in Characters or Bytes.
Connector-Specific Notes
Data Junction Log File is not available as a target connector.
Property Options
 
Property
S/T
Description
ByteOrder
S
Allows you to specify the byte order of Unicode (wide) characters. The default is Auto and is determined by the architecture of your computer. The list box options are Auto (default), Little Endian, and Big Endian. Little Endian byte order is generally used by Intel machines and DEC Alphas and places the least significant portion of a byte value in the left portion of the memory used to store the value. Big Endian byte order is used by IBM 370 computers, Motorola microprocessors and most RISC-based systems and stores the values in the same order as the binary representation.
ProgramVariables
S
Type the program variable.
ReportReadingScriptFile
S
The script file used for the source structure. The default file is djwinlog.djp.
Encoding
S
Select the type of encoding used with source and target files. The default encoding is OEM.
Shift-JIS encoding is meaningful only in Japanese operating systems.
StartOffset
S
If your source data file starts with characters that need to be excluded from the transformation, set the StartOffset option to specify at which byte of the file to begin. The default value is zero. The correct value may be determined by using the Hex Browser.
Note:  This property is set in number of bytes, not characters.
DataEase
DataEase is an accounting package that contains a collection of tables that comprise the database. In a typical DataEase database, there are a series of files with .DBA and .DBM extensions. When DataEase is your source connector, all the database files must be present before the integration platform can connect to the database.
The integration platform supports DataEase versions 4.2 and 4.5.
Tip:  Use a DataEase ODBC driver for connection to more current versions of DataEase.
This connector sets field width in bytes. What actually varies is the number of characters that fit into a given field. For more details, see Determining Field Width in Characters or Bytes.
Connector-Specific Notes
No known limitations.
Property Options
You can set the following source (S) and target (T) properties.
Property
S/T
Description
Date Standard
S
This is where you should specify the method your source data uses to store date information. The default for this property is North American MM/DD/YY. The other two choices are International DD/MM/YY and Metric YY/MM/DD.
Version
T
The integration platform can write two versions of DataEase data files. Version 4.5 is the default. If you would prefer your target file to be written as version 4.2, select that from the list.
CodePage
ST
This translation table determines which encoding to use for reading and writing data. The default is ANSI, the standard in the US.
Data Types
The following data types are available:
Choice
Date – may contain only a date and the date is formatted as yyyymmdd, for a four-digit year, a two-digit month and a two-digit day. Example: The date January 1, 1999 would read 19990101.
Fixed point number
Floating point number
Number (1-byte)
Number (2-byte)
Number (4-byte)
Number (8-byte)
Numeric string
Small Fixed Point
Small Floating Point
Text
Time – may contain only a time.
If you are writing to a new table, you may use any of the available data types from the list that appears when you click the arrow in the Type cell of a field.
If you are appending data to an existing table, the data type of each field uses the data type in the selected table by default.
DataFlex (ODBC 3.x)
DataFlex is a development tool that may be used to create custom applications. With the DataFlex connector, the integration platform can read and write DataFlex v2.3 and v3.x data files.
This connector sets field width in bytes. What actually varies is the number of characters that fit into a given field. For more details, see Determining Field Width in Characters or Bytes.
The integration platform connects to DataFlex tables with ODBC 3.x. For the procedure, and information about the property options, limitations, and source and target schemas, see ODBC 3.x.
See Also
Installing an ODBC Driver
ODBC Connectivity Tips
DataFlex Connection Requirements
To connect to DataFlex files, you must own the necessary API files, purchase them from a software vendor, or use ODBC. If you attempt to connect to DataFlex without installing the required API files, you receive a 25547 error: Setting connector (DataFlex) name: Unable to find required module .dllname.
Required Configuration Files
The files FILELIST.CFG and TERMLIST.CFG must be available on your system. If these .CFG files reside in a directory other than the data directory, copy them into the directory where the data files reside or where you intend to write them. You must also have COLLATE.CFG in the same directory with your data. In addition, all of the associated files must be available: not only the .DAT file, but also the .TAG, .VLD, and .DEF files.
Connector-Specific Notes
The integration platform does not transform overlap fields.
dBASE II, III+
dBASE versions II and III+ are database applications. With the dBASE II and III+ connector, the integration platform can read and write dBASE II and III+ files.
The primary data file usually has a .DBF extension and the memo file usually has a .DBT extension. dBASE files are structured; for example, both the data and the file structure are stored inside the primary data file.
This connector sets field width in bytes. What actually varies is the number of characters that fit into a given field. For more details, see Determining Field Width in Characters or Bytes.
See Also
Data File Formats
Connector-Specific Notes
Field Names: Each field name must be unique. Field names must be all uppercase characters with an underscore between two words. Field names may contain up to 10 characters, but the first character must be a letter. Examples include ACCOUNT, LAST_NAME, FIRST_NAME, PHONE_NO.
Field Length: Character fields can be no longer than 254 characters. If a field is longer than 254 characters, use the Memo data type.
Number of Fields: A maximum of 128 fields is allowed in dBASE II and III+.
Record Width: The maximum combined width of all fields in one record is 4000 bytes, excluding memo fields.
Property Options
You can set the following source (S) and target (T) properties.
Property
S/T
Description
IgnoreMemoErr
ST
This option determines how dBASE memo files are handled. Choose your selection from the list.
Never – This is the default. This option causes the integration platform to look for and include any memo file fields when the source data file is read.
Errors – Selecting this option causes the integration platform to look for and include any memo file fields when a memo file is present. If present, the memo fields are included with the transformed data. If the memo file (.DBT) is not in the same directory as the data file (.DBF), the memo file is ignored. This means that the memo fields are not included with the transformed data.
Always – Selecting this option causes the integration platform to ignore the memo file completely. This means that the memo fields are not included with the transformed data.
CodePage
ST
This translation table determines which encoding to use for reading and writing data. The default is ANSI, the standard in the US.
Data Types
The following data types are available:
Character – may contain alpha or numeric information and may have a field width of 1 to 254 bytes. Use a character field to store numbers that are not used in calculations, such as phone numbers, check numbers, account numbers and zip codes (number fields delete the leading zeros in some zip codes).
Date – may contain only a date and the date is formatted as yyyymmdd, for a four-digit year, a two-digit month and a two-digit day. Example: The date January 1, 1999 would read 19990101.
Float – may contain only positive or negative numbers and may have a field width of 1 to 20 bytes, including the decimal point, minus sign (–), or plus sign (+). Float fields are used to store small and large numbers needed in scientific calculations, such as 9.1234e+12 or 9,123,400,000,000.
Logical – may contain only one byte and is formatted to contain t, f, T, or F, for true or false.
Memo – may contain alpha or numeric information and may have a field width of 1 to 16,000 bytes.
Numeric – may contain only positive or negative numbers and may have a field width of 1 to 20 bytes, including the decimal point, minus sign (–), or plus sign (+). A numeric field may contain decimal places up to 19, but the number of decimal places must be set at one byte less than the total width of the field.
If you are writing to a new table, you may use any of these available data types from the list that appears when you click the arrow in the Type cell of a field.
If you are appending data to an existing table, the data type of each field uses the data type in the selected table by default.
Note:  B-Fast files include the data types above, with the exception of the Float data type.
dBASE IV
dBASE IV is a database application. With the dBASE IV connector, the integration platform can read and write dBASE IV files. The primary data file usually has a .DBF extension and the memo file usually has a .DBT extension. dBASE files are structured; for example, both the data and the file structure are stored inside the primary data file.
Other dBASE IV Applications
Many applications use the dBASE format for data storage. The following information is generally true for most of these applications. However, if you are transforming to or from one of these other applications, refer to it by its own name in the documentation for information specific to that application.
Some other supported dBASE IV format types include:
Alpha Four
Clipper
GoldMine
GoldMine Import File (DBF)
MAILERS+4 (dBASE)
This connector sets field width in bytes. What actually varies is the number of characters that fit into a given field. For more details, see Determining Field Width in Characters or Bytes.
See Also
Data File Formats
Connector-Specific Notes
Field Names: Each field name must be unique. Field names must be all uppercase characters with an underscore between two words. Field names may contain up to 10 characters, but the first character must be a letter. Examples include ACCOUNT, LAST_NAME, FIRST_NAME, PHONE_NO.
Field Length: Character fields can be no longer than 254 characters. If a field is longer than 254 characters, use the Memo data type.
Number of Fields: A maximum of 255 fields is allowed.
Record Width: The maximum combined width of all fields in one record is 4000 bytes, excluding memo fields.
Property Options
You can set the following source (S) and target (T) properties.
Property
S/T
Description
IgnoreMemoErr
ST
This option determines how dBASE memo files are handled. Choose your selection from the list. The following options are available:
Never - Default. This option causes the integration platform to look for and include any memo file fields when the source data file is read.
Errors - Selecting this option causes the integration platform to look for and include any memo file fields when a memo file is present. If present, the memo fields are included with the transformed data. If the memo file (.DBT) is not in the same directory as the data file (.DBF), the memo file is ignored. This means that the memo fields are not included with the transformed data.
Always - Selecting this option causes the integration platform to ignore the memo file completely. This means that the memo fields are not included with the transformed data.
CodePage
ST
This translation table determines which encoding to use for reading and writing data. The default is ANSI, the standard in the US.
Data Types
The following data types are available:
Character - may contain alpha or numeric information and may have a field width of 1 to 254 bytes. Use a character field to store numbers that are not used in calculations, such as phone numbers, check numbers, account numbers and zip codes (number fields delete the leading zeros in some zip codes).
Date - may contain only a date and the date is formatted as yyyymmdd, for a four-digit year, a two-digit month and a two-digit day. Example: The date January 1, 1999 would read 19990101.
Float - may contain only positive or negative numbers and may have a field width of 1 to 20 bytes, including the decimal point, minus sign (–), or plus sign (+). Float fields are used to store small and large numbers needed in scientific calculations, such as 9.1234e+12 or 9,123,400,000,000.
Logical - may contain only one byte and is formatted to contain t, f, T, or F, for true or false.
Memo - may contain alpha or numeric information and may have a field width of 1 to 16,000 bytes.
Numeric - may contain only positive or negative numbers and may have a field width of 1 to 20 bytes, including the decimal point, minus sign (–), or plus sign (+). A numeric field may contain decimal places up to 19, but the number of decimal places must be set at one byte less than the total width of the field. Numeric fields are used to store the exact value of a number with a fixed number of digits.
If you are writing to a new table, you may use any of these available data types from the list that appears when you click the arrow in the Type cell of a field.
If you are appending data to an existing table, the data type of each field uses the data type in the selected table by default.
Derby
This is an add-on connector and is not included in the default installation of Actian DataConnect Studio IDE. The connector is available at esd.actian.com.
This connector allows users to retrieve, insert, update, and delete data from Derby 10.4.2.0.
Derby Versions Supported
This connector supports Derby version 10.4.2.0.
Properties
You can set the following source (S) and target (T) properties.
Property
S/T
Description
Batch Response
T
Destination of the batch response file.
Batch Size
S
Optional. Number of source records the connector caches before processing them in a map. Default is zero.
Additional Properties
ST
Optional. This string is appended to the connection string when establishing a connection to Derby. Specify one property or several properties separated by semicolons.
Example: create=true
The entire list of properties can be found here: http://db.apache.org/derby/docs/10.4/ref/rrefattrib24612.html
Show System Tables
S
If True, all system tables are displayed in the table list. If False (the default), system tables are not displayed.
Show Views
ST
If True (the default), all views are displayed in the table list. If False, views are not displayed.
Ignore Null Fields
T
If True (the default), fields that have not been mapped are ignored. If False, unmapped fields are included in target operations and null values are inserted or updated.
Transaction Isolation
T
Sets the transaction isolation level. Options are Read Uncommitted, Read Committed (the default), Repeatable Read, and Serializable.
Scrollability of Results
S
Scrolling behavior, one of the following:
Forward Only (the default)
Scrollable - Insensitive to changes.
Scrollable - Sensitive to changes.
Operations and Entity Support
The Derby connector supports Create, Read, Update, and Delete operations on all entities.
Data Types
All Derby data types are supported. For a complete list, see http://db.apache.org/derby/docs/10.4/ref/crefsqlj31068.html.
The following table shows the mapping of Map Designer data types to Derby data types.
Map Designer Data Type -> Derby Data Type
Boolean -> SMALLINT
Byte -> BLOB
Bytes -> BLOB
Char -> CHAR
Date -> DATE
Datetime -> TIMESTAMP
Decimal -> DECIMAL
Double -> DOUBLE
Float -> FLOAT
Int -> INTEGER
Long -> BIGINT
Short -> SMALLINT
String -> CHAR
Time -> TIME
Additional Information
Creating an Index
When creating an index on a column "c" in a table "t," the following naming convention is used for the name of the new index: "IDX_c_t".
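For illustration, assuming the hypothetical table t and column c from the convention above, the generated index corresponds to this JDBC sketch (not the connector's internal code):

import java.sql.Connection;
import java.sql.SQLException;
import java.sql.Statement;

class IndexNamingSketch {
    // Builds the index name using the IDX_<column>_<table> convention,
    // then issues the corresponding CREATE INDEX statement.
    static void createIndex(Connection conn, String table, String column) throws SQLException {
        String indexName = "IDX_" + column + "_" + table; // e.g., IDX_c_t
        try (Statement stmt = conn.createStatement()) {
            stmt.executeUpdate("CREATE INDEX " + indexName
                    + " ON " + table + " (" + column + ")");
        }
    }
}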
Creating a Table
When creating a table, the length of the field can only be specified for the following types in Map Designer: Byte, Bytes, String, Char, and Decimal. The default length for Char is 1.
Derby (Client)
Derby is an open source relational database management system (RDBMS) that is available under version 2.0 of the Apache License and implemented in Java. In client/server mode, the Derby database runs as a separate process in its own Java Virtual Machine (JVM) and can manage multiple database requests from local and remote applications.
The integration platform supports insert, update, and delete operations on version 10.4.2.0 of the Derby database in client/server mode.
This connector writes multimode output, so you can perform table drop and table insert operations directly on your target database.
Connector-Specific Notes
The naming convention to use when creating a new index is IDX_<column name>_<table name>. For example, use IDX_c_t to create an index on column "c" located in table "t."
When creating a table in the Map window, you can only specify field length for the following data types: byte, bytes, string, char, and decimal. The default field length for char is 1.
Property Options
You can set the following source (S) and target (T) properties.
Property
S/T
Description
Batch Size
S
The number of source records the connector caches before processing them in a map. The default is zero, which means all records are read.
Port
ST
The integration platform connects to the Derby database in client/server mode using the default port number (1527). If your Derby server listens on a non-standard port, specify that port number here.
Additional Properties
ST
Name=value pairs required for the connection. Separate multiple pairs with semicolons.
Show System Tables
S
Required. Only applicable if the user is logged onto the database as the database administrator (DBA). Only the DBA has access to system tables.
If set to True, allows you to see all the tables created by the DBA. The system table names appear in the table list. The default is False.
Show Views
ST
If set to True (default), all views are displayed in the table list.
Scrollability of Results
S
Required. Determines the scrollability of the result set.
The three options are:
Forward Only (default) - Result set is not scrollable. The system reads the next row in the result set.
Scrollable - Insensitive to changes - Result set is scrollable. The system can read data forward or backward. Any changes made to the result set while it is open are not visible.
Scrollable - Sensitive to changes - Result set is scrollable. Any changes made to the result set while it is open are visible.
Transaction Isolation
T
Allows you to specify any one of four isolation levels when reading from or writing to a Derby database table. The default value is Read Committed.
The ANSI SQL 2 standard defines three specific ways in which serializability of a transaction may be violated: P1 (Dirty Read), P2 (Nonrepeatable Read), and P3 (Phantoms).
The following lists the four supported isolation levels (a sketch mapping them to the standard JDBC constants follows this property table):
READ_UNCOMMITTED—Permits P1, P2, and P3.
READ_COMMITTED—Permits P2 and P3. Does not permit P1.
REPEATABLE_READ—Permits P3. Does not permit P1 and P2.
SERIALIZABLE—Does not permit P1, P2 or P3.
Flush Frequency
T
Number of record inserts to buffer before sending a batch to the connector. Default is zero. If you are inserting many records, change the default to a higher value to improve performance.
Batch Response
T
This property creates a batch response file, which serves as a reporting mechanism to the connector. The file provides detailed results for each object in a batch where the batch size is greater than 1. Obtaining detailed results is useful in the following cases:
Capturing system-generated object IDs for use in future updates.
Correlating an error with its object and having enough information about the error for exception handling and error diagnosis.
For more information, see Batch Response File.
Ignore Null Fields
T
If set to False (default), null values are included when updating records.
If set to True, ignores null data values when updating records.
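The isolation level names in the Transaction Isolation property above correspond to the standard JDBC isolation constants. The following sketch (a hypothetical helper, not part of the connector) shows the mapping:

import java.sql.Connection;

class IsolationSketch {
    // Maps the connector's option names to java.sql.Connection constants.
    static int toJdbcLevel(String option) {
        switch (option) {
            case "Read Uncommitted": return Connection.TRANSACTION_READ_UNCOMMITTED; // permits P1, P2, P3
            case "Read Committed":   return Connection.TRANSACTION_READ_COMMITTED;   // permits P2, P3
            case "Repeatable Read":  return Connection.TRANSACTION_REPEATABLE_READ;  // permits P3 only
            case "Serializable":     return Connection.TRANSACTION_SERIALIZABLE;     // permits none
            default: throw new IllegalArgumentException("Unknown level: " + option);
        }
    }
}

For example, conn.setTransactionIsolation(IsolationSketch.toJdbcLevel("Read Committed")) applies the default level.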
Additional Properties
The integration platform supports the following additional properties to connect to a Derby database in client/server mode. Separate multiple properties with a semicolon. A connection sketch using some of these attributes follows the list.
bootPassword=key
collation=collation
create=true
createFrom=path
databaseName=nameofDatabase
dataEncryption=true
encryptionKey=key
encryptionProvider=providerName
encryptionAlgorithm=algorithm
failover=true
logDevice=logDirectoryPath
newEncryptionKey=key
newBootPassword=newPassword
password=userPassword
restoreFrom=path
rollForwardRecoveryFrom=path
securityMechanism=value
shutdown=true
slaveHost=hostname
slavePort=portValue
startMaster=true
startSlave=true
stopMaster=true
stopSlave=true
territory=ll_CC
traceDirectory=path
traceFile=path
traceFileAppend=true
traceLevel=value
upgrade=true
user=userName
ssl=sslMode
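These attributes follow the database name in a Derby client JDBC URL, separated by semicolons. The following minimal sketch (hypothetical host, database name, and credentials; requires derbyclient.jar on the classpath) shows how such a URL is assembled:

import java.sql.Connection;
import java.sql.DriverManager;

public class DerbyClientConnect {
    public static void main(String[] args) throws Exception {
        // Hypothetical values; 1527 is the Derby client/server default port.
        String url = "jdbc:derby://localhost:1527/salesdb;"
                   + "create=true;user=app;password=secret";
        try (Connection conn = DriverManager.getConnection(url)) {
            System.out.println("Connected to " + conn.getMetaData().getURL());
        }
    }
}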
Data Types
The integration platform supports all Derby data types, which are listed below.
BIGINT
BLOB
CHAR
CHAR FOR BIT DATA
CLOB
DATE
DECIMAL
DOUBLE
DOUBLE PRECISION
FLOAT
INTEGER
LONG VARCHAR
LONG VARCHAR FOR BIT DATA
NUMERIC
REAL
SMALLINT
TIME
TIMESTAMP
VARCHAR
VARCHAR FOR BIT DATA
XML
Derby (Embedded)
Derby is an open source relational database management system (RDBMS) that is available under version 2.0 of the Apache License and implemented in Java. In embedded mode, the Derby database resides in a Java application and runs in the same JVM as the application. All connection processes and database requests are handled through the Java application.
The integration platform supports insert, update, and delete operations on version 10.4.2.0 of the Derby database in embedded mode.
This connector writes multimode output, so you can perform table drop and table insert operations directly on your target database.
Connector-Specific Notes
In embedded mode, a Derby database can accept only one connection at a time. For example, running more than one map that uses this connector at the same time generates an error.
The naming convention to use when creating a new index is IDX_<column name>_<table name>. For example, use IDX_c_t to create an index on column "c" located in table "t."
When creating a table in the Map window, you can only specify field length for the following data types: byte, bytes, string, char, and decimal. The default field length for char is 1.
Property Options
You can set the following source (S) and target (T) properties.
Property
S/T
Description
Batch Size
S
The number of source records the connector caches before processing them in a map. The default is zero, which means all records are read.
Additional Properties
ST
Name=value pairs required for the connection. Separate multiple pairs with semicolons.
Show System Tables
S
Required. Only applicable if the user is logged onto the database as the database administrator. Only the DBA has access to system tables.
If set to True, allows you to see all the tables created by the DBA. The system table names appear in the table list. The default is False.
Show Views
ST
If set to True, all views are displayed in the table list. Default value is True.
Scrollability of Results
S
Required. Determines the scrollability of the result set.
The three options are:
Forward Only (default) - Result set is not scrollable. The system reads the next row in the result set.
Scrollable - Insensitive to changes - Result set is scrollable. The system can read data forward or backward. Any changes made to the result set while it is open are not visible.
Scrollable - Sensitive to changes - Result set is scrollable. Any changes made to the result set while it is open are visible.
Transaction Isolation
T
Allows you to specify any one of four isolation levels when reading from or writing to a database table with Derby. The default value is Read Committed.
The ANSI SQL 2 standard defines three specific ways in which serializability of a transaction may be violated: P1 (Dirty Read), P2 (Nonrepeatable Read), and P3 (Phantoms).
The following lists the four supported isolation levels:
READ_UNCOMMITTED—Permits P1, P2, and P3.
READ_COMMITTED—Permits P2 and P3. Does not permit P1.
REPEATABLE_READ—Permits P3. Does not permit P1 and P2.
SERIALIZABLE—Does not permit P1, P2 or P3.
FlushFrequency
T
Number of record inserts to buffer before sending a batch to the connector. Default is zero. If you are inserting many records, change the default to a higher value to improve performance.
Batch Response
T
This property creates a batch response file, which serves as a reporting mechanism to the connector. The file provides detailed results for each object in a batch where the batch size is greater than 1. Obtaining detailed results is useful in the following cases:
Capturing system-generated object IDs for use in future updates.
Correlating an error with its object and having enough information about the error for exception handling and error diagnosis.
For more information, see Batch Response File.
Ignore Null Fields
T
If set to False (default), null values are included when updating records.
If set to True, ignores null data values when updating records.
Additional Properties
The integration platform supports the following additional properties to connect to a Derby database in embedded mode. When specifying multiple properties, separate each with a semicolon. An embedded-mode connection sketch follows the list.
bootPassword=key
collation=collation
create=true
createFrom=path
databaseName=nameofDatabase
dataEncryption=true
encryptionKey=key
encryptionProvider=providerName
encryptionAlgorithm=algorithm
failover=true
logDevice=logDirectoryPath
newEncryptionKey=key
newBootPassword=newPassword
password=userPassword
restoreFrom=path
rollForwardRecoveryFrom=path
securityMechanism=value
shutdown=true
slaveHost=hostname
slavePort=portValue
startMaster=true
startSlave=true
stopMaster=true
stopSlave=true
territory=ll_CC
traceDirectory=path
traceFile=path
traceFileAppend=true
traceLevel=value
upgrade=true
user=userName
ssl=sslMode
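The same attribute syntax applies in embedded mode, where the JDBC URL names the database directly with no host or port. A minimal sketch (hypothetical database name; requires derby.jar on the classpath):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;

public class DerbyEmbeddedConnect {
    public static void main(String[] args) throws Exception {
        // The database runs inside this JVM, so only one connection source
        // can use it at a time (see the connector-specific notes above).
        String url = "jdbc:derby:salesdb;create=true";
        try (Connection conn = DriverManager.getConnection(url)) {
            // ... read or write data here ...
        }
        // Derby convention: a successful engine shutdown raises
        // SQLException with state XJ015.
        try {
            DriverManager.getConnection("jdbc:derby:;shutdown=true");
        } catch (SQLException expected) {
            // Expected on successful shutdown.
        }
    }
}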
Data Types
The integration platform supports all Derby data types, which are listed below.
BIGINT
BLOB
CHAR
CHAR FOR BIT DATA
CLOB
DATE
DECIMAL
DOUBLE
DOUBLE PRECISION
FLOAT
INTEGER
LONG VARCHAR
LONG VARCHAR FOR BIT DATA
NUMERIC
REAL
SMALLINT
TIME
TIMESTAMP
VARCHAR
VARCHAR FOR BIT DATA
XML
DIALOG
DIALOG connectors support databases available from the DIALOG Corporation.
DIALOG Biographical
DIALOG Business and News
DIALOG Commerce Business Daily
DIALOG Company Directory
DIALOG Federal Register
DIALOG Market Research
DIALOG Patents
DIALOG Sci Tech and Bio Med
DIALOG Trade Names and Products
DIALOG Trademarks
This connector specifies field width in characters. The width of a field is the number of characters. For more details, see Determining Field Width in Characters or Bytes.
Connector-Specific Notes
No known limitations at this time.
Property Options
You can set the following source (S) properties.
Property
Description
ByteOrder
Allows you to specify the byte order of Unicode (wide) characters. The default is Auto and is determined by the architecture of your computer. The list box options are Auto (default), Little Endian and Big Endian. Little Endian byte order is generally used by Intel machines and DEC Alphas and places the least significant portion of a byte value in the left portion of the memory used to store the value. Big Endian byte order is used by IBM 370 computers, Motorola microprocessors and most RISC-based systems and stores the values in the same order as the binary representation.
ProgramVariables
Allows you to set or override program variable values. There is no default. To specify the value to override, click the cell next to ProgramVariables, enter the program variables and any override values, then click OK.
ReportReadingScriptFile
The extract schema file, with a .cxl extension.
StartOffset
If your source data file starts with characters that need to be excluded from the transformation, set the StartOffset option to specify at which byte of the file to begin. The default value is zero. The correct value may be determined by using the Hex Browser.
Note:  This property is set in number of bytes, not characters.
Encoding
Select the type of encoding used with source and target files. The Encoding property is not the encoding of the database that you connect to, but rather the encoding in which the connector expects to receive SQL query statements to be sent to the database.
Encoding Notes
Shift-JIS encoding is meaningful only in Japanese operating systems.
DIF
DIF is an acronym for Data Interchange Format. DIF is a somewhat universal format and is similar to a delimited ASCII file in that it contains fields and records of data that are separated by specific field and record separators. Many applications can read or write a DIF file. Therefore, it may be used as an intermediate format when transforming data to or from applications that may not be supported "directly." With the DIF connector, you can read and write DIF data files.
This connector sets field width in bytes. What actually varies is the number of characters that fit into a given field. For more details, see Determining Field Width in Characters or Bytes.
Connector-Specific Notes
No known limitations.
Property Options
You can set the following source (S) and target (T) properties.
Property
S/T
Description
Header
S
If the first data record in your source file contains column names, you can read this record and use the information as field names. The default is false. To change the setting to true, click the Value cell, highlight the default value, and type the required value.
Note:  If the Header property is set to true, the first row of data is used as field names. If the property is false and the source file contains header labels, these labels are used. If the property is false and there are no header labels, the column headings appear as Field1, Field2, and so on.
StyleSampleSize
S
This is the number of records to examine for automatic sizing. The default is 1000.
Reverse
T
By default, DIF vectors are written before the tuples, so this option defaults to false. To reverse the order of the vectors and tuples, change this option to true.
TableTitle
T
The default text reads "created by third party conversion product." According to the DIF specifications, this property is not a table name; instead, it is a "title that describes the data." To change the default text, click the text box, and replace the original text with your own description.
Use Cr-LF
T
By default, this option is set to true and a carriage return-line feed is returned at the end of each line in your DIF file.
CodePage
ST
This translation table determines which encoding to use for reading and writing data. The default is ANSI, the standard in the US.
Data Types
Only two data types are available for DIF data files.
Text
Numeric
Dodge Reports
Four Dodge Reports are available as source connectors:
Dodge Bidders
Dodge Firms
Dodge Bidders and Firms
Dodge Projects
All Dodge Reports use an underlying Content Extractor connector, which reads Custom Report Reader Scripts. See Intermediate Methods to Connect to Data Formats. The properties are displayed based on the selected Dodge Report type.
This connector sets field width in bytes. What actually varies is the number of characters that fit into a given field. For more details, see Determining Field Width in Characters or Bytes.
EDI (X12)
EDI, or Electronic Data Interchange, uses standard formats to pass data between disparate business systems. Third parties provide EDI services that enable organizations with different equipment to connect. Although interactive access may be a part of it, EDI implies direct computer-to-computer transactions into vendors' databases and ordering systems. The EDI standard is ANSI X12, developed by the Data Interchange Standards Association (DISA). With the EDI X12 connector, Map Designer can read and write EDI X12 data files.
An EDI message contains a string of data elements, each of which represents a singular fact, such as a price, product model number, and so forth, separated by delimiters. The entire string is called a data segment. One or more data segments framed by a header and trailer form a transaction set, which is the EDI unit of transmission (equivalent to a message). A transaction set often consists of what is usually contained in a typical business document or form. The parties who exchange EDI transmissions are referred to as trading partners.
This connector sets field width in bytes. What actually varies is the number of characters that fit into a given field. For more details, see Determining Field Width in Characters or Bytes.
EDI Structure
EDI document structure contains Headers that identify the transaction type. Within the document, Segments and Elements further define the data and its structure.
Eligibility/Coverage/Benefits Inquiry - 270
Eligibility/Coverage/Benefit Inquiry - 271
Patient Information - 275
Claim Status Inquiry - 276
Claim Status Notification - 277
Health Care Services Review Information - 278
Benefit Enrollment and Maintenance - 834
Health Care Claim Payment/Advice - 835
Health Claims or Equivalent Encounter Information (Dental, Professional, Institutional) - 837
HIPAA Structure
The Health Insurance Portability and Accountability Act of 1996 (HIPAA) requires the use of certain standard formats for the documentation of medical claims related to services and other transactions associated with medical record keeping. Following is a list of HIPAA form titles and the corresponding form numbers:
Health Claims or Equivalent Encounter Information (Dental, Professional, Institutional) - 837
Enrollment and Disenrollment in a Health Plan - 834
Eligibility for a Health Plan - 270, 271
Health Care Payment and Remittance Advice - 835
Health Plan Premium Payments - 820
Health Claim Status - 276, 277
Referral Certification and Authorization - 278
Coordination of Benefits - 837
Addenda schemas use the following naming structure:
HIPAA_Health_Care_Claim_Remittance_Advice.X091.A1.X12.4010.835.1.0.ds.xml
Definitions of each of the naming structure components:
Health Care Payment and Remittance Advice – formal document name
X091 – HIPAA designation (describes form as HIPAA-compliant)
A1 – Addenda designation; implementation guide
X12 – ANSI project designation
4010 – version of the document
835 – form number of the primary schema
Before July 2003, addenda schemas used a "0.<n>.ds.xml" convention. Later naming structures use "1.<n>.ds.xml".
Version Mismatch Errors
The ANSI standards reflect generic supersets designed to address a wide range of data exchange requirements. You may decide to define your own "subset" and develop rules that describe the data you submit. If you elect to develop your own schema, be aware that many errors can be traced to schema "version mismatch". It is important that all trading partners agree on the specific version of the schema to be used in transactions. Addenda schemas in particular should be examined to verify version match with all parties involved in the transaction.
Note:  To verify the version of a schema, open the schema in a text editor. Check the header information. In the header, notice the date created and date (last) modified fields. These fields offer a convenient method to determine precise version match.
Caution!  Changes to, or customization of the schema must be communicated among and mutually agreed upon by all trading partners prior to "live" data exchange. A customized or modified schema should be verified by each partner to ensure version match at each point of exchange within the trading group. Thorough preparation, well ahead of time, is the key to minimize production errors.
Recommended Practices
Begin with schema versions that are standard and current.
Agree with each trading partner on which schema version to use.
Check schema addenda for version match at each point-of-exchange.
Test the data exchange and test it frequently, well ahead of production deadline.
Under HIPAA regulations, the service provider, not the vendor, is responsible for the accuracy and timely submission of claims and other reporting documents. If you are a service provider, make it a point to be conversant in matters related to HIPAA requirements. Participate actively with your vendor to develop solutions that satisfy these requirements with a minimum impact on your professional practice. If you are a vendor, understand that the complicated regulations and documentation can be overwhelming to the service provider, whose focus is on the care and treatment of their patients. Understanding the other party's perspective is the foundation of a workable business plan.
Note:  The Compliance Deadline for Electronic Claims Submission is October 16, 2003. Compliance requires that claims must be submitted using ANSI X12 837 v4010A1 commencing with that date. Older versions of ANSI X12 837 and all NSF forms are retired after October 16, 2003. The Centers for Medicare and Medicaid Services has implemented a Contingency Plan to accept Compliant and Non-compliant claims after the deadline. See http://www.cms.hhs.gov for complete and accurate information regarding claims submission regulations.
Limitations
The limitations are:
Invalid EDI - EDI segments with no elements that contain data, or mandatory elements that contain no data, cause invalid EDI files to be generated. By default, Map Designer collapses data elements that contain no data. If a ClearMapPut Action results in a segment with no elements that contain data, Map Designer writes a segment ID with no supporting elements (invalid EDI). If a mandatory element is mapped from a source field that contains no data, Map Designer writes the empty element to the Target (also invalid EDI). You must ensure that empty segments are not written to the target file and that mandatory elements actually contain data before running transformations; see the example after this list.
Real-time Messaging Support - The EDI X12 connector is designed to work with only a discrete set (message) at a time. Files containing two or more separate messages must be split to process them with Map Designer. Each message begins with a MSH segment and the MSH must exist only once in a message as the first segment. EDI was designed to be a real-time messaging interface where discrete messages are generated and routed immediately to their destination. Some users employ a batch mode process, where multiple messages are written to a single file before transport occurs. This is not the intended use for EDI X12; Map Designer provides real-time messaging support (for example, processing one message at a time). Therefore, batch files must first be split into discrete messages for processing.
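For illustration, suppose a transformation leaves every element of a hypothetical REF segment empty. The target would then contain only the segment ID and its terminator:

REF'

A segment ID with no supporting elements such as this is invalid EDI and is typically rejected by a trading partner's translator.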
Property Options
 
Property
S/T
Description
Skip
S
When the Skip property is set to true, segments that do not match those in the applied schema file are not read. The default is false. If you are reading large EDI files and have many mismatches, Map Designer parsing time increases; in that case, keep the default setting.
CodeSetFile
ST
This is the file containing the definition of allowed code set values for fields, used mainly for validation. Click the ellipsis, browse to your file, and click Open to apply the code set file.
QualifiedFieldNames
ST
This is the name path of the segment's parents in the schema tree. If QualifiedFieldNames is set to true, the qualified name is prefixed to each field name. The default is false.
SchemaFile
ST
Allows you to choose a schema file for your source or target file. Click the ellipsis, browse to your schema file, and click Open to apply the schema file.
Validation
ST
None is the default setting. Click the arrow for a list of the following validation methods:
Composite/Element – Validate maxuse, requirement, min/max length and default value.
Loop/Segment – Validate maxuse and requirement.
Composite/Element+Loop/Segment – Validate both composite/element and loop segment.
This property works differently on source and target files. Validation on the source side occurs when you click the Apply button in Source Properties. On the target side, validation occurs at run time. Successful validation is indicated in the log file, and any error messages are also logged there. With source-side validation, the first error is recorded in the log file and the transformation then aborts before any other errors are recorded. On the target side, all errors are recorded in the log file, since validation occurs at run time.
SegmentTerminator
T
In this property, you may select a Segment Terminator from the list. See the ElementSeparator list for your options.
ElementSeparator
T
In other connectors, such as ASCII (Fixed), this property is called the FieldSeparator. Here is a list of element separators from which you can choose:
CR-LF
SOH (0001)
STX (0002)
ETX (0003)
EOT (0004)
ENQ (0005)
ACK (0006)
BEL (0007)
BS (0008)
HT (0009)
LF (000A)
VT (000B)
FF (000C)
CR (000D)
SO (000E)
SI (000F)
DLE (0010)
DC1 (0011)
DC2 (0012)
DC3 (0013)
DC4 (0014)
NAK (0015)
SYN (0016)
ETB (0017)
CAN (0018)
EM (0019)
SUB (001A)
ESC (001B)
FS (001C)
GS (001D)
RS (001E)
US (001F)
SP (0020)
! (0021)
" (0022)
# (0023)
$ (0024)
% (0025)
& (0026)
' (0027)
( (0028)
) (0029)
* (002A)
+ (002B)
, (002C)
- (002D)
. (002E)
/ (002F)
: (003A)
; (003B)
< (003C)
= (003D)
> (003E)
? (003F)
@ (0040)
[ (005B)
\ (005C)
] (005D)
^ (005E)
_ (005F)
` (0060)
{ (007B)
| (007C)
} (007D)
~ (007E)
DEL (007F)
LineWrap
T
Sets the type of line wrap you want to use in your data file. The default is None.
Some EDI files may consist of a constant stream of data with no line separators. LineWrap forces a temporary line-wrapping behavior. If you receive parse error messages when you attempt to connect to your EDI file, try changing the setting to CR-LF, CR, LF, or LF-CR.
WrapLength
T
Sets the length of line wrap you want to use in your target file. The default is 80.
SubElementSeparator
T
In this property, you can select a subelement separator from the list. See ElementSeparator above for the complete list of separators. The SubElementSeparator property value supersedes any value mapped in the ISA segment, which allows you to override values from the command line or through the API.
RepetitionSeparator
T
Sets the character used to separate multiple occurrences of a field. Supported for HIPAA 5010 schemas.
Data Type
The following data types are available:
a
AN
an
B
DT
ID
Loop
n
N0
N1
n10
N11
N12
N13
N14
N15
N16
N17
N18
N19
N2
N20
N3
N4
N5
N6
N7
N8
N9
R
Segment
TM
Note:  The data types that you see under Type in the target schema (Map tab) are EDI data type names. However, these data types are read as Text by Map Designer. To use a DT EDI data type in an expression, you would need to treat the data type as Text that must be converted to a DateTime data type. For more information about converting data types, search for "commonly used functions" in the documentation.
EDIFACT
EDI uses standard formats to pass data between disparate business systems. Third parties provide EDI services that enable organizations with different equipment to connect. Although interactive access may be a part of it, EDI implies direct computer-to-computer transactions into vendors' databases and ordering systems. The EDI standard is ANSI X12, developed by the Data Interchange Standards Association (DISA). ANSI X12 is either closely coordinated with or is being merged with an international standard, EDIFACT. With the EDIFACT connector, Map Designer can read and write EDIFACT data files.
An EDI message contains a string of data elements, each of which represents a singular fact, such as a price, product model number, and so forth, separated by delimiters. The entire string is called a data segment. One or more data segments framed by a header and trailer form a transaction set, which is the EDI unit of transmission (equivalent to a message). A transaction set often consists of what is usually contained in a typical business document or form. The parties who exchange EDI transmissions are referred to as trading partners.
This connector sets field width in bytes. What actually varies is the number of characters that fit into a given field. For more details, see Determining Field Width in Characters or Bytes.
Note:  EDIFACT is a Unicode-enabled connector.
Connector-Specific Notes
EDIFACT structure has similarities to EDI X12 structure, but there are some special considerations that relate only to EDIFACT structural components. The EDIFACT connector has been tailored to recognize the unique characteristics of EDIFACT so that you can have more control over your EDIFACT data. See the following notes for details:
Default separators
Segment UNA is used to define the EDIFACT separators: Element separator, Data element separator, Decimal notation, Release indicator, and Segment terminator. In EDIFACT, Segment UNA is optional and depends on the character set (UNOA, UNOB, UNOC, UNOD, UNOE, and UNOF). If the EDIFACT file does not contain a UNA segment, the default separators are used. Only UNOA and UNOB define default separators; other character sets must define their separators explicitly.
UNOA default separators:
Element Separator: +
Data Element Separator: :
Decimal Notation: .
Release Indicator: ?
Segment Terminator: '
UNOB default separators:
Element Separator: \x1D (IS3)
Data Element Separator: \x1F (IS1)
Decimal Notation: .
Release Indicator:
Segment Terminator: \x1C (IS4)
Note:  If you want to use other separators, you must define the UNA segment.
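For example, a UNA segment that restates the UNOA defaults looks like this (the six characters after "UNA" are, in order: the data element separator, the element separator, the decimal notation, the release indicator, a reserved space, and the segment terminator):

UNA:+.? '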
Limitations
The limitations are:
Real-Time Messaging Support - The EDIFACT connector is designed to work with only a discrete set (message) at a time. Files containing two or more separate messages must be split to process them with Map Designer. Each message begins with a MSH segment and the MSH must exist only once in a message as the first segment. EDIFACT was designed to be a real-time messaging interface where discrete messages are generated and routed immediately to their destination. Some users employ a batch mode process, where multiple messages are written to a single file before transport occurs. This is not the intended use for EDIFACT; Map Designer provides real-time messaging support (for example, processing one message at a time). Therefore, batch files must first be split into discrete messages for processing.
Double-Byte Character Support - You may need to produce double-byte Excel output from a double-byte EDIFACT source. To do this, create and run two maps in sequence:
EDIFACT to Unicode (Delimited): Set the Unicode Encoding property to UTF-8.
Unicode (Delimited) to Excel: Set the Encoding property for both source and target to UTF-8.
To create double-byte EDIFACT output from a double-byte Excel source, see Excel XP.
Property Options
 
Property Name
S/T
Description
Encoding
ST
The selected code page translation table tells Map Designer which tables to use when reading in the source data and writing out the target data. The default value is OEM, which allows Map Designer to use whatever code page is active on your system. To use ANSI, select ISO8859-1. For Windows CodePage 1252, select CP1252. Shift-JIS encoding is meaningful only in Japanese operating systems.
SchemaFile
ST
Used for parsing data and generating a new schema during design time. Select a schema file that contains the schema you want to use.
To upload a schema file that is not in your current project: In Design Manager, click the icon to import projects or upload a single file and select Upload a single file. Select the schema file and click Open. Click Next. At Type, select Schema. Click Save. Once the file is uploaded, you can then select it from the list.
CodesetFile
ST
This is the file that contains the definition of allowed code set values for fields, used mainly for validation. Click the ellipsis, browse to your file, and click Open to apply the code set file.
SegmentTerminator
ST
A character that terminates a segment. Select a Segment Terminator from the list. For the list of options, see DataElementSeparator.
CompositeSeparator
ST
Character used to separate adjacent composites of a field. See DataElementSeparator below for the list of composite separators from which you can choose.
DataElementSeparator
ST
This is the complete list of element separators from which you can choose:
CR-LF
SOH (0001)
STX (0002)
ETX (0003)
EOT (0004)
ENQ (0005)
ACK (0006)
BEL (0007)
BS (0008)
FF (000C)
CR (000D)
SO (000E)
SI (000F)
DLE (0010)
DC1 (0011)
DC2 (0012)
DC3 (0013)
DC4 (0014)
NAK (0015)
SYN (0016)
ETB (0017)
CAN (0018)
EM (0019)
SUB (001A)
ESC (001B)
FS/IS4 (001C)
GS/IS3 (001D)
RS/IS2 (001E)
US/IS1 (001F)
SP (0020)
! (0021)
" (0022)
# (0023)
$ (0024)
% (0025)
& (0026)
' (0027)
( (0028)
) (0029)
* (002A)
+ (002B) - Default
, (002C)
- (002D)
. (002E)
/ (002F)
: (003A)
; (003B)
< (003C)
= (003D)
> (003E)
? (003F)
@ (0040)
[ (005B)
\ (005C)
] (005D)
^ (005E)
_ (005F)
` (0060)
{ (007B)
| (007C)
} (007D)
~ (007E)
DEL (007F)
ReleaseIndicator
ST
To use the element separator, data element separator, or segment terminator as a regular character, it must be preceded by the release indicator. On the source side, the EDIFACT connector removes all release indicators (for example, ?+?:?'123 becomes +:'123). On the target side, it automatically adds the release indicator (for example, +:'123 becomes ?+?:?'123).
DecimalNotation
ST
You may use any character as decimal notation in EDIFACT. The source replaces all decimal notation with a period (for example, 1,34 becomes 1.34). The target replaces the period with the defined decimal notation (for example, 1.34 becomes 1,34). The default is , (002C).
Validation
ST
None is the default setting. Click the arrow for a list of the following validation methods:
Composite/Element – Validates max use, requirement, min/max length, and default value.
Loop/Segment – Validates max use and requirement.
Composite/Element+Loop/Segment – Validates both composite/element and loop/segment.
LineWrap
ST
Sets the type of line wrap you want to use in your source or target file. The default is None.
Some EDIFACT files may consist of a constant stream of data with no line separators. LineWrap forces a temporary line-wrapping behavior. When you attempt to connect to your EDIFACT file and you receive parse error messages, try changing the setting to CR-LF, CR, LF, or LF-CR.
WrapLength
T
Sets the length of line wrap you want to use in your target file. The default is 80.
QualifiedFieldNames
ST
The qualified name is the name path of a field's parent segments in the schema tree. If QualifiedFieldNames is true, the qualified name is placed before each field name (for example, a field might appear as UNB.S001.0001 instead of 0001; the exact naming is illustrative). The default is false.
Skip
S
When the Skip property is set to true, segments that do not match those in the applied schema file are not read. The default is false. If you are reading large EDIFACT files that have many mismatches, Map Designer parsing time increases; in this case, keep this setting at its default of false.
Data Type
The following data types are available:
a
AN
an
B
DT
ID
Loop
n
N0
N1
N10
N11
N12
N13
N14
N15
N16
N17
N18
N19
N2
N20
N3
N4
N5
N6
N7
N8
N9
R
Segment
TM
Note:  The data types that you see under Type in the target schema (on the Map tab) are EDI data type names. However, these data types are read as Text by Map Designer. To use a DT EDI data type in an expression, you would need to treat the data type as Text that must be converted to a DateTime data type. For more information about converting data types, search for "commonly used functions" in the documentation.
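For example, the following is a minimal sketch of a target field expression that performs such a conversion. It assumes a DT field named DTM02 that holds a CCYYMMDD value such as 20230115; the field name, the date layout, and the availability of the Basic-style Mid string function are illustrative assumptions:
' Assumes Fields("DTM02") holds CCYYMMDD text, for example 20230115 (illustrative)
DateValue(Mid(Fields("DTM02"), 5, 2) & "/" & Mid(Fields("DTM02"), 7, 2) & "/" & Mid(Fields("DTM02"), 1, 4))
The expression treats the field as Text, rearranges it into an MM/DD/CCYY string, and converts the result to a DateTime value with DateValue.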
Enable
Enable is a database application that uses a dBASE format. The primary data file usually has a .DBF extension and the memo file usually has a .DBT extension. Enable files are structured. For example, both the data and the file structure are stored inside the primary data file. With the Enable connector, the integration platform can read and write Enable data files.
This connector sets field width in bytes. What actually varies is the number of characters that fit into a given field. For more details, see Determining Field Width in Characters or Bytes.
Connector-Specific Notes
See dBASE II, III+ Connector-Specific Notes.
Property Options
See dBASE II, III+ Property Options.
Excel
Excel is a spreadsheet application that stores and displays data in a proprietary file format. With the Excel connector, the integration platform reads and writes Excel versions 2 through XP.
This section covers the Excel v2, v3, v4, v5, 95, and 97 connectors.
Password Protected Sheets - Microsoft does not expose the password component of the Excel API, therefore the integration platform cannot connect to password-protected Excel worksheets. If you encounter an error when trying to connect to a protected worksheet, open the Excel application, unprotect the worksheet and resave the file. Then reconnect to the worksheet in the integration platform.
This connector sets field width in bytes. What actually varies is the number of characters that fit into a given field. For more details, see Determining Field Width in Characters or Bytes.
The Excel General data type is not supported and should be changed to Text. Excel also uses cell width to determine the field size when it exports data. When the cell is too small to display all the data it contains, some data is "hidden" and can only be seen when you tab into that cell. This hidden data is truncated on exports. For more information, see the limitations section.
Versions 95/97/2000: If you select the sheet named <Default> in the table list, the connection automatically reverts to the sheet that was selected the last time the file was saved. If you specify a number and no sheet with that number as a name exists, that number is used as a zero-based relative sheet number. For example, if a spreadsheet contains Sheet1, Sheet2, and Sheet3 (in that order), then entering a table/sheet name of 2 selects Sheet3. On the target side, if the number does not reference an existing sheet, a new sheet with that number as its name is created. For example, using the above spreadsheet, if you entered 5 on the target, you would get a spreadsheet containing sheets named Sheet1, Sheet2, Sheet3, and 5.
Connector-Specific Notes
Some basic limitations apply when transforming data to or from an Excel spreadsheet.
File Open
If you have an .xls file open in the Excel application and then attempt to connect to the same file in the Dataset window, the connection fails. You receive a 25519 error that says the file is "locked by another process." Close the .xls file and return to the Dataset window. Choose a Sheet name.
Write Permissions
In Windows Explorer, you can right-click a file and change the file permissions settings. If you set Write permissions to Deny on an .xls file and then try to connect to the file in a Dataset window, you receive a 25519 error. You must remove the Deny Write permissions on the file and reconnect.
Data Types
The Excel General data type is not supported. Change the data type to Text in the Excel application before using the spreadsheet in a map.
Most custom formats are not supported. If the format of your data is not recognized, set the data type of the column and any required formatting in the Excel application before you connect to the file.
Date Modified by Source File Connection
Because of the way that Excel handles file locking, connecting to a source file may change its last modified date to the current date.
Field Length
Excel has a limitation of 255 characters in a single cell. If you try to transform data to Excel that has more than 255 characters in any field, that data is truncated at the 255th character.
The integration platform does not automatically set the field length to 255 characters when transforming to a spreadsheet. You may either change the length to 255 in the schema or you may leave the default value. If you leave the value greater than 255 characters, when you open the spreadsheet in your application, the data is truncated.
Spreadsheets store data in such a manner that source connections provide inexact metadata, sometimes causing source truncation errors. If you find a truncation error and the log indicates that it is source truncation, you can adjust the source field size. To do this, in the source schema, find the size column and the row that represents the truncated field. Adjust the field length and run the transformation.
Field length in Excel is actually a measure of the column width of a cell and not a measure of character length. When you view the length that is read in the integration platform, you are seeing the column width set in the Excel application rounded to the nearest whole number. If you have truncated data, you can set the length to a higher number in the schema.
Formatting
It is important to remember that the integration platform transforms field and record (column and row) oriented data. Title sections (headers/footers), formulas, macros and display formatting are not transformed. Column headings (in a single row only) can be transformed as field names. Superscript and subscript text can be transformed, since these are format effects applied to "normal" column and row text.
Number of Fields
Excel has a limitation of 254 fields. If you transform data to Excel with more than 254 fields, it is truncated at the 254th field. Excel 2000 has a higher limit of 256 fields.
Number of Records
Excel has a limitation of 16,384 records. If you transform data to Excel with more than 16,384 records, it is truncated at the 16,384th record.
Excel 97 (v8) and Excel 2000 have a higher limit of 65,536 records.
Appending Data
Data cannot be appended to an existing Excel spreadsheet. This Excel limitation exists because there are a fixed number of records in every Excel spreadsheet (65,536 records in Excel 97 and Excel 2000). Additional records cannot be added to the end of a spreadsheet as they can be added to a table.
Truncation Error Trapping
This connector does not support truncation error trapping. If the target field size is too small for the data written to it, the offending record may be skipped or incomplete data may be written to the target. The transformation does not abort due to a truncation error.
Double-Byte Character Support
You may need to produce double-byte EDIFACT output from a double-byte Excel source. To do this, create and run two maps in sequence:
Excel to Unicode (Delimited): Set the Unicode Encoding property to UTF-8.
Unicode (Delimited) to EDIFACT: Set the Encoding property for both source and target to UTF-8.
Excel v2, 3, 4 Property Options
Header Record Row
For Excel source files, this is the record number (row number) of any column headings. For Excel target files, it sets the row number to which column headings are written.
If no column headings are involved in the data transformation, the value should be left to the default setting (0). If the column heading is the first record (row 1), the value should be changed to 1.
Caution!  If you set the Header Record Row Property with a value that exceeds the number of records in the source file, the Header Record Row is not written to the target.
Auto Type Text Fields
Automatic styling changes the way Excel data is read or written. For example, when Delimited ASCII is the source connector, all data is Text (except when you change the Delimited ASCII source property AutomaticStyling to true or when you change the Type from Text to a numeric data type in the Delimited ASCII source schema grid). For details on these options, see ASCII (Delimited). When that data is transformed to a spreadsheet, you may need the fields that contain numbers to be formatted as Numeric to perform calculations in the spreadsheet.
The default for this setting is true. To transform the number fields as Text in a target Excel file, change the setting to false.
CodePage
This translation table determines which encoding to use for reading and writing data. The default is ANSI, the standard in the US.
Excel v5, v7 (95) Property Options
Header Record Row
For Excel source files, this is the record number (row number) of any column headings. For Excel target files, it sets the row number to which column headings are written.
If there are no column headings involved in the data transformation, the value should be left to the default setting (0). If the column heading is the first record (row 1), the value should be changed to 1.
Caution!  If you set the Header Record Row Property with a value that exceeds the number of records in the source file, the Header Record Row is not written to the target.
CodePage
This translation table determines which encoding to use for reading and writing data. The default is ANSI, the standard in the US.
Excel v97 Property Options
Header Record Row
For Excel source files, this is the record number (row number) of any column headings. For Excel target files, it sets the row number to which column headings are written.
If there are no column headings involved in the data transformation, the value should be left to the default setting (0). If the column heading is the first record (row 1), the value should be changed to 1.
Caution!  If you set the Header Record Row Property with a value that exceeds the number of records in the source file, the Header Record Row is not written to the target.
Encoding
To specify encoding for reading source and writing target data, select a code page translation table. The default value is OEM. To use ANSI, select ISO8859-1. For Windows CodePage 1252, select CP1252.
Encoding Notes
Shift-JIS encoding is meaningful only in Japanese operating systems.
UCS-2 is no longer considered a valid encoding name, but you may use UCS2. In the XML file, change UCS-2 to UCS2.
Special Notes about Excel Dates
Excel v3
To convert Excel v3 date strings into valid date formats for other database applications, use the following target field expression:
DateValue(Fields("FIELDNAME"))
where ("FIELDNAME") is the source field name in the Excel source data file.
Excel v4 and v5
To convert Excel v4 and v5 integer dates into valid date formats for other database applications, write the following expression:
DateAdd("d", Fields("FIELDNAME"), DateValue("12/30/1899"))
where ("FIELDNAME") is the Source field name in the Excel source data file. The remainder of the expression should be written exactly as shown above.
Data Types
By default, all data is transformed to text. For alternatives, see Auto Type Text Fields under Excel v2, 3, 4 Property Options.
Four data types are supported. The first three can be manually set.
Autotext
Date
Numeric
Text
The Excel General data type is not supported. Change all General data types to Text in Excel before connecting to the spreadsheet.
Length
The field length.
Note:  The integration platform does not automatically set field length to 255 characters when writing data to a spreadsheet. You may either change the length to 255 in the schema or you may leave it with the default value. If you have a length greater than 255 characters, when you open the spreadsheet in your application, the data is truncated.
Excel 2000
Excel is a spreadsheet application that stores and displays data in a proprietary file format. With the Excel 2000 connector, the integration platform can read and write Excel 2000 data files.
Password Protected Sheets - Microsoft does not expose the password component of the Excel API, therefore the integration platform cannot connect to password-protected Excel worksheets. If you encounter an error when trying to connect to a protected worksheet, open the Excel application, unprotect the worksheet and resave the file. Then reconnect to the worksheet in the integration platform.
This connector sets field width in bytes. What actually varies is the number of characters that fit into a given field. For more details, see Determining Field Width in Characters or Bytes.
Special Considerations
Excel 2000 can open and save any version of Excel files from Excel 2.1 to the latest version. If you save the file in the same format you opened it in, the file can still be opened in the version of the program in which it was produced. If you save the file to an earlier version format, features used in the workbook that are unique to Excel 2000 may be lost. However, if you choose to update to the Excel 2000 format, you are limited to opening the file in Excel 97 and 2000 only.
Note:  The integration platform does not recognize the Accounting data type. Before attempting to connect to a source file with this data type, change the Accounting data type to Currency (within your Excel application).
Special Hybrid File Format Available
In Excel 2000, you can use a hybrid file format that allows you to save a file in two workbook formats: Excel 97/2000 workbook format and Excel 5.0/7.0 workbook format. This dual file format allows you to open and view the file in Excel 5.0 or 7.0. It also allows you to maintain the advanced features in Excel 2000. Because this hybrid workbook contains two file formats, the file size is roughly twice as large as the Excel 2000 format. Be aware that Excel 5.0 and 7.0 do not recognize the Excel 2000 portion of the file. When you open and revise the file in Excel 2000, the changes are automatically updated in the Excel 5.0 and 7.0 portion of the file.
Note:  If you edit the hybrid file and save the changes in Excel 5.0 or 7.0, the Excel 97/2000 file portion is permanently lost. The file contains only the remaining Excel 5.0 and 7.0 workbook file format data.
Other Compatibility Issues
Microsoft has added several new features in Excel 2000 that do not appear in Excel 97. The integration platform may not read workbooks that contain one or more of the following features:
Dynamic OLAP PivotTables
Indented PivotTable reports
PivotChart reports
AutoRefresh, column formatting, filtering and sorting on external data ranges
ScreenTips for hyperlinks
Date Modified by Source File Connection
Because of the way that Excel handles file locking, connecting to a source file may change its last modified date to the current date.
Special Considerations for Linux
The integration platform reads and writes the native Excel file format on Windows. There is no native Excel connector for Linux. To run an integration on Linux, the Excel source file should first be converted to a Linux-compatible format such as ASCII. The following are suggestions for converting Excel files so that your transformation can run in Linux production.
Open the source file in Excel on Windows, save the Excel data as an ASCII file, then transport this file to Linux for processing.
Deploy the integration engine to act as host on the Windows system where the Excel source data resides. Deploy the engine on Linux. The engine can extract and reformat the Excel source data.
COM Object - Open a process window and create a batch file as a pre-process step and execute this batch file with the engine on Linux. In Excel, the user can save the spreadsheet data into an ASCII .csv file.
Visual Basic for Application - Embed a VBA script into the Excel spreadsheet that automatically saves the data to an ASCII .csv file every time the user saves the data in Excel.
Auto-save Excel data to HTML – From the Microsoft website, download the free Excel Internet Assistant Wizard (HTML.exe file). Use the wizard to save Excel spreadsheets directly to HTML format.
Note:  For Versions 95/97/2000: If you select the sheet named <Default> from the table list, the connection automatically reverts to the sheet that was selected the last time the file was saved. If you specify a number and no sheet with that number as a name exists, then that number is used as a (0-based) relative sheet number. For example, if a spreadsheet contains Sheet1, Sheet2, and Sheet3 (in that order), then entering a table/sheet name of 2 selects Sheet3. On the target side, if the number does not reference an existing sheet, then a new sheet is created with that number as its name. For example, using the above spreadsheet, if you entered 5 on the target, you would get a spreadsheet containing sheets with names Sheet1, Sheet2, Sheet3, and 5.
For more detailed information about the Excel 2000 file format, see the Microsoft web site http://www.microsoft.com.
For details on using the Excel 2000 connector, see Excel.
Excel 2007, Excel 2010, and Excel (xlsx)
The Excel 2007, Excel 2010, and Excel (xlsx) connectors allow users to retrieve, insert, update, and delete data from Excel files that follow the OpenXML format. The Excel (xlsx) connector supports the xlsx format of Microsoft Excel.
Excel 2007, Excel 2010, and Excel (xlsx) are the same connector with different aliases.
Properties
You can set the following source (S) and target (T) properties.
Property
S/T
Description
HeaderRecordRow
ST
Optional. Row number used to read the column header names. Default is 0. The row number of the first row is 1.
Batch Size
S
Optional. Number of records the connector caches before processing them.
Flush Frequency
T
Optional. The number of records buffered in memory before being written to the Excel file. Default is 0 (all records are written at once).
Batch Response
T
Optional. The file to which CRUD operation results are written. The format is one entry per record, indicating success or failure. If an operation fails, information about the cause of the failure is returned.
Operations
The Excel 2007, Excel 2010, and Excel (xlsx) connectors support the following operations:
Create
Read
Update
Delete
Upsert
Data Types
The following data types are available:
Text
Numeric
Debugging Tips
The system log contains helpful information about exceptions.
Additional Information
Query Statement and Query File
Query Statement and Query File source options are not supported.
Temporary Files
The connectors use the system temp directory to save temporary files. The temp directory must have read and write permissions.
Header Record
The connectors cannot read a source file if a column name of the header record contains any of the following special characters: &, (, ), single quotation mark, or double quotation mark.
Optimal Performance Settings
Optimal performance settings for using this connector with large data sets are as follows (determined against data sets of 100,000 records):
Flush Frequency: 10,000
Target Batch Size: 25,000
Source Batch Size: 10,000
Excel XP
Excel XP is a spreadsheet application. The integration platform can read and write Excel versions 2.x, 3.x, 4.x, 5.x, Excel 95, Excel 97, Excel 2000, and Excel XP files. With the Excel XP connector, the integration platform can read and write Excel XP data files.
Password Protected Sheets - Microsoft does not expose the password component of the Excel API, therefore the integration platform cannot connect to password-protected Excel worksheets. If you encounter an error when trying to connect to a protected worksheet, open the Excel application, unprotect the worksheet and resave the file. Then reconnect to the worksheet in the integration platform.
This connector sets field width in bytes. What actually varies is the number of characters that fit into a given field. For more details, see Determining Field Width in Characters or Bytes.
Special Considerations
Working with Excel XP and Previous Versions of Excel
Excel XP can open and save any version of Excel files from Excel 2.1 to the latest version. If you save the file in the same format you opened it in, the file can still be opened in the version of the program in which it was produced. If you save the file to an earlier version format, features used in the workbook that are unique to Excel XP may be lost. However, if you choose to update to the Excel XP format, you are limited to opening the file in Excel 2000 or Excel XP only.
Graphic Display in Excel XP and Excel 97 Workbooks
The Excel XP file format is backward-compatible with Excel 97. However, some features unique to Excel XP may not be displayed the same way in Excel 97. For example, a workbook that contains an Excel XP PivotChart report displays the PivotChart report correctly in Excel XP, but displays a regular chart in Excel 97. Also, Visual Basic® for Applications macros that use commands new to Excel XP may result in compile errors when run in Excel 97.
Data Types
The integration platform does not recognize the Accounting data type. Before attempting to connect to a source file with this data type, change the Accounting data type to Currency in the Excel application.
Special Hybrid File Format Available
In Excel XP, you can use a hybrid file format that allows you to save a file in two workbook formats: Excel 97/2000 workbook format and Excel 5.0/7.0 workbook format. This dual file format allows you to open and view the file in Excel 5.0 or 7.0. It also allows you to maintain the advanced features in Excel XP. Because this hybrid workbook contains two file formats, the file size is roughly twice as large as the Excel XP format. Be aware that Excel 5.0 and 7.0 do not recognize the Excel XP portion of the file. When you open and revise the file in Excel XP, the changes are automatically updated in the Excel 5.0 and 7.0 portion of the file.
Caution!  If you edit the hybrid file and save the changes in Excel 5.0 or 7.0, the Excel 97/2000 file portion is permanently lost. The file contains only the remaining Excel 5.0 and 7.0 workbook data.
Other Compatibility Issues
Microsoft has added several new features in Excel XP that do not appear in Excel 97. The integration platform may not read workbooks that contain one or more of the following features:
Dynamic OLAP PivotTables
Indented PivotTable reports
PivotChart reports
AutoRefresh, column formatting, filtering and sorting on external data ranges
ScreenTips for hyperlinks
Date Modified by Source File Connection
Because of the way that Excel handles file locking, connecting to a source file may change its last modified date to the current date.
Special Considerations for Linux
The integration platform reads and writes the native Excel file format on Windows. There is no native Excel connector for Linux. To run an integration on Linux, the Excel source file should first be converted to a Linux-compatible format such as ASCII. The following are suggestions for converting Excel files so that your transformation can run in Linux production.
Open the source file in Excel on Windows, save the Excel data as an ASCII file, then transport this file to Linux for processing.
Deploy the integration engine to act as host on the Windows system where the Excel source data resides. Deploy the engine on Linux. The engine can extract and reformat the Excel source data.
COM Object - Open a process window and create a batch file as a pre-process step and execute this batch file with the engine on Linux. In Excel, the user can save the spreadsheet data into an ASCII .csv file.
Visual Basic for Application - Embed a VBA script into the Excel spreadsheet that automatically saves the data to an ASCII .csv file every time the user saves the data in Excel.
Auto-save Excel data to HTML – From the Microsoft website, download the free Excel Internet Assistant Wizard (HTML.exe file). Use the wizard to save Excel spreadsheets directly to HTML format.
For more detailed information about the Excel XP file format, refer to the Microsoft web site.
Connector-Specific Notes
Some basic limitations should be considered when transforming data to an Excel spreadsheet.
File Open
If you have an .xls file open in Excel and then attempt to connect to the same file in a dataset, the connection fails and you receive a 25519 error that says the file is "locked by another process." Close the .xls file, return to the Dataset window, and choose a Sheet name.
Write Permissions
In Windows Explorer, you can right-click a file and change the file's permissions settings. If you set Write permissions to Deny on an .xls file and then try to connect to the file in a Dataset window, you receive a 25519 error. You must remove the Deny Write permissions on the file and reconnect.
Formatting
It is important to remember that the integration platform transforms field and record (column and row) oriented data. Title sections (headers/footers), formulas, macros and display formatting are not transformed. Column headings (in a single row only) can be transformed as field names. (To transform Column headings, see Property Options, under Header Record Row.) Superscript and subscript text can be transformed, since these are format effects applied to "normal" column and row text.
Field Length
Excel has a limitation of 255 characters in a single cell. If you try to transform data to Excel that has more than 255 characters in any field, that data is truncated at the 255th character.
The integration platform does not automatically set the field length to 255 characters when transforming to a spreadsheet. You may either change the field size to 255 in the target schema grid, or you may leave the field size at the default value. If you leave the field Size greater than 255 characters, when you open the spreadsheet in your application, the data is truncated.
Field length in Excel is actually a measure of the column width of a cell and not a measure of character length. When you view the size that is read in, you are seeing the column width set in the Excel application rounded to the nearest whole number. If you have truncated data, set the length to a higher value in the schema.
Number of Fields
Excel XP has a limitation of 256 fields per worksheet.
Number of Records
Excel XP has a limitation of 65,536 records. If you try to transform data to Excel that has more than 65,536 records, the data is truncated at the 65,536th record.
Appending Data
Data cannot be appended to an existing Excel spreadsheet. This Excel limitation exists because there are a fixed number of records in every Excel spreadsheet (65,536 records in Excel XP). Additional records cannot be added to the end of a spreadsheet as they can be added to a table.
Data Types
The General data type is not supported. Change the data type to Text within your Excel application before connecting to the spreadsheet.
The integration platform does not recognize the Accounting data type. Before attempting to connect to a source file with this data type, change the Accounting data type to Currency in Excel.
Truncation Error Trapping
This connector does not support truncation error trapping. If the target field size is too small for the data written to it, the offending record may be skipped or incomplete data may be written to the target. The transformation does not abort due to a truncation error.
Double-Byte Character Support
You may need to produce double-byte EDIFACT output from a double-byte Excel source. To do this, create and run two maps in sequence:
Excel to Unicode (Delimited): Set the Unicode Encoding property to UTF-8.
Unicode (Delimited) to EDIFACT: Set the Encoding property for both source and target to UTF-8.
Property Options
You can set the following source (S) and target (T) properties.
Property
S/T
Description
Header Record Row
ST
For Excel source files, this is the record number (row number) of any column headings. For Excel target files, it sets the row number to which column headings are written.
If there are no column headings involved in the data transformation, the value should be left to the default setting (0). If the column heading is the first record (row 1), the value should be changed to 1. To change the setting, click the Current Value cell and highlight the default value. Then type the desired value.
Caution!  If you set the Header Record Row Property with a value that exceeds the number of records in the source file, the Header Record Row is not written to the target.
Encoding
ST
To specify encoding for reading source and writing target data, select a code page translation table. The default value is OEM. To use ANSI, select ISO8859-1. For Windows CodePage 1252, select CP1252.
Encoding Notes
Shift-JIS encoding is meaningful only in Japanese operating systems.
UCS-2 is no longer considered a valid encoding name, but you may use UCS2. In the XML file, change UCS-2 to UCS2.
Data Types
By default, the integration platform transforms all data to Text. For alternatives, see Property Options, under Auto Type Text Fields.
The first three data types below can be manually set.
Autotext
Date
Numeric
Text
Note:  The integration platform does not support the Excel General data type. Change all General data types to Text within your Excel application before connecting to the spreadsheet.
Length
The field length.
A field length shorter than the default may truncate data.
Caution!  The integration platform does not automatically set the field length to 255 characters when transforming to a spreadsheet. You may either change the length to 255 in the schema or you may leave it at the default value. If the length is greater than 255 characters, when you open the spreadsheet in your application, data is truncated.
FIX
The Financial Information eXchange (FIX) Protocol is a "language" that defines specific kinds of electronic messages for communicating securities transactions between two parties. FIX defines only the format of the messages and the session-level interaction between two applications; it is not a software application in its own right. With the FIX connector, Map Designer can read and write FIX data files.
This connector sets field width in bytes. What actually varies is the number of characters that fit into a given field. For more details, see Determining Field Width in Characters or Bytes.
Note:  For details about creating a new file, replacing or appending to an existing file, and updating or deleting records, see Mapping Target from Source Data.
Limitations
No known limitations at this time.
Property Options
 
Property
S/T
Description
Unicode
S
This property is important if your FIX file is Unicode encoded. The default is false; if your FIX file is Unicode encoded, change this setting to true.
Schema File
ST
Allows you to choose a schema file for your source or target file.
Validation
ST
False is the default setting. When Validation is true, the connector validates:
Field requirement
Field default value, if a default is defined
The checksum
Validation works differently on source and target files. Source-side validation occurs when you apply the source property options; target-side validation occurs at run time. If validation is successful, this is indicated in the log file; if there are errors, error messages are logged. On the source side, the first error is recorded in the log file and the transformation aborts before any other errors are recorded. On the target side, all errors are recorded in the log file, since validation occurs at run time.
Data Type
The data types are:
amt
Boolean
char
Country
Currency
data
day-of-month
Exchange
Field
float
Group
int
Length
LocalMktDate
month-year
MultipleValueString
NumInGroup
Percentage
Price
PriceOffset
Qty
SeqNum
String
TagNum
UTCDate
UTCTimeOnly
UTCTimestamp
Folio Flat File
A Folio Flat File is a specific file format that can be imported into an application called Folio Views. A Folio Flat File has a .FFF extension. With the Folio Flat File connector, the integration platform can only write data to a Folio Flat File format.
This connector sets field width in bytes. What actually varies is the number of characters that fit into a given field. For more details, see Determining Field Width in Characters or Bytes.
Connector-Specific Notes
No known limitations.
Property Options
You can set the following source (S) and target (T) properties.
Property
S/T
Description
CodePage
T
This translation table determines which encoding to use for reading and writing data. The default is ANSI, the standard in the US.
Data Types
The following data types are supported:
Character
Date
Floating Point
Integer
Time
Foxbase+
Foxbase is a database application that uses a dBASE format. The primary data file usually has a .DBF extension and the memo file usually has a .DBT extension. Foxbase files are structured. For example, both the data and the file structure are stored inside the primary data file. With the Foxbase connector, the integration platform reads and writes Foxbase and Foxbase+ files.
This connector sets field width in bytes. What actually varies is the number of characters that fit into a given field. For more details, see Determining Field Width in Characters or Bytes.
Connector-Specific Notes
Field Names
Each field name must be unique. Field names must be all uppercase characters with an underscore between two words. Field names may contain up to 10 characters, but the first character must be a letter. Examples include ACCOUNT, LAST_NAME, FIRST_NAME, PHONE_NO.
Size of Fields
Character fields can be no longer than 254 characters. If a field is longer than 254 characters, it must be defined as a Memo field.
Number of Fields
A maximum of 255 fields are allowed.
Record Width
The maximum combined width of all fields in one record is 4000 bytes, excluding memo fields.
Property Options
You can set the following source (S) and target (T) properties.
Property
S/T
Description
IgnoreMemoErr
S
This option determines how dBASE memo files are handled. Choose your selection from the list that appears. The following options are available:
Never – this is the default. This option causes the integration platform to look for and include any memo file fields when the source data file is read.
Errors – Selecting this option causes the integration platform to look for and include any memo file fields when a memo file is present. If present, the memo fields are included with the transformed data. If the memo file (.DBT) is not in the same directory as the data file (.DBF), the memo file is ignored. This means that the memo fields are not included with the transformed data.
Always – Selecting this option causes the integration platform to ignore the memo file completely. This means that the memo fields are not included with the transformed data.
CodePage
S
This translation table determines which encoding to use for reading and writing data. The default is ANSI, the standard in the US.
Data Types
The following data types are supported.
Character – may contain alpha or numeric information and may have a field width of 1 to 254 bytes. Use a character field to store numbers that are not used in calculations, such as phone numbers, check numbers, account numbers and zip codes (number fields delete the leading zeros in some zip codes).
Date – may contain only a date and the date is formatted as yyyymmdd, for a four-digit year, a two-digit month and a two-digit day. Example: The date January 1, 1999 would read 19990101.
Float – may contain only positive or negative numbers and may have a field width of 1 to 20 bytes, including the decimal point, minus sign (–), or plus sign (+). Float fields are used to store small and large numbers needed in scientific calculations, such as 9.1234e+12 or 9,123,400,000,000.
Logical – may contain only one byte and is formatted to contain a t, f, T, or F, for true or false.
Memo – may contain alpha or numeric information and may have a field width of 1 to 16,000 bytes.
Numeric – may contain only positive or negative numbers and may have a field width of 1 to 20 bytes, including the decimal point, minus sign (–), or plus sign (+). A numeric field may contain decimal places up to 19, but the number of decimal places must be set at one byte less than the total width of the field.
FoxPro
FoxPro is a database application that uses a dBASE format. The primary data file usually has a .dbf extension and the memo file usually has a .fpt extension. FoxPro files are structured; for example, both the data and the file structure are stored inside the primary data file. With the FoxPro connector, the integration platform can read and write FoxPro data files.
This connector sets field width in bytes. What actually varies is the number of characters that fit into a given field. For more details, see Determining Field Width in Characters or Bytes.
Connector-Specific Notes
Field Names
Each field name must be unique. Field names must be all uppercase characters with an underscore between two words. Field names may contain up to 10 characters, but the first character must be a letter. Examples include ACCOUNT, LAST_NAME, FIRST_NAME, PHONE_NO.
Field Length
Character fields can be no longer than 254 characters. If a field is longer than 254 characters, use the Memo data type in the schema.
Number of Fields
A maximum of 255 fields is allowed.
Record Width
The maximum combined width of all fields in one record is 4000 bytes, excluding Memo fields.
Property Options
You can set the following source (S) and target (T) properties.
Property
S/T
Description
IgnoreMemoErr
ST
This option determines how xBASE memo files are handled. Choose your selection from the picklist. The following options are available:
Never – This option (the default) instructs the integration platform to look for and include any memo file fields when reading the source data file.
Errors – Instructs the integration platform to look for and include any memo file fields when a memo file is present. If present, the memo fields are included with the transformed data. If the memo file (.fpt) is not in the same directory as the data file (.dbf), the memo file is ignored. This means that the memo fields are not included with the transformed data.
Always – Instructs the integration platform to ignore the memo file completely. The memo fields are not included with the transformed data.
CodePage
ST
This translation table determines which encoding is used for reading and writing data. The default is ANSI, the standard in the US.
Data Types
The following data types are supported:
Character – May contain alpha or numeric information and may have a field width of 1 to 254 bytes. Use a character field to store numbers that are not used in calculations, such as phone numbers, check numbers, account numbers and zip codes (number fields delete the leading zeros in some zip codes).
Date – May contain only a date and the date is formatted as yyyymmdd, for a four-digit year, a two-digit month and a two-digit day. Example: The date January 1, 1995 would read 19950101.
Float – May contain only positive or negative numbers and may have a field width of 1 to 20 bytes, including the decimal point, minus sign (–), or plus sign (+). Float fields are used to store small and large numbers needed in scientific calculations, such as 9.1234e+12 or 9,123,400,000,000.
Logical – May contain only one byte and is formatted to contain a t, f, T, or F, for true or false.
Memo – May contain alpha or numeric information and may have a field width of 1 to 16,000 bytes.
Numeric – May contain only positive or negative numbers and may have a field width of 1 to 20 bytes, including the decimal point, minus sign (–), or plus sign (+). A numeric field may contain decimal places up to 19, but the number of decimal places must be set at one byte less than the total width of the field. Numeric fields are used to store the exact value of a number with a fixed number of digits.
GoldMine
GoldMine is a contact manager that uses a .DBF format. GoldMine files are structured; for example, both the data and the file structure are stored inside the data file. This connector is available as a source connector only.
The integration platform can transform GoldMine data files because they use a dBASE format. The account number field is a unique field in the contact files and in each of the other files (activities, history, etc.).
This connector sets field width in bytes. What actually varies is the number of characters that fit into a given field. For more details, see Determining Field Width in Characters or Bytes.
Connector-Specific Notes
See dBASE IV Connector-Specific Notes.
Property Options
See dBASE IV Property Options.
GoldMine Import File (DBF)
GoldMine Import File (dbf) is a contact manager that uses a .DBF format. GoldMine files are structured; for example, both the data and the file structure are stored inside the data file. This connector is available as a target connector only.
The integration platform can transform GoldMine Import File data files because they use a dBASE format. The account number field is a unique field in the contact files and in each of the other files (activities, history, etc.).
This connector sets field width in bytes. What actually varies is the number of characters that fit into a given field. For more details, see Determining Field Width in Characters or Bytes.
Connector-Specific Notes
See dBASE IV Connector-Specific Notes.
Property Options
See dBASE IV Property Options.
Data Types
The following field data types are supported:
ADDRESS1 Character
ADDRESS2 Character
CITY Character
COMPANY Character
CONTRACT Character
COUNTRY Character
CREATBY Character
DEAR Character
DEPARTMENT Character
EXT1 Character
EXT2 Character
EXT3 Character
EXT4 Character
FAX Character
KEY1 Character
KEY2 Character
KEY3 Character
KEY4 Character
KEY5 Character
DATE Date
LASTDATE Character
LASTNAME Character
LASTTIME Character
LASTUSER Character
NOTES Character
OWNER Character
PHONE1 Character
PHONE2 Character
PHONE3 Character
SECR Character
SOURCE Character
STATE Character
STATUS Character
TITLE Character
ZIP Character
ACTIONON Date
CALLBACKAT Date
CALLBACKON Numeric
CLOSEDATE Date
COMMENTS Date
LASTATMPAT Date
LASTATMPON Character
LASTCONTAT Date
LASTCONTON Date
MEETDATEON Date
MEETDATEAT Character
NXTACTIONO Date
PREVRESULT Character
USERDEF 1-10
Additional Data Types
The following data types are available in the user-defined fields:
Character
Date
Float
Logical
Memo
Numeric
GoodData
This is an add-on connector and is not included in the default installation of Actian DataConnect Studio IDE. The connector is available at esd.actian.com.
Note:  This topic covers connections to GoodData 2011 and later.
GoodData is a web-based cloud computing platform that provides a complete business intelligence platform-as-a-service (BI PaaS) that brings the benefits of cloud computing to the world of business intelligence and data warehousing. Built as a complete integrated platform and offered as a service, GoodData delivers on the fundamental promise of the cloud – on-demand, self-service to deploy and use, and easily adaptable to business requirements. This connector sets field width in characters.
Connector-Specific Notes
Deletes
The GoodData API does not support Delete operations.
Upserts
The GoodData API supports Upsert only when there is a CONNECTION_POINT column in the target dataset; otherwise, the Upsert operation is treated as an Insert.
Non-SLI Data Sets
This connector supports loading data into data sets that are in SLI mode. The newly created or replaced data set is also in SLI mode after the transformation is complete.
Case Sensitivity
All metadata, including names of projects, datasets and columns, is case-insensitive.
Date Types
If you want to set a target field's type to Date, you must use the DateValue function to format the value.
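For example, a target field expression might look like the following sketch, where the source field name OrderDate is an illustrative assumption:
DateValue(Fields("OrderDate"))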
Maximum Number of Columns
The maximum number of columns for a single dataset is 64.
Upload Mode
The Append and Clear and append upload modes are supported.
Restriction of GoodData Type
Make sure that at least one field's type is Fact or Connection_Point.
Property Options
You can set the following target (T) properties.
Property
S/T
Description
Batch Response
T
Not used.
FlushFrequency
T
Not used.
Upload Mode
T
How to load data into target dataset:
Append (default) – Insert records directly into the target dataset.
Clear and append – Clear all data in the target dataset and append new records.
Create/Replace – If the specified project does not exist, it is created. If the target data set does not exist in the specified project, a new data set is created. If the data set exists, it is dropped first and a new one is created; records are then inserted into the new data set.
Data Types
These are the allowed data types for fields in the target dataset:
BigDecimal
BigInteger
Short
Double
Integer
Long
Float
String
Character
Date
Text
Additional Information
This section includes information about the following:
Custom Field Types
Parent and Child Records
Entity Support
Mapping
Custom Field Types
The GoodData connector supports the following custom field types.
Custom Field Type
GoodData Display Name
BigDecimal/Double/Float
DECIMAL
Date
DATE
Short/Integer/Long/BigInteger
BIGINT
String/Character
VARCHAR
Parent and Child Records
Multi-level records are not allowed in GoodData connectors.
Entity Support
For record entities, only Insert and Upsert are allowed. For a dataset in GoodData, whether the operation is an Insert or an Upsert is determined by the CONNECTION_POINT. Records with the same CONNECTION_POINT value replace the existing records in the dataset when the Upload Mode is Append.
Mapping
The table below shows the mapping between GoodData types and DI types.
For example, if you want the GoodData type to be Connection_Point, enter Connection_Point in the Description and select Text, Character, or String as the Type in Map Designer.
Description
DI Type
GoodData Type
Reference
Text, Character, String
Reference
Date
Date
Date
Connection_Point
Text, Character, String
Connection_Point
Attribute
Text, Character, String
Attribute
Fact
Short,Integer,Long,Double,BigInteger,BigDecimal,Float
Fact
null
Text, Character, String
Attribute
 
Short,Integer,Long,Double,BigInteger,BigDecimal,Float
Fact
 
Date
Date
Table Mapping
The "null" means you don't fill up the Description. You will use it for map Fields in the Map Designer.
In the field value for the target reference field, the user must enter a value like: "(schema:reference:value)".if the user not enter a value like that, it will cause error.
For example, the value of the Reference field like: "store:storeid:" & Records("R1").Fields("Store")
"store" is the schema name, in other word, it is the dataset name which is relate to the current dataset.
"storeid" is the name of the connection point field.
Operation
This section includes the following information:
Uploading a file with a new model
Updating a file and model by changing existing field types
Updating a file and model by adding/removing a new field
Adding a new Dataset
Uploading a file with a new model
To create a project and dataset in GoodData
1. Select the connection ASCII(Delimited).sc.xml.
2. In the Source File/URI field, choose the source file with a .csv extension.
3. Set the Header property to True.
4. Enter the following connection information:
Server: https://secure.gooddata.com
Username: your own username
Password: your own password that corresponds to the username
5. To create a new project in GoodData, enter the value as null in the table.
6. Choose the Create/Replace upload mode.
7. Drag all the fields to the Target Record position.
8. Choose the Field Type that you want to use. Use the table mapping that is mentioned above.
9. In this example, since the source file is file1.csv and the Count field's data type is Integer, select the Integer type for the Count field.
10. Select AfterEveryRecord from the left.
11. Add the ClearMapInsert action.
12. To create a table name, ensure that you use the format (project name).(dataset name). For example, the table name for a project named Pervasive and a dataset named file1 is Pervasive.file1.
13. Click the Run Map button.
Updating a file and model by changing existing field types
To update the existing field types for a project and dataset
1. Select the connection ASCII(Delimited).sc.xml.
2. In the Source File/URI field, choose the source file with a .csv extension.
3. Set the Header property to True.
4. Update the following connection information:
Server: https://secure.gooddata.com
Username: your own username
Password: your own password that corresponds to the username
5. From the Table, select the dataset that you want to change.
6. Choose the Create/Replace upload mode.
7. Drag all the fields to the Target Record position.
8. Choose the Field Type that you want to use. Use the table mapping that is mentioned above.
9. Add the ClearMapInsert action and select the table name.
10. Click the Run button.
Updating a file and model by adding/removing a new field
To add or remove a field from an existing project or dataset
1. Select the connection ASCII(Delimited).sc.xml.
2. In the Source File/URI field, choose the source file with a .csv extension. If you want to remove a field, ensure that the source file does not contain the field content.
3. Set the Header property to True.
4. Update the following connection information:
Server: https://secure.gooddata.com
Username: your own username
Password: your own password that corresponds to the username
5. From the Table, select the dataset that you want to change.
6. Choose the Create/Replace upload mode.
7. Drag all the fields to the Target Record position.
8. Choose the Field Type that you want to use. Use the table mapping that is mentioned above.
9. Add the ClearMapInsert action and select the table name.
10. Click the Run button.
Adding a new Dataset
To add a dataset
1. Select the connection ASCII(Delimited).sc.xml.
2. In the Source File/URI field, choose the source file with a .csv extension.
3. Set the Header property to True.
4. Update the following connection information:
Server: https://secure.gooddata.com
Username: your own username
Password: your own password that corresponds to the username
5. To create a new dataset in GoodData, enter the value as null in the table.
6. Choose the Create/Replace upload mode.
7. Drag all the fields to the Target Record position.
8. Choose the Field Type that you want to use. Use the table mapping that is mentioned above.
9. To create a table name, ensure that you use the format (project name).(dataset name).
10. Click the Run button.