Property | S/T | Description |
AutoCommit | T | Allows you to automatically commit changes as they are made by each SQL statement, instead of waiting until the end of the transaction. If AutoCommit is set to true, you cannot roll back changes once they have been made. Default is false. |
BulkOperations | T | Use bulk operations for faster insert. This property determines if an insert statement is executed for each record or a bulk add is executed for each record. The default is false, the slower setting. If you want to maximize speed and instruct the integration platform to use a bulk add, change this setting to true. |
CommitFrequency | T | Controls how often data is committed to the database. Default is zero, meaning that data is committed at the end of the transformation, allowing rollback on error. This is the slowest setting. When doing large transformations, this is not practical as it may produce too many transaction log entries. Setting the CommitFrequency to some nonzero value tells the connector to do a database commit after inserting or updating the specified number of records. This keeps the transaction log from getting too large but limits the restartability of the transformation. |
ConstraintDDL | T | Allows you to specify additional SQL data definition language statements to be executed after the target table is created. This is similar to the support provided for SQL pass-through in the SQL import connectors. Each line must be a valid ODBC DDL statement. For example, you could have the statements: CREATE UNIQUE INDEX index1 ON mytable (Field1 ASC) CREATE INDEX index2 ON mytable (Field2, Field3) These statements would create two indexes on the table mytable. The first does not allow duplicates, and the index values are stored in ascending order. The second is a compound index on the fields Field2 and Field3. The ConstraintDDL is executed only if the replace mode is used for the target. Any errors are written to the error and event log file, and an error during transformation brings up the transformation error dialog box. If you want to ignore the DDL errors, you may continue the transformation. ConstraintDDL also supports an escaping mechanism that allows you to specify DDL in the native SQL of the DBMS. Any statement preceded by an at sign (@) is sent straight to the DBMS. For example, the statement @CREATE INDEX pk_mytable ON mytable (Field1, Field2) WITH PRIMARY is a native DDL statement that creates a primary key for the table mytable. Some ODBC drivers do not support the SQL extensions needed to create a primary key with the ODBC variant of the SQL CREATE statement, so to create primary keys in these cases, use native SQL. No default exists for this property. |
CursorType | S | Type of cursor to use for retrieving records from the source table. The choices available are Forward Only, Static and Dynamic. The default setting is Forward Only. |
Encoding | ST | Type of encoding to use with source and target files. Default is OEM. |
IdentifierQuotes | ST | All databases have what are called quoted identifiers. You use quoted identifiers to make the SQL statement parseable and to distinguish between columns and character data in SQL statements. For example, Oracle uses double quotes for column and table names in SQL statements and uses single quotes for character data. In a SQL statement, you should enclose identifiers that contain special characters or that match keywords in identifier quote characters (also known as delimited identifiers in SQL-92). For example, the Accounts Receivable identifier is quoted in the following SELECT statement: SELECT * FROM "Accounts Receivable" If you do not use identifier quotes, the parser assumes there are two tables, Accounts and Receivable, and returns a syntax error because they are not separated by a comma. IdentifierQuotes has three options: Default, None, and ". |
MaxDataLength | ST | Specifies the maximum number of characters to write to a field. The maximum data length for long data types. Default is 1 MB. Reset this number based upon your available memory and target requirements. When this connector requests the column field size for these data types, it checks for a returned value greater than the MaxDataLength value. If the value is greater, the MaxDataLength value is used. Some ODBC drivers have limits on the maximum data length. If you choose an ODBC source or target connector and the default setting is not 1 MB, the integration platform sets the value for that particular ODBC driver. Under those conditions, do not set the MaxDataLength property to a higher value. |
PrimaryKey | T | The PrimaryKey property allows you to specify a list of field names that are used to make the primary key. Field names are delimited by commas. If the PrimaryKey property contains one or more field names, these names are included in the SQL CREATE statement when the connector is in replace mode. No default exists for this property. There is one additional requirement for using the PrimaryKey property. The ODBC driver must support integrity enhancement facility (IEF). Only the more advanced ODBC drivers support this. |
SystemTables | ST | If set to true, this property allows you to see all tables created by the DBA in the database. The system table names appear in the table list. Default is false. Note that this property is applicable only if the user is logged on to the database as the database administrator. Only the DBA has access to system tables. |
TransactionIsolation | ST | Allows you to specify any one of four isolation levels when reading from or writing to a database table with ODBC. The default is serializable. The ANSI SQL 2 standard defines three specific ways in which serializability of a transaction may be violated: P1 (Dirty Read), P2 (Nonrepeatable Read), and P3 (Phantoms). The four isolation levels are as follows: • Read uncommitted– Permits P1, P2, and P3. • Read committed – Permits P2 and P3. Does not permit P1. • Repeatable read – Permits P3. Does not permit P1 and P2. • Serializable – Does not permit P1, P2 or P3. For further details about TransactionIsolation levels, refer to IBM DB2 Universal Database ODBC documentation. |
UseCursors | T | The UseCursors property allows you to turn cursor support on or off. The default is false. If set to true and the specified ODBC driver does not support cursor inserts, the integration platform falls back on the SQL INSERT mode of adding records. For exports, cursor support is meant to enhance the performance of inserting records. This appears to be the case for desktop databases; for database servers, there is no noticeable change in insert speed. Another complication of cursor inserts is that some drivers require the target table to be indexed; otherwise, positioned updates (cursors) are not allowed. Two additional properties in the ODBC export connector address this issue: PrimaryKey and ConstraintDDL. |
Views | ST | If set to true (default), allows you to see views. View names appear in the table list along with table names. |
WhereStmt | S | Provides a pass-through mechanism for SQL connectors where advanced users can construct the Where clause of the SQL query. It is also used as an alternative to writing lengthy query statements in the Query Statement box. Consider using this statement to instruct the SQL database server to filter data based upon a condition before it is sent to the integration platform. Omit the Where when you enter the clause. This property has no default. When the source connection is a Select statement, do not apply the WhereStmt. Instead, include the Where clause in your Select statement. This property enables data filtering when you select a table. |
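The interaction between AutoCommit and CommitFrequency described above can be sketched in plain Python. This is an illustrative stand-in using the sqlite3 module rather than the integration platform's ODBC layer; the table and function names are hypothetical.

```python
import sqlite3

def insert_with_commit_frequency(conn, rows, commit_frequency=0):
    """Insert rows, committing every `commit_frequency` records.

    commit_frequency=0 mimics the connector default: a single commit at
    the end of the transformation (slowest, but fully rollback-able).
    A nonzero value commits periodically, keeping the transaction log
    small at the cost of restartability.
    """
    cur = conn.cursor()
    for i, row in enumerate(rows, start=1):
        cur.execute("INSERT INTO mytable (field1) VALUES (?)", (row,))
        if commit_frequency and i % commit_frequency == 0:
            conn.commit()  # periodic commit after N records
    conn.commit()  # final commit for any remainder

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE mytable (field1 TEXT)")
insert_with_commit_frequency(conn, ["a", "b", "c", "d", "e"], commit_frequency=2)
print(conn.execute("SELECT COUNT(*) FROM mytable").fetchone()[0])  # prints 5
```

AutoCommit=true would correspond to committing inside the loop after every statement, at which point no rollback is possible.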
Property | S/T | Description |
AutoCommit | T | Allows you to automatically commit changes as they are made by each SQL statement, instead of waiting until the end of the transaction. If AutoCommit is set to true, you cannot roll back changes once they have been made. Default is false. |
Encoding | T | Type of encoding to use with source and target files. Default is OEM. |
IdentifierQuotes | T | All databases have what are called quoted identifiers. You use quoted identifiers to make the SQL statement parseable and to distinguish between columns and character data in SQL statements. In a SQL statement, you should enclose identifiers that contain special characters or that match keywords in identifier quote characters (also known as delimited identifiers in SQL-92). For example, the Accounts Receivable identifier is quoted in the following SELECT statement: SELECT * FROM "Accounts Receivable" If you do not use identifier quotes, the parser assumes there are two tables, Accounts and Receivable, and returns a syntax error because they are not separated by a comma. IdentifierQuotes has three options: Default, None, and ". |
MaxDataLength | T | Specifies the maximum number of characters to write to a field. The maximum data length for long data types. Default is 1 MB. Reset this number based upon your available memory and target requirements. When this connector requests the column field size for these data types, it checks for a returned value greater than the MaxDataLength value. If the value is greater, the MaxDataLength value is used. Some ODBC drivers have maximum data length limitations. If you choose an ODBC source or target connector and the default setting is not 1 MB, the integration platform sets the value for that particular ODBC driver. Under those conditions, do not set the MaxDataLength property to a higher value. |
SQL Output | T | Allows you to select bound or unbound mode and specify whether you want to write SQL statements to a SQL log or not. Keep in mind that bound mode is faster, since bind variables are used. There are four output modes: • Target Only - Uses bound mode, which uses bind variables. SQL statements are sent to the Target and not to the SQL log specified in the SQL Log property. Default. • Target Only (Unbound mode) - Uses unbound mode, which does not use bind variables and sends the literal SQL statement to the database engine. SQL statements are sent to the Target and not to the SQL log specified in the SQL Log property. • Target and SQL Log - Sends SQL statements to the Target and to the SQL log specified in the SQL Log property. • SQL Log Only - Sends SQL statements only to the SQL log file specified in the SQL Log property. |
SQL Log | T | The default is sql.log in the default installation directory. To use a different log, browse to the file or enter the path and file name. |
SystemTables | T | If set to true, this property allows you to see all tables created by the DBA in the database. The system table names appear in the table list. Default is false. This property is applicable only if the user is logged on to the database as the database administrator. Only the DBA has access to system tables. |
TransactionIsolation | T | Allows you to specify any one of four isolation levels when reading from or writing to a database table with ODBC. The default is serializable. The ANSI SQL 2 standard defines three specific ways in which serializability of a transaction may be violated: P1 (Dirty Read), P2 (Nonrepeatable Read), and P3 (Phantoms). The four isolation levels are as follows: • Read uncommitted – Permits P1, P2, and P3. • Read committed – Permits P2 and P3. Does not permit P1. • Repeatable read – Permits P3. Does not permit P1 and P2. • Serializable – Does not permit P1, P2 or P3. For further details about TransactionIsolation levels, refer to IBM DB2 Universal Database ODBC documentation. |
UpdateNullfields | T | Null values are sent to the database when inserting or updating records. The default is true. If you select false, null values are not sent to the database when you insert or update records. When set to false, this property forces the connector to operate in unbound mode, which may cause slower performance. |
Views | T | If set to true (default), allows you to see views. View names appear in the table list along with table names. |
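The difference between bound and unbound SQL output described above can be sketched in Python, again with sqlite3 standing in for the ODBC target. The function and table names are hypothetical; bound mode passes bind variables with a parameterized statement, while unbound mode builds the literal statement text, which is also what a SQL log would record.

```python
import sqlite3

def insert_record(conn, name, value, unbound=False, sql_log=None):
    """Insert one record in bound or unbound mode.

    Bound mode (the default) uses bind variables, which is faster
    because the statement text never changes. Unbound mode splices the
    values into the literal SQL, which is what the SQL Log captures.
    """
    if unbound:
        # Literal SQL: values embedded in the statement text.
        stmt = "INSERT INTO t (name, value) VALUES ('%s', %d)" % (name, value)
        if sql_log is not None:
            sql_log.append(stmt)  # Target and SQL Log / SQL Log Only modes
        conn.execute(stmt)
    else:
        # Bound SQL: placeholders plus bind variables.
        conn.execute("INSERT INTO t (name, value) VALUES (?, ?)", (name, value))

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (name TEXT, value INTEGER)")
log = []
insert_record(conn, "bound", 1)
insert_record(conn, "unbound", 2, unbound=True, sql_log=log)
print(log[0])  # the literal statement that would appear in sql.log
```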
Property | S/T | Description |
CodePage | ST | This translation table determines which encoding to use for reading and writing data. The default is ANSI, the standard in the US. |
Allow duplicate keys | T | Allows duplicate records in an indexed type target file. If true, duplicate records are allowed; if set to false, a "duplicate record found" error is returned for all duplicate records. Although this property is visible for relative and sequential file types, it is NOT supported (the default value is true for these types). |
FileType | ST | The setting for this property is not optional. The default is indexed. From the list, choose one of the following file types: • indexed (default) – Contains records that have one or more key fields. Records in an indexed file are ordered by ascending values in these key fields. Each key of an indexed file represents an ordered sequence by which the records can be accessed. One of the key fields, known as the primary key, must contain a unique value for each record and is used to identify records uniquely. • relative – Contains records that are identified by their record number, where the first record in the file is record number one. Relative files are ordered by ascending record numbers. • sequential – Ordered by the historical order in which records are written to the file. For an important note on this file type, see Connector-Specific Notes. If you leave the file type set to the default and then select a relative or sequential file, an error displays stating that the data source cannot be opened. Select the FileType setting first and then select your source or target file. To append indexed data to an existing target file, that file must also be indexed. |
Property | S/T | Description |
NumericFormatNormalization | S | When set to true, thousands separators are handled according to locale conventions when numeric strings are converted to a numeric type. This property overrides any individual field settings. Default is false. |
InsertEOFRecSep | S | This option inserts a record separator on the last record of the file, if it is missing. The default is false. If set to true, this property captures the last record (with no record separator) instead of discarding it. • If the property options define a specific separator (such as CR-LF, LF), the specified separator must exist at the end of all records, including the last record in the file. Any trailing data without that separator is ignored. To avoid losing your last line of data if it does not contain the appropriate record separator, we suggest either manually editing (for one or two affected files) or creating a program (for several affected files) that adds a record separator at the end of the last record. • If a terminating record separator already exists, a blank line is read at the end of the file. Depending on your target or export type, you may need to use a script to filter out blank lines and avoid errors in exported data. |
Sample Size | S | Allows you to set the number of records (starting with record 1) that are analyzed to set a default width for each source field. The default value is 1000. You can change the value to any number between 1 and the total number of records in your source file. As the number gets larger, more time is required to analyze the file, but it may be necessary to analyze every record to ensure no data is truncated. To change the value, click Sample Size, highlight the default value, and type a new value. |
StartOffset | S | If your source data file starts with characters that need to be excluded from the transformation, set the StartOffset option to specify at which byte of the file to begin. The default value is zero. The correct value may be determined by using the Hex Browser. This property is set in number of bytes, not characters. |
StripLeadingBlanks | S | By default, leading blanks are left in fixed ASCII source data. To delete leading blanks, set StripLeadingBlanks to true. |
StripTrailingBlanks | S | By default trailing blanks are left in fixed ASCII source data. To delete trailing blanks, set StripTrailingBlanks to true. |
FieldSeparator | T | Allows you to choose a field separator character for your target file. The default is None. The other choices are comma (,), tab, space, carriage return-line feed (CR-LF), line feed (LF), carriage return (CR), line feed-carriage return (LF-CR), control-R, and pipe (|). If the field separator is not one of the choices from the list and is a printable character, highlight None and then type the correct character. For example, if the separator is an asterisk (*), type an asterisk from the keyboard. If the field separator is not a printable character, replace None with a backslash, an X, and the hexadecimal value for the separator. For example, if the separator is a check mark, then enter \XFB. For a list of the 256 standard and extended ASCII characters, search for "hex values" in the documentation. |
Fill Fields | T | Writes an ASCII data file where every field is variable length. If this property is set to false, all trailing spaces are removed from each field when the data is written. The default is true. The true setting pads all fields with spaces to the end of the field length to maintain the fixed length of the records. |
Ragged Right | T | Writes an ASCII data file where the last field in each record is variable length when set to true. The default is false. The false setting pads the last field with spaces to the end of the record length to maintain the fixed length of the records. You must set Fill Fields to false for Ragged Right to work properly; Ragged Right has no effect if Fill Fields is set to true. If you set Fill Fields to false, then Ragged Right determines whether blank fields, and fields containing only spaces, still appear at the end of the record. |
DatatypeSet | S/T | Allows you to choose between standard and COBOL data types in your fixed ASCII data file. Standard is the default and means that all the data in the file is readable (lower) ASCII data. If your fixed ASCII file contains (or needs, for target file) COBOL display type fields and you are using a COBOL 01 copybook (fd) to define the fields, you MUST change this property option to "COBOL" before connecting to the COBOL copybook in the External Structured Schema window. |
RecordSeparator | S/T | A fixed ASCII file is presumed to have a carriage return-line feed (CR-LF) between records. To use other characters as the record separator or no record separator, click RecordSeparator for a list of choices, including system default, carriage return-line feed (default), line feed, carriage return, line feed-carriage return, form feed, empty line, ctrl-E, and no record separator. To use a separator other than one from the list, enter it here. The SystemDefault setting enables the same transformation to run with CR-LF on Windows systems and LF on Linux systems without having to change this property. If the record separator is not one of the choices from the list and is a printable character, highlight the CR-LF and then type the correct character. For example, if the separator is a pipe (|), type a pipe from the keyboard. If the record separator is not a printable character, replace CR-LF with a backslash, an X, and the hexadecimal value for the separator. For example, if the separator is a check mark, then enter \XFB. For a list of the 256 standard and extended ASCII characters, search for "hex values" in the documentation. |
Tab Size | S/T | If your fixed ASCII source file has embedded tab characters representing white space, you can expand those tabs to set a number of spaces. The default value is zero. To change it, highlight the zero and then type a new value. |
CodePage | S/T | This translation table determines which encoding to use for reading and writing data. The default is ANSI, the standard in the US. |
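Several of the separator properties above accept a backslash-X-hex escape (for example, \XFB) for non-printable characters, alongside named choices and literal printable characters. A small Python helper — hypothetical, not part of the product — shows how such a setting can be interpreted:

```python
# Named separator choices mapped to the bytes they denote (a subset,
# for illustration).
SEPARATOR_NAMES = {
    "None": b"",
    "CR-LF": b"\r\n",
    "LF": b"\n",
    "CR": b"\r",
    "LF-CR": b"\n\r",
    "tab": b"\t",
}

def parse_separator(setting):
    """Translate a separator property value into the byte(s) it denotes.

    Accepts a named choice, a literal printable character, or the
    backslash-X-hex form described above (e.g. \\XFB for byte 0xFB).
    """
    if setting in SEPARATOR_NAMES:
        return SEPARATOR_NAMES[setting]
    if setting.upper().startswith("\\X") and len(setting) == 4:
        return bytes([int(setting[2:], 16)])  # \XFB -> b'\xfb'
    return setting.encode("latin-1")          # literal character such as "*"

print(parse_separator(r"\XFB"))  # prints b'\xfb'
```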
Property | S/T | Description |
AlternateFieldSeparator | S | Most data files have only one field separator between all the fields; however, it is possible to have more than one field separator. If your source file has one field separator between some of the fields and a different separator between other fields, you can specify the second field separator here. Otherwise, you should leave the setting at None (the default). The alternate field separators available from the list are none (default), comma, tab, space, carriage return-line feed, line feed, carriage return, line feed-carriage return, ctrl-R, and pipe (|). To select a separator from the list, click AlternateFieldSeparator. If you have an alternate field separator other than one from the list, you can type it here. If the alternate field separator is not one of the listed choices and is a printable character, highlight None and then type the correct character. For example, if the separator is an asterisk, type an asterisk from the keyboard. If the field separator is not a printable character, replace None with a backslash, an X, and the hexadecimal value for the separator. For example, if the separator is a check mark, then enter \XFB. For a list of the 256 standard and extended ASCII characters, search for "hex values" in the documentation. |
AutomaticStyling | S | By default, AutomaticStyling is set to false, causing all data to be read or written as Text. When set to true, it reads and formats particular data types, such as numeric and date fields, automatically. During the transformation process, autostyling ensures that a date field in the source file is formatted as a date field in the target file, as opposed to character or text data. If your source file contains zip code data, you may want to leave AutomaticStyling as false, so leading zeros in some zip codes in the eastern United States are not deleted. |
FieldEndDelimiter | S | All Apache Common Logfile Format files are presumed to have beginning-of-field and end-of-field delimiters. The default delimiter is a quote (") because it is the most common. However, some files do not contain field delimiters, so this option is available for both source files and target files. To read from or write to a file with no delimiters, set FieldEndDelimiter to None. |
FieldSeparator | S | An Apache Common Logfile Format file is presumed to have a space between each field. To specify some other field separator, click FieldSeparator to display the list of options. The options are comma (default), tab, space, carriage return-line feed, line feed, carriage return, line feed-carriage return, ctrl-R, a pipe (|), and no field separator. If you need a field separator other than one from the list, you can type it here. If the field separator is not one of the choices from the list and is a printable character, highlight the current value and then type the correct character. For example, if the separator is an asterisk (*), type an asterisk from the keyboard. If the field separator is not a printable character, replace the current value with a backslash, an X, and the hexadecimal value for the separator. For example, if the separator is a check mark, then enter \XFB. For a list of the 256 standard and extended ASCII characters, search for "hex values" in the documentation. |
FieldStartDelimiter | S | All Apache Common Logfile Format files are presumed to have beginning-of-field and end-of-field delimiters. The default delimiter is a quote (") because it is the most common. However, some files do not contain field delimiters, so this option is available for both your source files and your target files. To read from or write to a file with no delimiters, set FieldStartDelimiter to None. |
Header | S | In some files, the first record is a header record. For source data, you can remove it from the input data and cause the header titles to be used automatically as field names. For target data, you can cause the field names in your source data to automatically create a header record in your target file. To identify a header record, set Header to true. Default is false. |
RecordFieldCount | S | If your Apache Common Logfile Format data file has field separators, but no record separator, or if it has the same separator for both the fields and the records, you should specify the RecordSeparator (most likely a blank line), leave the AlternateFieldSeparator option blank and enter the exact number of fields per record in this box. The default value is zero. |
RecordSeparator | S | An Apache Common Logfile Format file is presumed to have a carriage return-line feed (CR-LF) between records. To use other characters for a record separator, click the RecordSeparator cell, click the arrow and select a record separator. The list box choices are carriage return-line feed (default), line feed, carriage return, line feed-carriage return, form feed, empty line, ctrl-E, and no record separator. If the record separator is not one of the choices from the list and is a printable character, highlight the CR-LF and then type the correct character. For example, if the separator is a pipe (|), type a pipe from the keyboard. If the record separator is not a printable character, replace CR-LF with a backslash, an X, and the hexadecimal value for the separator. For example, if the separator is a check mark, then enter \XFB. For a list of the 256 standard and extended ASCII characters, search for "hex values" in the documentation. |
StartOffset | S | If your source data file starts with characters that need to be excluded from the transformation, set the StartOffset option to specify at which byte of the file to begin. The default value is zero. The correct value may be determined by using the Hex Browser. Note: This property is set in number of bytes, not characters. |
StripLeadingBlanks | S | Strips the leading blanks in all data fields if set to true. Default is false. |
StripTrailingBlanks | S | Strips trailing blanks in all data fields if set to true. Default is false. |
StyleSampleSize | S | Allows you to set the number of records (starting with record 1) that are analyzed to set a default width for each source field. The default value for this option is 5000. You can change the value to any number between 1 and the total number of records in your source file. As the number gets larger, more time is required to analyze the file, and it may be necessary to analyze every record to ensure no data is truncated. To change the value, click StyleSampleSize, highlight the default value, and type a new one. |
CodePage | S | This translation table determines which encoding to use for reading and writing data. The default is ANSI, the standard in the US. |
Field1IsRecTypeId | S | If the first field of each record in your source file contains the Record Type ID, you can select true for this property and Map Designer treats each record as a separate record type. Within each record, field names derived from the Record Type ID are automatically generated for each field. For example, if your first record consisted of the following: "Names", "Arnold", "Benton", "Cassidy", "Denton", "Exley", "Fenton" the field names are assigned as follows: Names_01: Names; Names_02: Arnold; Names_03: Benton; Names_04: Cassidy; Names_05: Denton; Names_06: Exley; Names_07: Fenton |
NullIndicator | S | This property allows you to enter a special string used to represent null values. You can select predefined values or type any other string. • Target – When writing a null value, the contents of the null indicator string are always written. • Source – A check is made to see if the null indicator is set. If it is set, the data is compared to the null indicator. If the data and the null indicator match, the field is set to null. |
EmptyFieldsNull | S | This property allows you to treat all empty fields as null. |
NumericFormatNormalization | S | When set to true, thousands separators are handled according to locale conventions when numeric strings are converted to a numeric type. This property overrides any individual field settings. Supported in 9.2.2 and later. Default is false. |
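The NullIndicator and EmptyFieldsNull behavior described above can be sketched as a small Python function. The names are hypothetical; the product applies this logic inside the connector when reading each source field.

```python
def read_field(raw, null_indicator=None, empty_fields_null=False):
    """Apply NullIndicator / EmptyFieldsNull semantics to one source field.

    If a null indicator is set and the field data matches it exactly,
    the field becomes null (None here). EmptyFieldsNull additionally
    maps empty fields to null.
    """
    if null_indicator is not None and raw == null_indicator:
        return None
    if empty_fields_null and raw == "":
        return None
    return raw

print(read_field("NULL", null_indicator="NULL"))    # None
print(read_field("", empty_fields_null=True))       # None
print(read_field("Arnold", null_indicator="NULL"))  # Arnold
```

On the target side, the converse applies: when a null value is written, the contents of the null indicator string are emitted in its place.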
Action | Description |
GetMessage | Reads a file that matches the given pattern into the supplied DJMessage. |
Disconnect | Explicitly disconnects from the archive file, releasing all file handles. |
Property | Description |
Archive Type | The archive/compression type. The supported types are ZIP/JAR, TAR, TAR.GZ, TAR.BZ2, and TAR.XZ. |
Source | Indicates whether the source archive is a file on disk or contained within a DJMessage. Note: When the source is contained in a DJMessage, you must first Base64 encode the message body. See B64Encode Function. |
Source Message/File | The path to the source file or the name of the source message. |
GetMessage Pattern | The pattern to use to detect files to be read from the archive file. An asterisk (*) represents a wildcard. This pattern will be matched only against the name of the file, excluding extensions. The queue will traverse subdirectories within the archive implicitly. Examples: File* or *12 To retrieve a specific file, type the full name of the file, excluding extension. The pattern is displayed in clear text. If you prefer to conceal the value, have your Administrator configure it as an encrypted macro. |
GetMessage Extension | The pattern to use to detect files to be read from the archive file. An asterisk (*) represents a wildcard. This pattern will be matched against the file extension. The default is csv. |
Binary | Indicates whether the files that are read from the archive are binary rather than text files. If set to True, a Base64-encoded message is returned. |
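A rough Python sketch of the archive-source behavior described above, using the standard zipfile, base64, and fnmatch modules. The get_message helper is hypothetical; it mimics Base64-encoding a DJMessage body and matching the GetMessage Pattern against file names with their extensions excluded, traversing subdirectories within the archive.

```python
import base64
import fnmatch
import io
import zipfile

# Build a small ZIP archive in memory, standing in for a DJMessage body.
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as zf:
    zf.writestr("data/File01.csv", "a,b\n1,2\n")
    zf.writestr("data/Other.txt", "ignore me\n")

# As described above, a DJMessage source must be Base64 encoded first.
encoded = base64.b64encode(buf.getvalue())

def get_message(archive_bytes, name_pattern="*", extension="csv"):
    """Return the first archive entry whose base name (sans extension)
    matches name_pattern and whose extension matches, searching
    subdirectories implicitly."""
    with zipfile.ZipFile(io.BytesIO(base64.b64decode(archive_bytes))) as zf:
        for info in zf.infolist():
            base = info.filename.rpartition("/")[2]
            stem, _, ext = base.rpartition(".")
            if fnmatch.fnmatch(stem, name_pattern) and fnmatch.fnmatch(ext, extension):
                return zf.read(info).decode()
    return None  # nothing matched the pattern

print(get_message(encoded, name_pattern="File*"))  # contents of File01.csv
```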
Property | Description |
FileName | Contains the original name of the file within the archive. |
FilePath | Contains the original full path of the file within the archive. |
Property | S/T | Description |
AlternateFieldSeparator | S | Most data files have only one field separator between all the fields. However, it is possible to have more than one field separator. If your source file has one field separator between some fields and a different separator between others, you can specify the second field separator here. Otherwise, leave this set to None (the default). To select an option other than the default, click AlternateFieldSeparator. Then click the arrow to the right of the box to choose from the list of available separators. To specify a field separator that isn't on the list, type it here. |
AutomaticStyling | S | Changes the way the integration platform reads or writes ASCII data. By default, AutomaticStyling is set to false, causing all data to be read or written as Text. When you change the setting to true, the integration platform formats particular data types, such as numeric and date fields, automatically. During transformation of ASCII source files, autostyling ensures that a date field in the source file is formatted as a date field in the target file, and not as character or text data. If your source file contains zip code data, you may want to leave AutomaticStyling set to false so that leading zeros in some zip codes in the eastern US are not deleted. For an ASCII target file, if you set FieldDelimitStyle to Text, you must also set AutomaticStyling to true so that delimiters are placed around only the nonnumeric fields. |
CodePage | ST | Translation table that specifies which encoding to use for reading and writing data. The default is ANSI, the standard in the US. |
EmptyFieldsNull | S | Treats all empty fields as null. |
Field1IsRecTypeId | S | If the first field of each record in your source file contains the Record Type ID, you can select true for this property and each record is treated as a separate record type. Within each record, field names derived from the Record Type ID are automatically generated for each field. For details, see Field1IsRecTypeId. |
FieldDelimitStyle | T | For delimited ASCII target connectors, this option determines whether the specified FieldStartDelimiter and FieldEndDelimiter are used for all fields, only for fields containing a separator, or only for text fields: • All (default) – Places the delimiters specified in FieldStartDelimiter and FieldEndDelimiter before and after every field. For example: "Smith","12345","Houston" • Partial – Places the specified delimiters before and after fields only where necessary: a field that contains a character that is the same as the field separator has the field delimiters placed around it. A common example is a memo field that contains quotes within the data: "Customer responded with "No thank you" to my offer". • Text – Places delimiters before and after text and name fields (nonnumeric fields). Numeric and date fields have no FieldStartDelimiter or FieldEndDelimiter. For example: "Smith", 12345,"Houston", 11/13/04 • Non-numeric – Places delimiters before and after all nonnumeric types, including date fields. Non-numeric delimits date fields, while Text does not. |
FieldEndDelimiter | ST | Most delimited ASCII files have beginning-of-field and end-of-field delimiters. The default delimiter is a quotation mark. Some files do not contain delimiters, so this option is available for both source and target files. To read from or write to a file with no delimiters, set FieldEndDelimiter to None. |
FieldSeparator | ST | A delimited ASCII file is assumed to have a comma between each field. To specify another field separator, click FieldSeparator to select one from the list. To specify a field separator that is not on the list and is a printable character, highlight the CR-LF and then type the character. If the field separator is not a printable character, replace CR-LF with a backslash, an X, and the hexadecimal value for the separator. |
FieldStartDelimiter | ST | Delimited ASCII files often have beginning-of-field and end-of-field delimiters. The default delimiter is a quotation mark. To read from or write to a file with no delimiters, set FieldStartDelimiter to none. |
Header | ST | In some files, the first record is a header record. For source data, you can remove it from the input data and cause the header titles to be used automatically as field names. For target data, you can cause field names in your source data to automatically create a header record in your target file. To identify a header record, set Header to true. The default is false. Note: If your target connector is ASCII (Delimited) and you are appending data to an existing file, leave Header set to false. |
MaxDataLen | T | Specifies the maximum number of characters to write to a field. When set to zero (the default), the number of characters written to a field is determined by the length of the field. If you set this value to a number other than zero, data may be truncated. |
NullIndicator | ST | Special string used to represent null values. Select a predefined value or type any other string. • Target – When writing a null value, the contents of the null indicator string are written. • Source – A check is made to see if the null indicator is set. If it is set, the data is compared to the null indicator. If the data and the null indicator match, the field is set to null. |
NumericFormatNormalization | S | When set to true, thousands separators are handled according to the locale's conventions when numeric strings are converted to numeric types. This property overrides any individual field settings. Supported in 9.2.2 and later. Default is false. |
RecordFieldCount | S | If your source file has field separators but no record separator, or uses the same separator for both fields and records, follow these steps: 1. Set the RecordSeparator (most likely a blank line). 2. Leave the AlternateFieldSeparator option blank. 3. Enter the number of fields per record for RecordFieldCount. The default value is zero. |
RecordSeparator | ST | Most delimited ASCII files have a carriage return-line feed (CR-LF) between records. To use a different character, click the RecordSeparator cell, then click the arrow and select one from the list. The SystemDefault setting enables the same transformation to run with CR-LF on Windows and with LF on Linux systems without having to change this property. To use a record separator that is not listed and is a printable character, highlight CR-LF and enter the character. For example, to use a pipe (|) character, enter a pipe from the keyboard. If the record separator is not a printable character, replace CR-LF with a backslash, an X, and the hexadecimal value for the separator. |
StartOffset | S | If your source data file starts with characters that need to be excluded from the transformation, set the StartOffset option to specify at which byte of the file to begin. The default value is zero. The correct value may be determined by using the Hex Browser. Note: This property is set in number of bytes, not characters. |
StripLeadingBlanks | ST | Leading blanks occur in ASCII source files by default. To delete them, set StripLeadingBlanks to true. Leading blanks are removed from ASCII target files by default. To retain them, set StripLeadingBlanks to false. |
StripTrailingBlanks | ST | Trailing blanks occur in ASCII source files by default. To delete them, set StripTrailingBlanks to true. Trailing blanks are removed from ASCII target files by default. To retain them, set StripTrailingBlanks to false. |
StyleSampleSize | S | Allows you to set the number of records (starting with record 1) that are analyzed to set a default width for each source field. The default value for this option is 5000. You can change the value to any number between one and the total number of records in your source file. As the number gets larger, more time is required to analyze the file, and it may be necessary to analyze every record to ensure that no data is truncated. To change the value, click StyleSampleSize, highlight the default value, and type a new one. |
TransliterateIn | T | Specifies a character or set of characters to be filtered out of the source data. For any character in TransliterateIn, the corresponding character from TransliterateOut is substituted. If there is no corresponding character, the source character is filtered out completely. TransliterateIn supports C-style escape sequences such as \n (new line), \r (carriage return), and \t (tab). |
TransliterateOut | T | Specifies a character to be substituted for another character from the source data. For any character in TransliterateIn, the corresponding character from TransliterateOut is substituted. If you wish the source character to be filtered out completely, leave this field blank. If there are no characters to be transliterated, leave this field blank. TransliterateOut supports C-style escape sequences such as \n (new line), \r (carriage return), and \t (tab). |
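The FieldDelimitStyle options above can be sketched in Python. This is a hypothetical illustration, not the integration platform's implementation; the function name and defaults are assumptions, with the documented quote delimiter and comma separator:

```python
# Illustrates the All / Partial / Text choices for FieldDelimitStyle.
def write_record(values, style="All", delim='"', sep=","):
    out = []
    for v in values:
        s = str(v)
        is_text = not isinstance(v, (int, float))
        if style == "All":          # delimit every field
            quoted = True
        elif style == "Partial":    # delimit only fields containing the separator or delimiter
            quoted = sep in s or delim in s
        elif style == "Text":       # delimit only non-numeric fields
            quoted = is_text
        else:
            raise ValueError(f"unknown style: {style}")
        out.append(f"{delim}{s}{delim}" if quoted else s)
    return sep.join(out)
```

With the sample values from the table, All produces `"Smith","12345","Houston"` while Text leaves the numeric field undelimited.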
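The TransliterateIn/TransliterateOut pairing described above maps each input character to the character at the same position in the output set, deleting characters with no counterpart. A minimal sketch (not the platform's code; the function name is an assumption):

```python
# Characters in tr_in are replaced by the character at the same index in
# tr_out; characters past the end of tr_out are filtered out entirely.
def transliterate(text, tr_in, tr_out):
    table = {ord(c): (tr_out[i] if i < len(tr_out) else None)
             for i, c in enumerate(tr_in)}
    return text.translate(table)
```

For example, mapping tab to space while deleting newlines turns `"a\tb\nc"` into `"a bc"`.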
Property | S/T | Description |
CodePage | ST | Translation table that specifies which encoding to use for reading and writing data. The default is ANSI, the standard in the US. |
DatatypeSet | ST | Allows you to choose between standard and COBOL data types in fixed ASCII data files. Standard (the default) specifies that all the data in the file is readable (lower) ASCII data. If your fixed ASCII file contains (or needs, for target file) COBOL display type fields and you are using a COBOL 01 copybook (fd) to define the fields, you MUST change this property option to "COBOL" before connecting to the COBOL copybook in the External Structured Schema window. |
FieldSeparator | T | Allows you to choose a field separator character for your target file. The default is None. To select a field separator that is unlisted and is a printable character, highlight None and then type the correct character. For example, to select an asterisk ( * ), enter an asterisk from the keyboard. If the field separator is not a printable character, replace None with a backslash, an X, and the hexadecimal value for the separator. For example, if the separator is a check mark, then enter \XFB. For a list of the 256 standard and extended ASCII characters, search for "hex values" in the documentation. |
FillFields | T | Determines whether fields are padded to their fixed length. When set to true (the default), the integration platform pads all fields with spaces to the end of the field length to maintain the fixed length of the records. When set to false, all trailing spaces are removed from each field when the data is written, producing variable-length fields. |
InsertEOFRecSep | S | Inserts a record separator on the last record of the file, if it is missing. The default is false. If set to true, this property captures the last record (with no record separator) instead of discarding it. Note: If property options define a specific separator (CR-LF, LF), the specified separator must exist at the end of all records, including the last record in the file. Any trailing data without that separator is ignored. Therefore, to avoid losing your last line of data (if it does not contain the appropriate record separator), we suggest either manually editing (for one or two affected files) or creating a program (for several affected files) that adds a record separator at the end of the last record. Caution! If a terminating record separator already exists, a blank line is read at the end of the file. Depending on your target or export type, you may need to filter out these blank lines to avoid errors in exported data. |
NumericFormatNormalization | S | When set to true, thousands separators are handled according to the locale's conventions when numeric strings are converted to numeric types. This property overrides any individual field settings. Default is false. |
RaggedRight | T | When set to true, writes an ASCII data file in which the last field in each record is variable length. The default is false, which pads the last field with spaces to the end of the record length to maintain the fixed length of the records. Note: You must set FillFields to false for the RaggedRight property to work properly; RaggedRight has no effect if FillFields is true. When FillFields is false, RaggedRight determines whether blank fields and fields containing only spaces still appear at the end of the record. |
RecordSeparator | ST | Most fixed ASCII files have a carriage return-line feed (CR-LF) between records. To use other characters as the record separator or no record separator, click the RecordSeparator cell for a list of choices. To use another separator, enter it here. The SystemDefault setting enables the same transformation to run with CR-LF on Windows systems and LF on Linux systems without having to change this property. To use a record separator that is not listed and is a printable character, highlight the CR-LF and then type the correct character. For example, if the separator is a pipe (|), enter a pipe from the keyboard. If the record separator is not a printable character, replace CR-LF with a backslash, an X, and the hexadecimal value for the separator. For example, if the separator is a check mark, then enter \XFB. For a list of the 256 standard and extended ASCII characters, search for "hex values" in the documentation. |
SampleSize | S | Sets the number of records (starting with record one) that are analyzed to set a default width for each source field. The default value is 5000. You can change the value to any number between one and the total number of records in your source file. As the number gets larger, more time is required to analyze the file, and it may be necessary to analyze every record to ensure that no data is truncated. To change the value, click SampleSize, highlight the default value, and type a new one. |
StartOffset | S | If your source data file starts with characters that need to be excluded from the transformation, set the StartOffset option to specify at which byte of the file to begin. The default value is zero. The correct value may be determined by using the Hex Browser. Note: This property is set in number of bytes, not characters. |
StripLeadingBlanks | S | Leading blanks occur in fixed ASCII files by default. To delete them, set StripLeadingBlanks to true. |
StripTrailingBlanks | S | Trailing blanks occur in fixed ASCII files by default. To delete them, set StripTrailingBlanks to true. |
Tab Size | ST | If your fixed ASCII file has embedded tab characters representing white space, you can expand those tabs to a set number of spaces. The default value is zero. To change it, highlight the zero and enter a new value. |
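The interaction between FillFields and RaggedRight above can be sketched as follows. This is an illustration of the documented behavior, not the connector's actual code; the function name and widths are assumptions:

```python
# FillFields=true pads each value with spaces to its field width; with
# RaggedRight=true only the last field is left unpadded (variable length).
def write_fixed_record(values, widths, ragged_right=False):
    cells = [str(v)[:w].ljust(w) for v, w in zip(values, widths)]
    if ragged_right:
        cells[-1] = cells[-1].rstrip()  # last field becomes variable length
    return "".join(cells)
```

For values `["Ann", "TX"]` and widths `[5, 4]`, the padded record is `"Ann  TX  "`, while the ragged-right record drops the trailing spaces of the final field.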
Property | S/T | Description |
Avro File | S/T | Full path of source or target Avro file. |
BatchSize | S/T | Maximum number of records to be processed at a time. Defaults to 1000. |
Flush Frequency | T | The number of record inserts to buffer before sending a batch to the connector. Default is zero. If you are inserting many records, change the default to a higher value to improve performance. |
Batch Response | T | Batch response entries are generated for each load performed to an Avro file. |
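The BatchSize buffering described above amounts to grouping a record stream into fixed-size batches. A generic sketch (the helper name and shape are assumptions, not the connector's API):

```python
# Groups an iterable of records into batches of at most batch_size,
# flushing the final partial batch at end of input.
def batched(records, batch_size=1000):
    batch = []
    for rec in records:
        batch.append(rec)
        if len(batch) == batch_size:
            yield batch
            batch = []
    if batch:               # flush the final partial batch
        yield batch
```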
Property | S/T | Description |
StartOffset | S | If your source data file starts with characters that need to be excluded from the transformation, set the StartOffset option to specify at which byte of the file to begin. The default value is zero. The correct value may be determined by using the Hex Browser. Note: This property is set in number of bytes, not characters. |
MaxRecordLength | S | Specifies the maximum record length of the data. The default is 32700 bytes. |
RLF MSB first | S/T | This setting adjusts the byte order of the Record Length Field, also called the Record Descriptor Word (RDW). The default is true, which means the Most Significant Byte of the record length field is first. |
ShortLastRecord | S | If set to true, short reads are ignored on the last record of the file. In other words, the last record is processed even if the End of File (EOF) is reached before reading the end of the record. The default is false. |
WordAlignRecord | S/T | When set to true, records are aligned on a word (16-bit) boundary. The default is false. |
CodePage | S/T | This translation table determines which encoding to use for reading and writing data. The default is ANSI, the standard in the US. |
RecordLengthInclusive | S/T | When true, this setting indicates that the record length indicator includes the bytes of the indicator itself. The default is false, meaning that the record length indicated does not include the bytes of the indicator itself. |
OccursPad | S | When using COBOL files, you may have fields of variable length. If so, you may specify how to fill the field with pads to a fixed length by selecting one of the following: • None (the default) – leaves the fields uneven. • End of Record – fills the remainder of the record with your specified pad character. • Within Group – fills the field with your specified pad character. |
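The Record Descriptor Word (RDW) handling described by RLF MSB first and RecordLengthInclusive above can be sketched like this. The 2-byte RDW width and function name are assumptions for illustration:

```python
# Parses records preceded by a 2-byte Record Descriptor Word (RDW).
# msb_first selects the byte order of the length field;
# length_inclusive means the length counts the RDW bytes themselves.
def read_rdw_records(data, msb_first=True, length_inclusive=False):
    records, pos = [], 0
    while pos + 2 <= len(data):
        n = int.from_bytes(data[pos:pos + 2],
                           "big" if msb_first else "little")
        if length_inclusive:    # subtract the indicator's own bytes
            n -= 2
        pos += 2
        records.append(data[pos:pos + n])
        pos += n
    return records
```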
Property | S/T | Description |
CodePage | ST | This translation table determines which encoding to use for reading and writing data. The default is ANSI, the standard in the US. |
OccursPad | S | When using COBOL files, you may have fields of variable length. If so, you may specify how to fill the field with pads to a fixed length by selecting one of the following: • None (the default) – leaves the fields uneven. • End of Record – fills the remainder of the record with your specified pad character. • Within Group – fills the field with your specified pad character. |
PageSize | ST | When data records are arranged in blocks and the last record in each block is padded to the end of the block, it is necessary to set Page Size. This causes the pad characters to be stripped from the file during the data transfer. To set page size, click Page Size, highlight the default value (zero), and type the correct page size value for your data. |
RecordSeparator | ST | When a COBOL file is your source connector and you are using a 01 copybook to define the fields, you might have a record separator at the end of each record. If so, you may specify the record separator here, which causes the map to skip it when reading the source data. The default is None. The separators are carriage return-line feed (CR-LF), line feed (LF), carriage return (CR), line feed-carriage return (LF-CR), form feed (FF), Empty Line, and None. When writing a binary file, you may want to place a record separator at the end of each record (similar to a Fixed ASCII record separator). You may select a record separator from the list, or highlight the current value and type your own. |
ShortLastRecord | S | If set to true, short reads are ignored on the last record of the file. In other words, the last record is processed even if the End of File (EOF) is reached before reading the end of the record. The default is false. |
StartOffset | S | If your source data file starts with characters that need to be excluded from the transformation, set the StartOffset option to specify at which byte of the file to begin. The default value is zero. The correct value may be determined by using the Hex Browser. Note This property is set in number of bytes, not characters. |
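The PageSize behavior above (stripping the padding that fills the last record out to each block boundary) can be sketched as follows. The space pad character is an assumption for illustration:

```python
# Strips trailing pad characters from each fixed-size block (PageSize),
# so padded block tails do not leak into the transferred data.
def strip_block_padding(data, page_size, pad=b" "):
    out = b""
    for i in range(0, len(data), page_size):
        out += data[i:i + page_size].rstrip(pad)
    return out
```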
Supported Binary (International) Encoding | Name |
CP930 | EBCDIC host mixed Katakana-Kanji |
CP933 | EBCDIC mixed Korea |
CP935 | IBM EBCDIC Simplified Chinese, Combined (836 + 837) |
CP937 | EBCDIC Traditional Chinese |
CP939 | EBCDIC mixed Latin-Kanji |
DEC | Digital VAX and Unix, DEC Kanji code |
IBM78+EBCDIC | IBM mainframe and AS/400, 1978 Kanji and non-kana support |
IBM78+EBCDIK | IBM mainframe and AS/400, 1978 Kanji and half-width kana |
IBM83+EBCDIC | IBM mainframe and AS/400, 1983 Kanji and non-kana support |
IBM83+EBCDIK | IBM mainframe and AS/400, 1983 Kanji and half-width kana |
IBM937+EBCDIC | Traditional Chinese IBM mainframe and AS/400 |
ISO-2022-CN | Chinese ISO-2022 |
ISO-2022-CN-EXT | Chinese ISO-2022-EXT |
ISO-2022-JP | Japanese ISO-2022 |
ISO-2022-JP-1 | Japanese ISO-2022-1 |
ISO-2022-JP-2 | Japanese ISO-2022-2 |
ISO-2022-KR | Korean ISO-2022 |
JEF78+EBCDIC | Fujitsu FACOM, 1978 Kanji and non-kana support |
JEF78+EBCDIK | Fujitsu FACOM, 1978 Kanji and half-width kana |
JEF83+EBCDIC | Fujitsu FACOM, 1983 Kanji and non-kana support |
JEF83+EBCDIK | Fujitsu FACOM, 1983 Kanji and half-width kana |
JIS78 | Japanese Industrial Standard 1978 |
JIS83 | Japanese Industrial Standard 1983 |
KEIS78+EBCDIC | Hitachi HITAC, 1978 Kanji and non-kana support |
KEIS78+EBCDIK | Hitachi HITAC, 1978 Kanji and half-width kana |
KEIS83+EBCDIC | Hitachi HITAC, 1983 Kanji and non-kana support |
KEIS83+EBCDIK | Hitachi HITAC, 1983 Kanji and half-width kana |
MELCOM | Mitsubishi MELCOM, MELCOM Kanji |
NEC JIPS-E | NEC ACOS, NEC JIPS-E code |
NEC JIPS-E (Int) | NEC ACOS, NEC JIPS-E internal code |
NEC JIPS-J | NEC ACOS, NEC JIPS-J |
NEC JIPS-J (Int) | NEC ACOS, NEC JIPS-J internal code |
Unisys LETS-J | Unisys UNIVAC, UNIVAC LETS-J Kanji |
Property | S/T | Description |
OccursPad | S | When using COBOL files, you may have fields of variable length. If so, you may specify how to fill the field with pads to a fixed length by selecting one of the following: • None (the default) – leaves the fields uneven. • End of Record – fills the remainder of the record with your specified pad character. • Within Group – fills the field with your specified pad character. |
StartOffset | S | If your source data file starts with characters that need to be excluded from the transformation, set the StartOffset option to specify at which byte of the file to begin. The default value is zero. The correct value may be determined by using the Hex Browser. Note: This property is set in number of bytes, not characters. |
ShortLastRecord | S | If set to true, short reads are ignored on the last record of the file. In other words, the last record is processed even if the End of File (EOF) is reached before reading the end of the record. The default is false. |
Encoding | ST | Select the type of encoding used with your Binary files. The default encoding is OEM. To change the encoding to a different selection, click the arrow and select an encoding from the list. Available options are OEM (default), Shift-JIS, Unisys LETS-J, UCS-2, UTF-8, UTF-16, and US ASCII. |
Page Size | ST | When data records are arranged in blocks and the last record in each block is padded to the end of the block, it is necessary to set Page Size. This causes the pad characters to be stripped from the file during the data transfer. To set page size, click Page Size, highlight the default value (zero), and type the correct page size value for your data. |
RecordSeparator | ST | When a COBOL file is your source connector and you are using a 01 copybook to define the fields, you might have a record separator at the end of each record. If so, you may specify the record separator as None, which causes the map to ignore the record separator when it reads the source data. The default is None. The separators are carriage return-line feed (CR-LF), line feed (LF), carriage return (CR), line feed-carriage return (LF-CR), form feed (FF), Empty Line, and none. When writing out a binary file, you may want to place a record separator at the end of each record (similar to a Fixed ASCII record separator). You may select a record separator from the list, or highlight the current value and type your own. |
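Several separator properties in these tables accept the backslash-X hexadecimal notation (for example, \XFB for a check mark). A small sketch of decoding that convention (the function name and latin-1 fallback are assumptions):

```python
# Converts a separator spec to bytes: a printable character is used
# as-is, while the \X hex notation (e.g. \XFB) is decoded to one byte.
def parse_separator(spec):
    if spec.upper().startswith("\\X"):
        return bytes([int(spec[2:], 16)])
    return spec.encode("latin-1")
```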
Property | S/T | Description |
CodePage | ST | This translation table determines which encoding to use for reading and writing data. The default is ANSI, the standard in the US. |
MaxRecordLength | S | Maximum record length of the data. The default is 32700 bytes. |
OccursPad | S | When using COBOL files, you may have fields of variable lengths. If so, you may select one of the following to specify how to fill the field with pads to a fixed length: • None (the default) – leaves the fields uneven. • End of Record – fills the remainder of the record with your specified pad character. • Within Group – fills the field with your specified pad character. |
RecordSeparator | ST | Specifies the character used to mark the end of a record. The default record separator is carriage return-line feed. To use a different separator, click the RecordSeparator cell, then click the arrow and select the desired record separator from the list. The choices are carriage return-line feed (default), line feed, carriage return, line feed-carriage return, form feed, empty line, and no record separator. |
ShortLastRecord | S | If set to true, short reads are ignored on the last record of the file. In other words, the last record is processed even if the End of File (EOF) is reached before reading the end of the record. The default is false. |
StartOffset | S | If your source data file starts with characters that need to be excluded from the transformation, set the StartOffset option to specify at which byte of the file to begin. The default value is zero. The correct value may be determined by using the Hex Browser. Note: This property is set in number of bytes, not characters. |
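The ShortLastRecord option above governs whether a truncated final record is kept or discarded. A sketch of reading fixed-length records with that switch (record layout and names are assumptions):

```python
# Reads fixed-length records from a byte buffer. When short_last_record
# is true, a final record cut short by EOF is still returned; otherwise
# the incomplete tail is dropped.
def read_fixed_records(data, record_length, short_last_record=False):
    records = []
    for i in range(0, len(data), record_length):
        chunk = data[i:i + record_length]
        if len(chunk) < record_length and not short_last_record:
            break           # incomplete last record is discarded
        records.append(chunk)
    return records
```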
Connector | v11 |
Btrieve v9.5 file | RW |
Btrieve v9 file | RW |
Btrieve v8 file | RW |
Btrieve v7 file | RW |
Btrieve v6 file | RW |
Btrieve v5 file | R |
Btrieve v4.11 file | R |
Property Name | S/T | Description |
Page Size | ST | This option sets the page size for Btrieve data. The default value is 4096. To change the page size, enter a new value in increments of 512. See the Btrieve documentation for your correct page size. |
CodePage | ST | This translation table determines which encoding to use for reading and writing data. The default is ANSI, the standard in the US. |
Property | ST | Description |
IgnoreMemoErr | S | This option determines how Clarion memo files are handled. Choose your selection from the list that appears. The following options are available: • Never – Default. This option causes Map Designer to always look for and include any memo file fields when the source data file is read. • Errors – Selecting this option causes Map Designer to look for and include any memo file fields when a memo file is present. If present, the memo fields are included with the transformed data. If the memo file is not in the same directory as the data file, the memo file is ignored. This means that the memo fields cannot be included with the transformed data. • Always – Selecting this option causes Map Designer to always ignore the memo file completely. This means that the memo fields cannot be included with the transformed data. |
CodePage | ST | The code page translation table tells Map Designer which encoding to use for reading and writing data. The default is ANSI, which is the standard in the US. The following code pages are available: • ANSI • OEM • 0037 US(EBCDIC) • 0273 Germany(EBCDIC) • 0277 Norway(EBCDIC) • 0278 Sweden(EBCDIC) • 0280 Italy(EBCDIC) • 0284 Spain (EBCDIC) • 0285 UK(EBCDIC) • 0297 France (EBCDIC) • 0437 MSDOS United States • 0500 Belgium(EBCDIC) • 0850 MSDOS Multilingual (Latin 1) • 0860 MSDOS Portuguese • 0861 MSDOS Icelandic • 0863 MSDOS Canadian French • 0865 MSDOS Nordic • 1051 Roman-8 |
Property | S/T | Description |
IgnoreMemoErr | S/T | This option determines how dBASE memo files are handled. Choose your selection from the picklist. The following options are available: • Never – Default. This option causes the integration platform to look for and include any memo file fields when the source data file is read. • Errors – Selecting this option causes the integration platform to look for and include any memo file fields when a memo file is present. If present, the memo fields are included with the transformed data. If the memo file (.DBT) is not in the same directory as the data file (.DBF), the memo file is ignored and the memo fields are not included with the transformed data. • Always – Selecting this option causes the integration platform to ignore the memo file completely. This means memo fields are not included with the transformed data. |
CodePage | S/T | This translation table determines which encoding to use for reading and writing data. The default is ANSI, the standard in the US. |
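The three IgnoreMemoErr options above can be sketched as a small resolution routine. The function name, the raised error type, and the exact "Never" failure behavior are assumptions for illustration:

```python
import os

# Resolves the dBASE memo file (.DBT) that sits next to a data file
# (.DBF), following the Never / Errors / Always options described above.
def resolve_memo(dbf_path, ignore_memo_err="Never"):
    memo_path = os.path.splitext(dbf_path)[0] + ".DBT"
    if ignore_memo_err == "Always":     # ignore the memo file entirely
        return None
    if os.path.exists(memo_path):
        return memo_path
    if ignore_memo_err == "Errors":     # missing memo file is ignored
        return None
    raise FileNotFoundError(memo_path)  # Never: memo file is expected
```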
Property | S/T | Description |
StartOffset | S | If your source data file starts with characters that need to be excluded from the transformation, set the StartOffset option to specify at which byte of the file to begin. The default value is zero. The correct value may be determined by using the Hex Browser. Note: This property is set in number of bytes, not characters. |
PageSize | ST | When data records are arranged in blocks and the last record in each block is padded to the end of the block, it is necessary to set Page Size. This causes the pad characters to be stripped from the file during the data transfer. To set page size, click Page Size, highlight the default value (zero), and type the correct page size value for your data. |
ShortLastRecord | S | If set to true, short reads are ignored on the last record of the file. In other words, the last record is processed even if the End of File (EOF) is reached before reading the end of the record. The default is false. |
OccursPad | S | When using COBOL files, you may have fields of variable length. If so, you may specify how to fill the field with pads to a fixed length by selecting one of the following: • None (the default) – leaves the fields uneven. • End of Record – fills the remainder of the record with your specified pad character. • Within Group – fills the field with your specified pad character. |
RecordSeparator | ST | When a COBOL file is your source connector and you are using a 01 copybook to define the fields, you might have a record separator at the end of each record. If so, you may specify the record separator here. This causes the integration platform to automatically ignore the record separator when it reads the source data. The default is None. The separators are carriage return-line feed (CR-LF), line feed (LF), carriage return (CR), line feed-carriage return (LF-CR), form feed (FF), Empty Line, and none. When writing out a binary file, you may want to place a record separator at the end of each record (similar to a Fixed ASCII record separator). You may select a record separator from the list, or highlight the current value and type your own. |
CodePage | ST | This translation table determines which encoding to use for reading and writing data. The default is ANSI, the standard in the US. |
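The OccursPad options above can be sketched for a single variable-length OCCURS group. Item widths, the pad character, and the exact placement rules are assumptions for illustration:

```python
# Pads a variable-length OCCURS group out to max_count fixed-width items.
# "None" leaves the group uneven; "Within Group" pads missing items
# inside the group; "End of Record" pads the remainder of the record.
def pad_occurs(items, max_count, item_width, mode="None", pad=" "):
    cells = [str(i).ljust(item_width)[:item_width] for i in items]
    if mode == "Within Group":
        cells += [pad * item_width] * (max_count - len(cells))
    rec = "".join(cells)
    if mode == "End of Record":
        rec = rec.ljust(max_count * item_width, pad)
    return rec
```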
Property | S/T | Description |
AlternateFieldSeparator | S | Most data files have only one field separator between all the fields; however, it is possible to have more than one field separator. If your source file has one field separator between some of the fields and a different separator between other fields, you can specify the second field separator here. Otherwise, leave the setting at None (the default). The alternate field separators available from the list are none (default), comma, tab, space, carriage return-line feed, line feed, carriage return, line feed-carriage return, Ctrl-R, and pipe ( | ). To select a separator from the list, click AlternateFieldSeparator. If your alternate field separator is not on the list and is a printable character, highlight the current value and then type the correct character. For example, if the separator is an asterisk ( * ), type an asterisk from the keyboard. If the field separator is not a printable character, replace the current value with a backslash, an X, and the hexadecimal value for the separator. For example, if the separator is a check mark, then enter \XFB. |
AutomaticStyling | S | Automatic styling changes the way Common Logfile Format Webserver data is read or written. By default, AutomaticStyling is set to false, causing all data to be read or written as Text. When set to true, data types such as numeric and date fields are formatted automatically. During the transformation process, autostyling ensures that a date field in the source file is formatted as a date field in the target file, as opposed to character or text data. If your source file contains zip code data, you may want to leave AutomaticStyling set to false, so leading zeros in some zip codes in the eastern United States do not get deleted. |
FieldEndDelimiter | S | All Common Logfile Format Webserver files are presumed to have beginning-of-field and end-of-field delimiters. The default delimiter is a quote ( " ) because it is the most common. However, some files do not contain field delimiters, so this option is available for both your source files and your target files. To read from or write to a file with no delimiters, set FieldEndDelimiter to None. |
FieldSeparator | S | The integration platform assumes that a Common Logfile Format Webserver file has a space between each field. To specify some other field separator, click FieldSeparator to display the options: space (default), comma, tab, carriage return-line feed, line feed, carriage return, line feed-carriage return, ctrl-R, pipe ( | ), and no field separator. To use a field separator that is not on the list, type it here. If the field separator you want to use is not on the list and is a printable character, highlight the current value and then type the correct character. For example, if the separator is an asterisk (*), type an asterisk from the keyboard. If the field separator is not a printable character, replace the current value with a backslash, an X, and the hexadecimal value for the separator. For example, if the separator is a check mark, then enter \XFB. |
FieldStartDelimiter | S | All Common Logfile Format Webserver files are presumed to have beginning-of-field and end-of-field delimiters. The default delimiter is a quote ( " ) because it is the most common. However, some files do not contain field delimiters, so this option is available for both your source files and your target files. To read from or write to a file with no delimiters, set FieldStartDelimiter to None. |
Header | S | In some files, the first record is a header record. For source data, you can remove it from the input data and cause the header titles to be used automatically as field names. For target data, you can cause the field names in your source data to automatically create a header record in your target file. To identify a header record, set Header to true. The default is false. |
RecordFieldCount | S | If your Common Logfile Format Webserver data file has field separators, but no record separator, or if it has the same separator for both the fields and the records, you should specify the RecordSeparator (most likely a blank line), leave the AlternateFieldSeparator option blank and enter the exact number of fields per record in this box. The default value is zero. |
RecordSeparator | S | A Common Logfile Format Webserver file is presumed to have a carriage return-line feed (CR-LF) between records. To use other characters for a record separator, click the RecordSeparator cell, then click the arrow to the right of the box and select the desired record separator from the list. The choices are carriage return-line feed (default), line feed, carriage return, line feed-carriage return, form feed, empty line, ctrl-E, and no record separator. To use a separator that is not on the list and is a printable character, highlight the CR-LF and type the correct character. For example, if the separator is a pipe ( | ), type a pipe from the keyboard. If the record separator is not a printable character, replace CR-LF with a backslash, an X, and the hexadecimal value for the separator. For example, if the separator is a check mark, enter \XFB. |
StartOffset | S | If your source data file starts with characters that need to be excluded from the transformation, set the StartOffset option to specify at which byte of the file to begin. The default value is zero. The correct value may be determined by using the Hex Browser. Note: This property is set in number of bytes, not characters. |
StripLeadingBlanks | S | Allows you to determine if leading blanks are stripped from all data fields. The default is false; leading blanks are not stripped from Common Logfile Format Webserver data. To remove them, set StripLeadingBlanks to true. |
StripTrailingBlanks | S | Allows you to determine whether or not trailing blanks are stripped from the data fields. The default is false; trailing blanks are not stripped from Common Logfile Format Webserver data. Set StripTrailingBlanks to true to remove trailing blanks. |
StyleSampleSize | S | Set the number of records (starting with record 1) that are analyzed to set a default width for each source field. The default value for this option is 5000. You can change the value to any number between 1 and the total number of records in your source file. As the number gets larger, more time is required to analyze the file, and it may be necessary to analyze every record to ensure no data is truncated. To change the value, click StyleSampleSize, highlight the default value, and type a new one. |
CodePage | S | This translation table determines which encoding to use for reading and writing data. The default is ANSI, the standard in the US. |
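The hexadecimal escape convention used by the FieldSeparator and RecordSeparator properties above — a backslash, an X, and the two-digit hex value for a non-printable separator — can be sketched in Python. The function name here is illustrative, not part of the product:

```python
def separator_escape(byte_value):
    """Build the \\X-style escape string for a non-printable separator byte.

    Printable separators are typed literally; non-printable ones are
    written as a backslash, an X, and the two-digit hexadecimal value.
    """
    return "\\X{:02X}".format(byte_value)

# A check mark stored as byte 0xFB is written as \XFB.
print(separator_escape(0xFB))  # \XFB
```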
Property | S/T | Description |
ByteOrder | S | Allows you to specify the byte order of Unicode (wide) characters. The default is Auto and is determined by the architecture of your computer. The list box options are Auto (default), Little Endian and Big Endian. Little Endian byte order is generally used by Intel machines and DEC Alphas and places the least significant portion of a byte value in the left portion of the memory used to store the value. Big Endian byte order is used by IBM 370 computers, Motorola microprocessors and most RISC-based systems and stores the values in the same order as the binary representation. |
ProgramVariables | S | Allows you to set or override program variable values. There is no default. To specify the value to override, click the cell next to ProgramVariables, enter the program variables and any override values, then click OK. |
ReportReadingScriptFile | S | An extraction script file with a .cxl extension. To create and use a .cxl file with this connector, follow these steps: 1. In Data Integrator 9, use Extract Schema Designer to create an extraction script file for the source file that the v10 source dataset will read. 2. Back in v10, import the .cxl file as a Text artifact into the project that contains the source dataset. 3. Open the dataset, start the session, and select the .cxl file from the list for the ReportReadingScriptFile connection property. Note: You can use a .cxl file only with the source data file that was used to create it. It will not work with other data files. |
StartOffset | S | If your source data file starts with characters that need to be excluded from the transformation, set the StartOffset option to specify at which byte of the file to begin. The default value is zero. The correct value may be determined by using the Hex Browser. For a list of the 256 standard and extended ASCII characters, search for "hex values" in the documentation. Note: This property is set in number of bytes, not characters. |
Encoding | S | This allows you to select the type of encoding used with source and target files. |
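The Little Endian versus Big Endian behavior described for the ByteOrder property above can be demonstrated with Python's struct module. This is an illustration of byte order in general, not of the connector itself:

```python
import struct

value = 0x0041  # the Unicode code point for "A"

# Little Endian: least significant byte first (Intel machines, DEC Alphas).
little = struct.pack("<H", value)
# Big Endian: same order as the binary representation (IBM 370, Motorola, most RISC).
big = struct.pack(">H", value)

print(little)  # b'A\x00'
print(big)     # b'\x00A'
```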
Property | S/T | Description |
ByteOrder | S | Allows you to specify the byte order of Unicode (wide) characters. The default is Auto and is determined by the architecture of your computer. The list box options are Auto (default), Little Endian, and Big Endian. Little Endian byte order is generally used by Intel machines and DEC Alphas and places the least significant portion of a byte value in the left portion of the memory used to store the value. Big Endian byte order is used by IBM 370 computers, Motorola microprocessors and most RISC-based systems and stores the values in the same order as the binary representation. |
ProgramVariables | S | Type the program variable. |
ReportReadingScriptFile | S | The script file used for the source structure. The default file is djwinlog.djp. |
Encoding | S | Select the type of encoding used with source and target files. The default encoding is OEM. Shift-JIS encoding is meaningful only in Japanese operating systems. |
StartOffset | S | If your source data file starts with characters that need to be excluded from the transformation, set the StartOffset option to specify at which byte of the file to begin. The default value is zero. The correct value may be determined by using the Hex Browser. Note: This property is set in number of bytes, not characters. |
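The StartOffset behavior above — skipping a fixed number of bytes, not characters, before reading — amounts to a byte-level seek. A minimal sketch (the function name and file path are hypothetical):

```python
def read_with_start_offset(path, start_offset):
    """Read a file after skipping start_offset bytes.

    The offset counts bytes, not characters, which matches the
    property's note; open in binary mode so no decoding shifts positions.
    """
    with open(path, "rb") as f:
        f.seek(start_offset)
        return f.read()
```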
Property | S/T | Description |
Date Standard | S | Specify the method your source data uses to store date information. The default for this property is North American MM/DD/YY. The other two choices are International DD/MM/YY and Metric YY/MM/DD. |
Version | T | The integration platform can write two versions of DataEase data files. Version 4.5 is the default. If you would prefer your target file to be written as version 4.2, select that from the list. |
CodePage | ST | This translation table determines which encoding to use for reading and writing data. The default is ANSI, the standard in the US. |
Property | S/T | Description |
IgnoreMemoErr | ST | This option determines how dBASE memo files are handled. Choose your selection from the list. • Never – This is the default. This option causes the integration platform to look for and include any memo file fields when the source data file is read. • Errors – Selecting this option causes the integration platform to look for and include any memo file fields when a memo file is present. If present, the memo fields are included with the transformed data. If the memo file (.DBT) is not in the same directory as the data file (.DBF), the memo file is ignored. This means that the memo fields are not included with the transformed data. • Always – Selecting this option causes the integration platform to ignore the memo file completely. This means that the memo fields are not included with the transformed data. |
CodePage | ST | This translation table determines which encoding to use for reading and writing data. The default is ANSI, the standard in the US. |
Property | S/T | Description |
IgnoreMemoErr | ST | This option determines how dBASE memo files are handled. Choose your selection from the list. The following options are available: • Never - Default. This option causes the integration platform to look for and include any memo file fields when the source data file is read. • Errors - Selecting this option causes the integration platform to look for and include any memo file fields when a memo file is present. If present, the memo fields are included with the transformed data. If the memo file (.DBT) is not in the same directory as the data file (.DBF), the memo file is ignored. This means that the memo fields are not included with the transformed data. • Always - Selecting this option causes the integration platform to ignore the memo file completely. This means that the memo fields are not included with the transformed data. |
CodePage | ST | This translation table determines which encoding to use for reading and writing data. The default is ANSI, the standard in the US. |
Property | S/T | Description |
Batch Response | T | Destination of the batch response file. |
Batch Size | S | Optional. Number of source records the connector caches before processing them in a map. Default is zero. |
Additional Properties | ST | Optional. This string is appended to the connection string when establishing a connection to Derby. Specify one property or several properties separated by semicolons. Example: create=true The entire list of properties can be found here: http://db.apache.org/derby/docs/10.4/ref/rrefattrib24612.html |
Show System Tables | S | If True, all system tables are displayed in the table list. If False (the default), system tables are not displayed. |
Show Views | ST | If True (the default), all views are displayed in the table list. If False, views are not displayed. |
Ignore Null Fields | T | If True (the default), fields that haven't been mapped are ignored. If False, unmapped fields are included in target operations and null values are inserted or updated. |
Transaction Isolation | T | Sets the transaction isolation level. Options are Read Uncommitted, Read Committed (the default), Repeatable Read, and Serializable. |
Scrollability of Results | S | Scrolling behavior, one of the following: • Forward Only (the default) • Scrollable - Insensitive to changes. • Scrollable - Sensitive to changes. |
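Appending the Additional Properties string to a Derby connection string, as described above, amounts to joining semicolon-separated name=value pairs onto the base URL. A hedged sketch — the host, database name, and property values are examples only:

```python
def build_derby_url(host, port, database, additional_properties=""):
    """Build a Derby client/server JDBC URL, appending any extra
    semicolon-separated name=value connection properties."""
    url = "jdbc:derby://{}:{}/{}".format(host, port, database)
    if additional_properties:
        url += ";" + additional_properties
    return url

print(build_derby_url("localhost", 1527, "mydb", "create=true"))
# jdbc:derby://localhost:1527/mydb;create=true
```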
Map Designer Data Type | Derby Data Type |
Boolean | SMALLINT |
Byte | BLOB |
Bytes | BLOB |
Char | CHAR |
Date | DATE |
Datetime | TIMESTAMP |
Decimal | DECIMAL |
Double | DOUBLE |
Float | FLOAT |
Int | INTEGER |
Long | BIGINT |
Short | SMALLINT |
String | CHAR |
Time | TIME |
Property | S/T | Description |
Batch Size | S | The number of source records the connector caches before processing them in a map. The default is zero, which means to read all. |
Port | ST | The integration platform uses the Derby database in client/server mode with the default port number (1527). Specify a different value if the Derby server listens on a non-standard port. |
Additional Properties | ST | Name=value pairs that are required for the connection. Separate multiple pairs with semicolons. |
Show System Tables | S | Required. Only applicable if the user is logged onto the database as the database administrator. Only the DBA has access to system tables. If set to True, allows you to see all the tables created by the DBA. The system table names appear in the table list. The default is False. |
Show Views | ST | If set to True (default), all views are displayed in the table list. |
Scrollability of Results | S | Required. Determines the scrollability of the result set. The three options are: • Forward Only (default) - The result set is not scrollable. The system reads the next row in the result set. • Scrollable - Insensitive to changes - The result set is scrollable. The system can read data forward or backward. Any changes made to the result set while it is open are not visible. • Scrollable - Sensitive to changes - The result set is scrollable. Any changes made to the result set while it is open are visible. |
Transaction Isolation | T | Allows you to specify any one of four isolation levels when reading from or writing to a Derby database table. The default value is Read Committed. The ANSI SQL 2 standard defines three specific ways in which serializability of a transaction may be violated: P1 (Dirty Read), P2 (Nonrepeatable Read), and P3 (Phantoms). The following lists the four supported isolation levels: • READ_UNCOMMITTED—Permits P1, P2, and P3. • READ_COMMITTED—Permits P2 and P3. Does not permit P1. • REPEATABLE_READ—Permits P3. Does not permit P1 and P2. • SERIALIZABLE—Does not permit P1, P2 or P3. |
Flush Frequency | T | Number of record inserts to buffer before sending a batch to the connector. Default is zero. If you are inserting many records, change the default to a higher value to improve performance. |
Batch Response | T | This property creates a batch response file, which serves as a reporting mechanism to the connector. The file provides detailed results for each object in a batch where the batch size is greater than 1. Obtaining detailed results is useful in the following cases: • Capturing system-generated object IDs for use in future updates. • Correlating an error with its object and having enough information about the error for exception handling and error diagnosis. For more information, see Batch Response File. |
Ignore Null Fields | T | If set to False (default), null values are included when updating records. If set to True, null data values are ignored when updating records. |
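The relationship between the four isolation levels and the ANSI phenomena named for the Transaction Isolation property above — P1 (Dirty Read), P2 (Nonrepeatable Read), and P3 (Phantoms) — can be tabulated as follows. This is a summary of the ANSI SQL 2 standard, not connector-specific code:

```python
# Which serializability violations each supported isolation level permits.
PERMITTED_PHENOMENA = {
    "READ_UNCOMMITTED": {"P1", "P2", "P3"},  # permits all three
    "READ_COMMITTED": {"P2", "P3"},          # the default; no dirty reads
    "REPEATABLE_READ": {"P3"},               # only phantoms
    "SERIALIZABLE": set(),                   # permits none
}

def permits(level, phenomenon):
    """Return True if the isolation level permits the given phenomenon."""
    return phenomenon in PERMITTED_PHENOMENA[level]

print(permits("READ_COMMITTED", "P1"))  # False
```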
Property | S/T | Description |
Batch Size | S | The number of source records the connector caches before processing them in a map. The default is zero, which means to read all. |
Additional Properties | ST | Name=value pairs that are required for the connection. Separate multiple pairs with semicolons. |
Show System Tables | S | Required. Only applicable if the user is logged onto the database as the database administrator. Only the DBA has access to system tables. If set to True, allows you to see all the tables created by the DBA. The system table names appear in the table list. The default is False. |
Show Views | ST | If set to True, all views are displayed in the table list. Default value is True. |
Scrollability of Results | S | Required. Determines the scrollability of the result set. The three options are: • Forward Only (default) - The result set is not scrollable. The system reads the next row in the result set. • Scrollable - Insensitive to changes - The result set is scrollable. The system can read data forward or backward. Any changes made to the result set while it is open are not visible. • Scrollable - Sensitive to changes - The result set is scrollable. Any changes made to the result set while it is open are visible. |
Transaction Isolation | T | Allows you to specify any one of four isolation levels when reading from or writing to a database table with Derby. The default value is Read Committed. The ANSI SQL 2 standard defines three specific ways in which serializability of a transaction may be violated: P1 (Dirty Read), P2 (Nonrepeatable Read), and P3 (Phantoms). The following lists the four supported isolation levels: • READ_UNCOMMITTED—Permits P1, P2, and P3. • READ_COMMITTED—Permits P2 and P3. Does not permit P1. • REPEATABLE_READ—Permits P3. Does not permit P1 and P2. • SERIALIZABLE—Does not permit P1, P2 or P3. |
FlushFrequency | T | Number of record inserts to buffer before sending a batch to the connector. Default is zero. If you are inserting many records, change the default to a higher value to improve performance. |
Batch Response | T | This property creates a batch response file, which serves as a reporting mechanism to the connector. The file provides detailed results for each object in a batch where the batch size is greater than 1. Obtaining detailed results is useful in the following cases: • Capturing system-generated object IDs for use in future updates. • Correlating an error with its object and having enough information about the error for exception handling and error diagnosis. For more information, see Batch Response File. |
Ignore Null Fields | T | If set to False (default), null values are included when updating records. If set to True, null data values are ignored when updating records. |
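The FlushFrequency buffering described above — collect record inserts until a threshold is reached, then send a batch — can be sketched as follows. The class and callback names are illustrative, not part of the product:

```python
class InsertBuffer:
    """Buffer inserted records and flush them in batches of flush_frequency.

    flush_frequency == 0 means buffer everything until an explicit flush,
    mirroring the property's default behavior.
    """
    def __init__(self, flush_frequency, send_batch):
        self.flush_frequency = flush_frequency
        self.send_batch = send_batch  # callback that receives a list of records
        self.pending = []

    def insert(self, record):
        self.pending.append(record)
        if self.flush_frequency and len(self.pending) >= self.flush_frequency:
            self.flush()

    def flush(self):
        if self.pending:
            self.send_batch(self.pending)
            self.pending = []
```

A higher flush frequency trades memory for fewer round trips, which is why raising it improves performance when inserting many records.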
Property | Description |
ByteOrder | Allows you to specify the byte order of Unicode (wide) characters. The default is Auto and is determined by the architecture of your computer. The list box options are Auto (default), Little Endian and Big Endian. Little Endian byte order is generally used by Intel machines and DEC Alphas and places the least significant portion of a byte value in the left portion of the memory used to store the value. Big Endian byte order is used by IBM 370 computers, Motorola microprocessors and most RISC-based systems and stores the values in the same order as the binary representation. |
ProgramVariables | Allows you to set or override program variable values. There is no default. To specify the value to override, click the cell next to ProgramVariables, enter the program variables and any override values, then click OK. |
ReportReadingScriptFile | The extract schema file, with a .cxl extension. |
StartOffset | If your source data file starts with characters that need to be excluded from the transformation, set the StartOffset option to specify at which byte of the file to begin. The default value is zero. The correct value may be determined by using the Hex Browser. Note: This property is set in number of bytes, not characters. |
Encoding | Select the type of encoding used with source and target files. The Encoding property is not the encoding of the database that you connect to, but rather the encoding in which the connector expects to receive SQL query statements to be sent to the database. |
Property | S/T | Description |
Header | S | If the first data record in your source file contains column names, you can read this record and use the information as field names. The default is false. To change the setting to true, click the Value cell, highlight the default value, and type the required value. Note: If the Header property is set to true, the first row of data is used as field names. If the property is false and the source file contains header labels, these labels are used. If the property is false and there are no header labels, the column headings appear as Field1, Field2, and so on. |
StyleSampleSize | S | This is the number of records to examine for automatic sizing. The default is 1000. |
Reverse | T | By default, DIF vectors are written before the tuples, therefore the default is false. If you want to reverse the vectors and tuples, change this option to true. |
TableTitle | T | The default text reads "created by third party conversion product." According to the DIF specifications, this property is not a table name; instead, it is a "title that describes the data." To change the default text, click the text box, and replace the original text with your own description. |
Use Cr-LF | T | By default, this option is set to true and a carriage return-line feed is returned at the end of each line in your DIF file. |
CodePage | ST | This translation table determines which encoding to use for reading and writing data. The default is ANSI, the standard in the US. |
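The Header behavior described above — using the first record as field names when true, or generating Field1, Field2, and so on when false — can be sketched as follows (the function name is illustrative):

```python
def field_names(first_record, header):
    """Return the field names for a file.

    When header is true, the first record supplies the names;
    otherwise names are generated as Field1, Field2, ...
    """
    if header:
        return list(first_record)
    return ["Field{}".format(i + 1) for i in range(len(first_record))]

print(field_names(["id", "name"], header=False))  # ['Field1', 'Field2']
```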
Property | ST | Description |
Skip | S | When the Skip property is set to true, segments that do not match those in the applied schema file are not read. The default is false. If you are reading large EDI files and have many mismatches, then Map Designer parsing time increases – in which case, keep the default setting. |
CodeSetFile | ST | This is the file containing the definition of allowed code set values for fields, used mainly for validation. Click the ellipsis, browse to your file, and click Open to apply the code set file. |
QualifiedFieldNames | ST | This is the name path of the parents of the segment parents in the schema tree. If QualifiedFieldNames is true, Qualified Name is put before a field name. The default is false. |
SchemaFile | ST | Allows you to choose a schema file for your source or target file. Click the ellipsis, browse to your schema file, and click Open to apply the schema file. |
Validation | ST | None is the default setting. Click the arrow for a list of the following validation methods: • Composite/Element – Validate maxuse, requirement, min/max length and default value. • Loop/Segment – Validate maxuse and requirement. • Composite/Element+Loop/Segment – Validate both composite/element and loop segment. This property works differently on source and target files. Validation on the source side occurs when you select the Apply button in Source Properties. On the target side, validation occurs at run time. If validation is successful, this is indicated in the log file and if there are errors, error messages are logged to the log file. On source-side validation, the first error is recorded in the log file and then the transformation aborts before any other errors are recorded. On the target side, all errors are recorded to the log file, since validation occurs at run time. |
SegmentTerminator | T | In this property, you may select a Segment Terminator from the list. See the ElementSeparator list for your options. |
ElementSeparator | T | In other connectors, such as ASCII (Fixed), this property is called the FieldSeparator. Here is a list of element separators from which you can choose: • CR-LF • SOH(0001) • STX(0002) • ETX(0003) • EOT(0004) • ENQ(0005) • ACK(0006) • BEL(0007) • BS(0008) • HT(0009) • LF(000A) • VT(000B) • FF(000C) • CR(000D) • SO(000E) • SI(000F) • DLE(0010) • DC1(0011) • DC2(0012) • DC3(0013) • DC4(0014) • NAK(0015) • SYN(0016) • ETB(0017) • CAN(0018) • EM(0019) • SUB(001A) • ESC(001B) • FS(001C) • GS(001D) • RS(001E) • US(001F) • SP(0020) • ! (0021) • " (0022) • # (0023) • $ (0024) • % (0025) • & (0026) • ' (0027) • ( (0028) • ) (0029) • * (002A) • + (002B) • , (002C) • - (002D) • . (002E) • / (002F) • : (003A) • ; (003B) • < (003C) • = (003D) • > (003E) • ? (003F) • @ (0040) • [ (005B) • \ (005C) • ] (005D) • ^ (005E) • _ (005F) • ` (0060) • { (007B) • | (007C) • } (007D) • ~ (007E) • DEL (007F) |
LineWrap | T | Sets the type of line wrap you want to use in your data file. The default is None. Some EDI files may consist of a constant stream of data with no line separators. LineWrap forces a temporary line-wrapping behavior. When you attempt to connect to your EDI file and you receive parse error messages, try changing the setting to CR-LF, CR, LF, or LF-CR. |
WrapLength | T | Sets the length of line wrap you want to use in your target file. The default is 80. |
SubElementSeparator | T | In this property, you can select a subelement separator from the list. See ElementSeparator above for the complete list of separators. The SubElementSeparator property value supersedes any value mapped in the ISA segment, which allows values to be overridden from the command line or through the API. |
RepetitionSeparator | T | Sets the character used to separate multiple occurrences of a field. Supported for HIPAA 5010 schemas. |
Property Name | S/T | Description |
Encoding | ST | The selected code page translation table tells Map Designer which tables to use when reading in the source data and writing out the target data. The default value is OEM, which allows Map Designer to use whatever code page is the active one on your system. To use ANSI, select ISO8859-1. For Windows CodePage 1252, select CP1252. Shift-JIS encoding is meaningful only in Japanese operating systems. |
SchemaFile | ST | Used for parsing data and generating a new schema during design time. Select a schema file that contains the schema you want to use. To upload a schema file that is not in your current project: In Design Manager, click the icon to import projects or upload a single file and select Upload a single file. Select the schema file and click Open. Click Next. At Type, select Schema. Click Save. Once the file is uploaded, you can then select it from the list. |
CodesetFile | ST | This is the file containing the definition of allowed code set values for fields, used mainly for validation. Click the ellipsis, browse to your file and click Open to apply the schema file. |
SegmentTerminator | ST | A character that terminates a segment. Select a Segment Terminator from the list. For the list of options, see DataElementSeparator. |
CompositeSeparator | ST | Character used to separate adjacent composites of a field. See DataElementSeparator below for the list of separators from which you can choose. |
DataElementSeparator | ST | This is the complete list of element separators from which you can choose: • CR-LF • SOH(0001) • STX(0002) • ETX(0003) • EOT(0004) • ENQ(0005) • ACK(0006) • BEL(0007) • BS(0008) • FF(000C) • CR(000D) • SO(000E) • SI(000F) • DLE(0010) • DC1(0011) • DC2(0012) • DC3(0013) • DC4(0014) • NAK(0015) • SYN(0016) • ETB(0017) • CAN(0018) • EM(0019) • SUB(001A) • ESC(001B) • FS/IS4(001C) • GS/IS3(001D) • RS/IS2(001E) • US/IS1(001F) • SP(0020) • ! (0021) • " (0022) • # (0023) • $ (0024) • % (0025) • & (0026) • ' (0027) • ( (0028) • ) (0029) • * (002A) • + (002B) - Default • , (002C) • - (002D) • . (002E) • / (002F) • : (003A) • ; (003B) • < (003C) • = (003D) • > (003E) • ? (003F) • @ (0040) • [ (005B) • \ (005C) • ] (005D) • ^ (005E) • _ (005F) • ` (0060) • { (007B) • | (007C) • } (007D) • ~ (007E) • DEL (007F) |
ReleaseIndicator | ST | To use the element separator, data element separator, or segment terminator as regular characters, they must be prepended with the release indicator. In the EDIFACT connector, the source removes all release indicators (for example, ?+?:?'123 becomes +:'123). The target automatically adds the release indicator (for example, +:'123 becomes ?+?:?'123). |
DecimalNotation | ST | You may use any character as decimal notation in EDIFACT. The source replaces all decimal notation with a period (for example, 1,34 becomes 1.34). The target replaces the period with the defined decimal notation (for example, 1.34 becomes 1,34). The default is , (002C). |
Validation | ST | None is the default setting. Click the arrow for a list of the following validation methods: • Composite/Element- Validate maxuse, requirement, min/max length and default value • Loop/segment - Validate maxuse and requirement. • Composite/Element+Loop/Segment - Validate both composite/element and loop segment. |
LineWrap | ST | Sets the type of line wrap you want to use in your source or target file. The default is None. Some EDIFACT files may consist of a constant stream of data with no line separators. LineWrap forces a temporary line-wrapping behavior. When you attempt to connect to your EDIFACT file and you receive parse error messages, try changing the setting to CR-LF, CR, LF, or LF-CR. |
WrapLength | T | Sets the length of line wrap you want to use in your target file. The default is 80. |
QualifiedFieldNames | ST | This is the name path of the parents of the segment parents in the schema tree. If QualifiedFieldNames is true, Qualified Name is put before a field name. The default is false. |
Skip | S | When the Skip property is set to true, segments that do not match those in the applied schema file are not read. The default is false. If you are reading large EDIFACT files and have many mismatches, Map Designer parsing time increases (in this case, keep this setting at its default of false). |
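The ReleaseIndicator behavior described above — the target prepends ? to service characters so they can appear as plain data, and the source strips the indicators back out — can be sketched as follows, assuming the standard EDIFACT service characters:

```python
SPECIALS = "+:'?"  # separators and terminator, plus the release indicator itself

def add_release_indicators(text, ri="?"):
    """Target-side escaping: prefix each service character with the
    release indicator so it reads as plain data."""
    return "".join(ri + ch if ch in SPECIALS else ch for ch in text)

def strip_release_indicators(text, ri="?"):
    """Source-side unescaping: drop each release indicator, restoring
    the literal character that follows it."""
    out, i = [], 0
    while i < len(text):
        if text[i] == ri and i + 1 < len(text):
            out.append(text[i + 1])
            i += 2
        else:
            out.append(text[i])
            i += 1
    return "".join(out)

print(add_release_indicators("+:'123"))       # ?+?:?'123
print(strip_release_indicators("?+?:?'123"))  # +:'123
```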
Property | S/T | Description |
HeaderRecordRow | ST | Optional. Row number used to read the column header names. Default is 0. The row number of the first row is 1. |
Batch Size | S | Optional. Number of records the connector caches before processing them. |
Flush Frequency | T | Optional. The number of records buffered in memory before being written to the Excel file. Default is 0 (all records are written at once). |
Batch Response | T | Optional. The file to which CRUD operation results are written. The format is one entry per record, indicating success or failure. If an operation fails, information about the cause of the failure is returned. |
Property | S/T | Description |
Header Record Row | ST | For Excel source files, this is the record number (row number) of any column headings. For Excel target files, this designates the record number (row number) to which the integration platform writes the headings. If there are no column headings involved in the data transformation, the value should be left at the default setting (0). If the column heading is the first record (row 1), the value should be changed to 1. To change the setting, click the Current Value cell, highlight the default value, and type the desired value. Caution! If you set the Header Record Row property to a value that exceeds the number of records in the source file, the Header Record Row is not written to the target. |
Encoding | ST | To specify encoding for reading source and writing target data, select a code page translation table. The default value is OEM. To use ANSI, select ISO8859-1. For Windows CodePage 1252, select CP1252. |
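The Header Record Row behavior above can be sketched as follows — row numbers are 1-based, and 0 means no header row is present. The function name is illustrative:

```python
def split_header(rows, header_record_row):
    """Return (column_headings, data_rows) given a 1-based header row number.

    A value of 0 means no column headings are present, so all rows are data.
    """
    if header_record_row == 0:
        return None, rows
    index = header_record_row - 1
    return rows[index], rows[:index] + rows[index + 1:]

rows = [["id", "name"], [1, "a"], [2, "b"]]
print(split_header(rows, 1))  # (['id', 'name'], [[1, 'a'], [2, 'b']])
```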
Property Options | S/T | Description |
Unicode | S | This property is important if your HCFA file is Unicode encoded. The default is false; if your source HCFA file is Unicode encoded, change this setting to true. |
Schema File | ST | Allows you to choose a schema file for your source or target file. |
Validation | ST | False is the default setting. When Validation is true, the adapter validates: • Field requirement • Field default value, if this value is defined • The checksum Validation works differently on source and target files. Validation on the source side occurs when you apply the source property options. On the target side, validation occurs at run time. If validation is successful, this is indicated in the log file; if there are errors, error messages are logged to the log file. On source-side validation, the first error is recorded in the log file and then the transformation aborts before any other errors are recorded. On the target side, all errors are recorded to the log file, since validation occurs at run time. |
Property | S/T | Description |
CodePage | T | This translation table determines which encoding to use for reading and writing data. The default is ANSI, the standard in the US. |
Property | S/T | Description |
IgnoreMemoErr | S | This option determines how dBASE memo files are handled. Choose your selection from the list that appears. The following options are available: • Never – This is the default. This option causes the integration platform to look for and include any memo file fields when the source data file is read. • Errors – Selecting this option causes the integration platform to look for and include any memo file fields when a memo file is present. If present, the memo fields are included with the transformed data. If the memo file (.DBT) is not in the same directory as the data file (.DBF), the memo file is ignored. This means that the memo fields are not included with the transformed data. • Always – Selecting this option causes the integration platform to ignore the memo file completely. This means that the memo fields are not included with the transformed data. |
CodePage | S | This translation table determines which encoding to use for reading and writing data. The default is ANSI, the standard in the US. |
Property | S/T | Description |
IgnoreMemoErr | ST | This option determines how xBASE memo files are handled. Choose your selection from the list. The following options are available: • Never – This option (the default) instructs the integration platform to look for and include any memo file fields when reading the source data file. • Errors – Instructs the integration platform to look for and include any memo file fields when a memo file is present. If present, the memo fields are included with the transformed data. If the memo file (.fpt) is not in the same directory as the data file (.dbf), the memo file is ignored. This means that the memo fields are not included with the transformed data. • Always – Instructs the integration platform to ignore the memo file completely. The memo fields are not included with the transformed data. |
CodePage | ST | This translation table determines which encoding is used for reading and writing data. The default is ANSI, the standard in the US. |
Property | S/T | Description |
Batch Response | T | Not used. |
FlushFrequency | T | Not used. |
Upload Mode | T | How to load data into the target dataset: • Append (default) – Insert records directly into the target dataset. • Clear and append – Clear all data in the target dataset and append new records. • Create/Replace – If the specified project does not exist, it is created. If the target dataset does not exist in the specified project, a new dataset is created. If the dataset exists, it is dropped first and a new one created. Then records are inserted into the new dataset. |
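The three Upload Mode behaviors above can be sketched as a dispatch. The dataset representation and function name here are hypothetical, for illustration only:

```python
def upload(dataset, records, mode="append"):
    """Apply records to a dataset under the three upload modes.

    dataset: {"exists": bool, "rows": list} - a stand-in for a real dataset.
    """
    if mode == "append":
        # Insert records directly into the target dataset.
        dataset["rows"].extend(records)
    elif mode == "clear_and_append":
        # Clear all data in the target dataset, then append the new records.
        dataset["rows"] = list(records)
    elif mode == "create_replace":
        # Drop and recreate the dataset, then insert the new records.
        dataset["exists"] = True
        dataset["rows"] = list(records)
    else:
        raise ValueError("unknown upload mode: " + mode)
    return dataset
```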
Custom Field Type | GoodData Display Name |
BigDecimal/Double/ Float | DECIMAL |
Date | DATE |
Short/Integer/Long/ BigInteger | BIGINT |
String/Character | VARCHAR |
Description | DI Type | GoodData Type |
Reference | Text, Character, String | Reference |
Date | Date | Date |
Connection_Point | Text, Character, String | Connection_Point |
Attribute | Text, Character, String | Attribute |
Fact | Short,Integer,Long,Double,BigInteger,BigDecimal,Float | Fact |
null | Text, Character, String | Attribute |
null | Short,Integer,Long,Double,BigInteger,BigDecimal,Float | Fact |
null | Date | Date |