Oracle 10g, 11g, 12c, 19c, and 21c Multimode
Oracle is a relational database management system. With the Oracle 10g, 11g, 12c, 19c, and 21c Multimode connectors, you can perform create, read, update, and delete operations on Oracle 10g, 11g, 12c, 19c, and 21c data tables. These connectors can also write to multiple target tables within a single SQL RDBMS.
Multimode connectors allow you to perform multiple operations, such as table drops and table inserts, directly on your target database.
Note:  The Oracle 21c connector does not support AIX.
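A multimode target can mix table drops, table creates, and inserts into more than one table in a single run. The connector issues the equivalent SQL for you; the following sketch only illustrates that kind of workload outside the product, using the python-oracledb driver with placeholder connection details and table names.

```python
# Illustration only: a multimode-style workload (DDL plus DML on several tables)
# written directly against Oracle with python-oracledb. The connector issues
# equivalent statements for you; connection details and table names are placeholders.
import oracledb

with oracledb.connect(user="scott", password="tiger", dsn="orclpdb1") as conn:
    cur = conn.cursor()

    # Table drop: ignore the error if the staging table does not exist yet.
    try:
        cur.execute("DROP TABLE orders_stage")
    except oracledb.DatabaseError:
        pass

    # Table create, followed by inserts into two different target tables.
    cur.execute(
        "CREATE TABLE orders_stage (order_id NUMBER(10), status VARCHAR2(20))"
    )
    cur.execute(
        "INSERT INTO orders_stage (order_id, status) VALUES (:1, :2)",
        [1001, "NEW"],
    )
    cur.execute(
        "INSERT INTO order_audit (order_id, note) VALUES (:1, :2)",
        [1001, "staged by multimode run"],
    )
    conn.commit()
```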
You can quickly set up a multimode connector by using the multimode wizard. For more information, see Setting Up Multimode Target Connector Using Multimode Wizard.
The Oracle 11g and 12c connectors support Oracle wallet for managing public key security credentials on Oracle clients. If the Oracle client is configured to use Oracle wallet, you can connect to the source or target within a map by providing only the Database Alias name (User Name and Password are not required). For information about Oracle wallet and how to create it on the client, see the Using Oracle Wallet Manager section in the Oracle Database Advanced Security Administrator's Guide.
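Within a map, only the Database Alias is needed when the wallet is in use. As an outside illustration only, the sketch below shows the same idea with the python-oracledb driver in thick mode, assuming a wallet is already configured for the installed Oracle client through sqlnet.ora; the alias name is a placeholder.

```python
# Illustration only: connecting with an Oracle wallet so that no user name or
# password is passed in code. Assumes python-oracledb in thick mode and a wallet
# already configured for the installed Oracle client (sqlnet.ora / tnsnames.ora).
import oracledb

# Thick mode loads the installed Oracle client, which reads the wallet configuration.
oracledb.init_oracle_client()

# "mydb_alias" is a placeholder for the Database Alias defined in tnsnames.ora.
conn = oracledb.connect(externalauth=True, dsn="mydb_alias")
print("Connected to Oracle", conn.version)
conn.close()
```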
Prerequisites
You must have the Oracle 10g, 11g, 12c, 19c, and 21c standard 64-bit client installed on your system, and you must have access to Oracle. If Oracle is installed on your local PC, the drive and directory path of your Oracle application must be included in your path; the path statement is modified automatically when the Oracle client software is installed. The Oracle Database Manager must be running.
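One quick way to confirm that the 64-bit Oracle client libraries are reachable from your path is shown in the sketch below; it assumes the optional python-oracledb package, which is not part of the product and is not required by the connector.

```python
# Sketch only: check that the 64-bit Oracle client libraries can be loaded from
# the path. python-oracledb is used here purely as a convenient checker; it is
# not required by the connector itself.
import oracledb

try:
    oracledb.init_oracle_client()  # raises if the Oracle client cannot be found
    print("Oracle client libraries loaded")
except oracledb.Error as exc:
    print("Oracle client not found:", exc)
```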
Other Oracle Connectors
You can connect to Oracle interfaces in the following ways:
Oracle Direct Path: Deliver incremental loads of detail data to data warehouses and data marts. See Oracle 10g, 11g, 12c, 19c, and 21c Direct Path.
Oracle SQL Loader: Produce flat loader files that can be uploaded into an Oracle database using the Oracle SQL Loader utility. See Oracle SQL Loader.
Oracle CRM On Demand: Access data through the OnDemand Web Services interface with this connector. See Oracle CRM On Demand.
This connector sets field width in bytes, so the number of characters that fit into a given field depends on the encoding. For example, if a field width is 10 bytes and the encoding is Shift-JIS, the field can hold a minimum of 5 characters and a maximum of 10. This means that if you try to write 8 Japanese characters, the data is truncated, because each Kanji character takes 2 bytes (16 bytes in total).
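The following sketch, which assumes nothing beyond standard Python, shows why the truncation happens: 8 Japanese characters encode to 16 bytes in Shift-JIS, so only the first 5 survive in a 10-byte field.

```python
# Why a 10-byte field truncates 8 Japanese characters: each character takes
# 2 bytes in Shift-JIS, so 8 characters need 16 bytes.
text = "データベース接続"              # 8 Japanese characters
encoded = text.encode("shift_jis")

print(len(text))                      # 8  (characters)
print(len(encoded))                   # 16 (bytes) -- too long for a 10-byte field
print(encoded[:10].decode("shift_jis", errors="ignore"))  # the 5 characters that fit
```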
Connectivity Pointers
The Oracle standard client must be installed to connect to Oracle 10g, 11g, 12c, 19c, and 21c databases.
Connector-Specific Notes
Data Types: BINARY_FLOAT and BINARY_DOUBLE data types are not supported on Linux systems.
Unicode Support: The Oracle 10g, 11g, 12c, 19c, and 21c multimode connectors include Unicode support for wide characters. To enable double-byte support, change the Regional and Language Options on your system to a double-byte language, such as Japanese.
Connector Parts
Connector parts are the fields you configure to connect with a data source or target. The fields that are available depend on the connector you select. Multimode connectors are target only connectors. For a list of all parts for target connectors, see Specifying Target Connector, Parts, and Properties.
Property Options
You can specify the following target properties:
Property
Version
Use
AllowDistributedTransactions
10g, 11g, 12c, 19c, 21c
If set to true (default), distributed transaction processing is allowed.
Code Page
10g
The code page translation table tells the integration platform which encoding to use for reading and writing data. The default is ANSI, which is the standard in the US. For the supported list, see CodePage.
Directory Alias
10g, 11g, 12c, 19c, 21c
Allows you to specify the name of the Directory Alias. There is no default.
Encoding
11g, 12c, 19c, 21c
Determines how to store characters in Oracle 11g, 12c, 19c, and 21c target files. Default is OEM.
For encoding choices, see Source and Target Map Connectors.
SQL Log
10g, 11g, 12c, 19c, 21c
The default is sql.log in the default installation directory. To use a different log, browse to the file, or enter the path and file name.
Note:  SQL statements are sent to the SQL Log file only if the SQL Output property is set to either Target and SQL Log or SQL Log Only.
SQL Output
10g, 11g, 12c, 19c, 21c
Allows you to select bound or unbound mode and whether to write SQL statements to a SQL log. Keep in mind that bound mode is faster, since it uses bind variables (see the sketch that follows this property list).
Target Only – Uses bound mode, which uses bind variables. SQL statements are sent to the Target and not to the SQL log specified in the SQL Log property. Default.
Target Only (Unbound Mode) – Uses unbound mode, which does not use bind variables and sends the literal SQL statement to the database engine. SQL statements are sent to the Target and not to the SQL log specified in the SQL Log property.
Target and SQL Log – Sends SQL statements to the Target and to the SQL log specified in the SQL Log property.
SQL Log Only – Sends SQL statements only to the SQL log file specified in the SQL Log property.
Synonyms
10g, 11g, 12c, 19c, 21c
If set to true, allows you to see synonyms. The alias names appear in the table list along with the tables. Default is false.
SystemTables
10g, 11g, 12c, 19c, 21c
If set to true, this property allows you to see all tables created by the DBA in the database. The system table names appear in the table list. Default is false.
UpdateNullFields
10g, 11g, 12c, 19c, 21c
If set to true (default), null values are sent to the database when you insert or update records. If set to false, null values are not sent to the database when you insert or update records; this forces the connector to operate in unbound mode, which may cause slower performance.
Best Practice:  If fields in the target record are not mapped, null values are passed to the target. If you do not want to write to these fields, set UpdateNullFields to false.
Views
10g, 11g, 12c, 19c, 21c
If set to true, this property allows you to see the view names in the table list along with the table names. Default is true.
Note:  This property supports only Append and DeleteAndAppend output modes and does not support the Replace output mode.
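The connector selects bound or unbound mode based on the SQL Output and UpdateNullFields properties described above. The sketch below only illustrates the underlying difference between a bound statement (bind variables) and an unbound statement (literal values in the SQL text), using the python-oracledb driver with placeholder connection details and table names.

```python
# Illustration only: bound mode reuses one parsed statement with bind variables,
# while unbound mode embeds literal values in the SQL text. Connection details
# and table names are placeholders.
import oracledb

with oracledb.connect(user="scott", password="tiger", dsn="orclpdb1") as conn:
    cur = conn.cursor()

    # Bound mode: one statement, many bind values -- the faster of the two modes.
    cur.executemany(
        "INSERT INTO orders_stage (order_id, status) VALUES (:1, :2)",
        [(1001, "NEW"), (1002, "SHIPPED")],
    )

    # Unbound mode: the values are part of the SQL text, so every statement differs.
    cur.execute("INSERT INTO orders_stage (order_id, status) VALUES (1003, 'NEW')")

    conn.commit()
```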
CodePage
This translation table determines which encoding to use for reading and writing data. The default is ANSI, the standard in the US. The following code pages are available:
ANSI (default)
OEM
0037 US (EBCDIC)
0273 Germany (EBCDIC)
0277 Norway (EBCDIC)
0278 Sweden (EBCDIC)
0280 Italy (EBCDIC)
0284 Spain (EBCDIC)
0285 UK (EBCDIC)
0297 France (EBCDIC)
0437 MSDOS United States
0500 Belgium (EBCDIC)
0850 MSDOS Multilingual (Latin 1)
0860 MSDOS Portuguese
0861 MSDOS Icelandic
0863 MSDOS Canadian French
0865 MSDOS Nordic
1051 Roman-8
Length
These are field lengths in your data. If you need to change field lengths, reset them in the schema.
Caution!  A field length shorter than the default may truncate data.
Supported Data Types
The following data types are supported:
Name
Syntax
Notes
BFILE
N/A
Contains a locator to a large binary file stored outside the database. Enables byte stream I/O access to external LOBs residing on the database server. Maximum size is 4 GB.
BINARY_DOUBLE
N/A
64-bit floating point number. This data type requires 9 bytes, including the length byte.
BINARY_FLOAT
N/A
32-bit floating point number. This data type requires 5 bytes, including the length byte.
BLOB
N/A
A binary large object. Maximum size is (4 GB - 1 byte) * (database block size).
CHAR
(size [BYTE|CHAR])
Fixed-length character data of length size bytes or characters. Maximum size is 2000 bytes or characters. Default and minimum size is 1 byte. BYTE and CHAR have the same semantics as they do for VARCHAR2.
CLOB
N/A
A character large object containing single- or multibyte characters. Both fixed and variable-width character sets are supported, both using the database character set. Maximum size is (4 GB - 1 byte) * (database block size).
DATE / DATETIME
N/A
Valid date range from January 1, 4712 BC to December 31, 9999 AD.
Note:  The DATE data type is referred to as DATETIME in Oracle connectors.
DOUBLE
N/A
The DOUBLE PRECISION data type is a floating-point number with binary precision 126.
FLOAT
FLOAT [(p)]
A subtype of the NUMBER data type having precision p. A FLOAT value is represented internally as NUMBER. The precision p can range from 1 to 126 binary digits. A FLOAT value requires from 1 to 22 bytes.
INTERVAL DAY TO SECOND
(day_precision) (fractional_seconds_precision)
Stores a period of time in days, hours, minutes and seconds, where:
day_precision is the maximum number of digits in the DAY datetime field. Accepted values are 0 to 9. The default is 2.
fractional_seconds_precision is the number of digits in the fractional part of the SECOND field. Accepted values are 0 to 9. The default is 6.
INTERVAL YEAR TO MONTH
(year_precision)
Stores a period of time in years and months, where year_precision is the number of digits in the YEAR datetime field. Accepted values are 0 to 9. The default is 2.
LONG
N/A
Character data of variable length up to 64000 bytes.
LONG RAW
N/A
Raw binary data of variable length up to 2 GB.
LONG VARCHAR
(size)
The LONG VARCHAR type allows storage of character strings with a maximum length of 32,700 characters. It is identical to VARCHAR, except that you cannot specify a maximum length when creating columns of this type.
NCHAR
(size)
Fixed-length character data of length of size characters. Maximum size is determined by the national character set definition, with an upper limit of 2000 bytes. Default and minimum size is 1 character.
NCLOB
N/A
A character large object containing Unicode characters. Both fixed-width and variable-width character sets are supported, both using the database national character set. Maximum size is (4 GB - 1 byte) * (database block size). Stores national character set data.
NUMBER
(p,s)
Number having precision p and scale s. The precision p can range from 1 to 38. The scale s can range from -84 to 127.
NVARCHAR2
(size)
Variable-length character string having maximum length of size characters. Maximum size is determined by the national character set definition. You must specify size for NVARCHAR2.
RAW
(size)
Raw binary data length of size bytes. Maximum size is 2000 bytes. You must specify size for a RAW value.
ROWID
N/A
Base 64 string representing the unique address of a row in its table. This data type is primarily for values returned by the ROWID pseudocolumn.
TIMESTAMP
(fractional_seconds_precision)
Year, month and day values of date, as well as hour, minute and second values of time, where fractional_seconds_precision is the number of digits in the fractional part of the SECOND datetime field. Accepted values of fractional_seconds_precision are 0 to 9. The default is 6.
TIMESTAMP with LOCAL TIME ZONE
(fractional_seconds_precision)
All values of TIMESTAMP with TIME ZONE, with the following exceptions:
Data is normalized to the database time zone when it is stored in the database.
When the data is retrieved, users see the data in the session time zone.
TIMESTAMP with TIME ZONE
(fractional_seconds_precision)
All values of TIMESTAMP as well as TIME ZONE displacement value, where fractional_seconds_precision is the number of digits in the fractional part of the SECOND datetime field. Accepted values are 0 to 9. The default is 6.
UROWID
[(size)]
Base 64 string representing the logical address of a row of an index-organized table. The optional size is the size of a column of type UROWID. The maximum size and default is 4000 bytes.
VARCHAR2
(size[BYTE|CHAR])
Variable-length character string having maximum length of size bytes or characters. Maximum size is 4000 bytes or characters, and minimum is 1 byte or 1 character. You must specify size for VARCHAR2.
BYTE indicates the column has byte length semantics; CHAR indicates the column has character semantics.
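For reference only, the sketch below shows how several of the data types above might be declared with explicit size, precision, and scale. The table, column names, and connection details are invented for illustration and use the python-oracledb driver.

```python
# Reference sketch only: invented table and column names showing explicit size,
# precision, and scale for several of the data types listed above.
import oracledb

ddl = """
CREATE TABLE type_demo (
    id          NUMBER(10, 0),                 -- precision 10, scale 0
    amount      NUMBER(12, 2),                 -- 12 digits, 2 after the decimal point
    name        VARCHAR2(50 CHAR),             -- character (not byte) length semantics
    code        CHAR(3 BYTE),                  -- fixed length of 3 bytes
    created_at  TIMESTAMP(6),                  -- 6 fractional-second digits (the default)
    duration    INTERVAL DAY(2) TO SECOND(6),  -- default day and second precisions
    notes       NCLOB                          -- national character set large object
)
"""

with oracledb.connect(user="scott", password="tiger", dsn="orclpdb1") as conn:
    conn.cursor().execute(ddl)
```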