SQL Server 2019
SQL Server 2019 delivers Big Data Clusters, which provide a complete environment for working with large data sets, including machine learning and AI capabilities for SQL Server. This connector provides additional capabilities and improvements for the SQL Server database engine, SQL Server Analysis Services, SQL Server Machine Learning Services, SQL Server on Linux, and SQL Server Master Data Services.
Prerequisites
Before using the connector, make sure you perform the following:
Install the SQL Server Native Client 11.0 ODBC driver.
Obtain the SQL Server 2019 Server URL, Source Database, User ID, and Password.
Connector Parts
Connector parts are the fields you configure to connect with a data source or target.
The settings that are available depend on the connector you select.
For a list of all parts for source connectors, see Specifying Connector, Parts, and Properties.
For a list of all parts for target connectors, see Specifying Connector, Parts, and Properties.
Connector Properties
You can specify the following source (S) and target (T) properties:
Property
S/T
Description
Encoding
S/T
Type of character encoding to use with source and target files. The default value is OEM.
Shift-JIS encoding is used only in Japanese operating systems.
UCS-2 is no longer considered a valid encoding name, but you may use UCS2. In the data file, change UCS-2 to UCS2.
Note:  This property is not the encoding of the database that you connect to; it is the encoding in which the connector expects to receive the SQL query statements that are sent to the database.
WhereStmt
S
Provides a pass-through mechanism for SQL connectors so that advanced users can construct the Where clause of the SQL query themselves. It can be used as an alternative to writing a lengthy query statement. You may use this property to instruct the SQL database server to filter the data based on a particular condition before sending it to the integration platform. There is no default value for this property. An example follows the note below.
Note:  This property is not applicable when the source connection is a query statement or file. This property enables data filtering when you select a table.
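For example, assuming a hypothetical source table with OrderDate and Status columns, a condition such as the following instructs the database server to filter the records before sending them to the integration platform (the column names and values are illustrative only):
OrderDate >= '2019-01-01' AND Status = 'Shipped'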
SystemTables
S/T
If set to True, this property allows you to see all tables created by the DBA in the database. The system table names appear in the table list. The default value is False.
Note:  This property is applicable only if you have logged in to the database as the Database Administrator (DBA). Only the DBA has access to system tables.
Views
S/T
If set to True, this property allows you to see the view names in the table list along with the table names. The default value is True.
Note:  This property supports only Append and DeleteAndAppend output modes and does not support the Replace output mode.
CursorType
S
Type of cursor to use for retrieving records from the source table. The options available are:
Dynamic
Static
Forward Only
The default value is Forward Only.
Note:  If you select Dynamic or Static, sometimes you may not be able to view the data in the Data Browser. To resolve this, append SELECTLOOPS=N to the DriverOptions property and refresh the connection.
IdentifierQuotes
S
Quoted identifiers are used when parsing SQL statements to distinguish between column names and character data. All databases have quoted identifiers.
In a SQL statement, you must enclose identifiers that contain special characters or that match keywords in identifier quote characters (also known as delimited identifiers in SQL-92). For example, the Accounts Receivable identifier is quoted in the following SELECT statement:
SELECT * FROM "Accounts Receivable"
If you do not use identifier quotes, the parser assumes there are two tables, Accounts and Receivable, and returns a syntax error that they are not separated by a comma.
IdentifierQuotes has the following options:
Default
None
"
MaxDataLength
S
Maximum number of characters to write to a field. It is the maximum data length for long data types. The default value is 1 MB. You can change this value based on the available memory and target requirements.
When this connector requests the column field size for these data types, it checks for a returned value greater than the MaxDataLength value. If the value is greater, then the MaxDataLength value is used.
Some ODBC drivers have maximum data length limitations. If you choose an ODBC source or target connector and the default setting is not 1 MB, the integration platform sets the value for that particular ODBC driver. In this case, do not set the MaxDataLength property to a higher value.
TransactionIsolation
S
Isolation level to use when reading from or writing to a database table with ODBC. The ANSI SQL 2 standard defines three specific ways in which the serializability of a transaction may be violated: P1 (Dirty Read), P2 (Nonrepeatable Read), and P3 (Phantoms). The isolation levels are:
Read uncommitted - Permits P1, P2, and P3.
Read committed - Permits P2 and P3. Does not permit P1.
Repeatable read - Permits P3. Does not permit P1 and P2.
Serializable - Does not permit P1, P2, or P3.
The default value is Serializable.
For further details about TransactionIsolation levels, see the IBM DB2 Universal Database ODBC documentation.
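As a point of reference only, the following T-SQL statement sets the equivalent isolation level directly on a SQL Server session. The connector sets the level through ODBC based on this property, so you do not normally issue this statement yourself:
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;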
ConstraintDDL
T
Additional SQL data definition language statements that must be executed after the target table is created. This is similar to the support provided for SQL pass-through in the SQL import connectors. Each line must be a valid ODBC DDL statement.
For example, you can have the following statements:
CREATE UNIQUE INDEX index1 ON mytable (Field1 ASC)
CREATE INDEX index2 ON mytable (Field2, Field3)
These statements create two indices on the table mytable. The first statement does not allow duplicates, and the index values are stored in ascending order. The second index is a compound index on the Field2 and Field3 fields.
ConstraintDDL is run only when the output mode for the target is Replace. If there are any errors, they are written to the error and event log file. An error during transformation displays the Transformation Error dialog box. You can ignore the DDL errors and continue the transformation.
ConstraintDDL also supports an escaping mechanism that allows you to specify DDL in the native SQL of the DBMS. Any statement preceded by an @ sign is sent directly to the DBMS.
The following is a DDL statement for creating a primary key for the table mytable:
@CREATE INDEX pk_mytable ON mytable (Field1, Field2) WITH PRIMARY
Some ODBC drivers do not support the SQL extensions required to create a primary key with the ODBC variant of the SQL CREATE statement. In these cases, to create primary keys, use native SQL.
 
CommitFrequency
T
Allows you to control how often data is committed to the database when the AutoCommit property is set to False.
The default value is zero, which means the data is committed at the end of the transformation, allowing rollback on error. This is the slowest setting. When performing large transformations, it may not be practical because it can produce too many transaction log entries.
Specifying a nonzero value indicates that data is committed to the database after the specified number of records is inserted or updated.
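As a conceptual sketch only (the connector manages transactions internally through ODBC, and the table and values here are hypothetical), a CommitFrequency of 2 behaves roughly like the following T-SQL, committing after every two records:
BEGIN TRANSACTION;
INSERT INTO mytable (Field1) VALUES ('record 1');
INSERT INTO mytable (Field1) VALUES ('record 2');
COMMIT;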
AutoCommit
T
If set to True, this property automatically commits changes as they are made by each SQL statement, instead of waiting until the end of the transaction.
Also, if this option is set to True, you cannot roll back changes after they are made. It overrides the CommitFrequency value, which means that changes are committed by each SQL statement irrespective of the value set for CommitFrequency.
The default value is False.
BulkOperations
T
Determines whether an insert statement is run for each record or a bulk add is executed. The default value is False, which is the slower setting. If you want to maximize speed and instruct the integration platform to use a bulk add, change this setting to True. Use bulk operations for faster inserts.
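As a rough illustration only (the connector performs this through the ODBC bulk interface rather than by issuing literal SQL, and the table and values here are hypothetical), the difference is comparable to issuing one INSERT per record versus sending many records in a single operation:
-- BulkOperations = False: one statement per record
INSERT INTO mytable (Field1) VALUES ('record 1');
INSERT INTO mytable (Field1) VALUES ('record 2');
-- BulkOperations = True: comparable to sending the records in one operation
INSERT INTO mytable (Field1) VALUES ('record 1'), ('record 2');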
PrimaryKey
T
List of field names that are used to create the primary key. Field names are delimited by commas.
If the PrimaryKey property contains one or more field names, these names are included in the SQL CREATE statement when the connector is in Replace mode.
To use the PrimaryKey property, the ODBC driver must support Integrity Enhancement Facility (IEF). Only advanced ODBC drivers support this.
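For example, assuming PrimaryKey is set to Field1,Field2 and hypothetical column types, the CREATE statement produced in Replace mode is similar in effect to the following (the exact DDL the connector generates may differ):
CREATE TABLE mytable (
Field1 INT NOT NULL,
Field2 INT NOT NULL,
Field3 VARCHAR(50),
PRIMARY KEY (Field1, Field2)
);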
UseCursors
T
If set to True and the specified ODBC driver does not support cursor inserts, the integration platform falls back on the SQL INSERT mode of adding records. The default value is False.
For exports, cursor support is meant to enhance the performance of inserting records. This applies to desktop databases.
For database servers, there is no noticeable change in insert speed. Another complication of cursor inserts is that some drivers require the target table to be indexed; otherwise, positioned updates (cursors) are not allowed. The PrimaryKey and ConstraintDDL properties in the ODBC export connector address this issue.
Supported Data Types
The following data types are supported:
Exact numerics:
bigint
numeric
bit
smallint
decimal
smallmoney
int
tinyint
money
Note:  The bigint identity, decimal() identity, int identity, numeric() identity, smallint identity, and tinyint identity data types are read-only Id numbers and cannot be used as targets.
Approximate numerics:
float
real
Date and time:
date
datetimeoffset
datetime2
smalldatetime
datetime
time
timestamp
Character strings:
char
varchar
text
Unicode character strings:
nchar
nvarchar
ntext
Binary strings:
binary
varbinary
image
Other data types:
uniqueidentifier
sql_variant
xml
sysname
Note:  The timestamp, uniqueidentifier, and sysname data types are read-only and system generated. Hence, they cannot be used as targets.
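For reference, the following T-SQL sketch creates a hypothetical table that uses a sample of the supported data types (the table and column names are illustrative only):
CREATE TABLE sample_types (
id INT NOT NULL,
amount DECIMAL(18, 2),
price SMALLMONEY,
ratio FLOAT,
created_on DATETIME2,
code NCHAR(10),
notes NVARCHAR(255),
payload VARBINARY(100),
row_id UNIQUEIDENTIFIER
);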
Additional Information
The rowversion, timestamp, and sysname data types are automatically generated identifiers and hence cannot be used as target fields.
Note:  The timestamp data type is deprecated.
Identity data types such as bigint identity, decimal identity, int identity, numeric identity, smallint identity, and tinyint identity are automatically generated numbers and cannot be used as target fields.
Last modified date: 02/09/2024