Configuring Spark to Use JDBC
You can use Spark to map tables from other databases using JDBC.
Follow this process to configure Spark to use JDBC:
1. Download the JDBC driver JAR files for your database. Examples:
Vector: iijdbc.jar
Oracle: ojdbc8.jar or later (based on Oracle version)
Zen database: pvjdbc2.jar, pvjdbc2x.jar, and jpscs.jar
2. Set II_SPARKJARS environment variable to the path where the JDBC JARs reside.
Either issue the command ingsetenv II_SPARKJARS "/path/to/jars" or add the line export II_SPARKJARS=/path/to/jars to the .ingXXsh environment file.
3. Source .ingXXsh, where XX is the Vector instance ID.
4. Check that II_SPARKJARS has been updated by using the following command:
echo $II_SPARKJARS
5. Configure JDBC by adding the following line to II_SYSTEM/ingres/files/spark-provider/spark_provider.conf:
spark.jars /opt/user_mount/<jdbc_driver_jar>
Ensure that the driver JAR files are in the folder that is mounted into the container. For example, for Oracle, Vector, or Zen databases:
spark.jars /opt/user_mount/iijdbc.jar,/opt/user_mount/ojdbc8.jar,/opt/user_mount/pvjdbc2.jar,/opt/user_mount/pvjdbc2x.jar,/opt/user_mount/jpscs.jar
Note: The iisuspark -jdbc option for configuring JDBC is no longer supported.
6. Restart the Spark-Vector Provider and ensure the new JAR files have been found. If the JARs are not found, you will see the following errors:
ERROR SparkContext: Jar not found at file:/somepath/iijdbc.jar
ERROR SparkContext: Jar not found at file:/somepath/ojdbc8.jar
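Typing the comma-separated spark.jars value by hand is error prone. As a sketch, the value can be generated from whatever JARs are present in the mounted folder; the /opt/user_mount path below is the example location from step 5, not a requirement:

```shell
# Build a comma-separated spark.jars value from the JARs in the mounted
# folder. JAR_DIR is an assumed example path; adjust it to your mount point.
JAR_DIR=/opt/user_mount
JARS=$(ls "$JAR_DIR"/*.jar 2>/dev/null | paste -sd, -)
echo "spark.jars $JARS"
# Review the output, then append that line to
# $II_SYSTEM/ingres/files/spark-provider/spark_provider.conf
```

This avoids a stale list when drivers are added or upgraded: rerun the sketch after changing the contents of the mounted folder.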
After Spark is configured, you can connect to a database and create an external table using FORMAT='jdbc':
CREATE EXTERNAL TABLE ext_jdbc_hello_ingres
  (id INTEGER NOT NULL,
   txt VARCHAR(20) NOT NULL)
USING SPARK WITH REFERENCE='dummy',
FORMAT='jdbc',
OPTIONS=('url' = '<db_connection_url>',
         'dbtable' = '<table_name>',
         'user' = '<username>',
         'password' = '<password>')
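As an illustration, here is the same statement filled in for a table reached through the iijdbc.jar driver. The host name, database, table, and credentials are hypothetical, and the URL follows the jdbc:ingres://host:port/database format used by the Ingres/Vector JDBC driver; substitute the connection URL format of your own driver:

```sql
CREATE EXTERNAL TABLE ext_jdbc_hello_ingres
  (id INTEGER NOT NULL,
   txt VARCHAR(20) NOT NULL)
USING SPARK WITH REFERENCE='dummy',
FORMAT='jdbc',
OPTIONS=('url' = 'jdbc:ingres://otherhost:VW7/mydb',
         'dbtable' = 'hello_ingres',
         'user' = 'myuser',
         'password' = 'mypassword')
```

Once created, the external table can be queried like any other table, with Spark forwarding the reads over JDBC.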
Last modified date: 04/15/2025