Incremental Data Load
The most efficient way to load subsequent data into Vector incrementally is through incremental bulk data operations: for example, vwload, COPY VWLOAD, COPY, Spark SQL through the Spark-Vector Connector, or the batch interface using ODBC, JDBC, or .NET.
You can also apply single-row changes using regular INSERT, UPDATE, and DELETE statements, but these incremental "batch" operations are less efficient than incremental "bulk" operations.
For more information on batch versus bulk data loads, see the section Methods for Updating Data in the Vector User Guide.
To insert additional data efficiently, you can choose one of the following approaches:
• Use the vwload utility, the COPY VWLOAD statement, or the COPY statement to load from data files and append the data to the table (see the first sketch after this list).
• Use the batch interface to load data directly through an ODBC, JDBC, or .NET based application.
• Use an INSERT AS SELECT statement (INSERT INTO table SELECT columns FROM other_table) to copy rows from another table (see the second sketch after this list).
• Use Spark SQL through the Spark-Vector Connector.
• Use the MERGE statement or the MODIFY...TO COMBINE statement to add the data to the table.
Unless the table has a clustered index and already contains data, the first four approaches efficiently append the data.
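For example, a bulk append with COPY VWLOAD might look like the following minimal sketch. The table name sales, the file paths, and the field delimiter are assumptions; the WITH options available depend on your Vector release.

    -- Append two comma-delimited files to the sales table in one bulk load
    COPY sales() VWLOAD FROM '/data/sales_2024_q1.csv', '/data/sales_2024_q2.csv'
        WITH FDELIM = ',';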
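An INSERT AS SELECT append from a staging table might look like this sketch, where sales and sales_staging are assumed table names with matching column layouts:

    -- Append every row from the staging table to the target table
    INSERT INTO sales
        SELECT * FROM sales_staging;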
The MERGE, MODIFY...TO COMBINE, and INSERT AS SELECT methods all require you to create a staging table. With INSERT AS SELECT, the staging table can be an Ingres table or a Vector table. With MERGE or MODIFY...TO COMBINE, all tables involved in the statement must be Vector tables.
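A merge from a staging table might look like the following sketch. The table names sales and sales_staging, the key column sale_id, and the amount column are assumptions; adjust the join condition and column lists to your schema.

    -- Update existing rows and insert new ones from the staging table
    MERGE INTO sales s
        USING sales_staging st
        ON (s.sale_id = st.sale_id)
    WHEN MATCHED THEN
        UPDATE SET amount = st.amount
    WHEN NOT MATCHED THEN
        INSERT (sale_id, amount) VALUES (st.sale_id, st.amount);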
Last modified date: 06/28/2024