WITH Clause for COPY
Valid WITH clause options are as follows:
CHARSET=characterset
Defines the character set of the data file. COPY INTO transliterates columns, if necessary, when writing to the file; COPY FROM does the same when reading.
Valid values are UTF8 or the installation character set.
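For example, a statement like the following writes the data file in UTF8 (the table, column formats, and file name are illustrative):

    copy table emp (name = char(0)tab, salary = char(0)nl)
        into 'emp.dat'
        with charset = UTF8;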
On_error
To direct copy to continue after encountering conversion errors, specify the on_error option. To direct copy to continue until a specified number of conversion errors have occurred, specify the error_count option (instead of on_error). By default, copy terminates when an error occurs while converting a table row into file format.
When on_error is set to continue, copy displays a warning whenever a conversion error occurs, skips the row that caused the error, and continues processing the remaining rows. At the end of the processing, copy displays a message that indicates how many warnings were issued and how many rows were successfully copied.
Setting on_error to continue does not affect how copy responds to errors other than conversion errors. Any other error, such as an error writing the file, terminates the copy operation.
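For example, to skip rows that cannot be converted and keep loading (table, column formats, and file name are illustrative):

    copy table emp (name = char(0)tab, salary = char(0)nl)
        from 'emp.dat'
        with on_error = continue;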
Error_count
To specify how many errors can occur before processing terminates, use the error_count option. The default error_count is 1. If on_error is set to continue, setting error_count has no effect.
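For example, to stop the copy after ten conversion errors (names are illustrative):

    copy table emp (name = char(0)tab, salary = char(0)nl)
        from 'emp.dat'
        with error_count = 10;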
Log
To store in a file any rows that copy cannot process, specify the with log option. The with log option can be used only if on_error is set to continue. When you specify with log, copy places any rows that it cannot process into the specified log file. The rows in the log file are in the same format as the rows in the database.
Logging works as follows:
Windows, UNIX: Copy opens the log file prior to the start of data transfer. If it cannot open the log file, copy halts. If an error occurs writing to the log file, copy issues a warning, but continues. If the specified log file already exists, it is overwritten with the new values (or truncated if the copy operation encounters no bad rows).
VMS: Copy attempts to open the log file prior to the start of data transfer. If it cannot open the log file, copy halts. If an error occurs writing to the log file, copy issues a warning, but continues. If the log file already exists, copy creates a new version.
If you are copying from a data file that contains duplicate rows (or rows that duplicate rows that are already in the table) to a table that has a unique key, copy displays a warning message and does not add the duplicate rows. Note that, if you specified the with log option, copy does not write the duplicate rows to the log file.
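For example, to skip bad rows and save them for later inspection (the table, file, and log file names are illustrative):

    copy table emp (name = char(0)tab, salary = char(0)nl)
        from 'emp.dat'
        with on_error = continue, log = 'badrows.dat';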
Rollback
To direct the DBMS Server to back out all rows appended by the copy if the copy is terminated due to an error, specify with rollback=enabled. To retain the appended rows, specify with rollback=disabled. The default is with rollback=enabled. When copying to a file, the with rollback clause has no effect.
The rollback=disabled option does not mean that a transaction cannot be rolled back. Database server errors that indicate data corruption still cause rollback, and rows are not committed until the transaction is complete.
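For example, to retain any rows loaded before an error terminates the copy (names are illustrative; the empty column list copies all columns in binary format):

    copy table emp () from 'emp.dat'
        with rollback = disabled;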
Row_estimate
To specify the estimated number of rows to be copied from a file to a table, use the with row_estimate option. The DBMS Server uses the specified value to allocate memory for sorting rows before inserting them into the table. An accurate estimate can enhance the performance of the copy operation.
The estimated number of rows must be no less than 0 and no greater than 2,147,483,647. If you omit this parameter, the default value is 0, in which case the DBMS Server makes its own estimates for disk and memory requirements.
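For example, if the data file is known to hold roughly half a million rows (names and value are illustrative):

    copy table emp () from 'emp.dat'
        with row_estimate = 500000;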
Fillfactor, Minpages, and Maxpages
Fillfactor specifies the percentage (from 1 to 100) of each primary data page that must be filled with rows, under ideal conditions. For example, if you specify a fillfactor of 40, the DBMS Server fills 40% of each of the primary data pages in the restructured table with rows. You can specify this option with the isam, hash, or btree structures. Take care when specifying large fillfactors because a nonuniform distribution of key values can later result in overflow pages and thus degrade access performance for the table.
Minpages specifies the minimum number of primary pages a hash table must have. Maxpages specifies the maximum number of primary pages a hash table can have. Minpages and maxpages must be at least 1. If both minpages and maxpages are specified in a modify statement, minpages must not exceed maxpages.
For best performance, the values that you choose for minpages and maxpages must be a power of 2. If you choose a number other than a power of 2, the DBMS Server can change the number to the nearest power of 2 when the modify executes. If you want to ensure that the number you specify is not changed, set both minpages and maxpages to that number.
Default values for fillfactor, minpages and maxpages are listed in the following table:
Storage Structure     Fillfactor   Minpages   Maxpages
hash                  50           16         no limit
compressed hash       75           1          no limit
isam                  80           n/a        n/a
compressed isam       100          n/a        n/a
btree                 80           n/a        n/a
compressed btree      100          n/a        n/a
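As a sketch, a bulk copy into an empty hash table might combine these options as follows (the table, file name, and values are illustrative):

    copy table emp () from 'emp.dat'
        with fillfactor = 40, minpages = 64, maxpages = 256;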
Leaffill and Nonleaffill Options
For btree tables, the leaffill parameter specifies to the DBMS Server how full to fill the leaf index pages. Leaf index pages are the index pages that are directly above the data pages. Nonleaffill specifies how full to fill the non-leaf index pages. Non-leaf index pages are the pages above the leaf pages. Specify leaffill and nonleaffill as percentages. For example, if you modify a table to btree, specifying nonleaffill=75, each non-leaf index page is 75% full when the modification is complete.
The leaffill and nonleaffill parameters can assist you in controlling locking contention in btree index pages. If you retain some open space on these pages, concurrent users can access the btree with less likelihood of contention while their queries descend the index tree. You must strike a balance between preserving space in index pages and creating a greater number of index pages; more levels of index pages require more I/O to locate a data row.
Default values for leaffill and nonleaffill are 70% and 80%, respectively.
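For example, to leave extra free space in the index pages of a btree table during a bulk copy (names and values are illustrative):

    copy table emp () from 'emp.dat'
        with leaffill = 60, nonleaffill = 75;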
Allocation
To specify the number of pages initially allocated to the table or index, use the with allocation option. By allocating disk space to a table, you can avoid runtime errors that result from running out of disk space.
The number of pages specified must be between 4 and 8,388,607 (the maximum number of pages in a table). If the specified number of pages cannot be allocated, the modify statement is aborted.
You can modify a table to a smaller size. If the table requires more pages than you specify, the table is extended and no data is lost. You can modify a table to a larger size to reserve disk space for the table.
If the table is spread across multiple locations, space is allocated across all locations.
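For example, to preallocate space for a large load (names and value are illustrative):

    copy table emp () from 'emp.dat'
        with allocation = 1000;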
Extend Option
To specify the number of pages by which a table or index grows when it requires more space, use the with extend clause. The number of pages specified must be between 1 and 8,388,607 (the maximum number of pages in a table). By default, tables and indexes are extended by groups of 16 pages. If the specified number of pages cannot be allocated when the table must be extended (for example, during an insert operation), the DBMS Server aborts the statement and issues an error.
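For example, to grow the table in larger increments when it fills during the load (names and values are illustrative):

    copy table emp () from 'emp.dat'
        with allocation = 1000, extend = 100;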
Last modified date: 11/28/2023