INSERT INTO EXTERNAL CSV (X100 Only)
Valid in: SQL, ESQL, OpenAPI, ODBC, JDBC, .NET
The INSERT INTO EXTERNAL CSV statement writes an X100 table to the local file system or to cloud storage. The result is either a single CSV file or a collection of CSV files, depending on whether the query is run in parallel. The number of files produced cannot be specified, but it can be influenced indirectly by setting [engine] max_parallelism_level.
This statement has the following format:
INSERT INTO EXTERNAL CSV 'filename' SELECT ... [WITH options]
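For example, the following statement exports a table to a single local file (the table name mytable is illustrative):
INSERT INTO EXTERNAL CSV '/tmp/mytable.csv' SELECT * FROM mytable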
filename
Specifies the output location, which can be either a local file system path or a cloud storage location (used together with the corresponding AWS, Azure, or Google Cloud Storage credential options described below). If multiple files are created, they use the filename with suffixes in the form '.nnn', where nnn is the file number.
Files are written by the server, so the server must have permissions to write to the specified path.
For security reasons, the server is not allowed to overwrite existing files. The query will end with an error if any of the output files already exist.
SELECT
Specifies a SELECT statement that selects the table data to be written.
WITH options
Specifies optional WITH clause options, separated by commas. Valid options are:
AWS_ACCESS_KEY='my_access_key'
Specifies the access key ID to access your S3 bucket on Amazon Web Services.
AWS_ENDPOINT='aws_endpoint'
Specifies the URL of the entry point to access AWS buckets.
For example:
s3.us-east-2.amazonaws.com
AWS_REGION='aws_region_name'
Specifies the region where the source data resides in your S3 bucket on Amazon Web Services.
Note: AWS_REGION and AWS_ENDPOINT are optional and apply to both types of AWS credentials.
AWS_SECRET_KEY='my_secret_key'
Specifies the secret access key used to access your S3 bucket on Amazon Web Services.
AWS_SESSION_TOKEN='my_session_token'
Specifies the session token used to access your S3 bucket on Amazon Web Services.
For AWS, not all credential options are required. Instead, supply one of the following combinations, either with or without a session token, as shown below and in the example that follows:
with session token:
AWS_ACCESS_KEY='my_access_key'
AWS_SECRET_KEY='my_secret_key'
AWS_SESSION_TOKEN='session_token'
AWS_REGION='us-east-2'
AWS_ENDPOINT='s3.us-east-2.amazonaws.com'
without session token:
AWS_ACCESS_KEY='my_access_key'
AWS_SECRET_KEY='my_secret_key'
AWS_REGION='us-east-2'
AWS_ENDPOINT='s3.us-east-2.amazonaws.com'
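For example, a sketch of an export that passes the without-session-token credential set in the WITH clause (the bucket path, table name, and credential values are placeholders, and the exact cloud path syntax accepted for filename may vary by installation):
INSERT INTO EXTERNAL CSV 's3://my_bucket/exports/mytable.csv'   -- hypothetical bucket path
SELECT * FROM mytable
WITH AWS_ACCESS_KEY='my_access_key',
     AWS_SECRET_KEY='my_secret_key',
     AWS_REGION='us-east-2',
     AWS_ENDPOINT='s3.us-east-2.amazonaws.com'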
AZURE_CLIENT_ENDPOINT='client_endpoint'
Specifies the endpoint URL, containing the Directory (tenant) ID, used to access your Azure Storage account.
For example:
https://login.microsoftonline.com/tenant_id/oauth2/token
AZURE_CLIENT_ID='client_id'
Specifies the Application (client) ID used to access your Azure Storage account.
AZURE_CLIENT_SECRET='client_secret'
Specifies the Client Secret (password) used to access your Azure Storage account.
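For example, a sketch combining the three AZURE_* options above (the storage path, table name, and credential values are placeholders, and the exact Azure path syntax accepted for filename may vary by installation):
INSERT INTO EXTERNAL CSV 'abfs://container@account.dfs.core.windows.net/mytable.csv'   -- hypothetical storage path
SELECT * FROM mytable
WITH AZURE_CLIENT_ENDPOINT='https://login.microsoftonline.com/tenant_id/oauth2/token',
     AZURE_CLIENT_ID='client_id',
     AZURE_CLIENT_SECRET='client_secret'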
FIELD_SEPARATOR='field_separator'
Specifies the character to use to separate fields. The delimiter must be a single character. Default: , (comma)
FLOAT_FORMAT='xwidth.prec'
Specifies the format to export floating point numbers. The format has four different parameters:
x = format specifier:
– f (decimal representation), e.g. 503.42003
– e (scientific notation), e.g. 5.0342003e+03
– g (default): individually chooses the best representation between f and e (i.e. scientific format if there are significant digits after prec zeros in the fraction, otherwise decimal).
Uppercase format letters are also possible; they generate an uppercase exponent letter (e.g. 5.0342003E+03)
width = total string width
"." = decimal point: any other specifier is possible (eg "," for 503,42003)
prec = number of decimal places
Default: 'g79.38'
Note: Very large and very small (in absolute value) numbers may cause rounding errors. This may occur if the exponent is larger than 78 or if the number of decimal places is larger than 14. It is recommended to use smaller formats, e.g. 'g11.3', to avoid such rounding errors.
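For example, to export floating point values in decimal representation with 3 decimal places in an 11-character field (the path and table name are illustrative):
INSERT INTO EXTERNAL CSV '/tmp/measurements.csv' SELECT * FROM measurements
WITH FLOAT_FORMAT='f11.3'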
GCS_EMAIL='name@mail.com'
Specifies the email address of the service account used for Google Cloud Storage access.
GCS_PRIVATE_KEY_ID='private_key_id'
Specifies the ID of the private key for the service account used for Google Cloud Storage access. For information about keeping your credentials safe, see the Security Guide.
GCS_PRIVATE_KEY='-----BEGIN PRIVATE KEY-----\n<long_key>\n-----END PRIVATE KEY-----\n'
Specifies the private key for the service account used for Google Cloud Storage access in PKCS#8 format. The key must include the starting and ending characters: -----BEGIN PRIVATE KEY-----\n and -----END PRIVATE KEY-----\n. For example:
-----BEGIN PRIVATE KEY-----\n
MIIEvAIBADANBgkqhkiG9w0BgSiAgEAAoIBAQDHd1HNub/lfF41 ... jm+177ZXFg+QIyXFCqbMDTAnjYY3
-----END PRIVATE KEY-----\n
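For example, a sketch combining the three GCS_* options above (the gs:// path, table name, and key values are placeholders, and the exact Google Cloud Storage path syntax accepted for filename may vary by installation):
INSERT INTO EXTERNAL CSV 'gs://my_bucket/exports/mytable.csv'   -- hypothetical bucket path
SELECT * FROM mytable
WITH GCS_EMAIL='name@mail.com',
     GCS_PRIVATE_KEY_ID='private_key_id',
     GCS_PRIVATE_KEY='-----BEGIN PRIVATE KEY-----\n<long_key>\n-----END PRIVATE KEY-----\n'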
NULL_MARKER='null_value'
Specifies the text to use for NULL-valued attributes. If the value contains commas or spaces, it must be enclosed in double quotation marks. Default: 'null'
RECORD_SEPARATOR='record_separator'
Specifies the character to use to separate records. The delimiter must be a single character. To specify a control character, use an escape sequence. Default: \n
WORK_DIR='file_directory'
Specifies the directory in which the files are created if the filename is relative. Default: /tmp
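For example, the following sketch writes semicolon-separated records with a custom NULL marker to a relative filename that is resolved under a chosen working directory (the table name, marker, and directory are illustrative):
INSERT INTO EXTERNAL CSV 'mytable.csv' SELECT * FROM mytable
WITH FIELD_SEPARATOR=';',
     NULL_MARKER='N/A',
     WORK_DIR='/data/exports'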