Spark_provider Configuration File
The spark_provider tool can be configured using the spark_info_file located under:
<II_SYSTEM>/ingres/files/spark-provider
vector_provider_debug
Default is false. If activated, the spark_provider tool prints additional information to STDOUT, and the container exposes Java remote debugging at 127.0.0.1:4747.
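For example, to enable debug output, set the parameter in the spark_info_file:

vector_provider_debug = true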
vector_provider_add_host
The container is created adding the FQDN of the host machine to the /etc/hosts file of the Spark container so that it can call back the X100 server. In some cases this might not be the correct FQDN or IP address (for example, if you have multiple network cards), or you may want to avoid the callback. In these cases, you can use this configuration parameter to override the default. Values are no (no additional host is added to the container) or <FQDN>:<IP> (add the FQDN and IP to the hosts file).
Note:  If you are working on a Linux machine and hostname -f returns localhost as the FQDN of your machine, you may need to set this configuration parameter, for example: vector_provider_add_host = host.docker.internal:host-gateway.
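For example, to override the default with an explicit entry (the host name and address shown are placeholders), or to disable the callback entry entirely:

vector_provider_add_host = myhost.example.com:192.168.1.10
vector_provider_add_host = no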
vector_provider_port
Specifies the port at which the container is available. If not specified, a random port is generated and the configuration file is updated with it.
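For example, to pin the container to a fixed port (the port number shown is arbitrary):

vector_provider_port = 28350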
vector_provider_container_user
Creates the Spark container using the --user option to specify the user ID and group ID that should be used to create the container process. The default is not to specify any user, except on Windows when using Podman, in which case 1000:1000 is used. Values are none (no user ID/group ID specified) or <USERID>:<GROUPID> (container runs under the given user ID and group ID).
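For example, to run the container process under an explicit user and group ID (the IDs shown are placeholders), or to suppress the --user option entirely:

vector_provider_container_user = 1000:1000
vector_provider_container_user = none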
Last modified date: 01/28/2026