Property | Description | Default Value
---|---|---
Global Variables | |
sharedDataPath | Path where program data will be stored. This path is not configurable through the installer. | C:/ProgramData/Actian/IntegrationManager (WIN); /etc/opt/Actian/integrationmanager (LINUX)
installPath | Path chosen by the user where the Integration Manager stack will be installed. | C:\\Program Files\\Actian\\IntegrationManager
Server | |
server.port | Integration Manager’s main stack port. | 8080
Job Configuration | |
job.source-bucket | Folder where job-specific information will be stored. This should be a location with ample storage to retain job logs and information. | ${sharedDataPath}
job.source-prefix | Folder within the job source bucket where job files will be stored. | /repository/
job.target-bucket | Folder where archive data about an executed job, such as log files, will be retained. | ${sharedDataPath}/history/job/
job.history.max-age | (days) Aids in purging aged job data. When this property is set, Integration Manager purges job records and data older than this number of days. | 90
job.history.max-job-count | The maximum number of jobs and related job data to retain. Once this threshold is reached, the oldest jobs and their related data are purged. | 1000
Worker Configuration | |
worker.concurrency | Number of concurrent engines the worker will spawn to consume and execute jobs. | 4
worker.destinationId | (For use with an Agent/Remote Worker.) The destination ID that the worker will listen to for pooled jobs. | 1
worker.embedded | (true|false) Tells the stack and worker whether it is using the embedded worker packaged with the stack or a remote Agent/Worker. | true
worker.engineJavaHome | The Java runtime environment that the worker will use. | ${sharedDataPath}/di-standalone-engine/jre
worker.engineType | For a Remote Worker/Agent, either cosmos.v11 or dataflow, denoting the type of engine that Agent supports. | cosmos.v11
worker.workerLocalDir | The directory within the worker's file system where a job's working data will be stored for use by the job. | ${sharedDataPath}/local
worker.libraryPath | Path in which the worker library jars/libraries are installed. | ${installPath}/lib
worker.workerType | (REMOTE|LOCAL) Whether the worker is remote from or local to the stack. | REMOTE
worker.api.port | The port through which communication with the worker is possible. | 5000
Engine Configuration (for this Worker) | |
engine.iniFilePath | Shows the worker where the engine configuration file resides; this file contains all of the Cosmos engine configuration details. | ${sharedDataPath}/conf/cosmos.ini
engine.localEngineInstallPath | Path to the engine executable. | ${sharedDataPath}/di-standalone-engine/runtime/di9
engine.localEngineListenerPort | Port for dedicated communication to the engine within the worker. | 4999
engine.localEngineRemoteJvmArgs | JVM arguments passed in during an execution on the respective engine. | -Xms64m -Xmx1g -XX:PermSize=64m -XX:MaxPermSize=256m -Dsun.client.defaultConnectTimeout=60000 -Dsun.client.defaultReadTimeout=180000
engine.allowAllExecutables | (true|false) Indicates whether any executable can be run, regardless of the white list. | false
engine.executableWhiteList | If allowAllExecutables is set to no, false, or 0, the name(s) of the executables that are allowed. | null
Dataflow Configuration | |
dataflow.enabled | (true|false) Tells the worker whether it is a DataFlow worker and can execute DataFlow jobs. | true
dataflow.licensePath | Path to the DataFlow license. | ${sharedDataPath}/license/df6.slc
dataflow.localEngineInstallPath | Install directory that contains the DataFlow executable (dr.bat). | ${sharedDataPath}/actian-dataflow-6.6.1-17/bin
dataflow.localEngineListenerPort | Port for dedicated communication to the engine within the worker. | 4998
dataflow.localEngineRemoteJvmArgs | JVM arguments passed in during an execution on the respective engine. | -Xms64m -Xmx1g -XX:PermSize=64m -XX:MaxPermSize=256m -Dsun.client.defaultConnectTimeout=60000 -Dsun.client.defaultReadTimeout=180000
dataflow.charset | Character set used by the DataFlow engine. | utf-8
dataflow.strictMode | (disabled|warning|error) | disabled
#dataflow.clusterConf | Specifies the cluster to use for the execution environment. A cluster specification is composed of a host name and IP port number. Execution is local (not clustered) by default. | yarn://datarush-head.datarush.local
#dataflow.engineConf | DataFlow engine configuration properties to be passed in to the DataFlow engine execution. | moduleConfiguration=datarush-hadoop-cdh5,name2=value,name3=value
Notification Service | |
notification.enabled | (true|false) Turns the Job Notification Service on or off. | false
notification.mailFrom | When a notification is sent out from the Job Notification Service, this is the From email address. | someone@someone.com
notification.mailTo | A comma-delimited list of email addresses that job notifications should be sent to when criteria are met. | someto@something.com, someto2@something.com
spring.mail.host | Host used to create a JavaMailSender so that notification emails can be sent. Without this configuration, the Job Notification Service cannot send notification emails. | email-host.com
spring.mail.username | Username used to connect to the email server defined in the spring.mail.host property. | someUsername
spring.mail.password | Password corresponding to the spring.mail.username user. | someSecretPassword
spring.mail.properties.mail.transport.protocol | Sets the transport protocol the email server uses. | smtp
spring.mail.properties.mail.smtp.port | Should SMTP be the chosen transport protocol, the port used by that email server. | 587
spring.mail.properties.mail.smtp.auth | (true|false) Whether to authenticate with the SMTP server. | true
spring.mail.properties.mail.smtp.starttls.enable | (true|false) Used to enable a TLS-protected connection. | true
spring.mail.properties.mail.smtp.starttls.required | (true|false) Require a TLS-protected connection. | true
Messaging Configuration | |
queue.host | The host of the message queuing system. | localhost
queue.port | The port used to communicate with the message queuing system. | 8063
queue.username | Username within the message queuing system that allows the stack to connect to the queuing system. This should be a user with administrative privileges that can create users, queues, and exchanges. | admin
queue.password | Password for the corresponding user(name), for the stack's connection to the queuing system. | admin
queue.connectionTimeout | The connection timeout for the stack's connection to the remote message queuing system. | 20
queue.ssl.enabled | (true|false) Whether the queue connections are secured. | false
queue.ssl.key-store | The key store for the secure connection to the message queuing system. | rabbitmq-security/keycert.p12
queue.ssl.key-store-password | The key store password for the secure connection to the message queuing system. | changeit
queue.ssl.key-store-type | The key store type for the secure connection to the message queuing system. | PKCS12
queue.ssl.protocol | The TLS protocol for the secure connection to the message queuing system. | TLSv1.2
Database Configuration | |
spring.datasource.driver-class-name | JDBC driver class name for the persistence instance. By default this is H2. | org.h2.Driver
spring.datasource.url | JDBC URL string for connecting to the persistence instance. | jdbc:h2:${sharedDataPath}/embedded/data/h2;DB_CLOSE_ON_EXIT=FALSE
spring.datasource.initialization-mode | (always|never) Spring Boot automatically creates the schema of an embedded DataSource. This behavior can be customized using the spring.datasource.initialization-mode property; for instance, set spring.datasource.initialization-mode=always to always initialize the DataSource regardless of its type. | never
spring.datasource.continue-on-error | (true|false) By default, Spring Boot enables the fail-fast feature of the Spring JDBC initializer, so if the scripts cause exceptions, the application fails to start. You can tune that behavior by setting spring.datasource.continue-on-error. | false
spring.h2.console.enabled | (true|false) Whether to enable the H2 console within the stack, giving the user direct access to the database through an embedded GUI. | true
spring.h2.console.path | If the H2 console is enabled, the path to access the console GUI. | /h2
spring.jpa.hibernate.ddl-auto | (none|validate|update|create|create-drop) Spring Boot chooses a default value based on whether it thinks your database is embedded: create-drop if no schema manager has been detected, or none in all other cases. An embedded database is detected by looking at the Connection type; hsqldb, h2, and derby are embedded, others are not. Be careful when switching from an in-memory to a ‘real’ database that you do not make assumptions about the existence of the tables and data in the new platform; either set ddl-auto explicitly or use one of the other mechanisms to initialize the database. | none
#spring.jpa.properties.eclipselink.cache.shared.default | (true|false) Enable or disable the EclipseLink shared cache. | false
#spring.jpa.properties.eclipselink.logging.level.sql | Level at which SQL information from EclipseLink is sent to the log file. | ALL
#spring.jpa.properties.eclipselink.logging.parameters | Defines whether SQL bind parameters are included in exceptions and logs. | true
spring.jpa.show-sql | (true|false) If true, database queries are written to the log file. | false
spring.liquibase.change-log | Location of the Liquibase change log, so that when the Spring Boot application starts it knows the starting point for your databaseChangeLog. | classpath:db.changelog-master.xml
MultipartFiles | |
spring.servlet.multipart.enabled | (true|false) Whether to enable support for multipart file uploads. | true
spring.servlet.multipart.file-size-threshold | Threshold after which an uploaded file is written to disk rather than held in memory. | 100KB
spring.servlet.multipart.location | Temp directory where uploaded files are staged. | ${sharedDataPath}/tmp
spring.servlet.multipart.max-file-size | Maximum file size that can be uploaded through the Stack API. Any larger files will be rejected. | 10MB
spring.servlet.multipart.max-request-size | Maximum request size to the Stack API. Any larger requests will be rejected. | 10MB
Profile Configuration | |
spring.profiles.active | Tells the stack which configuration profiles to load or activate, which in turn activates the proper configurations and components within the stack. | prem,localauth
Quartz Scheduler Configuration | |
spring.quartz.scheduler-factory.startup-delay | (integer, seconds) Number of seconds to delay the start of the internal scheduler that fires off scheduled jobs. Set to 15 by default to give the stack time to start up before firing any scheduled jobs that came due while the stack was offline. | 15
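The properties above are set in the stack's Spring Boot configuration file. As a minimal sketch of overriding the defaults, assuming the default ${sharedDataPath}, the fragment below tightens the job retention policy and raises the upload limits; the specific values are illustrative, not recommendations:

```properties
# Sketch of overrides, assuming the defaults documented above.
# Purge job records older than 30 days, and keep at most 500 jobs.
job.history.max-age=30
job.history.max-job-count=500

# Allow uploads of up to 50 MB through the Stack API.
spring.servlet.multipart.max-file-size=50MB
spring.servlet.multipart.max-request-size=50MB
```

Because max-request-size bounds the whole request, it should be at least as large as max-file-size.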
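Enabling the Job Notification Service requires the notification.* properties plus a working spring.mail.* configuration. A sketch, assuming an SMTP server reachable at smtp.example.com over STARTTLS on port 587 (all hosts, addresses, and credentials here are placeholders):

```properties
# Hypothetical values: replace host, addresses, and credentials with your own.
notification.enabled=true
notification.mailFrom=integration-manager@example.com
notification.mailTo=ops-team@example.com,oncall@example.com

spring.mail.host=smtp.example.com
spring.mail.username=smtp-user
spring.mail.password=someSecretPassword
spring.mail.properties.mail.transport.protocol=smtp
spring.mail.properties.mail.smtp.port=587
spring.mail.properties.mail.smtp.auth=true
spring.mail.properties.mail.smtp.starttls.enable=true
spring.mail.properties.mail.smtp.starttls.required=true
```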
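Similarly, securing the stack's connection to the message queuing system means flipping queue.ssl.enabled and pointing the queue.ssl.* properties at a key store. A sketch using the documented defaults, where the key store path and password are placeholders to replace with your own:

```properties
# Secure the queue connection; key store path and password are placeholders.
queue.ssl.enabled=true
queue.ssl.key-store=rabbitmq-security/keycert.p12
queue.ssl.key-store-password=changeit
queue.ssl.key-store-type=PKCS12
queue.ssl.protocol=TLSv1.2
```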