Job Settings
Job settings also control the configuration of executor JVMs. Unlike per-node settings, however, they are machine-independent and require no specific knowledge of the machine configuration; instead, they may vary between executions of logical graphs.
These settings are read by both the Cluster Manager and clients. On the Cluster Manager, they specify the default values used for any job executed in the cluster. On the client, they can be specified programmatically as options on the ClusterSpecifier; see Overriding Job Settings for an example. Job settings can also be overridden when submitting a job using RushScript; for more information, see Cluster Specifier.
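As a purely illustrative sketch, the following fragment pairs a few of the documented job setting keys with example override values using a standard java.util.Properties object. The override values and the class name are assumptions chosen for the example; how the options are actually applied (through the ClusterSpecifier on the client or through RushScript) is covered in the topics referenced above and is not shown here.

    import java.util.Properties;

    public class JobSettingsSketch {
        public static void main(String[] args) {
            // Documented job setting keys paired with example override values.
            // Comments note the documented defaults for comparison.
            Properties jobSettings = new Properties();
            jobSettings.setProperty("job.minimum.parallelism", "4");            // default: 2
            jobSettings.setProperty("job.partition.resources.memory", "1024");  // MB; default: 512
            jobSettings.setProperty("job.logging.level", "DEBUG");              // default: INFO

            // Print the key/value pairs that would be handed to the engine.
            jobSettings.forEach((key, value) -> System.out.println(key + "=" + value));
        }
    }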
As with daemon logging, executor logging is based on log4j, so logging thresholds must be valid log4j levels.
Each job setting below is listed with its default value and a description.
Property: job.resource.scheduling
Default: FIFO
Description: Specifies how execution of jobs newly submitted to the cluster is handled. If IMMEDIATE is used, jobs are executed immediately, regardless of available resources; current resource usage is still considered for node allocation purposes, however.
If FIFO is used, jobs wait until sufficient resources are available on the cluster before executing, and they execute in the order in which they were submitted.
This property can only be set on the Cluster Manager; it cannot be overridden by the client.
Property: job.resource.usage.control
Default: JVM
Description: The method used to enforce resource limits on job processes. If JVM is used, memory limits are enforced through the maximum heap size, but no limits on CPU usage can be enforced.
If CGROUP is used, job tasks are launched using Linux cgroups to enforce limits on both CPU and memory usage; this is only valid if all nodes support cgroups.
This property can only be set on the Cluster Manager; it cannot be overridden by the client.
Property: job.master.resources.cpu
Default: 1
Description: The number of CPUs required by the job master process.
Property: job.master.resources.memory
Default: 128
Description: The amount of memory, in MB, required by the job master process.
Property: job.master.jvm.arguments
Default: (none)
Description: Additional JVM flags to use when launching the job master for a job. If omitted, no additional arguments are passed.
Do not include the -Xmx or -Xms flags in this set; use the job.master.jvm.memory setting to control those values instead. Nor should this set include the -cp or -classpath flags; the classpath is determined from the client's classpath. If any of these flags are included, they will be ignored.
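For illustration only, job.master.jvm.arguments might supply GC or diagnostic flags such as the standard HotSpot options below; these are examples, not recommendations, and heap-sizing or classpath flags would be ignored as described above.

    job.master.jvm.arguments=-XX:+UseG1GC -verbose:gc -XX:+HeapDumpOnOutOfMemoryError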
Property: job.minimum.parallelism
Default: 2
Description: Specifies the minimum number of partitions the job requires. When a job attempts to execute on a cluster, the requested number of partitions may not be immediately available. If the number of partitions available is at least equal to this threshold, the job executes using a reduced number of partitions; otherwise, it waits until at least this number of partitions can be acquired.
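A minimal sketch of the documented job.minimum.parallelism decision; the class and method names are illustrative assumptions, not the engine's implementation.

    // Model of the documented behavior only; not the engine's code.
    public class MinimumParallelismSketch {
        // Returns the number of partitions the job would run with, or -1 if it must wait.
        static int partitionsToUse(int requested, int available, int minimumParallelism) {
            if (available >= requested) {
                return requested;      // the full requested parallelism is available
            }
            if (available >= minimumParallelism) {
                return available;      // run with a reduced number of partitions
            }
            return -1;                 // wait until at least the minimum can be acquired
        }

        public static void main(String[] args) {
            // Requested 8 partitions, only 4 available, default minimum of 2: run with 4.
            System.out.println(partitionsToUse(8, 4, 2));
        }
    }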
Property: job.partition.resources.cpu
Default: 1
Description: The number of CPUs required for each worker partition of a job.
Property: job.partition.resources.memory
Default: 512
Description: The amount of memory, in MB, required for each worker partition of a job.
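To make the arithmetic concrete, this sketch totals the cluster resources a job would request under the documented defaults for the job master and worker partitions; the parallelism value of 4 is an assumed example.

    // Worked example using the documented defaults; the parallelism of 4 is an assumption.
    public class JobResourceSketch {
        public static void main(String[] args) {
            int parallelism = 4;                            // number of worker partitions (example value)
            int masterCpu = 1, masterMemoryMb = 128;        // job.master.resources.cpu / .memory defaults
            int partitionCpu = 1, partitionMemoryMb = 512;  // job.partition.resources.cpu / .memory defaults

            int totalCpu = masterCpu + parallelism * partitionCpu;
            int totalMemoryMb = masterMemoryMb + parallelism * partitionMemoryMb;

            // With 4 partitions: 5 CPUs and 2176 MB requested across the cluster.
            System.out.println("CPUs: " + totalCpu + ", memory (MB): " + totalMemoryMb);
        }
    }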
Property: job.worker.jvm.arguments
Default: (none)
Description: Additional JVM flags to use when launching the worker processes for a job. If omitted, no additional arguments are passed.
Do not include the -Xmx or -Xms flags in this set; use the job.partition.jvm.memory setting to control those values instead. Nor should this set include the -cp or -classpath flags; the classpath is determined from the client's classpath. If any of these flags are included, they will be ignored.
Property: job.logging.level
Default: INFO
Description: The global logging threshold for the executor.
Property: job.logging.level.<logger>
Default: (none)
Description: If specified, the logging threshold for the named logger. This is equivalent to starting the executor with the logging property log4j.logger.<logger>.
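As an illustration, the following settings keep the global threshold at its default while enabling more detailed output for a single logger; the logger name com.example.MyOperator is hypothetical, and both levels are standard log4j levels.

    job.logging.level=INFO
    job.logging.level.com.example.MyOperator=DEBUG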
Property: job.executor.allocation.strategy
Default: WIDE_FIRST
Description: The strategy used when allocating nodes for a job.
If WIDE_FIRST is selected, pipelines are allocated round-robin across the nodes in the cluster until the client's specified parallelism level has been allocated.
If NARROW_FIRST is selected, all of the cores on one node are allocated before moving to the next node, until either the client's specified level of parallelism is reached or all cores on all nodes have been allocated, whichever comes first. A sketch illustrating both strategies follows this entry.
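The sketch below models the two allocation strategies described above, purely to show how their placements differ; the node names and core counts are assumptions, and this is not the scheduler's implementation.

    import java.util.ArrayList;
    import java.util.List;

    // Models WIDE_FIRST vs. NARROW_FIRST placement as described above; illustrative only.
    public class AllocationStrategySketch {

        // WIDE_FIRST: round-robin across nodes until 'parallelism' slots are placed.
        static List<String> wideFirst(String[] nodes, int parallelism) {
            List<String> placement = new ArrayList<>();
            for (int i = 0; i < parallelism; i++) {
                placement.add(nodes[i % nodes.length]);
            }
            return placement;
        }

        // NARROW_FIRST: fill all cores on each node before moving to the next,
        // stopping at 'parallelism' slots or when every core on every node is used.
        static List<String> narrowFirst(String[] nodes, int coresPerNode, int parallelism) {
            List<String> placement = new ArrayList<>();
            for (String node : nodes) {
                for (int c = 0; c < coresPerNode && placement.size() < parallelism; c++) {
                    placement.add(node);
                }
            }
            return placement;
        }

        public static void main(String[] args) {
            String[] nodes = {"node1", "node2", "node3"};   // hypothetical 3-node cluster
            System.out.println("WIDE_FIRST:   " + wideFirst(nodes, 4));        // node1, node2, node3, node1
            System.out.println("NARROW_FIRST: " + narrowFirst(nodes, 4, 4));   // node1 x 4
        }
    }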
 
Last modified date: 12/09/2024