Option | Default | Description | Applicable Commands |
---|---|---|---|
-cluster.admin.port | 1100 | Specifies the HTTP admin port of Cluster Manager | start-cluster start-history-server
-cluster.port | 1099 | Specifies the port of Cluster Manager | start-cluster start-history-server start-node status stop-cluster stop-history-server stop-node
-cluster.host | localhost | Specifies the host of Cluster Manager | start-node status stop-cluster stop-history-server stop-node
-node.name | The unqualified host name of the current machine (start-node, stop-node) or all nodes (status) | Specifies the node name | start-node status stop-node
-cluster.client.host | Fully qualified host name of the current machine | Specifies the host name to advertise to clients. For multi-homed hosts, this should be set explicitly; otherwise, an arbitrary host name is advertised. | start-cluster start-history-server
--module | None | Specifies the Hadoop module configuration to use. When starting Cluster Manager to interface with YARN, this option is required and must match the installed version of Hadoop. See Hadoop Module Configurations for a list of supported configurations. | start-history-server
-node.client.host | Fully qualified host name of the current machine | Specifies the host name to advertise to clients. For multi-homed hosts, this should be set explicitly; otherwise, an arbitrary host name is advertised. | start-node
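For example, the following sketch shows how these options might be supplied on the command line. The launcher name `clustermgr` is a placeholder for whatever wrapper script your installation provides, and the space-separated name/value option syntax is an assumption; consult your installation for the exact invocation.

```sh
# Placeholder launcher name; substitute your installation's actual script.
# Starts Cluster Manager with an explicit manager port and HTTP admin port
# (the values shown are the documented defaults).
clustermgr start-cluster -cluster.port 1099 -cluster.admin.port 1100
```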
Command | Options | Description |
---|---|---|
status | -cluster.host -node.name -cluster.port | Reports on the status and configuration of the cluster. Without the -node.name option, this displays the names and addresses of all nodes registered with the cluster, as well as their current status. The host and port number of Cluster Manager are also displayed. If node names are specified, this displays the current configuration settings in use by each node identified by the arguments. |
start-cluster | -cluster.admin.port -cluster.port -cluster.client.host | Launches the Cluster Manager daemon and administrative web application in the background. Cluster Manager listens on the port specified by -cluster.port, and the administrative application listens on the HTTP port specified by -cluster.admin.port. The newly started cluster has no nodes registered; it is necessary to start node managers on any machines that are intended to be part of the cluster.
start-history-server | -cluster.admin.port -cluster.port -cluster.client.host --module | Launches the Cluster Manager daemon as a history server within a YARN-based Hadoop cluster and starts the administrative web application in the background. Cluster Manager listens on the port specified by -cluster.port, and the administrative application listens on the HTTP port specified by -cluster.admin.port. The Hadoop configuration must be available to Cluster Manager in /etc/hadoop/conf or in the directory named by the HADOOP_CONF_DIR environment variable.
start-node | -cluster.host -node.name -cluster.port -node.client.host | Launches a node manager daemon in the background. If a name is provided, it is used to identify the node in the cluster; if none is provided, the unqualified host name is used. Node-specific configuration settings are obtained from Cluster Manager upon connecting. The node manager registers with Cluster Manager, making it eligible for use in the execution of distributed graphs. If Cluster Manager cannot be reached, the node manager waits and retries registration a number of times before shutting down. Thus, it is not necessary for Cluster Manager to be running before node managers are started.
stop-cluster | -cluster.host -cluster.port | Shuts down the entire cluster. All registered cluster nodes are shut down first, followed by Cluster Manager. Currently executing graphs in the cluster are not affected; they continue to run to normal termination.
stop-history-server | -cluster.host -cluster.port | Shuts down Cluster Manager in a YARN-based Hadoop environment. |
stop-node | -cluster.host -cluster.port -node.name | Shuts down the specified node. The node is removed from the cluster and is no longer eligible for executing distributed graphs. Other nodes are unaffected and remain eligible for execution. Currently executing graphs using the node are not affected; they continue to run to normal termination.
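The sketch below illustrates a typical cluster lifecycle using these commands. As above, `clustermgr` is a placeholder launcher name, and `manager.example.com` and `worker01` are hypothetical host and node names used only for illustration; the option syntax is an assumption based on the tables in this section.

```sh
# Placeholder launcher name "clustermgr"; substitute your installation's script.

# On the manager host: start Cluster Manager on the default ports.
clustermgr start-cluster -cluster.admin.port 1100 -cluster.port 1099

# On each worker machine: start a node manager and register it with Cluster Manager.
clustermgr start-node -cluster.host manager.example.com -cluster.port 1099 -node.name worker01

# From any machine: list registered nodes and their current status.
clustermgr status -cluster.host manager.example.com -cluster.port 1099

# Remove a single node from the cluster.
clustermgr stop-node -cluster.host manager.example.com -cluster.port 1099 -node.name worker01

# Shut down all remaining nodes, then Cluster Manager itself.
clustermgr stop-cluster -cluster.host manager.example.com -cluster.port 1099
```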