 
Verifying the Installation
The instance should now be up and running on the master node (nothing is yet running on the slave nodes). Verify this is the case with the ingstatus command. Here (and for the rest of the document) the examples use an installation with default settings unless stated otherwise.
# Log in as the installation user, execute the environment script, then ingstatus
su - actian
. ./.ingVHsh
ingstatus
Actian Vector H VH name server (iigcn) - running
Actian Vector H VH recovery server (dmfrcp) - running
Actian Vector H VH DBMS server (iidbms) - 1 running
Actian Vector H VH Actian Vector H server (iix100) - not active
Actian Vector H VH Net server (iigcc) - 1 running
Actian Vector H VH Data Access server (iigcd) - 1 running
Actian Vector H VH RMCMD process (rmcmd) - running
Actian Vector H VH Management server (mgmtsvr) - running
Actian Vector H VH archiver process (dmfacp) - running
If the instance is not running, start it using the ingstart command.
Notice that no execution engine (x100) is running; the execution engine is started upon the first connection to a database, and one x100 process is started for each connected database.
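For example, a minimal sketch of that restart-and-verify sequence might look like the following; filtering ps for x100_server is just one way to confirm that an engine has come up after the first connection, not a step required by the product:
# Restart the instance if ingstatus reported it as not running
ingstart
ingstatus
# After the first connection to a database, one x100 process per connected
# database should be visible
ps -fu actian | grep x100_server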
To verify the installation, check that cluster connectivity is working, and that a database can be created and started. A script, RemoteExec.sh, is provided for the first step (see Test Scripts).
# Log in as the installation user and execute the environment script
su - actian
. ./.ingVHsh
./RemoteExec.sh
For each host, the output should be something similar to:
[Hostname]
Found 8 items
drwxr-xr-x - actian hdfs 0 2015-06-15 20:08 /Actian
drwxrwxrwx - yarn hadoop 0 2015-06-13 18:24 /app-logs
drwxr-xr-x - hdfs hdfs 0 2015-06-13 18:17 /hdp
drwxr-xr-x - hdfs hdfs 0 2015-06-13 18:16 /mr-history
drwxr-xr-x - hdfs hdfs 0 2015-06-13 18:19 /system
drwxrwxrwx - hdfs hdfs 0 2015-06-13 18:20 /tmp
drwxr-xr-x - hdfs hdfs 0 2015-06-13 18:21 /user
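RemoteExec.sh essentially confirms that the master can reach every node over passwordless SSH and that HDFS is visible from each of them. A rough, hand-rolled equivalent of that check might look like the following sketch; the nodes file listing the host names is hypothetical, and the supplied script may differ in detail:
# Assumed node list, one host name per line
for host in $(cat ~/nodes); do
    echo "[$host]"
    ssh "$host" "hadoop fs -ls /"
done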
Lastly, create a database and connect to it with the terminal monitor (sql):
su - actian
. ./.ingVHsh
createdb pocdb
[actian@VectorH-HW1 ~]$ sql pocdb
TERMINAL MONITOR Copyright 2014 Actian Corporation
Actian Vector H Linux Version VH 4.2.3 (a64.lnx/158) login
Tue Apr 5 05:48:47 2016
Enter \g to execute commands, "help help\g" for general help,
"help tm\g" for terminal monitor help, \q to quit
 
continue
* create table test(col1 int);
* insert into test values (1);
* select * from test;
* \g
Executing . . .
 
(1 row)
 
┌─────────────┐
│col1 │
├─────────────┤
│ 1│
└─────────────┘
(1 row)
continue
* drop table test;
* \g
Executing . . .
 
continue
* \q
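The same smoke test can also be run non-interactively by feeding the statements to sql on standard input, which is convenient when scripting the verification; for example:
# Non-interactive version of the smoke test above
sql pocdb <<'EOF'
create table test(col1 int);
insert into test values (1);
select * from test;
\g
drop table test;
\g
EOF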
It can be useful to run top -u actian on each node to observe the database startup. On the master node, top shows something similar to:
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
65226 actian 20 0 2957m 68m 9m S 0.7 1.1 0:39.93 mgmtsvr
45514 actian 20 0 3671m 148m 18m S 0.3 2.5 0:05.24 x100_server
47972 actian 20 0 842m 106m 9832 S 0.3 1.8 0:32.77 iidbms
45445 actian 20 0 11488 1384 1052 S 0.0 0.0 0:00.00 mpirun
45507 actian 20 0 18076 1532 1168 S 0.0 0.0 0:00.01 mpiexec.hydra
45508 actian 20 0 15972 1328 1036 S 0.0 0.0 0:00.00 pmi_proxy
47688 actian 20 0 27384 3096 1564 S 0.0 0.1 0:00.04 iigcn
47873 actian 20 0 444m 42m 33m S 0.0 0.7 0:02.00 iidbms
47949 actian 20 0 75904 4760 2420 S 0.0 0.1 0:00.05 dmfacp
48011 actian 20 0 26844 2036 1032 S 0.0 0.0 0:00.00 iigcc
48038 actian 20 0 126m 3176 1184 S 0.0 0.1 0:00.02 iigcd
48059 actian 20 0 137m 10m 2492 S 0.0 0.2 0:06.90 rmcmd
On the slave nodes, top shows something similar to:
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
25359 actian 20 0 98368 1812 836 S 0.0 0.0 0:00.00 sshd
25360 actian 20 0 17928 1408 1124 S 0.0 0.0 0:00.01 pmi_proxy
25376 actian 20 0 3752m 143m 18m S 0.0 2.4 0:05.65 x100_server
This guide assumes that the pocdb database exists and is running unless stated otherwise.
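A quick way to confirm this before each session is a trivial connection test; connecting with the terminal monitor is enough to start the database's x100 engine if it is not already running. The one-liner below is only a suggested check, not part of the installation itself:
# Connect and immediately quit; a clean exit confirms pocdb is reachable
echo "\q" | sql pocdb && echo "pocdb is up"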