Finding out Keystore and Truststore Passwords on BDA

I am working on a project involving configuring SSL with Cloudera Manager on BDA. There are several ways to do it: go with Oracle’s bdacli approach or use Cloudera’s approach. For BDA-related work, I usually prefer Oracle’s approach because it writes some information to Oracle BDA’s configuration files, which are usually outside the control of Cloudera Manager. Cloudera’s approach definitely works as well, but during a BDA upgrade or patching, if mammoth can’t find the correct values in BDA’s configuration files, it might cause unnecessary trouble. For example, if mammoth thinks certain features are not enabled, it could skip the steps that disable those features before the upgrade. Anyway, that is another, unrelated topic.

Enabling TLS on Cloudera Manager is pretty easy on BDA; instead of going through the many steps stated in Cloudera Manager’s documentation, just run the following command on BDA:
bdacli enable https_cm_hue_oozie

The command automatically enables TLS for all major services on CDH, such as Cloudera Manager, Hue and Oozie. Please note: TLS on the Cloudera Manager agents is automatically enabled during BDA installation. Running this command is usually enough for many clients, as they just need to encrypt the content of the communication with Cloudera Manager. There is a downside to this approach: BDA uses self-signed certificates during the execution of bdacli enable https_cm_hue_oozie. This kind of self-signed certificate is fine for encryption, but it can be annoying because of browser alerts. Therefore some users might prefer to use their own signed SSL certificates.

After I worked with Eric from Oracle Support, he recommended an approach that is actually pretty well documented in Doc ID 2187903.1: How to Use Certificates Signed by a User’s Certificate Authority for Web Consoles and Hadoop Network Encryption Use on the BDA. The key to this approach is to get the keystore’s and truststore’s paths and passwords, create a new keystore and truststore, and then import the customer’s certificates. Unfortunately, this approach only works for BDA version 4.5 and above, so it is not going to work in my current client environment, which is using BDA v4.3. One major issue is that BDA v4.5 and above has the following bdacli commands while BDA v4.3 doesn’t:
bdacli getinfo cluster_https_keystore_password
bdacli getinfo cluster_https_truststore_password
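
For context, once you do have the keystore path and password, the rest of the MOS note’s approach is essentially standard keytool work: generate a key pair, get it signed by your CA, and import the chain. The sketch below is a generic illustration, not the exact procedure from the note; the alias, file names and validity period are made up.

# keytool -genkeypair -alias $(hostname -f) -keyalg RSA -keysize 2048 -validity 730 -keystore /opt/cloudera/security/jks/node.jks
# keytool -certreq -alias $(hostname -f) -file node.csr -keystore /opt/cloudera/security/jks/node.jks
(send node.csr to your CA, get back the signed certificate node.crt and the CA chain ca.crt)
# keytool -importcert -alias ca -file ca.crt -keystore /opt/cloudera/security/jks/node.jks
# keytool -importcert -alias $(hostname -f) -file node.crt -keystore /opt/cloudera/security/jks/node.jks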

Eric then recommended a potential workaround: query the MySQL database directly using the commands below:

use scm;
select * from CONFIGS where ATTR = 'truststore_password' or ATTR = 'keystore_password'; 
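
If you prefer to skip the interactive session, the same lookup can be run in one shot (a sketch; it assumes the Cloudera Manager repository is the scm database on the node running MySQL, node 3 on these racks):

mysql -u root -p scm -e "select ATTR, VALUE from CONFIGS where ATTR in ('keystore_password','truststore_password');"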

I then used two BDAs in our lab for the verification.
First, I tested on our X4 Starter rack.

[root@enkx4bda1node01 ~]# bdacli getinfo cluster_https_keystore_password
Enter the admin user for CM (press enter for admin): 
Enter the admin password for CM: 
******

[root@enkx4bda1node01 ~]# bdacli getinfo cluster_https_truststore_password
Enter the admin user for CM (press enter for admin): 
Enter the admin password for CM: 

Interestingly, the keystore password still shows as ****** while the truststore password is empty. I can understand the empty truststore password, as nothing is configured for the truststore, but the keystore password shouldn’t come back as the hidden value ******.

Query MySQL db on the same rack.

[root@enkx4bda1node03 ~]# mysql -u root -p
Enter password: 
mysql> show databases;
+--------------------+
| Database           |
+--------------------+
| information_schema |
| activity_monitor   |
| hive               |
| host_monitor       |
| hue                |
| mysql              |
| navigator          |
| navigator_metadata |
| oozie              |
| performance_schema |
| reports_manager    |
| resource_manager   |
| scm                |
| sentry_db          |
| service_monitor    |
| studio             |
+--------------------+
16 rows in set (0.00 sec)

mysql> use scm;
Reading table information for completion of table and column names
You can turn off this feature to get a quicker startup with -A
Database changed

mysql> select * from CONFIGS where ATTR = 'truststore_password' or ATTR = 'keystore_password'; 
+-----------+---------+-------------------+--------+------------+---------+---------------------+-------------------------+----------------------+---------+
| CONFIG_ID | ROLE_ID | ATTR              | VALUE  | SERVICE_ID | HOST_ID | CONFIG_CONTAINER_ID | OPTIMISTIC_LOCK_VERSION | ROLE_CONFIG_GROUP_ID | CONTEXT |
+-----------+---------+-------------------+--------+------------+---------+---------------------+-------------------------+----------------------+---------+
|         8 |    NULL | keystore_password | ****** |       NULL |    NULL |                   2 |                       2 |                 NULL | NONE    |
+-----------+---------+-------------------+--------+------------+---------+---------------------+-------------------------+----------------------+---------+
1 row in set (0.00 sec)

The MySQL database also stores the password as ******. I remember my colleague mentioned this BDA has some issues; this could be one of them.

Ok, this rack doesn’t really tell me anything, so I moved to the second BDA, a full rack, and performed the same commands there.

[root@enkbda1node03 ~]# bdacli getinfo cluster_https_keystore_password 
Enter the admin user for CM (press enter for admin): 
Enter the admin password for CM: 
KUSld8yni8PMQcJbltvCnZEr2XG4BgKohAfnW6O02jB3tCP8v1DYlbMO5PqhJCVR

[root@enkbda1node03 ~]# bdacli getinfo cluster_https_truststore_password
Enter the admin user for CM (press enter for admin): 
Enter the admin password for CM: 


[root@enkbda1node03 ~]# mysql -u root -p
Enter password: 
mysql> use scm;
Reading table information for completion of table and column names
You can turn off this feature to get a quicker startup with -A

Database changed
mysql> select * from CONFIGS where ATTR = 'truststore_password' or ATTR = 'keystore_password'; 
+-----------+---------+---------------------+------------------------------------------------------------------+------------+---------+---------------------+-------------------------+----------------------+---------+
| CONFIG_ID | ROLE_ID | ATTR                | VALUE                                                            | SERVICE_ID | HOST_ID | CONFIG_CONTAINER_ID | OPTIMISTIC_LOCK_VERSION | ROLE_CONFIG_GROUP_ID | CONTEXT |
+-----------+---------+---------------------+------------------------------------------------------------------+------------+---------+---------------------+-------------------------+----------------------+---------+
|         7 |    NULL | keystore_password   | KUSld8yni8PMQcJbltvCnZEr2XG4BgKohAfnW6O02jB3tCP8v1DYlbMO5PqhJCVR |       NULL |    NULL |                   2 |                       0 |                 NULL | NULL    |
|       991 |    NULL | truststore_password | NULL                                                             |       NULL |    NULL |                   2 |                       1 |                 NULL | NONE    |
+-----------+---------+---------------------+------------------------------------------------------------------+------------+---------+---------------------+-------------------------+----------------------+---------+
2 rows in set (0.00 sec)

The MySQL database shows the same value as the result from the command bdacli getinfo cluster_https_keystore_password. This is exactly what I want to know. It looks like I can use a MySQL query to get the necessary passwords for my work.

One side note: in case you want to check out those self-signed certificates on BDA, run the following commands. When prompted for the password, just press ENTER.

[root@enkx4bda1node03 ~]# bdacli getinfo cluster_https_keystore_path
Enter the admin user for CM (press enter for admin): 
Enter the admin password for CM: 
/opt/cloudera/security/jks/node.jks

[root@enkx4bda1node03 ~]# keytool -list -v -keystore /opt/cloudera/security/jks/node.jks
Enter keystore password:  

*****************  WARNING WARNING WARNING  *****************
* The integrity of the information stored in your keystore  *
* has NOT been verified!  In order to verify its integrity, *
* you must provide your keystore password.                  *
*****************  WARNING WARNING WARNING  *****************

Keystore type: JKS
Keystore provider: SUN

Your keystore contains 1 entry

Alias name: enkx4bda1node03.enkitec.local
Creation date: Mar 5, 2016
Entry type: PrivateKeyEntry
Certificate chain length: 1
Certificate[1]:
Owner: CN=enkx4bda1node03.enkitec.local, OU=, O=, L=, ST=, C=
Issuer: CN=enkx4bda1node03.enkitec.local, OU=, O=, L=, ST=, C=
Serial number: 427dc79f
Valid from: Sat Mar 05 02:17:45 CST 2016 until: Fri Feb 23 02:17:45 CST 2018
Certificate fingerprints:
	 MD5:  A1:F9:78:EE:D4:C7:C0:D0:65:25:4C:30:09:D8:18:6E
	 SHA1: 8B:E3:7B:5F:76:B1:81:33:35:03:B9:00:97:D0:F7:F9:03:F9:74:C2
	 SHA256: EC:B5:F3:EB:E5:DC:D9:19:DB:2A:D6:3E:71:9C:62:55:10:0A:59:59:E6:98:2C:AD:23:AC:24:48:E4:68:6A:AF
	 Signature algorithm name: SHA256withRSA
	 Version: 3

Extensions: 

#1: ObjectId: 2.5.29.14 Criticality=false
SubjectKeyIdentifier [
KeyIdentifier [
0000: 36 D2 3D 49 AF E2 C6 7A   3C C6 14 D5 4D 64 81 F2  6.=I...z<...Md..
0010: 6E F2 2C B6                                        n.,.
]
]

*******************************************
*******************************************

If you don’t like this kind of default password, you can use the command keytool -storepasswd -keystore /opt/cloudera/security/jks/node.jks to change the password.
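
For example, a minimal sketch of changing both the store password and the private key password (some tools expect the two to match; the alias here is the node entry shown in the listing above):

# keytool -storepasswd -keystore /opt/cloudera/security/jks/node.jks
# keytool -keypasswd -alias enkx4bda1node03.enkitec.local -keystore /opt/cloudera/security/jks/node.jks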

Configurations after CDH Installation

In the last post, I discussed the steps to install a 3-node Hadoop cluster using Cloudera Manager. In the next few posts, I am going to discuss some technologies that are frequently used, such as Hive, Sqoop, Impala and Spark.

There are a few things that need to be configured after the CDH Installation.

1. Configure NTPD. Start the ntpd process on every host. Otherwise, Cloudera Manager could display a health check failure: The host’s NTP service did not respond to a request for the clock offset.
# service ntpd status
# service ntpd start
# chkconfig ntpd on
# chkconfig --list ntpd
# ntpdc -np
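
If ntpd starts but the health check still complains, it is usually because no reachable time server is defined in /etc/ntp.conf. A minimal sketch (the pool hostnames are only examples; use your site’s NTP servers), followed by a restart and a peer check:

server 0.pool.ntp.org iburst
server 1.pool.ntp.org iburst

# service ntpd restart
# ntpq -p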

2. Configure Replication Factor. As my little cluster has only 2 data nodes, I need to reduce the replication factor from the default value of 3 to 2 to avoid the annoying under-replicated blocks type of error. First run the following command to change the replication factor of existing files to 2.

hadoop fs -setrep -R 2 /

Then go to HDFS Configuration and change Replication Factor to 2.
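
To confirm the under-replicated block warnings are gone after both changes, a quick check is an fsck report, which lists the number of under-replicated blocks (run as the hdfs superuser):

# sudo -u hdfs hdfs fsck /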

3. Change message logging level from INFO to WARN. I can not believe how many INFO messages are logged; there is no way I can see a message for more than 3 seconds before it is refreshed away by a flood of INFO messages. In my opinion, the majority of the INFO messages are useless and should not be logged in the first place; they seem more like DEBUG messages to me. So before my little cluster goes crazy logging tons of useless messages, I need to quickly change the logging level from INFO to WARN. Another painful thing is that there are many log files from the various Hadoop components, located in many different places. I feel like I am sitting in a space shuttle cockpit and need to turn off many switches that are not in a central location.
space_shuttle_cockpit
I could track down each logging configuration file and fix the parameters one by one, but that would take time and be too painful. The easiest way I found is to use Cloudera Manager to make the change. Basically, type logging level as the search term; it pops up a long list of components with their logging levels, and you change them one by one. You will not believe how many logging level parameters are in the system. After the change, it’s recommended to restart the cluster as certain parameters are stale.
CM_change_INO_WARN
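
For context, each of those entries ultimately ends up as a log4j threshold in the role’s generated configuration, so a single change in the UI corresponds to something like the following log4j.properties line (a hypothetical illustration only; on a CM-managed cluster you change it through the UI rather than by editing files):

log4j.logger.org.apache.hadoop=WARN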

4. Configure Hue’s superuser and password. From the Cloudera Manager screen, click Hue to open the Hue screen. The weird part about Hue is that there is no pre-set superuser for administration: whoever logs on to Hue first becomes the superuser of Hue. I don’t understand why Hue doesn’t just take whatever user and password Cloudera Manager uses. Anyway, to make my life easier, I just use the same login user and password as for Cloudera Manager, admin.
hue_initial_screen

5. Add new user.
By default the hdfs user is the superuser for HDFS, not the root user. So before doing any work on Hadoop, it is a good idea to create a separate OS user instead of using the hdfs user to execute Hadoop commands. Run the following commands on EVERY host in the cluster.
a. Logon as root user.
b. Create bigdata group.
# groupadd bigdata
# grep bigdata /etc/group

c. Add the new user, wzhou.
# useradd -G bigdata -m wzhou

If the user existed before the bigdata group was created, do the following:
# usermod -a -G bigdata wzhou

d. Change password
# passwd wzhou

e. Verify the user.
# id wzhou

f. Create the user home directory on HDFS.
# sudo -u hdfs hdfs dfs -mkdir /user/wzhou
# sudo -u hdfs hdfs dfs -ls /user

[root@vmhost1 ~]# sudo -u hdfs hdfs dfs -ls /user
Found 8 items
drwxrwxrwx   - mapred hadoop              0 2015-09-15 05:40 /user/history
drwxrwxr-t   - hive   hive                0 2015-09-15 05:44 /user/hive
drwxrwxr-x   - hue    hue                 0 2015-09-15 10:12 /user/hue
drwxrwxr-x   - impala impala              0 2015-09-15 05:46 /user/impala
drwxrwxr-x   - oozie  oozie               0 2015-09-15 05:47 /user/oozie
drwxr-x--x   - spark  spark               0 2015-09-15 05:41 /user/spark
drwxrwxr-x   - sqoop2 sqoop               0 2015-09-15 05:42 /user/sqoop2
drwxr-xr-x   - hdfs   supergroup          0 2015-09-20 11:23 /user/wzhou

g. Change the ownership of the directory.
# sudo -u hdfs hdfs dfs -chown wzhou:bigdata /user/wzhou
# hdfs dfs -ls /user

[root@vmhost1 ~]# sudo -u hdfs hdfs dfs -chown wzhou:bigdata /user/wzhou
[root@vmhost1 ~]# sudo -u hdfs hdfs dfs -ls /user
Found 8 items
drwxrwxrwx   - mapred hadoop           0 2015-09-15 05:40 /user/history
drwxrwxr-t   - hive   hive             0 2015-09-15 05:44 /user/hive
drwxrwxr-x   - hue    hue              0 2015-09-15 10:12 /user/hue
drwxrwxr-x   - impala impala           0 2015-09-15 05:46 /user/impala
drwxrwxr-x   - oozie  oozie            0 2015-09-15 05:47 /user/oozie
drwxr-x--x   - spark  spark            0 2015-09-15 05:41 /user/spark
drwxrwxr-x   - sqoop2 sqoop            0 2015-09-15 05:42 /user/sqoop2
drwxr-xr-x   - wzhou  bigdata          0 2015-09-20 11:23 /user/wzhou

h. Run a sample test.
Logon as wzhou user and verify whether the user can run sample MapReduce job from hadoop-mapreduce-examples.jar.

[wzhou@vmhost1 hadoop-mapreduce]$ hadoop jar /usr/lib/hadoop-mapreduce/hadoop-mapreduce-examples.jar pi 10 1000000
Number of Maps  = 10
Samples per Map = 1000000
Wrote input for Map #0
Wrote input for Map #1
Wrote input for Map #2
Wrote input for Map #3
Wrote input for Map #4
Wrote input for Map #5
Wrote input for Map #6
Wrote input for Map #7
Wrote input for Map #8
Wrote input for Map #9
Starting Job
15/09/20 11:32:28 INFO client.RMProxy: Connecting to ResourceManager at vmhost1.local/192.168.56.71:8032
15/09/20 11:32:29 INFO input.FileInputFormat: Total input paths to process : 10
15/09/20 11:32:29 INFO mapreduce.JobSubmitter: number of splits:10
15/09/20 11:32:29 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1442764085933_0001
15/09/20 11:32:30 INFO impl.YarnClientImpl: Submitted application application_1442764085933_0001
15/09/20 11:32:30 INFO mapreduce.Job: The url to track the job: http://vmhost1.local:8088/proxy/application_1442764085933_0001/
15/09/20 11:32:30 INFO mapreduce.Job: Running job: job_1442764085933_0001
15/09/20 11:32:44 INFO mapreduce.Job: Job job_1442764085933_0001 running in uber mode : false
15/09/20 11:32:44 INFO mapreduce.Job:  map 0% reduce 0%
15/09/20 11:32:55 INFO mapreduce.Job:  map 10% reduce 0%
15/09/20 11:33:03 INFO mapreduce.Job:  map 20% reduce 0%
15/09/20 11:33:11 INFO mapreduce.Job:  map 30% reduce 0%
15/09/20 11:33:18 INFO mapreduce.Job:  map 40% reduce 0%
15/09/20 11:33:26 INFO mapreduce.Job:  map 50% reduce 0%
15/09/20 11:33:34 INFO mapreduce.Job:  map 60% reduce 0%
15/09/20 11:33:42 INFO mapreduce.Job:  map 70% reduce 0%
15/09/20 11:33:50 INFO mapreduce.Job:  map 80% reduce 0%
15/09/20 11:33:58 INFO mapreduce.Job:  map 90% reduce 0%
15/09/20 11:34:06 INFO mapreduce.Job:  map 100% reduce 0%
15/09/20 11:34:14 INFO mapreduce.Job:  map 100% reduce 100%
15/09/20 11:34:14 INFO mapreduce.Job: Job job_1442764085933_0001 completed successfully
15/09/20 11:34:15 INFO mapreduce.Job: Counters: 49
	File System Counters
		FILE: Number of bytes read=124
		FILE: Number of bytes written=1258521
		FILE: Number of read operations=0
		FILE: Number of large read operations=0
		FILE: Number of write operations=0
		HDFS: Number of bytes read=2680
		HDFS: Number of bytes written=215
		HDFS: Number of read operations=43
		HDFS: Number of large read operations=0
		HDFS: Number of write operations=3
	Job Counters 
		Launched map tasks=10
		Launched reduce tasks=1
		Data-local map tasks=10
		Total time spent by all maps in occupied slots (ms)=65668
		Total time spent by all reduces in occupied slots (ms)=6387
		Total time spent by all map tasks (ms)=65668
		Total time spent by all reduce tasks (ms)=6387
		Total vcore-seconds taken by all map tasks=65668
		Total vcore-seconds taken by all reduce tasks=6387
		Total megabyte-seconds taken by all map tasks=67244032
		Total megabyte-seconds taken by all reduce tasks=6540288
	Map-Reduce Framework
		Map input records=10
		Map output records=20
		Map output bytes=180
		Map output materialized bytes=360
		Input split bytes=1500
		Combine input records=0
		Combine output records=0
		Reduce input groups=2
		Reduce shuffle bytes=360
		Reduce input records=20
		Reduce output records=0
		Spilled Records=40
		Shuffled Maps =10
		Failed Shuffles=0
		Merged Map outputs=10
		GC time elapsed (ms)=1026
		CPU time spent (ms)=8090
		Physical memory (bytes) snapshot=3877482496
		Virtual memory (bytes) snapshot=17644212224
		Total committed heap usage (bytes)=3034685440
	Shuffle Errors
		BAD_ID=0
		CONNECTION=0
		IO_ERROR=0
		WRONG_LENGTH=0
		WRONG_MAP=0
		WRONG_REDUCE=0
	File Input Format Counters 
		Bytes Read=1180
	File Output Format Counters 
		Bytes Written=97
Job Finished in 106.368 seconds
Estimated value of Pi is 3.14158440000000000000

To restart all services in the cluster, you can just click the Restart action on the cluster from the Cloudera Manager screen. However, if you want to start/stop a particular service, you need to know the dependencies between the services. Here is the start/stop order for all services on CDH 5 (a scripted alternative is sketched after the stop sequence).

Startup Sequence
1. Cloudera Management service
2. ZooKeeper
3. HDFS
4. Solr
5. Flume
6. Hbase
7. Key-Value Store Indexer
8. MapReduce or YARN
9. Hive
10. Impala
11. Oozie
12. Sqoop
13. Hue

Stop Sequence
1. Hue
2. Sqoop
3. Oozie
4. Impala
5. Hive
6. MapReduce or YARN
7. Key-Value Store Indexer
8. Hbase
9. Flume
10. Solr
11. HDFS
12. ZooKeeper
13. Cloudera Management Service
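
If you would rather script the restart of a single service in that order instead of clicking through the UI, the Cloudera Manager REST API exposes a per-service restart command. A rough sketch (the API version, cluster name, service name and credentials below are assumptions for illustration):

curl -u admin:admin -X POST "http://vmhost1.local:7180/api/v10/clusters/Cluster%201/services/hdfs/commands/restart"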

Ok, we are good here. In the next post, I am going to discuss loading data into Hive.

Install Cloudera Hadoop Cluster using Cloudera Manager

Three years ago I tried to build up a Hadoop cluster using Cloudera Manager. The GUI looked nice, but the installation was a pain and full of issues. I gave up after many failed tries and then went with the manual installation. It worked fine and I have built several clusters since then. After several years working on Oracle Exadata, I went back and retried the Hadoop installation using Cloudera Manager. This time I installed a CDH 5 cluster. The installation experience was much better than three years ago. Not surprisingly, the installation still has some issues and I could easily identify some bugs along the way. But at least I could successfully install a 3-node Hadoop cluster after several tries. The following are my steps during the installation.

First, let me give a little detail about my VM environment. I am using VirtualBox and built three VMs.
vmhost1: This is where the name node, Cloudera Manager and many other roles are located.
vmhost2: Data Node
vmhost3: Data Node

Note: the default replication factor is 3 for hadoop. In my environment, it is under replicated. So I have to adjust replication factor from 3 to 2 after installation, just to get rid of some annoying alerts.

  • OS: Oracle Linux 6.7, 64-bit
  • CPU: 1 CPU initially for all 3 VMs. Then I realized vmhost1 needs a lot of processing power, as the majority of the installation and configuration happens on node 1. I gave vmhost1 2 CPUs. That proved still not enough and vmhost1 tended to freeze after installation. After I bumped it up to 4 CPUs, vmhost1 looks fine. 1 CPU for a data node host is enough.
  • Memory: Initially I gave 3G to all 3 VMs. Then I bumped node 1 up to 5G before installation. That proved still not enough. After bumping it up to 7G on vmhost1, the VM is not freezing anymore. I can see the memory usage is around 6.2G, so the 7G configuration is a good one. After installation, I reduced the data nodes’ memory to 2G to free some memory. If not many jobs are running, the memory usage is less than 1G on a data node. If just testing out the Hadoop configuration, I can further reduce the memory to 1.5G per data node.
  • Network: Although I have 3 network adapters built into each VM, I actually use only two of them. One is configured as Internal Network, and this is what the cluster VMs use to communicate with each other. The other is configured as NAT, just to get an internet connection to download packages from the Cloudera site.
  • Storage: 30G. The actual size after installation is about 10~12G and really depends on how many times you fail and retry the installation. A clean installation uses about 10G of space.
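
For reference, the CPU and memory bumps described above can also be done from the command line while a VM is powered off (a sketch; memory is in MB and the VM names are whatever you used in VirtualBox):

VBoxManage modifyvm "vmhost1" --cpus 4 --memory 7168
VBoxManage modifyvm "vmhost2" --cpus 1 --memory 2048
VBoxManage modifyvm "vmhost3" --cpus 1 --memory 2048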

Pre-Steps Before the Installation

Before doing the installation, make sure configure the following in the VM:
1. Set the SELinux policy to disabled. Modify the following parameter in the /etc/selinux/config file.
SELINUX=disabled
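
The file change only takes effect after a reboot; to put SELinux into permissive mode for the current session as well (fully disabled still requires the reboot):

# setenforce 0
# getenforce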

2. Disable firewall.
chkconfig iptables off
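
chkconfig only affects the next boot; to stop the running firewall right away as well (assuming both the IPv4 and IPv6 services are present on the host):

# service iptables stop
# service ip6tables stop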

3. Set swappiness to 0 in the /etc/sysctl.conf file. The latest Cloudera CDH releases actually recommend changing it to a non-zero value, like 10. But for my little test, I set it to 0 like many people did.
vm.swappiness=0

4. Disable IPV6 in /etc/sysctl.conf file.
net.ipv6.conf.default.disable_ipv6 = 1
net.ipv6.conf.all.disable_ipv6 = 1
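
The sysctl entries in steps 3 and 4 are read at boot; to apply them to the running system immediately:

# sysctl -p /etc/sysctl.conf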

5. Configure passwordless SSH for the root user. This is a common step for Oracle RAC installations, so I will not go through it in detail; a minimal sketch is below.
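
For completeness, a sketch of that setup from vmhost1, run as root and accepting the defaults for the key:

# ssh-keygen -t rsa
# ssh-copy-id root@vmhost1.local
# ssh-copy-id root@vmhost2.local
# ssh-copy-id root@vmhost3.local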

Ok, ready for the installation. Here are the steps.
1. Download and Run the Cloudera Manager Server Installer
Logon as root user on vmhost1. All of the installations are under root user.
Run the following commands.

   
wget http://archive.cloudera.com/cm5/installer/latest/cloudera-manager-installer.bin
chmod u+x cloudera-manager-installer.bin
./cloudera-manager-installer.bin

It pops up the following screen; just click Next or Yes for the rest of the screens.
cdh_install_installer_1

If successful, you will see the following screen.
cdh_install_installer_finish

After clicking Close, it will pop up a browser window pointing to http://localhost:7180/. At this moment, you can click the Finish button on the previous installation GUI and close it. Then move to the browser and patiently wait for your Cloudera Manager to start up. Note: it usually takes several minutes, so be patient.
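
If you would rather check from the shell whether Cloudera Manager is up instead of refreshing the browser, something like this works (the log path is the default for an installer-based setup):

# service cloudera-scm-server status
# tail -f /var/log/cloudera-scm-server/cloudera-scm-server.log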

2. Logon Screen
After the following screen shows up, log on as the admin user and use admin as the password as well.
cdh_install_logon

3. Choose Version
The next screen is to choose which version to use. The default option is Cloudera Enterprise Data Hub Edition Trial, but with a 60-day limit. Although Cloudera Express has no time limit, the Express version misses a lot of features I would like to test out. So I went with the Enterprise 60-day trial version.
cdh_install_version

4. Thank You Screen
Click Continue for the next Thank You screen.
cdh_install_thanks

5. Host Screen
Input vmhost[1-3].local, then click New Search. Note: make sure to use FQDNs. I had a bad experience not using FQDNs in an old version of the CDH installation, and I am not going to waste my time finding out what happens without them.

After the following screen shows up, click New Search and the 3 hosts show up. Then click Continue.
cdh_install_search

6. Select Repository
For the Select Repository screen, the default option is using Parcels. Unfortunately I had an issue using Parcels during the installation: it passed the installation step on all 3 hosts, but got stuck downloading the latest Parcel file. After looking around, it seems the issue was that the default release was the September version, but the latest Parcel was pointing to the old August release; it looked like a version mismatch to me. I am going to try the Parcels option again in the future, but for this installation I switched to the Packages option. I intentionally did not choose the latest CDH 5.4.5 version; I would rather go with a version that has a long lag after it. For example, there is about a one-month gap between CDH 5.4.3 and CDH 5.4.4. If 5.4.3 were not stable, Cloudera would have put out a new release a few days later rather than waiting a month for the next version. So I went with CDH 5.4.3.
Make sure to choose 5.4.3 for Navigator Key Trustee as well.
cdh_install_repos

7. Java Installation
For the Java installation, leave it unchecked as it is by default and click Continue.
cdh_install_jdk

8. Single User
For Enable Single User Mode, I did NOT check Single User Mode, as I want a regular cluster installation.
cdh_install_singleUser

9. SSH Login Credentials
For SSH Login Credentials, input the root password. For Number of Simultaneous Installations, the default value is 10. It created a lot of headaches during my installation. Each host downloads its own copy from the Cloudera website, and as the three VMs were fighting each other for the internet bandwidth on my host machine, a VM could wait several minutes to download the next package. If it waits for more than 30 seconds, Cloudera Manager times out the installation for that host and marks it as failed. I am fine with the timeout, but not happy with the next action: after clicking Retry Failed Hosts, it rolls back the installed packages on that VM and restarts from scratch for the next try. It could take hours before I could reach that point again. The more elegant way to do the installation would be to download once on one host and distribute the packages to the other hosts, and on failure, retry from the failing point. Although the total download is only a few GB per host, the failed retries can easily make it 10GB per host. So I had to set Number of Simultaneous Installations to 1 to limit the installation to one VM at a time and reduce my failure rate.
cdh_install_ssh

10. Installation
The majority of the installation time is spent here if going with the Package option. For the Parcel option, this step is very fast because the majority of the downloads happen on a different screen. The time in this step really depends on the following factors:
1. How fast your internet bandwidth is. The faster, the better.
2. The download speed from the Cloudera site. Although my internet download speed can easily reach 12M per second, my actual download speed from Cloudera varied depending on the time of day. The majority of the time it is around 1~2M per second. Not great, but manageable. Sometimes it drops down to 100K per second; this is when I have a higher chance of seeing the timeout failure and failing the installation. At one point I could not tolerate this, so I woke up at 2am and began my installation process. It was much faster: I could get a 10M per second download speed with about 4~7M on average, and I only saw a few timeout failures on one host.
3. How many times the installation times out and has to retry.

If successful, the following screen shows.
cdh_install_success

11. Detect Version
After the installation succeeds, it shows the version screen.
cdh_install_detectVersion

12. Finish Screen
Finally, I can see this Finish screen. Life is good? Wrong! See my comment in the Cluster Setup step.
cdh_install_finish

13. Cluster Setup
When I reached this step, I knew I was almost done: just a few more steps, less than 30 minutes of work. After a long day, I went for dinner, planning to resume my configuration later. It proved to be the most expensive mistake I made during this installation. After dinner, I went back to the same screen and clicked Continue. It showed a Session Time Out error. Not a big deal, I thought, as the background process should know where I was in the installation. I opened the browser and typed in the url, http://localhost:7180. Guess what: not the Cluster Setup screen, but the screen at step 4. I tried many things and could not find a workaround. Out of ideas, I had to reinstall from step 4. What a pain! Another 7~8 hours of work. In my next installation I did not waste any time on this step and completed it as quickly as possible.

Ok, back to this screen. I want to use both Impala and Spark and could not find a combination that includes these two except All Services. So I chose Custom Services and picked the services mainly from Core with Impala + Spark. Make sure to check Include Cloudera Navigator.
cdh_setup_service

14. Role Assignment
I chose the default, click Continue.
cdh_setup_role

15. Database Setup
Choose the default. Make sure to click Test Connection before clicking Continue.
cdh_setup_database_1
cdh_setup_database_2

16. Review
Click Continue.
cdh_setup_review

17. Completion
It shows the progress during the setup.
cdh_setup_progress

Finally it shows the real completion screen.
cdh_setup_complete

After clicking Finish, you should see a screen similar to the following.
cdh_cm_screen
Life is good right now. The powerful Cloudera Manager has many more nice features than three years ago. It was really worth my effort to go through the installation.
life_is_good