Install Oracle Big Data SQL on Exadata

Oracle Big Data SQL is part of the Oracle Big Data family. It lets users run regular Oracle SQL against data in the Oracle database, Hadoop, and other sources at the same time. Recently I installed the new version of Oracle Big Data SQL (BDSQL), v3.0, on two different systems:

  • An X3-2 Full Rack Big Data Appliance (BDA) connected to an X3-2 Quarter Rack Exadata. This BDA is not Kerberized.
  • An X6-2 Starter Rack BDA connected to an X6-2 Eighth Rack Exadata. This BDA is Kerberized.

Unlike previous BDSQL releases, the installation of this version on both systems was relatively smooth and straightforward. I list the installation steps below. The prerequisites are as follows (a quick way to check both is sketched after the list):


  • The BDA must be at version 4.5 or higher.
  • The April 2016 Proactive Bundle Patch (BP) for Oracle Database must be pre-installed on Exadata.
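
A quick way to check both prerequisites (a sketch; the exact bdacli parameter name is an assumption on my part, and the opatch output needs to be matched against the BP patch number):

# On BDA node 1, as root: print the BDA software version (should be 4.5 or higher)
bdacli getinfo cluster_version

# On an Exadata db node, as oracle: list the patches installed in the database home
$ORACLE_HOME/OPatch/opatch lspatches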

BDSQL must be installed on both the BDA and Exadata. Perform the installation on the BDA first, then on Exadata.

BDA Installation
Run the bdacli command to check whether BDSQL is enabled on the BDA.
[root@enkbda1node01 ~]# bdacli getinfo cluster_big_data_sql_enabled

If not enabled, run the following command:
[root@enkbda1node01 ~]# bdacli enable big_data_sql

After BDSQL is enabled on the BDA, you should be able to see the service in the BDA's Cloudera Manager.
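
If you want to script this check, here is a minimal sketch; it assumes the getinfo call prints "true" when BDSQL is already enabled on the cluster.

# Run on BDA node 1 as root
if [ "$(bdacli getinfo cluster_big_data_sql_enabled)" != "true" ]; then
    bdacli enable big_data_sql
fi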

Exadata Installation

Log on as the root user on Exadata DB node 1.
Step 1: Copy the installation file from the BDA to Exadata

[root@enkx3db01 ~]# mkdir bdsql
[root@enkx3db01 ~]# cd bdsql
[root@enkx3db01 bdsql]# curl -O http://enkbda1node01/bda/
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100 40258  100 40258    0     0  13.6M      0 --:--:-- --:--:-- --:--:-- 38.3M

[root@enkx3db01 bdsql]# ls -l
total 40
-rw-r--r-- 1 root root 40258 Sep 28 07:04

Step 2. Change the ownership of the script and the bdsql directory.
We need to use the oracle user for the installation.

[root@enkx3db01 bdsql]# chown oracle:oinstall
[root@enkx3db01 bdsql]# chmod +x
[root@enkx3db01 bdsql]# ls -l
total 40
-rwxr-xr-x 1 oracle oinstall 40258 Sep 28 07:04
[root@enkx3db01 bdsql]# cd 
[root@enkx3db01 ~]# chown oracle:oinstall bdsql

Step 3. Perform the installation
Log on as the oracle user, not the root user. Set the environment as follows:

export ORACLE_HOME=/u01/app/oracle/product/
export ORACLE_SID=dbm1
export GI_HOME=/u01/app/
export TNS_ADMIN=/u01/app/

echo $GI_HOME

Run the installation script.

[oracle@enkx3db01 ~] cd bdsql
oracle:dbm1@enkx3db01 bdsql > ./
bds-exa-install: setup script started at   : Wed Sep 28 07:23:06 SGT 2016
bds-exa-install: bds version               : bds-2.0-2.el6.x86_64
bds-exa-install: bda cluster name          : enkbda
bds-exa-install: bda web server            : enkbda1node01.enkitec.local
bds-exa-install: cloudera manager url      : enkbda1node03.enkitec.local:7180
bds-exa-install: hive version              : hive-1.1.0-cdh5.7.0
bds-exa-install: hadoop version            : hadoop-2.6.0-cdh5.7.0
bds-exa-install: bds install date          : 09/24/2016 05:48 SGT
bds-exa-install: bd_cell version           : bd_cell-
bds-exa-install: action                    : setup
bds-exa-install: crs                       : true
bds-exa-install: db resource               : dbm
bds-exa-install: database type             : RAC
bds-exa-install: cardinality               : 2
Detected a multi instance database (dbm). Run this script on all instances.
Please read all option of this program (bds-exa-install --help)
This script does extra work on the last instance.  The last instance is determined as
the instance with the largest instance_id number.
press <return>

bds-exa-install: root shell script         : /u01/app/oracle/product/
please run as root:


waiting for root script to complete, press <enter> to continue checking.. q<enter> to quit

Open another session and run the following command as the root user.

[root@enkx3db01 ~]# /u01/app/oracle/product/
enkbda: removing existing entries for from /etc/hosts
enkbda: removing existing entries for from /etc/hosts
enkbda: removing existing entries for from /etc/hosts
enkbda: removing existing entries for from /etc/hosts
enkbda: removing existing entries for from /etc/hosts
enkbda: removing existing entries for from /etc/hosts
enkbda: added entry " enkbda1node01.enkitec.local    enkbda1node01" to /etc/hosts
enkbda: added entry " enkbda1node02.enkitec.local    enkbda1node02" to /etc/hosts
enkbda: added entry " enkbda1node03.enkitec.local    enkbda1node03" to /etc/hosts
enkbda: added entry " enkbda1node04.enkitec.local    enkbda1node04" to /etc/hosts
enkbda: added entry " enkbda1node05.enkitec.local    enkbda1node05" to /etc/hosts
enkbda: added entry " enkbda1node06.enkitec.local    enkbda1node06" to /etc/hosts

After it completes, close the root session and go back to the oracle installation session, then press the ENTER key. The installation continues.

bds-exa-install: root script seem to have succeeded, continuing with setup bds
/bin/mkdir: created directory `/u01/app/oracle/product/'
bds-exa-install: working directory         : /u01/app/oracle/product/
bds-exa-install: downloading JDK
bds-exa-install: working directory         : /u01/app/oracle/product/
bds-exa-install: installing JDK tarball
bds-exa-install: working directory         : /u01/app/oracle/product/
bds-exa-install: Copying JCE policy jars
/bin/mkdir: created directory `default_dir'
/bin/mkdir: created directory `bigdata_config'
/bin/mkdir: created directory `log'
/bin/mkdir: created directory `jlib'
bds-exa-install: working directory         : /u01/app/oracle/product/
bds-exa-install: removing old oracle bds jars if any
bds-exa-install: downloading oracle bds jars
bds-exa-install: installing oracle bds jars
bds-exa-install: working directory         : /u01/app/oracle/product/
bds-exa-install: downloading               : hadoop-2.6.0-cdh5.7.0.tar.gz
bds-exa-install: downloading               : hive-1.1.0-cdh5.7.0.tar.gz
bds-exa-install: unpacking                 : hadoop-2.6.0-cdh5.7.0.tar.gz
bds-exa-install: unpacking                 : hive-1.1.0-cdh5.7.0.tar.gz
bds-exa-install: working directory         : /u01/app/oracle/product/
bds-exa-install: downloading               : cdh-ol6-native.tar.gz
bds-exa-install: creating /u01/app/oracle/product/ for hdfs/mapred client access
bds-exa-install: working directory         : /u01/app/oracle/product/
bds-exa-install: creating bds property files
bds-exa-install: working directory         : /u01/app/oracle/product/
bds-exa-install: created
bds-exa-install: created
bds-exa-install: creating default and cluster directories needed by big data external tables
bds-exa-install: note this will grant default and cluster directories to public!
catcon: ALL catcon-related output will be written to /u01/app/oracle/product/
catcon: See /u01/app/oracle/product/*.log files for output generated by scripts
catcon: See /u01/app/oracle/product/*.lst files for spool files, if any completed successfully
bds-exa-install: granted default and cluster directories to public!
bds-exa-install: mta set to use listener end point : LISTENER
bds-exa-install: mta will be setup
bds-exa-install: creating /u01/app/oracle/product/
bds-exa-install: mta setting agent home as : /u01/app/oracle/product/
bds-exa-install: mta shutdown              : bds_dbm_enkbda
ORA-28593: agent control utility: command terminated with error
bds-exa-install: registering crs resource  : bds_dbm_enkbda
bds-exa-install: skipping crs registration on this instance
bds-exa-install: patching view LOADER_DIR_OBJS
catcon: ALL catcon-related output will be written to /u01/app/oracle/product/
catcon: See /u01/app/oracle/product/*.log files for output generated by scripts
catcon: See /u01/app/oracle/product/*.lst files for spool files, if any completed successfully
bds-exa-install: creating mta dblinks
bds-exa-install: cluster name              : enkbda
bds-exa-install: extproc sid               : bds_dbm_enkbda
bds-exa-install: cdb                       : true
catcon: ALL catcon-related output will be written to /u01/app/oracle/product/
catcon: See /u01/app/oracle/product/*.log files for output generated by scripts
catcon: See /u01/app/oracle/product/*.lst files for spool files, if any completed successfully
catcon: ALL catcon-related output will be written to /u01/app/oracle/product/
catcon: See /u01/app/oracle/product/*.log files for output generated by scripts
catcon: See /u01/app/oracle/product/*.lst files for spool files, if any completed successfully
catcon: ALL catcon-related output will be written to /u01/app/oracle/product/
catcon: See /u01/app/oracle/product/*.log files for output generated by scripts
catcon: See /u01/app/oracle/product/*.lst files for spool files, if any completed successfully
catcon: ALL catcon-related output will be written to /u01/app/oracle/product/
catcon: See /u01/app/oracle/product/*.log files for output generated by scripts
catcon: See /u01/app/oracle/product/*.lst files for spool files, if any completed successfully
catcon: ALL catcon-related output will be written to /u01/app/oracle/product/
catcon: See /u01/app/oracle/product/*.log files for output generated by scripts
catcon: See /u01/app/oracle/product/*.lst files for spool files, if any completed successfully
catcon: ALL catcon-related output will be written to /u01/app/oracle/product/
catcon: See /u01/app/oracle/product/*.log files for output generated by scripts
catcon: See /u01/app/oracle/product/*.lst files for spool files, if any completed successfully
bds-exa-install: setup script completed all steps

Step 4. Install the Kerberos-related packages
If the BDA is not Kerberized, skip this step.

Log on as the root user and copy the krb5.conf file from BDA node 1.

[root@enkx3db01 etc]# cp -p krb5.conf krb5.conf.orig
[root@enkx3db01 etc]# scp root@enkbda1node01:/etc/krb5.conf .
root@enkbda1node01's password:
krb5.conf                                           100%  888     0.9KB/s   00:00 

Download the krb5-workstation-1.10.3-57.el6.x86_64.rpm file and install it with yum on Exadata.

[root@enkx3db01 tmp]# yum localinstall krb5-workstation-1.10.3-57.el6.x86_64.rpm

Step 5: Verify Hadoop Access from Exadata
I used Hive to verify whether Hadoop access works from Exadata. If the BDA environment is not Kerberized, skip the kinit and klist steps.

The hive principal is used for kinit; the oracle@ENKITEC.KDC principal works as well.

[root@enkbda1node01 ~]# kinit hive
Password for hive@ENKITEC.KDC: 
[root@enkbda1node01 ~]# klist
Ticket cache: FILE:/tmp/krb5cc_0
Default principal: hive@ENKITEC.KDC
Valid starting     Expires            Service principal
09/28/16 10:23:28  09/29/16 10:23:28  krbtgt/ENKITEC.KDC@ENKITEC.KDC
	renew until 10/05/16 10:23:28
[root@enkbda1node01 ~]# hive
hive> select count(*) from sample_07;
Query ID = root_20160928102323_f691ee8c-17f7-4ba8-af36-dad452bac458
Total jobs = 1
Launching Job 1 out of 1
Number of reduce tasks determined at compile time: 1
In order to change the average load for a reducer (in bytes):
  set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
  set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
  set mapreduce.job.reduces=<number>
Starting Job = job_1475025273405_0001, Tracking URL = http://enkbda1node04.enkitec.local:8088/proxy/application_1475025273405_0001/
Kill Command = /opt/cloudera/parcels/CDH-5.7.0-1.cdh5.7.0.p1464.1349/lib/hadoop/bin/hadoop job  -kill job_1475025273405_0001
Hadoop job information for Stage-1: number of mappers: 1; number of reducers: 1
2016-09-28 10:24:11,810 Stage-1 map = 0%,  reduce = 0%
2016-09-28 10:24:20,134 Stage-1 map = 100%,  reduce = 0%, Cumulative CPU 1.94 sec
2016-09-28 10:24:25,280 Stage-1 map = 100%,  reduce = 100%, Cumulative CPU 4.27 sec
MapReduce Total cumulative CPU time: 4 seconds 270 msec
Ended Job = job_1475025273405_0001
MapReduce Jobs Launched: 
Stage-Stage-1: Map: 1  Reduce: 1   Cumulative CPU: 4.27 sec   HDFS Read: 52641 HDFS Write: 4 SUCCESS
Total MapReduce CPU Time Spent: 4 seconds 270 msec
Time taken: 28.544 seconds, Fetched: 1 row(s)

hive> select * from sample_07 limit 10;
00-0000	All Occupations	134354250	40690
11-0000	Management occupations	6003930	96150
11-1011	Chief executives	299160	151370
11-1021	General and operations managers	1655410	103780
11-1031	Legislators	61110	33880
11-2011	Advertising and promotions managers	36300	91100
11-2021	Marketing managers	165240	113400
11-2022	Sales managers	322170	106790
11-2031	Public relations managers	47210	97170
11-3011	Administrative services managers	239360	76370
Time taken: 0.054 seconds, Fetched: 10 row(s)
hive> quit;
[root@enkbda1node01 ~]# 

Check Hadoop access from the Oracle database. Run the following SQL in SQL*Plus. You can see that BDSQL is installed under the $ORACLE_HOME/bigdatasql directory.

set lines 150
col OWNER for a10
col DIRECTORY_NAME for a25
col DIRECTORY_PATH for a50
select OWNER, DIRECTORY_NAME, DIRECTORY_PATH  from dba_directories where directory_name like '%BIGDATA%';

OWNER      DIRECTORY_NAME            DIRECTORY_PATH
---------- ------------------------- --------------------------------------------------
SYS        ORA_BIGDATA_CL_enkbda
SYS        ORACLE_BIGDATA_CONFIG     /u01/app/oracle/product/

Create a bds user in the Oracle database. I will use this user to test whether I can query a Hive table from SQL*Plus.

SQL> create user bds identified by bds;
User created.
SQL> grant connect , resource to bds;
Grant succeeded.

Create a few external tables, such as sample_07 and sample_08. Both tables exist in Hive on the BDA.

CREATE TABLE sample_07 (  code VARCHAR2(4000),  description VARCHAR2(4000),
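
For reference, here is a sketch of what the full external table DDL looks like, using the standard ORACLE_HIVE access driver and the column types reported by all_hive_columns further below; the access parameter values are assumptions for this cluster:

CREATE TABLE sample_07 (
  code         VARCHAR2(4000),
  description  VARCHAR2(4000),
  total_emp    NUMBER,
  salary       NUMBER
)
ORGANIZATION EXTERNAL (
  TYPE ORACLE_HIVE
  DEFAULT DIRECTORY DEFAULT_DIR
  ACCESS PARAMETERS (
    com.oracle.bigdata.cluster=enkbda
    com.oracle.bigdata.tablename=default.sample_07
  )
)
REJECT LIMIT UNLIMITED;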

Here are several other useful queries for viewing Hive metadata from SQL*Plus:

set lines 150
set pages 999
col cluster_id for a20
col database_name for a8
col table_name for a15
col location for a20
col owner for a8
col table_type for a8
col input_format for a20
col hive_uri for a25
select cluster_id, database_name, table_name, location,
owner, table_type, input_format, hive_uri
from all_hive_tables
order by owner, table_name;

------ -------- --------------- -------------------- -------- -------- -------------------- -------------------------
enkbda default  sample_07       hdfs://enkbda-ns/use hive     MANAGED_ thrift://enkbda1node0
                                r/hive/warehouse/sam          TABLE    pred.TextInputFormat 4.enkitec.local:9083

col partitioned for a15
SELECT cluster_id, database_name, owner, table_name, partitioned FROM all_hive_tables;

------------ -------- -------- --------------- ---------------
enkbda       default  hive     sample_07       UN-PARTITIONED

col column_name for a20
col hive_column_type for a20
col oracle_column_type for a15
SELECT table_name, column_name, hive_column_type, oracle_column_type 
FROM all_hive_columns;

--------------- -------------------- -------------------- ---------------
sample_07       code                 string               VARCHAR2(4000)
sample_07       description          string               VARCHAR2(4000)
sample_07       total_emp            int                  NUMBER
sample_07       salary               int                  NUMBER

4 rows selected.

OK, let me get some rows from the sample_07 table.
oracle:dbm1@enkx3db01 bdsql > sqlplus bds/bds
SQL> select * from bds.sample_07;
select * from bds.sample_07
ERROR at line 1:
ORA-29913: error in executing ODCIEXTTABLEOPEN callout
ORA-29400: data cartridge error
ORA-28575: unable to open RPC connection to external procedure agent

After some investigation, I realized the above error was caused by a quirky requirement of the BDSQL installation: it must be done on ALL db nodes on Exadata before Hive tables on the BDA can be accessed. Not a big deal. I performed the installation from Step 1 to Step 4 on DB node 2 on Exadata. After it completed, I bounced BDSQL on the BDA using the following commands as the root user:

bdacli stop big_data_sql_cluster
bdacli start big_data_sql_cluster

Step 6: Final Verification
After BDSQL was installed on db node 2 and BDSQL was bounced on the BDA, I was able to query the sample_07 table from SQL*Plus.

SQL> select * from sample_07 where rownum < 3; 
---------- ----------
All Occupations
 134354250	40690

Management occupations
   6003930	96150

SQL> select count(*) from sample_07;


Cool, BDSQL is working on Exadata.

A few more tips
If you stop a database that has BDSQL installed, you will see the following message:
oracle:dbm1@enkx3db01 ~ > srvctl stop database -d dbm
PRCR-1133 : Failed to stop database dbm and its running services
PRCR-1132 : Failed to stop resources using a filter
CRS-2529: Unable to act on 'ora.dbm.db' because that would require stopping or relocating 'bds_dbm_enkbda', but the force option was not specified
CRS-2529: Unable to act on 'ora.dbm.db' because that would require stopping or relocating 'bds_dbm_enkbda', but the force option was not specified

The reason is that the BDSQL installation created a new resource, bds_dbm_enkbda, that runs on all db nodes. We can see the resource with the crsctl status resource command.

oracle:+ASM1@enkx3db01 ~ > crsctl status resource -t
Name           Target  State        Server                   State details       
Local Resources
               ONLINE  ONLINE       enkx3db01          STABLE
               ONLINE  ONLINE       enkx3db02          STABLE
               ONLINE  ONLINE       enkx3db01          STABLE
               ONLINE  ONLINE       enkx3db02          STABLE
               ONLINE  ONLINE       enkx3db01          STABLE
               ONLINE  ONLINE       enkx3db02          STABLE
               ONLINE  ONLINE       enkx3db01          STABLE
               ONLINE  ONLINE       enkx3db02          STABLE
               ONLINE  ONLINE       enkx3db01          Started,STABLE
               ONLINE  ONLINE       enkx3db02          Started,STABLE
               ONLINE  ONLINE       enkx3db01          STABLE
               ONLINE  ONLINE       enkx3db02          STABLE
               ONLINE  ONLINE       enkx3db01          STABLE
               ONLINE  ONLINE       enkx3db02          STABLE
Cluster Resources
      1        ONLINE  ONLINE       enkx3db01          STABLE
      2        ONLINE  ONLINE       enkx3db02          STABLE
      1        ONLINE  ONLINE       enkx3db02          STABLE
      1        ONLINE  ONLINE       enkx3db01          STABLE
      1        ONLINE  ONLINE       enkx3db01          STABLE
      1        ONLINE  ONLINE       enkx3db01 192.1
      1        ONLINE  ONLINE       enkx3db01          STABLE
      1        ONLINE  ONLINE       enkx3db01          Open,STABLE
      2        ONLINE  ONLINE       enkx3db02          Open,STABLE
      1        ONLINE  ONLINE       enkx3db01          STABLE
      1        ONLINE  ONLINE       enkx3db02          STABLE
      1        ONLINE  ONLINE       enkx3db01          Open,STABLE
      1        ONLINE  ONLINE       enkx3db01          STABLE
      1        ONLINE  ONLINE       enkx3db02          STABLE
      1        ONLINE  ONLINE       enkx3db01          STABLE
      1        ONLINE  ONLINE       enkx3db01          STABLE

So to shut down the dbm database, you need to run the crsctl stop resource bds_dbm_enkbda command first.
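
For example (a sketch; crsctl is run from the Grid Infrastructure home as the oracle user):

$GI_HOME/bin/crsctl stop resource bds_dbm_enkbda
srvctl stop database -d dbm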

This resource is actually an agent for BDSQL. Running the ps command shows this agent.

oracle:dbm2@enkx3db02 ~ > ps -ef|grep bds
oracle    21046   1945  0 11:42 pts/1    00:00:00 grep bds
oracle   367388      1  0 11:19 ?        00:00:02 extprocbds_dbm_enkbda -mt

If you run the following lsnrctl command, you should be able to see 30 handlers for bds.
lsnrctl status | grep bds

Build an Oracle Big Data Discovery Project in 5 minutes

Oracle Big Data Discovery (BDD) is a nice visual analytics tool with powerful capabilities that can turn raw data into business insight within minutes. Knowledge of Hadoop or big data is not required. In this blog, I am going to show how to create a simple BDD project and do some interesting analysis within 5 minutes.

Recently there have been some blogs and articles about a potential strong earthquake in California, with titles like "7.0+ earthquake could shake LA … today or tomorrow" and "Is California About to Be Destroyed by a Killer Quake?". It made me wonder what the earthquake activity looked like over the past month. I know there are a lot of websites providing this kind of analysis, but I am still interested in what I can find out from the raw data using Oracle BDD.

First, go to the USGS website and download the data for All Earthquakes for the past 30 days. It is a CSV file containing data from Sep 13 to Oct 12, 2016. Save it as all_earthquake_30days.csv.

Next, go to the BDD login page. I did a fresh install of BDD on our X3 Full Rack Big Data Appliance (BDA) a few days ago, so I log in at https://enkbda1node05:7004/bdd/web/home. By default, BDD is installed on the 5th node of the BDA and uses port 7003 for HTTP and port 7004 for HTTPS.

After logging in, click Add Data Set.

Click the Browse button to find the CSV file just downloaded from the USGS website, then click Next.

You will see the Preview screen. For columns you don't really care about (in this case, the net column), just uncheck the column header and the column data will not show up in the analysis. I unchecked the columns net, id, updated, depthError, horizontalError, magError, magNst, type, status and locationSource, then clicked Next. The nice thing about BDD is that it detects the data and sets the header information, delimiter and other settings automatically.

In the Data Set screen, input the data set name, description, and Hive table name. Then click Create.


After the data is loaded and indexed, the Explore screen shows up. You can see there are 9,100 records in the dataset and 18 attributes (or columns) indexed. Click Add to Project at the top right to create a new project for the data set.

Input the Project name and Description, click Add.

You can add one or more attributes to the scratchpad. For example, I want to add the mag column to the scratchpad.
After adding it to the scratchpad, it shows more detail about the target attribute.

Next, click Transform, then click Convert. Highlight the latitude column, then click Geocode.

Select the latitude column for Latitude and the longitude column for Longitude. Give the new attribute the name location, then click Add to Script, then click Commit to Project.

You will see that the new attribute location is added to the data panel and committed to the project. After it finishes, click Discover at the top.

Drag the Map component from the right to the main panel. A nice map showing the earthquake locations shows up automatically.

I want to know what happened in CA. Click the Search button and input LA.

A nice view of the earthquake activity shows up on the screen.

OK, next I want to filter by the mag column to see only the slightly bigger quakes. I chose mag between 1.8 and 5.2.

You can see the majority of the quakes are smaller than 1.8, with only 832 of the 5,070 results selected.

It seems BDD had an issue recognizing my transformed location column as a geocode column. So I modified the CSV file, added a new column called geocode built as Latitude Longitude, and created a new dataset. This way BDD recognized the new column as geocode-enabled. I added this attribute to the scratchpad.
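
The new column can be added by hand in a spreadsheet, or from the command line; a sketch, assuming latitude and longitude are the 2nd and 3rd columns of the downloaded USGS CSV:

awk -F',' 'NR==1 {print $0",geocode"; next} {print $0","$2" "$3}' \
    all_earthquake_30days.csv > all_earthquake_30days_geocode.csv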

Go to the Discover page and zoom the thematic map in on the US area.

Zoom in further on California.

The graph tells me that the central California area (Mono County) had the most earthquake activity with 732 records, followed by two Southern California counties, Riverside County (569) and Imperial County (428). The data covers only 30 days and definitely cannot tell whether this is normal or not; it would need to be compared with a much longer history. I will leave earthquake forecasting to the experts and won't comment further on the earthquakes themselves. Anyway, this blog just demonstrates how easily we can do data discovery within a few minutes using Oracle BDD. It looks like an impressive tool and I believe it has strong potential in the big data world.

Disabling firewalld after Turning off the Firewall

Many applications require the firewall to be disabled on Linux. The most commonly used commands are as follows:

Stop the ipchains service.
# service ipchains stop
Stop the iptables service.
# service iptables stop
Stop the ipchains service after reboot.
# chkconfig ipchains off
Stop the iptables service after reboot.
# chkconfig iptables off

Another popular one is to set SELINUX=disabled in the /etc/selinux/config file to disable some extra security restrictions.

The above has usually worked fine for me when turning off the firewall. Recently I ran into a situation that made me add an extra check for the firewall. A consultant tried to install Oracle Big Data Discovery on a Red Hat Linux VM and connect it to an Oracle Big Data Appliance (BDA) X6-2 Starter Rack. He used the same approach as above to turn off the firewall and Linux security between this Red Hat VM and the BDA, but still ran into a weird issue: when the BDD application on the BDA nodes tried to pull a request from a web service on the Red Hat VM, the result never came back.

I tried ping and ssh; both worked, so there is connectivity between the two hosts. It still looked like a firewall issue. I checked with the network infrastructure team: there are firewall rules between the two, but they are not enabled yet.

I noticed the OS is Red Hat Linux 7.1. Could it be some new firewall feature in 7.1? After some investigation, yes, it is. On Red Hat 7, the firewall runs as the firewalld daemon. So let me find out what it is doing.

[root@bddhost ~]# firewall-cmd --zone=public --list-services 
dhcpv6-client ssh

[root@bddhost ~]# firewall-cmd --get-default-zone 

[root@bddhost ~]# firewall-cmd --list-all
public (default, active)
  interfaces: eth0 eth2
  services: dhcpv6-client ssh
  masquerade: no
  rich rules:

The above commands show that the firewall allows only the ssh and dhcpv6-client services. No wonder the HTTP web service is not working.

Ok, let me stop it.

[root@bddhost ~]# systemctl stop firewalld
[root@bddhost ~]# firewall-cmd --list-ports
FirewallD is not running

Now wget works from the BDA to the BDD VM.

[root@uat-bda1node01 ~]# wget http://192.168.2113:7003/endeca-server/ws/config?wsdl
--2016-10-03 18:56:29--  http://192.168.2113:7003/endeca-server/ws/config?wsdl
Connecting to 192.168.2113:7003... connected.
HTTP request sent, awaiting response... 200 OK
Length: 2529 (2.5K) [text/xml]
Saving to: “config?wsdl”
100%[============================================>] 2,529       --.-K/s   in 0s
2016-10-03 18:56:29 (456 MB/s) - “config?wsdl” saved [2529/2529]

The above change holds only until the server is rebooted.

[root@bddhost ~]# systemctl status firewalld
firewalld.service - firewalld - dynamic firewall daemon
   Loaded: loaded (/usr/lib/systemd/system/firewalld.service; enabled)
   Active: inactive (dead) since Mon 2016-10-03 18:56:22 SGT; 10min ago
 Main PID: 1016 (code=exited, status=0/SUCCESS)

Sep 30 12:52:35 localhost.localdomain systemd[1]: Started firewalld - dynamic fire....
Sep 30 15:13:09 firewalld[1016]: 2016-09-30 15:13:09 ERR...
Oct 03 18:56:21 bddhost systemd[1]: Stopping firewalld - dynamic firewall dae.....
Oct 03 18:56:22 bddhost systemd[1]: Stopped firewalld - dynamic firewall daemon.
Hint: Some lines were ellipsized, use -l to show in full.

To make the change permanent, do the following:

[root@bddhost ~]# systemctl disable firewalld
rm '/etc/systemd/system/dbus-org.fedoraproject.FirewallD1.service'
rm '/etc/systemd/system/'

[root@bddhost ~]# systemctl status firewalld
firewalld.service - firewalld - dynamic firewall daemon
   Loaded: loaded (/usr/lib/systemd/system/firewalld.service; disabled)
   Active: inactive (dead)

Sep 30 12:52:33 localhost.localdomain systemd[1]: Starting firewalld - dynamic fir....
Sep 30 12:52:35 localhost.localdomain systemd[1]: Started firewalld - dynamic fire....
Sep 30 15:13:09 firewalld[1016]: 2016-09-30 15:13:09 ERR...
Oct 03 18:56:21 bddhost systemd[1]: Stopping firewalld - dynamic firewall dae.....
Oct 03 18:56:22 bddhost systemd[1]: Stopped firewalld - dynamic firewall daemon.
Hint: Some lines were ellipsized, use -l to show in full.
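
As an alternative to disabling firewalld entirely, you can keep it running and open only the port the web service listens on. A sketch, assuming port 7003 as in the wget test above:

[root@bddhost ~]# firewall-cmd --zone=public --add-port=7003/tcp --permanent
[root@bddhost ~]# firewall-cmd --reload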

To learn more about the firewalld daemon, please check out the firewalld documentation.

Data Pump Export to ASM

I usually do Data Pump exports to a file system that is large enough to hold my export files. However, sometimes I don't have any cluster file system available for my export data, but there is tons of free space in my RAC database's ASM. In that case I choose to export to ASM as a fast alternative to my regular approach; ASM is my Swiss Army knife for everything. Here are the detailed steps:

1. Create a Directory in ASM for the dumpfile
Log on as the grid user and run the following command in asmcmd.

mkdir DPDUMP
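
The full asmcmd session looks roughly like this (a sketch, using the +RECO/MYDEMO path referenced in step 2):

$ asmcmd
ASMCMD> cd +RECO/MYDEMO
ASMCMD> mkdir DPDUMP
ASMCMD> exit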

2. Create an Oracle Directory in the database to use the dumpfile
$ sqlplus / as sysdba

create or replace directory ASM_DUMP as '+RECO/MYDEMO/DPDUMP';
grant read,write on directory asm_dump to system;

3. Create a Unix directory for the logfile
$ mkdir /home/wzhou/log

4. Create an Oracle Directory for the logfile

create or replace directory ASMDP_LOGDIR as '/home/wzhou/log';
grant read,write on directory ASMDP_LOGDIR to system;

select * from dba_directories;

5. Create a parameter file for the datapump export
cd /home/wzhou/etc
$ vi expdp_mydemo_schemas.par
Input the following content in the expdp_mydemo_schemas.par file.


6. Run datapump export
$ expdp parfile=expdp_mydemo_schemas.par

Use Toad to Access Hadoop Ecosystem

Some time back I wrote a blog about using SQL Developer to access Hive tables on Hadoop. Recently I noticed another similar product: Toad for Hadoop. So I decided to give it a try.
Like many people, I like Toad products in general and use Toad in many of my projects. Toad for Hadoop is a new product in the Toad family. The current version is Toad for Hadoop 1.3.1 Beta, available for the Windows platform only. The software supports Cloudera CDH 5.x and Hortonworks Data Platform 2.3. The software is free for now, but you need to create an account with Dell before you can download the zip file. The entire process of installation and configuration is pretty simple and straightforward. Here are the steps:

Download the zip file
Go to the Toad for Hadoop page and click the Download button. The zip file is 555 MB in size.

I installed the software in my Windows VM. Just double-click the ToadHadoop_1.3.1_Beta_x64.exe file and take the default values on all of the installation screens. At the end of the installation, it opens the software automatically.

Unlike the regular Toad software with its many buttons, this one looks quite simple.
Click the dropdown box to the right of the Ecosystem box, then click Add New Ecosystem. The Select your Hadoop Cluster setup screen shows up as follows.
Input the name you want for this connection. For this one, I configured the connection for our X3 Big Data Appliance (BDA) full rack cluster with 18 nodes, so I input the Name as Enk-X3-DBA. For Detection Method, you can see it supports Cloudera CDH via Cloudera Manager or Hortonworks HDP via Ambari. For this one, I chose CDH managed by Cloudera Manager as the Detection Method.

The next screen is Enter your Cloudera Manager credentials. For Server Address, use the same URL and port number you use to access your Cloudera Manager. The user name is the one you use to access Cloudera Manager. Make sure you create your user directory on HDFS before you run the software, for example create a folder /user/zhouw and change its permissions so the zhouw user has read/write access (see the sketch below); otherwise you will see a permission exception later on.
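
Creating that HDFS home directory is a one-time step on the cluster; a sketch, assuming the zhouw user mentioned above and a node where the hdfs superuser account is available:

sudo -u hdfs hdfs dfs -mkdir -p /user/zhouw
sudo -u hdfs hdfs dfs -chown zhouw:zhouw /user/zhouw
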
The next screen shows Autodetection. It does many checks and validations, and you should see a successful status for all of them.
The next one shows Ecosystem Configuration. In this screen, I just input zhouw for User Name, then click the Activate button. There is a bug in this version: sometimes both the Activate and Cancel buttons disappear. The workaround is just to close and restart the software.

SQL Screen
The most frequently used screen is the SQL Screen. You can run SQL against either the Hive or Impala engine.
The screen is very similar to the traditional Toad screen I am used to. The left panel shows the schemas and table names, and the bottom panel shows the result. Although it has an Explain Plan tab in the result panel, I consider Explain Plan on Hadoop a joke at this time of writing. You can take a look, but I would not waste time checking out the plan; you will see more issues from other parts of the Hadoop world than from suboptimal query plans. The History panel on the right is an interesting one, which I found very useful later on. It not only shows the timing of my queries (or jobs), but also caches the results from previous runs. It proves to be a smart feature, as I don't have to rerun my queries to get the results back.
Sometimes you might want to check out the DDL for certain tables. You can just right-click the table and select Generate Create Table statement as follows:
Here is an example of the generated DDL.
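
For the sample_07 table used earlier, the generated Hive DDL looks roughly like the following sketch (the exact output from Toad may differ):

CREATE TABLE sample_07 (
  code        string,
  description string,
  total_emp   int,
  salary      int
)
ROW FORMAT DELIMITED FIELDS TERMINATED BY '\t'
STORED AS TEXTFILE;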

HDFS Screen
The HDFS Screen is another feature I really like. It works just like Windows Explorer and shows an HDFS directory and the files under it in a tree structure. It also shows size information for directories and files. With just a few clicks, you can quickly find out which directories and files are taking up a lot of space. The right panel can show some of the content of the files; by default, it shows the first 4K of data. Very convenient, and it saves me the time of typing multiple commands to find out the same kind of information. If you want to download or upload files from/to HDFS, just click the Download and Upload buttons at the top.
Sometimes I am interested in the replication factor and physical locations of certain files on HDFS. Just right-click the file on HDFS, then select Properties.
It shows everything about the file. For sizing information, it shows both a summary and individual block information.

Chart Screen
The Chart Screen also looks nice. It does not have as many charts as Cloudera Manager, but it does have the key information I usually want to know. I list a few of them as follows:


Log Screen
The Log Screen is an OK one. Here are some of the logs:


Transfer Screen
The Transfer Screen is supposed to support transferring data to/from an RDBMS, like Oracle, MySQL and SQL Server. I haven't really tried this one out.

Service Screen
The Service Screen is useful when you want to know where your services are deployed on Hadoop, like the hostname and port number for certain services. It does not have everything, but it is good enough.


In general, Toad for Hadoop is a nice tool that can help you quickly find certain information on Hadoop without going through many screens and commands. I would say this tool is for Hadoop administrators rather than regular Hadoop users, because you probably don't want to give Cloudera Manager access to every user.