Use Python for H2O

I have written several blogs related to H2O:
H2O vs Sparkling Water
Sparkling Water Shell: Cloud size under 12 Exception
Access Sparkling Water via R Studio
Running H2O Cluster in Background and at Specific Port Number
Weird Ref-count mismatch Message from H2O

While both the H2O Flow UI and R Studio can access an H2O cluster started by the Sparkling Water shell, neither offers data-manipulation functionality as powerful as Python's. In this blog, I am going to discuss how to use Python with H2O.

There are two main topics related to Python for H2O: starting an H2O cluster from Python, and accessing an existing H2O cluster from Python.

1. Start H2O Cluster using pysparkling
I discussed starting an H2O cluster with sparkling-shell in the blog Sparkling Water Shell: Cloud size under 12 Exception. For Python, use pysparkling instead.

/opt/sparkling-water-2.2.2/bin/pysparkling \
--master yarn \
--conf spark.ext.h2o.cloud.name=WeidongH2O-Cluster \
--conf spark.ext.h2o.client.port.base=26000 \
--conf spark.executor.instances=6 \
--conf spark.executor.memory=12g \
--conf spark.driver.memory=8g \
--conf spark.yarn.executor.memoryOverhead=4g \
--conf spark.yarn.driver.memoryOverhead=4g \
--conf spark.scheduler.maxRegisteredResourcesWaitingTime=1000000 \
--conf spark.ext.h2o.fail.on.unsupported.spark.param=false \
--conf spark.dynamicAllocation.enabled=false \
--conf spark.sql.autoBroadcastJoinThreshold=-1 \
--conf spark.locality.wait=30000 \
--conf spark.yarn.queue=HighPool \
--conf spark.scheduler.minRegisteredResourcesRatio=1

Then run the following Python commands:

from pysparkling import *
from pyspark import SparkContext
from pyspark.sql import SQLContext
import h2o
hc = H2OContext.getOrCreate(sc)
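
As the sample output below shows, Sparkling Water 2.2 warns that calling H2OContext.getOrCreate with a SparkContext argument is deprecated and that a SparkSession parameter is preferred. A minimal sketch of the preferred form, assuming the spark session that pysparkling creates:

from pysparkling import *

# pysparkling exposes the SparkSession as 'spark'; passing it avoids the deprecation warning
hc = H2OContext.getOrCreate(spark)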

The following is the sample output.

[wzhou@enkbda1node05 sparkling-water-2.2.2]$ /opt/sparkling-water-2.2.2/bin/pysparkling \
> --master yarn \
> --conf spark.ext.h2o.cloud.name=WeidongH2O-Cluster \
> --conf spark.ext.h2o.client.port.base=26000 \
> --conf spark.executor.instances=6 \
> --conf spark.executor.memory=12g \
> --conf spark.driver.memory=8g \
> --conf spark.yarn.executor.memoryOverhead=4g \
> --conf spark.yarn.driver.memoryOverhead=4g \
> --conf spark.scheduler.maxRegisteredResourcesWaitingTime=1000000 \
> --conf spark.ext.h2o.fail.on.unsupported.spark.param=false \
> --conf spark.dynamicAllocation.enabled=false \
> --conf spark.sql.autoBroadcastJoinThreshold=-1 \
> --conf spark.locality.wait=30000 \
> --conf spark.yarn.queue=HighPool \
> --conf spark.scheduler.minRegisteredResourcesRatio=1
Python 2.7.13 |Anaconda 4.4.0 (64-bit)| (default, Dec 20 2016, 23:09:15)
[GCC 4.4.7 20120313 (Red Hat 4.4.7-1)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
Anaconda is brought to you by Continuum Analytics.
Please check out: http://continuum.io/thanks and https://anaconda.org
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/opt/anaconda2/lib/python2.7/site-packages/pyspark/jars/slf4j-log4j12-1.7.16.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/opt/cloudera/parcels/CDH-5.10.1-1.cdh5.10.1.p0.10/jars/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
Setting default log level to "WARN".
To adjust logging level use sc.setLogLevel(newLevel). For SparkR, use setLogLevel(newLevel).
17/12/10 15:00:20 WARN util.Utils: Service 'SparkUI' could not bind on port 4040. Attempting port 4041.
17/12/10 15:00:21 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
17/12/10 15:00:21 WARN shortcircuit.DomainSocketFactory: The short-circuit local reads feature cannot be used because libhadoop cannot be loaded.
17/12/10 15:00:24 WARN yarn.Client: Neither spark.yarn.jars nor spark.yarn.archive is set, falling back to uploading libraries under SPARK_HOME.
Welcome to
      ____              __
     / __/__  ___ _____/ /__
    _\ \/ _ \/ _ `/ __/  '_/
   /___/ .__/\_,_/_/ /_/\_\   version 2.2.0
      /_/

Using Python version 2.7.13 (default, Dec 20 2016 23:09:15)
SparkSession available as 'spark'.
>>> from pysparkling import *
>>> from pyspark import SparkContext
>>> from pyspark.sql import SQLContext
>>> import h2o
>>> hc = H2OContext.getOrCreate(sc)
/opt/sparkling-water-2.2.2/py/build/dist/h2o_pysparkling_2.2-2.2.2.zip/pysparkling/context.py:111: UserWarning: Method H2OContext.getOrCreate with argument of type SparkContext is deprecated and parameter of type SparkSession is preferred.
17/12/10 15:01:45 WARN h2o.H2OContext: Method H2OContext.getOrCreate with an argument of type SparkContext is deprecated and parameter of type SparkSession is preferred.
Connecting to H2O server at http://192.168.10.41:26000... successful.
--------------------------  -------------------------------
H2O cluster uptime:         12 secs
H2O cluster version:        3.14.0.7
H2O cluster version age:    1 month and 21 days
H2O cluster name:           WeidongH2O-Cluster
H2O cluster total nodes:    6
H2O cluster free memory:    63.35 Gb
H2O cluster total cores:    192
H2O cluster allowed cores:  192
H2O cluster status:         accepting new members, healthy
H2O connection url:         http://192.168.10.41:26000
H2O connection proxy:
H2O internal security:      False
H2O API Extensions:         Algos, AutoML, Core V3, Core V4
Python version:             2.7.13 final
--------------------------  -------------------------------

Sparkling Water Context:
 * H2O name: WeidongH2O-Cluster
 * cluster size: 6
 * list of used nodes:
  (executorId, host, port)
  ------------------------
  (3,enkbda1node12.enkitec.com,26000)
  (1,enkbda1node13.enkitec.com,26000)
  (6,enkbda1node10.enkitec.com,26000)
  (5,enkbda1node09.enkitec.com,26000)
  (4,enkbda1node08.enkitec.com,26000)
  (2,enkbda1node11.enkitec.com,26000)
  ------------------------

  Open H2O Flow in browser: http://192.168.10.41:26000 (CMD + click in Mac OSX)


>>>
>>> h2o
<module 'h2o' from '/opt/sparkling-water-2.2.2/py/build/dist/h2o_pysparkling_2.2-2.2.2.zip/h2o/__init__.py'>
>>> h2o.cluster_status()
[WARNING] in <stdin> line 1:
    >>> ????
        ^^^^ Deprecated, use ``h2o.cluster().show_status(True)``.
--------------------------  -------------------------------
H2O cluster uptime:         14 secs
H2O cluster version:        3.14.0.7
H2O cluster version age:    1 month and 21 days
H2O cluster name:           WeidongH2O-Cluster
H2O cluster total nodes:    6
H2O cluster free memory:    63.35 Gb
H2O cluster total cores:    192
H2O cluster allowed cores:  192
H2O cluster status:         locked, healthy
H2O connection url:         http://192.168.10.41:26000
H2O connection proxy:
H2O internal security:      False
H2O API Extensions:         Algos, AutoML, Core V3, Core V4
Python version:             2.7.13 final
--------------------------  -------------------------------
Nodes info:     Node 1                                         Node 2                                         Node 3                                         Node 4                                         Node 5                                         Node 6
--------------  ---------------------------------------------  ---------------------------------------------  ---------------------------------------------  ---------------------------------------------  ---------------------------------------------  ---------------------------------------------
h2o             enkbda1node08.enkitec.com/192.168.10.44:26000  enkbda1node09.enkitec.com/192.168.10.45:26000  enkbda1node10.enkitec.com/192.168.10.46:26000  enkbda1node11.enkitec.com/192.168.10.47:26000  enkbda1node12.enkitec.com/192.168.10.48:26000  enkbda1node13.enkitec.com/192.168.10.49:26000
healthy         True                                           True                                           True                                           True                                           True                                           True
last_ping       1513108921427                                  1513108920821                                  1513108920602                                  1513108920876                                  1513108920718                                  1513108921149
num_cpus        32                                             32                                             32                                             32                                             32                                             32
sys_load        1.24                                           1.38                                           0.42                                           1.08                                           0.75                                           0.83
mem_value_size  0                                              0                                              0                                              0                                              0                                              0
free_mem        11339923456                                    11337239552                                    11339133952                                    11332368384                                    11331912704                                    11338491904
pojo_mem        113671168                                      116355072                                      114460672                                      121226240                                      121681920                                      115102720
swap_mem        0                                              0                                              0                                              0                                              0                                              0
free_disk       422764871680                                   389836439552                                   422211223552                                   418693251072                                   422237437952                                   422187106304
max_disk        491885953024                                   491885953024                                   491885953024                                   491885953024                                   491885953024                                   491885953024
pid             1172                                           801                                            15879                                          17866                                          28980                                          30818
num_keys        0                                              0                                              0                                              0                                              0                                              0
tcps_active     0                                              0                                              0                                              0                                              0                                              0
open_fds        441                                            441                                            441                                            441                                            441                                            441
rpcs_active     0                                              0                                              0                                              0                                              0                                              0

2. Accessing an Existing H2O Cluster from an Edge Node
Accessing an existing H2O cluster from an edge node also requires the pysparkling command, but you don't have to specify all of the parameters used above to start the cluster. Running it without any parameters is fine for client purposes.

First, start pysparkling by running the /opt/sparkling-water-2.2.2/bin/pysparkling command.

Once at the Python prompt, run the following to connect to the existing cluster. The key call is h2o.init(ip="enkbda1node05.enkitec.com", port=26000), which connects to the existing H2O cluster.

from pysparkling import *
from pyspark import SparkContext
from pyspark.sql import SQLContext
import h2o

h2o.init(ip="enkbda1node05.enkitec.com", port=26000)
h2o.cluster_status()
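
Two notes on this snippet, based on the warnings visible in the sample output below. First, h2o.init() with an ip warns about "falling back to local" and suggests h2o.connect(), the call intended for attaching to a remote cluster. Second, h2o.cluster_status() is flagged as deprecated in favor of h2o.cluster().show_status(True). A minimal sketch of the equivalent calls:

import h2o

# attach to the running remote cluster without starting a local one
h2o.connect(ip="enkbda1node05.enkitec.com", port=26000)

# non-deprecated replacement for h2o.cluster_status()
h2o.cluster().show_status(True)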

Then run some test code:

print("Importing hdfs data")
stock_data = h2o.import_file("hdfs://ENKBDA1-ns/user/wzhou/work/test/stock_price.txt")
stock_data

print("Spliting data")
train,test = stock_data.split_frame(ratios=[0.9])

h2o.ls()
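
Note that split_frame(ratios=[0.9]) performs a random 90/10 split, so the resulting row counts can vary between runs. If reproducibility matters, split_frame also accepts a seed; a minimal sketch (the seed value is arbitrary):

train, test = stock_data.split_frame(ratios=[0.9], seed=1234)
print(train.nrow, test.nrow)  # row counts of the 90% and 10% splits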

The following is the sample output.

[wzhou@ham-lnx-vs-0086 ~]$ /opt/sparkling-water-2.2.2/bin/pysparkling
Python 2.7.13 |Anaconda 4.4.0 (64-bit)| (default, Dec 20 2016, 23:09:15)
[GCC 4.4.7 20120313 (Red Hat 4.4.7-1)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
Anaconda is brought to you by Continuum Analytics.
Please check out: http://continuum.io/thanks and https://anaconda.org
Setting default log level to "WARN".
To adjust logging level use sc.setLogLevel(newLevel). For SparkR, use setLogLevel(newLevel).
Welcome to
      ____              __
     / __/__  ___ _____/ /__
    _\ \/ _ \/ _ `/ __/  '_/
   /__ / .__/\_,_/_/ /_/\_\   version 2.2.0.cloudera1
      /_/

Using Python version 2.7.13 (default, Dec 20 2016 23:09:15)
SparkSession available as 'spark'.
>>> from pysparkling import *
>>> from pyspark import SparkContext
>>> from pyspark.sql import SQLContext
>>> import h2o
>>>
>>> h2o.init(ip="enkbda1node05.enkitec.com", port=26000)
Warning: connecting to remote server but falling back to local... Did you mean to use `h2o.connect()`?
Checking whether there is an H2O instance running at http://enkbda1node05.enkitec.com:26000. connected.
--------------------------  ---------------------------------------
H2O cluster uptime:         5 mins 43 secs
H2O cluster version:        3.14.0.7
H2O cluster version age:    1 month and 21 days
H2O cluster name:           WeidongH2O-Cluster
H2O cluster total nodes:    6
H2O cluster free memory:    58.10 Gb
H2O cluster total cores:    192
H2O cluster allowed cores:  192
H2O cluster status:         locked, healthy
H2O connection url:         http://enkbda1node05.enkitec.com:26000
H2O connection proxy:
H2O internal security:      False
H2O API Extensions:         Algos, AutoML, Core V3, Core V4
Python version:             2.7.13 final
--------------------------  ---------------------------------------
>>> h2o.cluster_status()
[WARNING] in <stdin> line 1:
    >>> ????
        ^^^^ Deprecated, use ``h2o.cluster().show_status(True)``.
--------------------------  ---------------------------------------
H2O cluster uptime:         5 mins 43 secs
H2O cluster version:        3.14.0.7
H2O cluster version age:    1 month and 21 days
H2O cluster name:           WeidongH2O-Cluster
H2O cluster total nodes:    6
H2O cluster free memory:    58.10 Gb
H2O cluster total cores:    192
H2O cluster allowed cores:  192
H2O cluster status:         locked, healthy
H2O connection url:         http://enkbda1node05.enkitec.com:26000
H2O connection proxy:
H2O internal security:      False
H2O API Extensions:         Algos, AutoML, Core V3, Core V4
Python version:             2.7.13 final
--------------------------  ---------------------------------------
Nodes info:     Node 1                                         Node 2                                         Node 3                                         Node 4                                         Node 5                                         Node 6
--------------  ---------------------------------------------  ---------------------------------------------  ---------------------------------------------  ---------------------------------------------  ---------------------------------------------  ---------------------------------------------
h2o             enkbda1node08.enkitec.com/192.168.10.44:26000  enkbda1node09.enkitec.com/192.168.10.45:26000  enkbda1node10.enkitec.com/192.168.10.46:26000  enkbda1node11.enkitec.com/192.168.10.47:26000  enkbda1node12.enkitec.com/192.168.10.48:26000  enkbda1node13.enkitec.com/192.168.10.49:26000
healthy         True                                           True                                           True                                           True                                           True                                           True
last_ping       1513109250525                                  1513109250725                                  1513109250400                                  1513109250218                                  1513109250709                                  1513109250536
num_cpus        32                                             32                                             32                                             32                                             32                                             32
sys_load        1.55                                           1.94                                           2.4                                            1.73                                           0.4                                            1.51
mem_value_size  0                                              0                                              0                                              0                                              0                                              0
free_mem        11339923456                                    10122031104                                    10076566528                                    10283636736                                    10280018944                                    10278896640
pojo_mem        113671168                                      1331563520                                     1377028096                                     1169957888                                     1173575680                                     1174697984
swap_mem        0                                              0                                              0                                              0                                              0                                              0
free_disk       422754385920                                   389839585280                                   422231146496                                   418692202496                                   422226952192                                   422176620544
max_disk        491885953024                                   491885953024                                   491885953024                                   491885953024                                   491885953024                                   491885953024
pid             1172                                           801                                            15879                                          17866                                          28980                                          30818
num_keys        0                                              0                                              0                                              0                                              0                                              0
tcps_active     0                                              0                                              0                                              0                                              0                                              0
open_fds        440                                            440                                            440                                            440                                            440                                            440
rpcs_active     0                                              0                                              0                                              0                                              0                                              0
>>> print("Importing hdfs data")
Importing hdfs data
>>> stock_data = h2o.import_file("hdfs://ENKBDA1-ns/user/wzhou/work/test/stock_price.txt")
Parse progress: | 100%

>>>
>>> stock_data
date                   close    volume    open    high     low
-------------------  -------  --------  ------  ------  ------
2016-09-23 00:00:00    24.05     56837   24.13  24.22   23.88
2016-09-22 00:00:00    24.1      56675   23.49  24.18   23.49
2016-09-21 00:00:00    23.38     70925   23.21  23.58   23.025
2016-09-20 00:00:00    23.07     35429   23.17  23.264  22.98
2016-09-19 00:00:00    23.12     34257   23.22  23.27   22.96
2016-09-16 00:00:00    23.16     83309   22.96  23.21   22.96
2016-09-15 00:00:00    23.01     43258   22.7   23.25   22.53
2016-09-14 00:00:00    22.69     33891   22.81  22.88   22.66
2016-09-13 00:00:00    22.81     59871   22.75  22.89   22.53
2016-09-12 00:00:00    22.85    109145   22.9   22.95   22.74

[65 rows x 6 columns]

>>> print("Spliting data")
Spliting data
>>> train,test = stock_data.split_frame(ratios=[0.9])
>>> h2o.ls()
                      key
0  py_1_sid_83d3_splitter
1         stock_price.hex

Let’s go to R Studio; we should see this H2O frame there.

Let’s also check it out from the H2O Flow UI.

Cool, it’s there as well and we’re good here.


Weird Ref-count mismatch Message from H2O

H2O is a nice, fast tool for data science work. I have discussed it in the following blogs:
H2O vs Sparkling Water
Sparkling Water Shell: Cloud size under 12 Exception
Access Sparkling Water via R Studio
Running H2O Cluster in Background and at Specific Port Number
Recently we ran into a weird issue when using H2O from R Studio. For no apparent reason, it threw the following error messages:

score <- as.data.frame(main[,c('id', 'my_test_score')])

ERROR: Unexpected HTTP Status code: 500 Server Error (url = http://enkbda1node05.enkitec.com:26000/99/Rapids)

java.lang.IllegalStateException
 [1] "java.lang.IllegalStateException: Ref-count mismatch for vec $04ffc52a0000ffffffff$hdfs://ENKBDA1-ns/user/wzhou/test/data/test1/part-00000-8516f9f8-3d5f-164b-cf7e-18ca7e8d467f-d000.snappy.parquet: REFCNT = 2, should be 1"
 [2] "    water.rapids.Session.sanity_check_refs(Session.java:341)"                                                                                                                                                                                                     
 [3] "    water.rapids.Session.exec(Session.java:83)"                                                                                                                                                                                                                   
 [4] "    water.rapids.Rapids.exec(Rapids.java:93)"                                                                                                                                                                                                                     
 [5] "    water.api.RapidsHandler.exec(RapidsHandler.java:38)"                                                                                                                                                                                                          
 [6] "    sun.reflect.GeneratedMethodAccessor44.invoke(Unknown Source)"                                                                                                                                                                                                 
 [7] "    sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)"                                                                                                                                                                        
 [8] "    java.lang.reflect.Method.invoke(Method.java:498)"                                                                                                                                                                                                             
 [9] "    water.api.Handler.handle(Handler.java:63)"                                                                                                                                                                                                                    
[10] "    water.api.RequestServer.serve(RequestServer.java:448)"                                                                                                                                                                                                        
[11] "    water.api.RequestServer.doGeneric(RequestServer.java:297)"                                                                                                                                                                                                    
[12] "    water.api.RequestServer.doPost(RequestServer.java:223)"                                                                                                                                                                                                       
[13] "    javax.servlet.http.HttpServlet.service(HttpServlet.java:707)"                                                                                                                                                                                                 
[14] "    javax.servlet.http.HttpServlet.service(HttpServlet.java:790)"                                                                                                                                                                                                 
[15] "    ai.h2o.org.eclipse.jetty.servlet.ServletHolder.handle(ServletHolder.java:684)"                                                                                                                                                                                
[16] "    ai.h2o.org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:503)"                                                                                                                                                                            
[17] "    ai.h2o.org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1086)"                                                                                                                                                                    
[18] "    ai.h2o.org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:429)"                                                                                                                                                                             
[19] "    ai.h2o.org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1020)"                                                                                                                                                                     
[20] "    ai.h2o.org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:135)"                                                                                                                                                                         
[21] "    ai.h2o.org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:154)"                                                                                                                                                                 
[22] "    ai.h2o.org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:116)"                                                                                                                                                                       
[23] "    water.JettyHTTPD$LoginHandler.handle(JettyHTTPD.java:189)"                                                                                                                                                                                                    
[24] "    ai.h2o.org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:154)"                                                                                                                                                                 
[25] "    ai.h2o.org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:116)"                                                                                                                                                                       
[26] "    ai.h2o.org.eclipse.jetty.server.Server.handle(Server.java:370)"                                                                                                                                                                                               
[27] "    ai.h2o.org.eclipse.jetty.server.AbstractHttpConnection.handleRequest(AbstractHttpConnection.java:494)"                                                                                                                                                        
[28] "    ai.h2o.org.eclipse.jetty.server.BlockingHttpConnection.handleRequest(BlockingHttpConnection.java:53)"                                                                                                                                                         
[29] "    ai.h2o.org.eclipse.jetty.server.AbstractHttpConnection.content(AbstractHttpConnection.java:982)"                                                                                                                                                              
[30] "    ai.h2o.org.eclipse.jetty.server.AbstractHttpConnection$RequestHandler.content(AbstractHttpConnection.java:1043)"                                                                                                                                              
[31] "    ai.h2o.org.eclipse.jetty.http.HttpParser.parseNext(HttpParser.java:865)"                                                                                                                                                                                      
[32] "    ai.h2o.org.eclipse.jetty.http.HttpParser.parseAvailable(HttpParser.java:240)"                                                                                                                                                                                 
[33] "    ai.h2o.org.eclipse.jetty.server.BlockingHttpConnection.handle(BlockingHttpConnection.java:72)"                                                                                                                                                                
[34] "    ai.h2o.org.eclipse.jetty.server.bio.SocketConnector$ConnectorEndPoint.run(SocketConnector.java:264)"                                                                                                                                                          
[35] "    ai.h2o.org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:608)"                                                                                                                                                                      
[36] "    ai.h2o.org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:543)"                                                                                                                                                                       
[37] "    java.lang.Thread.run(Thread.java:745)"                                                                                                                                                                                                                        

Error in .h2o.doSafeREST(h2oRestApiVersion = h2oRestApiVersion, urlSuffix = page,  : 
  
ERROR MESSAGE:

Ref-count mismatch for vec $04ffc52a0000ffffffff$hdfs://ENKBDA1-ns/user/wzhou/test/data/test1/part-00000-8516f9f8-3d5f-164b-cf7e-18ca7e8d467f-d000.snappy.parquet: REFCNT = 2, should be 1

We tried different approaches and still got the same error; we had to bounce the H2O cluster to clear it. Further investigation then revealed steps that reproduce the issue: deleting some or all H2O frames from the H2O Flow UI can trigger it.

Basically, run the getFrames command in Flow, select the frames to be deleted, and click Delete selected frames at the bottom. Run getFrames again and those frames show as deleted. But going to R Studio and running h2o.ls() produces the exact Ref-count mismatch error above. It seems the frames were deleted from the H2O Flow UI's perspective, but not from R Studio's.

OK, we found the cause. How do we resolve it? Check out the H2O source code at https://github.com/h2oai/h2o-3/blob/master/h2o-core/src/main/java/water/rapids/Session.java.

We can see that this error is thrown in the sanity_check_refs function, which carries the following interesting description:

Check that ref counts are in a consistent state.
* This should only be called between calls to Rapids expressions (otherwise may blow false-positives).

It seems something is not handled well elsewhere in the code: if the object reference counts are inconsistent, other parts of the code might fail. It looks like a bug to me.

After some investigation, we found that running h2o.removeAll() from R Studio fixes the issue.
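
For what it's worth, the same cleanup is available from the Python client as h2o.remove_all(). A minimal sketch, assuming a session connected to the cluster as in the earlier post:

import h2o

h2o.connect(ip="enkbda1node05.enkitec.com", port=26000)
h2o.remove_all()  # drops all frames and models, clearing the stale reference counts
h2o.ls()          # should now return an empty key list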

Running H2O Cluster in Background and at Specific Port Number

In a few of my previous blogs, I discussed H2O vs Sparkling Water, Sparkling Water Shell: Cloud size under 12 Exception, and Access Sparkling Water via R Studio.

When using H2O Sparkling Water, there are two common issues. The first is the port number. The default is 54321, but if the H2O cluster has been bounced multiple times, it can come up on the next available port instead, such as 54323. If the cluster is used by many data analysts, it is inconvenient to notify all of them every time the port number changes; you want users to remember only one port number.

The second issue is that the sparkling-shell session cannot run in the background: if you close the PuTTY session running sparkling-shell, the H2O cluster is terminated.

This blog discusses workarounds for both issues.

For the port number, there are actually two related parameters: spark.ext.h2o.client.port.base and spark.ext.h2o.node.port.base. spark.ext.h2o.client.port.base is the port for the H2O Flow UI, while spark.ext.h2o.node.port.base is the port H2O uses internally for communication among the H2O nodes. Make sure these two port numbers are different; setting them to the same value will cause problems. I also set spark.ext.h2o.cloud.name to name my H2O cluster.

I created two scripts: run_sparkling_shell.sh, which launches sparkling-shell, and sparkling-shell-init.scala, which contains the Scala startup commands for the H2O cluster.

[wzhou@enkbda1node05 ~]$ cat run_sparkling_shell.sh
bin/sparkling-shell \
--master yarn \
--conf spark.ext.h2o.cloud.name=WeidongH2O-Cluster \
--conf spark.ext.h2o.client.port.base=26000 \
--conf spark.ext.h2o.node.port.base=26005 \
--conf spark.executor.instances=10 \
--conf spark.executor.memory=12g \
--conf spark.driver.memory=8g \
--conf spark.executor.cores=4 \
--conf spark.yarn.executor.memoryOverhead=4g \
--conf spark.yarn.driver.memoryOverhead=4g \
--conf spark.scheduler.maxRegisteredResourcesWaitingTime=1000000 \
--conf spark.ext.h2o.fail.on.unsupported.spark.param=false \
--conf spark.dynamicAllocation.enabled=false \
--conf spark.sql.autoBroadcastJoinThreshold=-1 \
--conf spark.locality.wait=30000 \
--conf spark.yarn.queue=WZTestPool \
--conf spark.scheduler.minRegisteredResourcesRatio=1 \
-i  sparkling-shell-init.scala

The code for sparkling-shell-init.scala:

import org.apache.spark.h2o._
val h2oContext = H2OContext.getOrCreate(spark)
import h2oContext._

To run sparkling-shell in the background, my first attempt was nohup. It didn't work: when sparkling-shell executes the sparkling-shell-init.scala script, it automatically appends a :quit command at the end, which terminates the H2O cluster.

When I worked on Exadata, I used the screen command a lot. It is a very useful tool for protecting long-running critical jobs, like patching/upgrades and import/export. I used the same screen trick to get around the background issue. Here are the steps.

1. Start screen session
Use the screen -ls command to check whether any screen sessions already exist.

[wzhou@enkbda1node05 ~]$ screen -ls
No Sockets found in /var/run/screen/S-wzhou.

Start a screen session.
[wzhou@enkbda1node05 ~]$ screen

2. Start H2O Cluster
Run the script to start the H2O cluster.

[wzhou@enkbda1node05 ~]$ ./run_sparkling_shell.sh

-----
  Spark master (MASTER)     : yarn
  Spark home   (SPARK_HOME) :
  H2O build version         : 3.14.0.7 (weierstrass)
  Spark build version       : 2.2.0
  Scala version             : 2.11
----

SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/opt/anaconda2/lib/python2.7/site-packages/pyspark/jars/slf4j-log4j12-1.7.16.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/opt/cloudera/parcels/CDH-5.10.1-1.cdh5.10.1.p0.10/jars/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
Setting default log level to "WARN".
To adjust logging level use sc.setLogLevel(newLevel). For SparkR, use setLogLevel(newLevel).
17/12/09 10:12:58 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
17/12/09 10:12:58 WARN shortcircuit.DomainSocketFactory: The short-circuit local reads feature cannot be used because libhadoop cannot be loaded.
17/12/09 10:12:59 WARN yarn.Client: Neither spark.yarn.jars nor spark.yarn.archive is set, falling back to uploading libraries under SPARK_HOME.
Spark context Web UI available at http://192.168.10.14:4040
Spark context available as 'sc' (master = yarn, app id = application_2567590118914_1007).
Spark session available as 'spark'.
Loading sparkling-shell-init.scala...
import org.apache.spark.h2o._
h2oContext: org.apache.spark.h2o.H2OContext =

Sparkling Water Context:
 * H2O name: WeidongH2O-Cluster
 * cluster size: 10
 * list of used nodes:
  (executorId, host, port)
  ------------------------
  (2,enkbda1node08.enkitec.com,26005)
  (4,enkbda1node12.enkitec.com,26005)
  (9,enkbda1node17.enkitec.com,26005)
  (5,enkbda1node13.enkitec.com,26005)
  (7,enkbda1node15.enkitec.com,26005)
  (1,enkbda1node09.enkitec.com,26005)
  (8,enkbda1node04.enkitec.com,26005)
  (6,enkbda1node10.enkitec.com,26005)
  (3,enkbda1node11.enkitec.com,26005)
  (10,enkbda1node05.enkitec.com,26005)
  ------------------------

  Open H2O Flow in browser: http://192.168.10.14:26000 (CMD + click in Mac OSX)


import h2oContext._

Welcome to
      ____              __
     / __/__  ___ _____/ /__
    _\ \/ _ \/ _ `/ __/  '_/
   /___/ .__/\_,_/_/ /_/\_\   version 2.2.0
      /_/

Using Scala version 2.11.8 (Java HotSpot(TM) 64-Bit Server VM, Java 1.8.0_121)
Type in expressions to have them evaluated.
Type :help for more information.
scala>

3. Verify H2O Cluster
Now close the session (or detach from screen with Ctrl-a d) and open a new one. Check the existing screen sessions:

[wzhou@enkbda1node05 ~]$ screen -ls
There is a screen on:
        19044.pts-0.enkbda1node05       (Detached)
1 Socket in /var/run/screen/S-wzhou.

Now reattach to the existing session:
[wzhou@enkbda1node05 ~]$ screen -x 19044

You should see the original session that was running sparkling-shell, and the Flow UI still works as expected.

Sparkling Water Shell: Cloud size under 12 Exception

In my last blog, I compared Sparkling Water and H2O. Before I got sparkling-shell working, I ran into a lot of issues. One of the more annoying errors was the runtime exception Cloud size under xx. I searched the internet and found many people with similar problems, along with many recommendations, ranging from downloading the latest matching version to setting certain parameters during startup. Unfortunately, none of them worked for me. I finally figured out the issue and would like to share my solution in this blog.

I downloaded Sparkling Water, unzipped the file, and ran the sparkling-shell command as shown at http://h2o-release.s3.amazonaws.com/sparkling-water/rel-2.2/2/index.html. It looked good initially.

[sparkling-water-2.2.2]$ bin/sparkling-shell --conf "spark.executor.memory=1g"

-----
  Spark master (MASTER)     : local[*]
  Spark home   (SPARK_HOME) : /opt/cloudera/parcels/SPARK2-2.2.0.cloudera1-1.cdh5.12.0.p0.142354/lib/spark2
  H2O build version         : 3.14.0.7 (weierstrass)
  Spark build version       : 2.2.0
  Scala version             : 2.11
----

Setting default log level to "WARN".
To adjust logging level use sc.setLogLevel(newLevel). For SparkR, use setLogLevel(newLevel).
Spark context Web UI available at http://10.132.110.145:4040
Spark context available as 'sc' (master = yarn, app id = application_3608740912046_1343).
Spark session available as 'spark'.
Welcome to
      ____              __
     / __/__  ___ _____/ /__
    _\ \/ _ \/ _ `/ __/  '_/
   /___/ .__/\_,_/_/ /_/\_\   version 2.2.0.cloudera1
      /_/

Using Scala version 2.11.8 (Java HotSpot(TM) 64-Bit Server VM, Java 1.8.0_144)
Type in expressions to have them evaluated.
Type :help for more information.

scala> 

But when I ran the command val h2oContext = H2OContext.getOrCreate(spark), it gave me the following errors:

scala> import org.apache.spark.h2o._
import org.apache.spark.h2o._

scala> val h2oContext = H2OContext.getOrCreate(spark)
17/11/05 10:07:48 WARN internal.InternalH2OBackend: Increasing 'spark.locality.wait' to value 30000
17/11/05 10:07:48 WARN internal.InternalH2OBackend: Due to non-deterministic behavior of Spark broadcast-based joins
We recommend to disable them by configuring `spark.sql.autoBroadcastJoinThreshold` variable to value `-1`:
sqlContext.sql("SET spark.sql.autoBroadcastJoinThreshold=-1")
17/11/05 10:07:48 WARN internal.InternalH2OBackend: The property 'spark.scheduler.minRegisteredResourcesRatio' is not specified!
We recommend to pass `--conf spark.scheduler.minRegisteredResourcesRatio=1`
17/11/05 10:07:48 WARN internal.InternalH2OBackend: Unsupported options spark.dynamicAllocation.enabled detected!
17/11/05 10:07:48 WARN internal.InternalH2OBackend:
The application is going down, since the parameter (spark.ext.h2o.fail.on.unsupported.spark.param,true) is true!
If you would like to skip the fail call, please, specify the value of the parameter to false.

java.lang.IllegalArgumentException: Unsupported argument: (spark.dynamicAllocation.enabled,true)
  at org.apache.spark.h2o.backends.internal.InternalBackendUtils$$anonfun$checkUnsupportedSparkOptions$1.apply(InternalBackendUtils.scala:46)
  at org.apache.spark.h2o.backends.internal.InternalBackendUtils$$anonfun$checkUnsupportedSparkOptions$1.apply(InternalBackendUtils.scala:38)
  at scala.collection.immutable.List.foreach(List.scala:381)
  at org.apache.spark.h2o.backends.internal.InternalBackendUtils$class.checkUnsupportedSparkOptions(InternalBackendUtils.scala:38)
  at org.apache.spark.h2o.backends.internal.InternalH2OBackend.checkUnsupportedSparkOptions(InternalH2OBackend.scala:30)
  at org.apache.spark.h2o.backends.internal.InternalH2OBackend.checkAndUpdateConf(InternalH2OBackend.scala:60)
  at org.apache.spark.h2o.H2OContext.<init>(H2OContext.scala:90)
  at org.apache.spark.h2o.H2OContext$.getOrCreate(H2OContext.scala:355)
  at org.apache.spark.h2o.H2OContext$.getOrCreate(H2OContext.scala:383)
  ... 50 elided

You can see I needed to pass more parameters when starting sparkling-shell. I changed the command as follows:

bin/sparkling-shell \
--master yarn \
--conf spark.executor.memory=1g \
--conf spark.scheduler.maxRegisteredResourcesWaitingTime=1000000 \
--conf spark.ext.h2o.fail.on.unsupported.spark.param=false \
--conf spark.dynamicAllocation.enabled=false \
--conf spark.sql.autoBroadcastJoinThreshold=-1 \
--conf spark.locality.wait=30000 \
--conf spark.scheduler.minRegisteredResourcesRatio=1

OK, this time it looked better; at least the warning messages disappeared. But I got the error java.lang.RuntimeException: Cloud size under 2.

[sparkling-water-2.2.2]$ bin/sparkling-shell \
> --master yarn \
> --conf spark.executor.memory=1g \
> --conf spark.scheduler.maxRegisteredResourcesWaitingTime=1000000 \
> --conf spark.ext.h2o.fail.on.unsupported.spark.param=false \
> --conf spark.dynamicAllocation.enabled=false \
> --conf spark.sql.autoBroadcastJoinThreshold=-1 \
> --conf spark.locality.wait=30000 \
> --conf spark.scheduler.minRegisteredResourcesRatio=1

-----
  Spark master (MASTER)     : yarn
  Spark home   (SPARK_HOME) : /opt/cloudera/parcels/SPARK2-2.2.0.cloudera1-1.cdh5.12.0.p0.142354/lib/spark2
  H2O build version         : 3.14.0.7 (weierstrass)
  Spark build version       : 2.2.0
  Scala version             : 2.11
----

Setting default log level to "WARN".
To adjust logging level use sc.setLogLevel(newLevel). For SparkR, use setLogLevel(newLevel).
Spark context Web UI available at http://10.132.110.145:4040
Spark context available as 'sc' (master = yarn, app id = application_3608740912046_1344).
Spark session available as 'spark'.
Welcome to
      ____              __
     / __/__  ___ _____/ /__
    _\ \/ _ \/ _ `/ __/  '_/
   /___/ .__/\_,_/_/ /_/\_\   version 2.2.0.cloudera1
      /_/

Using Scala version 2.11.8 (Java HotSpot(TM) 64-Bit Server VM, Java 1.8.0_144)
Type in expressions to have them evaluated.
Type :help for more information.

scala> import org.apache.spark.h2o._
import org.apache.spark.h2o._

scala> val h2oContext = H2OContext.getOrCreate(spark)
java.lang.RuntimeException: Cloud size under 2
  at water.H2O.waitForCloudSize(H2O.java:1689)
  at org.apache.spark.h2o.backends.internal.InternalH2OBackend.init(InternalH2OBackend.scala:117)
  at org.apache.spark.h2o.H2OContext.init(H2OContext.scala:121)
  at org.apache.spark.h2o.H2OContext$.getOrCreate(H2OContext.scala:355)
  at org.apache.spark.h2o.H2OContext$.getOrCreate(H2OContext.scala:383)
  ... 50 elided

No big deal. I had not specified the number of executors or the executor memory, so it was probably complaining that the H2O cluster was too small. I added the following three parameters:

--conf spark.executor.instances=12 \
--conf spark.executor.memory=10g \
--conf spark.driver.memory=8g \

Rerunning the whole thing produced the same error, with the size changing to 12. That did not look right to me, so I checked the H2O error logfile and found tons of messages like these:

11-06 10:23:16.355 10.132.110.145:54325  30452  #09:54321 ERRR: Got IO error when sending batch UDP bytes: java.net.ConnectException: Connection refused
11-06 10:23:16.790 10.132.110.145:54325  30452  #06:54321 ERRR: Got IO error when sending batch UDP bytes: java.net.ConnectException: Connection refused

It looked like Sparkling Water could not connect to the Spark cluster. After some investigation, I realized I had installed and run sparkling-shell from the edge node of the BDA. When the H2O cluster runs inside a Spark application, the Spark nodes on the BDA communicate over the BDA's private network, the InfiniBand network, and the edge node cannot talk to the IB network directly. With this assumption in mind, I installed and ran Sparkling Water on one of the BDA nodes, and it worked perfectly without any issue. Problem solved!
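
A quick way to test for this kind of connectivity problem is a plain TCP check from the client host to an H2O node port. A minimal sketch in Python (the host and port are illustrative; substitute the actual node and port from the error log):

import socket

# succeeds only if the client host has a route to the node's network; raises OSError otherwise
conn = socket.create_connection(("enkbda1node09.enkitec.com", 54321), timeout=5)
conn.close()
print("node port reachable")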

Use OEM 13c R2 to Discover Oracle BDA

OEM 13c Cloud Control is a powerful monitoring tool, not only for Exadata and Oracle Database, but also for Oracle Big Data Appliance (BDA). There are many articles and blogs about Exadata discovery using OEM 12c or 13c, but not many discuss OEM BDA discovery, especially with the new version, OEM 13c Cloud Control. In this blog, I am going to discuss the steps to discover a BDA using OEM 13c R2.

First, do not use OEM 13c R1 for BDA discovery; it is very time-consuming and very likely will not work. OEM 13c R2 is much better: I have successfully completed BDA discovery on all of the BDAs I have worked on.

Second, unlike OEM Exadata discovery, BDA discovery usually requires one extra step before the manual OEM discovery: running the bdacli enable em command first. In theory, if that command works, nothing manual is needed afterward. Unfortunately, I have never seen this perfect situation in any BDA environment; I always get some kind of error at the end.

Preparation
There are a few useful notes about OEM BDA Discovery.
1) Instructions to Install 12.1.0.4 BDA Plug-in on Oracle Big Data Appliance (BDA) V2.*/V3.0.*/V3.1/V4.* (Doc ID 1682558.1)
2) BDA Credentials for Enterprise Manager 13.x Plugin (Doc ID 2206111.1)
3) Instructions to Enable / Disable the 13.x BDA Enterprise Manager Plug-in on Oracle Big Data Appliance (BDA) V4.5-V4.7 (Doc ID 2206207.1)

Execute bdacli command
Run bdacli enable em. For BDA versions below 4.5, run bdacli enable em --force. I can almost guarantee you will not see a successful completion message from this command. For example, I got the following error at the end:

INFO: Running: /opt/oracle/emcli_home/emcli discover_bda_cluster -hostname=enkx4bda1node01.enkitec.local -cloudera_credential=BDA_ENKX4BDA_CM_CRED -host_credential=BDA_ENKX4BDA_HOSTS_CRED -cisco_credential=BDA_ENKX4BDA_CISCO_CRED -ilom_credential=BDA_ENKX4BDA_ILOM_CRED -infiniband_credential=BDA_ENKX4BDA_IB_CRED -pdu_credential=BDA_ENKX4BDA_PDU_CRED -cisco_snmp_string="snmp_v3;;SNMPV3Creds;authUser:none;authPwd:none;authProtocol:none;privPwd:none" -pdu_snmp_string="snmp_v1v2_v3;;SNMPV1Creds;COMMUNITY:none" -switch_snmp_string="snmp_v1v2_v3;;SNMPV3Creds;authUser:none;authPwd:none;authProtocol:none;privPwd:none"
ERROR: Syntax Error: Unrecognized argument -cisco_snmp_string #Step Syntax Error: Unrecognized argument -pdu_snmp_string#
Are you sure you want to completely cleanup em and lose all related state ?

When you see the above message, always type N and do not roll back the changes. At this point the OEM agent is deployed; you just need to figure out which node to use as the starting point for the manual OEM BDA discovery.

On each node, run the following command:

[root@enkx4bda1node06 ~]# java -classpath /opt/oracle/EMAgent/agent_13.2.0.0.0/jlib/*:/opt/oracle/EMAgent/agent_13.2.0.0.0/plugins/oracle.sysman.bda.discovery.plugin_13.2.2.0.0/archives/* oracle.sysman.bda.discovery.pojo.GetHadoopClusters http://enkx4bda1node03.enkitec.local:7180/api/v1/clusters admin admin_password

On many nodes, the execution fails with the error below.

Apr 10, 2017 10:14:44 AM com.sun.jersey.api.client.ClientResponse getEntity
SEVERE: A message body reader for Java class [Loracle.sysman.bda.discovery.pojo.Items;, and Java type class [Loracle.sysman.bda.discovery.pojo.Items;, and MIME media type text/html was not found
Apr 10, 2017 10:14:44 AM com.sun.jersey.api.client.ClientResponse getEntity
SEVERE: The registered message body readers compatible with the MIME media type are:
*/* ->
  com.sun.jersey.core.impl.provider.entity.FormProvider
  com.sun.jersey.core.impl.provider.entity.MimeMultipartProvider
  com.sun.jersey.core.impl.provider.entity.StringProvider
  com.sun.jersey.core.impl.provider.entity.ByteArrayProvider
  com.sun.jersey.core.impl.provider.entity.FileProvider
  com.sun.jersey.core.impl.provider.entity.InputStreamProvider
  com.sun.jersey.core.impl.provider.entity.DataSourceProvider
  com.sun.jersey.core.impl.provider.entity.XMLJAXBElementProvider$General
  com.sun.jersey.core.impl.provider.entity.ReaderProvider
  com.sun.jersey.core.impl.provider.entity.DocumentProvider
  com.sun.jersey.core.impl.provider.entity.SourceProvider$StreamSourceReader
  com.sun.jersey.core.impl.provider.entity.SourceProvider$SAXSourceReader
  com.sun.jersey.core.impl.provider.entity.SourceProvider$DOMSourceReader
  com.sun.jersey.core.impl.provider.entity.XMLRootElementProvider$General
  com.sun.jersey.core.impl.provider.entity.XMLListElementProvider$General
  com.sun.jersey.core.impl.provider.entity.XMLRootObjectProvider$General
  com.sun.jersey.core.impl.provider.entity.EntityHolderReader

On a certain node, you will see a successful message like the one below.

enkx4bda;;

In my case, it was node 2, so I used node 2 for the manual BDA discovery in the following steps.

Manual OEM BDA Discovery
Log on to OEM as the sysman user. Select Add Target -> Add Target Manually.

Select Add Targets Using Guided Process

Select Oracle Big Data Appliance

The Add Targets Manually page shows up. Select node 2 from the list and click Next.

After it completes, it will show the following hardware information. Click Next.

The Hardware Credentials screen shows up. If all Host credentials show a green check, you don't need to do anything for Host. Go to the next component, for example IB Switch: select Set Credentials -> All Infiniband Switches, then set the SNMP credentials type and community string. Most of the time, public is the community string. Then click OK.

If successful, it shows the green check.

Follow a similar procedure for the other hardware components, like ILOM, PDU, and the Cisco switch. At the end, you should see the following screen.
One interesting note about the PDUs: the PDU component always behaves strangely during discovery. In this case it showed a green check as successful, but after the discovery OEM showed the PDUs as DOWN. In my other discoveries on different BDA environments, the green check never appeared on this page, yet the PDUs showed UP status afterward. The result is simply inconsistent.

Click Next. The Cloudera Manager screen shows up. Click Edit and verify the credentials for the Cloudera Manager admin user, then click Next.

The Software page shows up; click Next.

The review page shows up; click Submit.

If successful, you will see the message below; click OK.

The BDA Discovery is completed.
You might notice the new BDA cluster is called BDA Network1. This is not a good way to name a cluster, especially when you have multiple BDAs managed by the same OEM. I don't understand why the BDA's cluster name or Cloudera Manager's cluster name isn't used; either would be much better. Even worse, you can change many target names in OEM, but not this one. I have another blog (Change BDA Cluster Name in OEM Cloud Control 13c) discussing a partial workaround for this issue.

To view the details of a host target, you can drill down as follows:

The presentation looks better than OEM 12c. In general, OEM 13c for BDA is good, but pay attention to the following points; otherwise you will spend a lot of additional time.
1) Before performing OEM BDA discovery, make sure you have already changed all of the default passwords on the BDA. It's easier to use the default passwords during discovery, but it becomes a huge pain after you change the passwords for the accounts used in discovery: updating the Named Credentials is not enough, and you have to delete the whole BDA target in OEM and redo the discovery.

2) Similarly, if you configure TLS for Cloudera Manager after BDA discovery, you will have to remove the BDA target and redo the discovery. This is clearly a bug in OEM, at least one not fixed at the time I am writing this blog.

3) Sometimes you might see tons of alerts from almost every port on the Cisco switch. If they came from a few ports, I might believe them, but for almost every port there is no way the alerts are genuine. As a matter of fact, Oracle Support confirmed they appear to be false alerts. When I had to redo the BDA discovery after configuring TLS for Cloudera Manager, I noticed all the Cisco port alerts were gone after the rediscovery.

4) Both the Oracle documentation and Oracle Support say OEM 13c R2 supports BDA v4.5+ and that versions below are not supported. It's true that lower BDA versions run into additional issues, but I managed to find workarounds and got it working for BDA v4.3.