Configure Attunity Replicate to Perform CDC from Oracle to Kafka

In the last blog, Install Attunity Replicate on Linux, I discussed how to install Attunity Replicate on a Linux VM. In today’s blog, I am going to continue the topic of Attunity Replicate and discuss how to configure Attunity Replicate to perform CDC from an Oracle source database to a Kafka target.

1. Enable Source Database in Archivelog Mode
CDC requires that the source database be in archivelog mode. You can check this by running the following command:

SQL> archive log list
Database log mode	       Archive Mode
Automatic archival	       Enabled
Archive destination	       USE_DB_RECOVERY_FILE_DEST
Oldest online log sequence     32
Next log sequence to archive   34
Current log sequence	       34

If the database is not in archivelog mode, enable it as follows:

shutdown immediate;
startup nomount;
alter database mount;
alter database archivelog;
alter database open;
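
You can also confirm the mode directly from v$database; this is a standard query, shown here as a quick sanity check:

```sql
-- Confirm the database is now in archivelog mode
SELECT log_mode FROM v$database;
-- Returns ARCHIVELOG once the change above takes effect
```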

2. Enable Supplemental Logging
Check whether supplemental logging is enabled. If not, enable it.

SQL> SELECT supplemental_log_data_min FROM v$database;
SUPPLEME
--------
NO

SQL> ALTER DATABASE ADD SUPPLEMENTAL LOG DATA;
Database altered.

SQL>  SELECT supplemental_log_data_min FROM v$database;
SUPPLEME
--------
YES
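
Depending on your configuration, CDC tools typically also need supplemental logging of key columns. Replicate can add per-table log groups itself, but if you prefer to set it database-wide, the standard statement is:

```sql
-- Database-wide primary key supplemental logging (optional; check your Replicate setup)
ALTER DATABASE ADD SUPPLEMENTAL LOG DATA (PRIMARY KEY) COLUMNS;
```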

3. Create a DB User for CDC

CREATE USER cdc_user
  IDENTIFIED BY cdc_user
  DEFAULT TABLESPACE USERS
  TEMPORARY TABLESPACE TEMP
  PROFILE DEFAULT
  ACCOUNT UNLOCK;

4. Grant Permissions to the User

GRANT CREATE SESSION TO CDC_USER;
GRANT SELECT ANY TRANSACTION TO CDC_USER;
GRANT SELECT ON V_$ARCHIVED_LOG TO CDC_USER;
GRANT SELECT ON V_$LOG TO CDC_USER;
GRANT SELECT ON V_$LOGFILE TO CDC_USER;
GRANT SELECT ON V_$DATABASE TO CDC_USER;
GRANT SELECT ON V_$THREAD TO CDC_USER;
GRANT SELECT ON V_$PARAMETER TO CDC_USER;
GRANT SELECT ON V_$NLS_PARAMETERS TO CDC_USER;
GRANT SELECT ON V_$TIMEZONE_NAMES TO CDC_USER;
GRANT SELECT ON V_$TRANSACTION TO CDC_USER;
GRANT SELECT ON ALL_INDEXES TO CDC_USER;
GRANT SELECT ON ALL_OBJECTS TO CDC_USER;
GRANT SELECT ON DBA_OBJECTS TO CDC_USER;
GRANT SELECT ON ALL_TABLES TO CDC_USER;
GRANT SELECT ON ALL_USERS TO CDC_USER;
GRANT SELECT ON ALL_CATALOG TO CDC_USER;
GRANT SELECT ON ALL_CONSTRAINTS TO CDC_USER;
GRANT SELECT ON ALL_CONS_COLUMNS TO CDC_USER;
GRANT SELECT ON ALL_TAB_COLS TO CDC_USER;
GRANT SELECT ON ALL_IND_COLUMNS TO CDC_USER;
GRANT SELECT ON ALL_LOG_GROUPS TO CDC_USER;
GRANT SELECT ON SYS.DBA_REGISTRY TO CDC_USER;
GRANT SELECT ON SYS.OBJ$ TO CDC_USER;
GRANT SELECT ON SYS.ENC$ TO CDC_USER;
GRANT SELECT ON DBA_TABLESPACES TO CDC_USER;
GRANT SELECT ON ALL_TAB_PARTITIONS TO CDC_USER;
GRANT SELECT ON ALL_ENCRYPTED_COLUMNS TO CDC_USER;
GRANT SELECT ON ALL_VIEWS TO CDC_USER;
GRANT SELECT ANY TABLE TO CDC_USER;
GRANT CREATE ANY DIRECTORY TO CDC_USER;
GRANT ALTER ANY TABLE TO CDC_USER;
GRANT SELECT ON V_$TRANSPORTABLE_PLATFORM TO CDC_USER;
GRANT EXECUTE ON DBMS_FILE_TRANSFER TO CDC_USER;
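
To verify the grants, you can query the standard dictionary views (a quick sketch; run as a DBA user):

```sql
-- System privileges held by the CDC user
SELECT privilege FROM dba_sys_privs WHERE grantee = 'CDC_USER';

-- Object-level grants (the SELECT grants on the V_$ views and dictionary tables)
SELECT owner, table_name, privilege
  FROM dba_tab_privs
 WHERE grantee = 'CDC_USER';
```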

5. Create Test Data

drop user wztest cascade;
create user wztest identified by wztest default tablespace users;
grant connect, resource, unlimited tablespace to wztest;

CREATE TABLE wztest.EMPLOYEE
(
 EMPLOYEE_ID NUMBER (6)  NOT NULL ,
 FIRST_NAME VARCHAR2 (20)  NOT NULL ,
 LAST_NAME VARCHAR2 (20)  NOT NULL ,
 JOIN_DATE date not null,
 ADDRESS_LINE1 varchar2(50),
 ADDRESS_LINE2 varchar2(50),
 ADDRESS_LINE3 varchar2(50),
 POSTCODE varchar2(10),
 PHONE_NUMBER varchar2(30)
) tablespace USERS;

CREATE SEQUENCE employee_id_seq start with 1 increment by 1 cache 20;

begin
    for i in 1..1000 loop
        insert into wztest.employee (employee_id, first_name, last_name, join_date)
        values (employee_id_seq.nextval,
                dbms_random.string('U', 1),
                dbms_random.string('U', dbms_random.value(5,10)),
                to_date(trunc(DBMS_RANDOM.VALUE(to_char(DATE '2000-01-01','J'),to_char(DATE '2016-12-31','J') ) ),'J' ));
    end loop;
    commit;
end;
/

select count(*) from wztest.employee;

6. Install Single Node Kafka
For quick demo purposes, I created a single-node Kafka instance on the same Attunity Replicate server.

[root@cdc-att-vm1 ~]# mkdir /opt/kafka
[root@cdc-att-vm1 ~]# cd /opt/kafka
[root@cdc-att-vm1 kafka]# wget http://apache.mirrors.lucidnetworks.net/kafka/2.2.0/kafka_2.12-2.2.0.tgz
--2019-05-27 08:38:41--  http://apache.mirrors.lucidnetworks.net/kafka/2.2.0/kafka_2.12-2.2.0.tgz
Resolving apache.mirrors.lucidnetworks.net (apache.mirrors.lucidnetworks.net)... 108.166.161.136
Connecting to apache.mirrors.lucidnetworks.net (apache.mirrors.lucidnetworks.net)|108.166.161.136|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 57028557 (54M) [application/x-gzip]
Saving to: ‘kafka_2.12-2.2.0.tgz’

100%[===============================================================================================================================================>] 57,028,557  1011KB/s   in 55s    

2019-05-27 08:39:36 (1007 KB/s) - ‘kafka_2.12-2.2.0.tgz’ saved [57028557/57028557]

[root@cdc-att-vm1 kafka]# tar -xzf kafka_2.12-2.2.0.tgz
[root@cdc-att-vm1 kafka]# cd kafka_2.12-2.2.0
[root@cdc-att-vm1 kafka_2.12-2.2.0]# ls -l
total 52
drwxr-xr-x. 3 root root  4096 Mar  9 19:46 bin
drwxr-xr-x. 2 root root  4096 Mar  9 19:46 config
drwxr-xr-x. 2 root root  4096 May 27 08:39 libs
-rw-r--r--. 1 root root 32216 Mar  9 19:44 LICENSE
-rw-r--r--. 1 root root   336 Mar  9 19:44 NOTICE
drwxr-xr-x. 2 root root    44 Mar  9 19:46 site-docs

7. Start Kafka
Run the following two commands to start Kafka.

bin/zookeeper-server-start.sh config/zookeeper.properties  &
bin/kafka-server-start.sh config/server.properties &

8. Create a Test Topic
Run the following to create a topic.
bin/kafka-topics.sh --create --bootstrap-server localhost:9092 --replication-factor 1 --partitions 3 --topic wztest1
Verify the result.

[root@cdc-att-vm1 kafka_2.12-2.2.0]# bin/kafka-topics.sh --list --bootstrap-server localhost:9092
wztest1
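
Before wiring Replicate to the topic, a quick smoke test with the console producer and consumer (both ship with Kafka) confirms the broker is healthy. These commands assume the single-node setup above:

```shell
# Publish one test message
echo "hello wztest1" | bin/kafka-console-producer.sh --broker-list localhost:9092 --topic wztest1

# Read it back; press Ctrl-C to stop
bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic wztest1 --from-beginning
```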

9. Add Oracle as Source Endpoint
Now we can do the real work of configuring Attunity Replicate. The first step is to configure the endpoints: I need two, one for Oracle and one for Kafka. Click Manage Endpoint Connection.

Then choose Source as the Role and Oracle as the Type, and fill in the rest of the required information.

Click the Advanced tab.
Fill in the asmuser information. The asmuser account should have the SYSASM privilege.

When done, click Save, then click Test Connection. If the parameters are entered correctly, you should see a Test Connection Succeeded message.

10. Configure Kafka Endpoint
Click + New Endpoint Connection. This time, choose Target as the Role and Kafka as the Type. Enter the parameters shown below.

Click Save and then Test Connection. It should work.

11. Create a New CDC Task
Make sure you contact the Attunity sales team to get a license file and import the trial license before creating a new CDC task.
Click + New Task
Give it a name and keep the default values for the rest.

Drag the Oracle source from the Endpoint Connections panel to the source location, and do the same with the Kafka target for the target location.


Next, I need to add the source table. Click Table Selection and add the test table there.

Then click Save to save the task information.

12. Run the Task of Initial Load
Click the Run button. It starts the initial load of the tables in the task.

You should see the initial load finish successfully.

13. Test CDC
Next, let me test whether CDC is working by running some DML statements on the test table.

update wztest.employee set last_name='wztest1' where employee_id=265;
update wztest.employee set last_name='wztest2' where employee_id=268;
update wztest.employee set last_name='wztest3' where employee_id=311;
update wztest.employee set last_name='wztest4' where employee_id=580;
delete from wztest.employee where employee_id=581;
commit;

Now, let me check the CDC monitoring screen. Nice, it shows 4 updates and 1 delete.
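
You can also double-check on the Kafka side by consuming the topic. Replicate publishes the change records as messages (JSON in my setup; your configured message format may differ), so new entries should appear for each UPDATE and DELETE:

```shell
# From the Kafka install directory; assumes the single-node broker above
bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic wztest1 --from-beginning
```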

Nice UI from Attunity.


Install Attunity Replicate on Linux

Recently I have done a lot of work with Attunity products, especially Attunity Replicate. Similar to Oracle GoldenGate, Attunity Replicate is a data replication tool. I used Oracle GoldenGate for many years and am quite familiar with its architecture and configuration. Oracle GoldenGate is literally the gold standard for data replication and is used by many enterprises worldwide. There are many other data replication tools on the market, but whether they can really compete with Oracle GoldenGate is questionable. So when I first learned about the Attunity product, my first reaction was "what is that?" Can it compete with Oracle GoldenGate? After several months of intensive feature testing and benchmarking, I am quite impressed with Attunity Replicate and feel it is a good product I can recommend in my blog. I am going to write a few blogs about how to install and configure Attunity Replicate. This blog is the first of the series.

Attunity Replicate is the flagship product from Attunity, a company that started in 1988 and changed its name to Attunity in 2000. The company mainly focuses on products for data replication, change data capture (CDC), and data connectivity. Attunity Replicate supports a broad range of sources and targets, including:
RDBMS: Oracle, SQL, DB2, MySQL, Sybase, PostgreSQL
Data warehouses: Exadata, Teradata, IBM Netezza, Vertica, Pivotal, SQL Data Warehouse
Hadoop: Hortonworks, Cloudera, MapR
Cloud: AWS, Azure, Google Cloud
Streaming platforms: Apache Kafka and Confluent Platform
Applications: e.g., SAP
Legacy: IMS/DB, DB2 z/OS, RMS, VSAM

For data replication and CDC tools, my first question is always whether the product is mature enough to support a real production load. After some discussions and research, I was surprised to learn that Attunity Replicate is actually a foundation component of Microsoft's database migration tools; I will discuss that in future blogs. From what I have seen, there is a lot of demand for using Attunity Replicate to do CDC work from relational databases (like Oracle, DB2, and MySQL) to Kafka. I am going to cover the configuration for that in the next blog. Ok, let me get back to the installation steps for Attunity Replicate.

Installation Environment
I use a VM with CentOS Linux 7.5, 4 CPUs, and 16GB of memory. Copy all of the software binaries to a staging area on the Attunity Replicate server. For this blog, I use /tmp for quick configuration without worrying about creating an additional data disk. The Attunity software will be installed in the /opt/attunity directory.

Software Required
Here is the software required:

  • Attunity Replicate Software

areplicate-6.3.0-351.x86_64.rpm.
Unfortunately, I don't believe there is a download URL open to the public; you have to make a request to the Attunity sales team, and they will provide the download URL and a trial license file.

  • Database Connection Tools

For an Oracle database, I use the Oracle Instant Client. For other databases, you have to download the required ODBC drivers. If you plan to use an Oracle database as the CDC source, here are the files you need:
instantclient-basic-linux.x64-12.2.0.1.0.zip
instantclient-sqlplus-linux.x64-12.2.0.1.0.zip

Create Attunity Users/Groups
If you are not using an AD user and group, you need to create a local group and user.

[root@cdc-att-vm1 ~]# groupadd attunity
[root@cdc-att-vm1 ~]# useradd -m -s /bin/bash -g attunity attunity -p attunity123
[root@cdc-att-vm1 ~]# id attunity
uid=1001(attunity) gid=1001(attunity) groups=1001(attunity)

Create Installation Directories

[root@cdc-att-vm1 ~]# mkdir /opt/attunity
[root@cdc-att-vm1 ~]# mkdir /opt/oracle

Install Oracle Client
Unzip the Oracle client zip files.

[root@cdc-att-vm1 ~]# unzip /tmp/instantclient-basic-linux.x64-12.2.0.1.0.zip -d /opt/oracle
Archive:  /tmp/instantclient-basic-linux.x64-12.2.0.1.0.zip
  inflating: /opt/oracle/instantclient_12_2/adrci  
  inflating: /opt/oracle/instantclient_12_2/BASIC_README  
  inflating: /opt/oracle/instantclient_12_2/genezi  
  inflating: /opt/oracle/instantclient_12_2/libclntshcore.so.12.1  
  inflating: /opt/oracle/instantclient_12_2/libclntsh.so.12.1  
  inflating: /opt/oracle/instantclient_12_2/libipc1.so  
  inflating: /opt/oracle/instantclient_12_2/libmql1.so  
  inflating: /opt/oracle/instantclient_12_2/libnnz12.so  
  inflating: /opt/oracle/instantclient_12_2/libocci.so.12.1  
  inflating: /opt/oracle/instantclient_12_2/libociei.so  
  inflating: /opt/oracle/instantclient_12_2/libocijdbc12.so  
  inflating: /opt/oracle/instantclient_12_2/libons.so  
  inflating: /opt/oracle/instantclient_12_2/liboramysql12.so  
  inflating: /opt/oracle/instantclient_12_2/ojdbc8.jar  
  inflating: /opt/oracle/instantclient_12_2/uidrvci  
  inflating: /opt/oracle/instantclient_12_2/xstreams.jar  
[root@cdc-att-vm1 ~]# unzip /tmp/instantclient-sqlplus-linux.x64-12.2.0.1.0.zip -d /opt/oracle
Archive:  /tmp/instantclient-sqlplus-linux.x64-12.2.0.1.0.zip
  inflating: /opt/oracle/instantclient_12_2/glogin.sql  
  inflating: /opt/oracle/instantclient_12_2/libsqlplusic.so  
  inflating: /opt/oracle/instantclient_12_2/libsqlplus.so  
  inflating: /opt/oracle/instantclient_12_2/sqlplus  
  inflating: /opt/oracle/instantclient_12_2/SQLPLUS_README 

Change the ownership to the attunity user.

[root@cdc-att-vm1 ~]# chown -R attunity:attunity /opt/oracle
[root@cdc-att-vm1 ~]# ls -l /opt/oracle
total 4
drwxr-xr-x. 2 attunity attunity 4096 May 27 01:07 instantclient_12_2

Configure Oracle Client
After installing the Oracle Instant Client, I need to create a tnsnames.ora file for it.

[root@cdc-att-vm1 instantclient_12_2]# mkdir -p /opt/oracle/instantclient_12_2/network/admin
[root@cdc-att-vm1 instantclient_12_2]# cd /opt/oracle/instantclient_12_2/network/admin
[root@cdc-att-vm1 admin]# vi tnsnames.ora
[root@cdc-att-vm1 admin]# cat tnsnames.ora
pdb1= 
(DESCRIPTION = 
  (ADDRESS = (PROTOCOL = TCP)(HOST = 10.21.2.12 )(PORT = 1521))
  (CONNECT_DATA =
    (SERVER = DEDICATED)
    (SERVICE_NAME = pdb1)
  )
)

Configure Attunity User Environment

[attunity@cdc-att-vm1 ~]$ vi .bash_profile
[attunity@cdc-att-vm1 ~]$ cat .bash_profile
# .bash_profile

# Get the aliases and functions
if [ -f ~/.bashrc ]; then
	. ~/.bashrc
fi

# User specific environment and startup programs
export ORACLE_INSTANT_HOME=/opt/oracle/instantclient_12_2
export TNS_ADMIN=$ORACLE_INSTANT_HOME/network/admin
export ATTUNITY_HOME=/opt/attunity

export LD_LIBRARY_PATH=$ORACLE_INSTANT_HOME:$LD_LIBRARY_PATH
export PATH=$PATH:$ORACLE_INSTANT_HOME


PATH=$PATH:$HOME/.local/bin:$HOME/bin

export PATH
[attunity@cdc-att-vm1 ~]$ . .bash_profile
[attunity@cdc-att-vm1 ~]$ sqlplus sys@pdb1 as sysdba
SQL> show con_id;
CON_ID
------------------------------
3

SQL> select name, open_mode from v$database;
NAME	  OPEN_MODE
--------- --------------------
MYDB1	  READ WRITE

SQL> exit

Install Attunity Replicate
Make sure to log on as the root user and run the following command:

[root@cdc-att-vm1 tmp]# nocredentials=true verbose=true rpm -ivh --prefix /opt /tmp/areplicate-6.3.0-351.x86_64.rpm
Preparing...                          ################################# [100%]
pre install ...
new installation ...
nocredentials set to true - skipping check for user/group
user=attunity
group=attunity
Updating / installing...
   1:areplicate-6.3.0-351             ################################# [100%]
post install ...
creating /opt/attunity/replicate/bin/site_arep_login.sh
changing ownership of /opt/attunity/replicate to attunity:attunity
installing areplicate as a service ...
/opt/attunity/replicate/bin/arep.sh: running as root
/opt/attunity/replicate/bin/arep.sh: installation directory is /etc/init.d
/opt/attunity/replicate/bin/arep.sh: sourcing /opt/attunity/replicate/bin/arep_login.sh
/opt/attunity/replicate/bin/arep.sh: the Attunity Replicate root directory is to /opt/attunity/replicate
/opt/attunity/replicate/bin/arep.sh: sourcing /opt/attunity/replicate/bin/site_arep_login.sh
/opt/attunity/replicate/bin/arep.sh: created /etc/init.d/areplicate
/opt/attunity/replicate/bin/arep.sh: created symlink /opt/attunity/replicate/bin/areplicate and run level symlinks
/opt/attunity/replicate/bin/arep.sh: created empty instance specific settings file /opt/attunity/replicate/data/areplicate_arep_login.sh
/opt/attunity/replicate/bin/arep.sh: installed /etc/init.d/areplicate
/opt/attunity/replicate/bin/arep.sh: root directory = /opt/attunity/replicate
/opt/attunity/replicate/bin/arep.sh: internal port  = 3550
/opt/attunity/replicate/bin/arep.sh: rest port      = 3552
/opt/attunity/replicate/bin/arep.sh: data directory = /opt/attunity/replicate/data
/opt/attunity/replicate/bin/arep.sh: installed by   = root
starting service(s) ...
starting service areplicate ...

Check out the processes.

[root@cdc-att-vm1 tmp]# ps -ef|grep attunity
attunity  38431      1  0 01:40 ?        00:00:00 /opt/attunity/replicate/bin/repctl -d /opt/attunity/replicate/data service start port=3550 rest_port=3552
attunity  38432  38431  1 01:40 ?        00:00:01 /opt/attunity/replicate/bin/repctl -d /opt/attunity/replicate/data service start port=3550 rest_port=3552
root      38554   1214  0 01:42 pts/0    00:00:00 grep --color=auto attunity

Change root user Environment
Modify the .bash_profile file for the root user.

[root@cdc-att-vm1 ~]# vi .bash_profile
[root@cdc-att-vm1 ~]# cat .bash_profile 
# .bash_profile

# Get the aliases and functions
if [ -f ~/.bashrc ]; then
	. ~/.bashrc
fi

# User specific environment and startup programs
export PATH=/opt/attunity/replicate/bin:/opt/oracle/instantclient_12_2:.:$PATH
export LD_LIBRARY_PATH=/opt/attunity/replicate/lib:/opt/oracle/instantclient_12_2:$LD_LIBRARY_PATH

PATH=$PATH:$HOME/bin

export PATH
[root@cdc-att-vm1 ~]# . .bash_profile
[root@cdc-att-vm1 ~]# areplicate status
running: /opt/attunity/replicate/bin/repctl -d /opt/attunity/replicate/data service start port=3550 rest_port=3552

Change Attunity admin User Password
You need to change the default password for the admin user; otherwise, you cannot log on to the Attunity Replicate UI.

[root@cdc-att-vm1 ~]# repctl
Enter a command or quit
SETSERVERPASSWORD cdcatt
[SETSERVERPASSWORD command] Succeeded
Enter a command or quit
quit
[prompt command] Succeeded

After changing the admin password, you need to bounce the Replicate server.

[root@cdc-att-vm1 ~]# areplicate stop
[root@cdc-att-vm1 ~]# areplicate start
[root@cdc-att-vm1 ~]# areplicate status
running: /opt/attunity/replicate/bin/repctl -d /opt/attunity/replicate/data service start port=3550 rest_port=3552

Access Attunity Replicate
The Attunity Replicate UI is available at http://<server-host>:3552/attunityreplicate. After logging on, you should see the following screen.


That's it for the installation. In the next blog, I am going to discuss how to configure an Oracle source database and a Kafka target.

Useful Scripts for Kafka

I have used Kafka for the past few years and it is an interesting technology. There are many blogs and articles about how to install a Kafka cluster; whether it is a single-node or multi-node cluster, the command usage is quite similar. In this blog, I am going to show some useful commands and scripts to quickly test out your Kafka topics.

Set Environment
When dealing with --zookeeper or --broker-list, I have to pass a long list of ZooKeeper and Kafka broker hosts with port numbers. To simplify the process, I usually set them in an environment file. For my testing, I am going to create a test topic, wz-test1.

[testuser@wz-vm1 kafka-test]$ cat set-kafka.env
export KAFKAZKHOSTS=zknode1:2181,zknode2:2181,zknode3:2181
export KAFKABROKERS=kbnode1:9092,kbnode2:9092,kbnode3:9092
export KAFKACURTOPIC=wz-test1

Create and list Topic
Here are the scripts to create a topic and then list the topics.

[testuser@wz-vm1 kafka-test]$ cat create-topic.sh
. set-kafka.env
kafka-topics --create --zookeeper $KAFKAZKHOSTS --replication-factor 3 --partitions 5 --topic $KAFKACURTOPIC

[testuser@wz-vm1 kafka-test]$ cat ./list-topic.sh
. set-kafka.env
kafka-topics --list --zookeeper $KAFKAZKHOSTS 

Create a topic.

[testuser@wz-vm1 kafka-test]$ ./create-topic.sh
Created topic "wz-test1".

[testuser@wz-vm1 kafka-test]$ ./list-topic.sh
wz-test1

Describe Topic

[testuser@wz-vm1 kafka-test]$ cat desc-topic.sh
. set-kafka.env
kafka-topics --describe --zookeeper $KAFKAZKHOSTS --topic $KAFKACURTOPIC

[testuser@wz-vm1 kafka-test]$ ./desc-topic.sh
Topic:wz-test1  PartitionCount:5        ReplicationFactor:3     Configs:
        Topic: wz-test1 Partition: 0    Leader: 3       Replicas: 3,1,2 Isr: 3,1,2
        Topic: wz-test1 Partition: 1    Leader: 1       Replicas: 1,2,3 Isr: 1,2,3
        Topic: wz-test1 Partition: 2    Leader: 2       Replicas: 2,3,1 Isr: 2,3,1
        Topic: wz-test1 Partition: 3    Leader: 3       Replicas: 3,2,1 Isr: 3,2,1
        Topic: wz-test1 Partition: 4    Leader: 1       Replicas: 1,3,2 Isr: 1,3,2

Delete Topic

[testuser@wz-vm1 kafka-test]$ cat delete-topic.sh
. set-kafka.env
kafka-topics --delete --zookeeper $KAFKAZKHOSTS --topic $1
[testuser@wz-vm1 kafka-test]$ ./list-topic.sh 
wz-test1

[testuser@wz-vm1 kafka-test]$ ./delete-topic.sh wz-test1
Topic wz-test1 is marked for deletion.
Note: This will have no impact if delete.topic.enable is not set to true.

[testuser@wz-vm1 kafka-test]$ ./list-topic.sh 
[testuser@wz-vm1 kafka-test]$

Produce a message

[testuser@wz-vm1 kafka-test]$ cat produce-msg.sh
. set-kafka.env
echo "hello msg: `date '+%Y-%m-%d %H:%M:%S'`" | kafka-console-producer --broker-list $KAFKABROKERS --topic $KAFKACURTOPIC

[testuser@wz-vm1 kafka-test]$ ./produce-msg.sh
>>[testuser@wz-vm1 kafka-test]$

Consume message

[testuser@wz-vm1 kafka-test]$ cat consume-msg.sh
. set-kafka.env
kafka-console-consumer --bootstrap-server $KAFKABROKERS --topic $KAFKACURTOPIC --from-beginning

[testuser@wz-vm1 kafka-test]$ ./consume-msg.sh
hello msg: 2019-01-06 19:15:34
^CProcessed a total of 1 messages

Push a file to topic

[testuser@wz-vm1 kafka-test]$ cat test_file.txt
hello1
hello2
hello3
hello4
hello5

[testuser@wz-vm1 kafka-test]$  kafka-console-producer --broker-list $KAFKABROKERS --topic $KAFKACURTOPIC < test_file.txt
>>>>>>

[testuser@wz-vm1 kafka-test]$ ./consume-msg.sh
hello5
hello3
hello1
hello4
hello2
^CProcessed a total of 5 messages
[testuser@wz-vm1 kafka-test]$

Note that the lines come back in a different order than they were sent: Kafka only guarantees message ordering within a single partition, and this topic has five partitions.

Configure Retention Time
The default retention period in Kafka is 7 days (168 hours). If you want to delete all messages in a topic, you can set the retention.ms parameter to a small value.


[testuser@wz-vm1 kafka-test]$ cat config-retention.sh
. set-kafka.env
kafka-configs --alter --zookeeper $KAFKAZKHOSTS --entity-type topics --entity-name $KAFKACURTOPIC --add-config retention.ms=$1

Set the retention time to 1 minute.

[testuser@wz-vm1 kafka-test]$ ./config-retention.sh 60000
Completed Updating config for entity: topic 'wz-test1'.

[testuser@wz-vm1 kafka-test]$ ./desc-topic.sh
Topic:wz-test1  PartitionCount:5        ReplicationFactor:3     Configs:retention.ms=60000
        Topic: wz-test1 Partition: 0    Leader: 3       Replicas: 3,1,2 Isr: 3,1,2
        Topic: wz-test1 Partition: 1    Leader: 1       Replicas: 1,2,3 Isr: 1,2,3
        Topic: wz-test1 Partition: 2    Leader: 2       Replicas: 2,3,1 Isr: 2,3,1
        Topic: wz-test1 Partition: 3    Leader: 3       Replicas: 3,2,1 Isr: 3,2,1
        Topic: wz-test1 Partition: 4    Leader: 1       Replicas: 1,3,2 Isr: 1,3,2
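
Once the old messages have been purged, you will usually want to remove the override so the topic reverts to the broker default; kafka-configs supports --delete-config for this:

```shell
. set-kafka.env
kafka-configs --alter --zookeeper $KAFKAZKHOSTS --entity-type topics --entity-name $KAFKACURTOPIC --delete-config retention.ms
```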

Bulk test of producing messages
If you want to test out the distribution of messages, you can use kafka-verifiable-producer.

[testuser@wz-vm1 kafka-test]$ cat test-producer.sh
. set-kafka.env
kafka-verifiable-producer --broker-list $KAFKABROKERS --topic $KAFKACURTOPIC --max-messages 10

[testuser@wz-vm1 kafka-test]$ ./test-producer.sh
{"timestamp":1552341811025,"name":"startup_complete"}
{"timestamp":1552341811297,"name":"producer_send_success","key":null,"value":"4","offset":1,"topic":"wz-test1","partition":2}
{"timestamp":1552341811300,"name":"producer_send_success","key":null,"value":"9","offset":2,"topic":"wz-test1","partition":2}
{"timestamp":1552341811301,"name":"producer_send_success","key":null,"value":"2","offset":3,"topic":"wz-test1","partition":4}
{"timestamp":1552341811302,"name":"producer_send_success","key":null,"value":"7","offset":4,"topic":"wz-test1","partition":4}
{"timestamp":1552341811302,"name":"producer_send_success","key":null,"value":"3","offset":1,"topic":"wz-test1","partition":1}
{"timestamp":1552341811302,"name":"producer_send_success","key":null,"value":"8","offset":2,"topic":"wz-test1","partition":1}
{"timestamp":1552341811302,"name":"producer_send_success","key":null,"value":"0","offset":1,"topic":"wz-test1","partition":3}
{"timestamp":1552341811302,"name":"producer_send_success","key":null,"value":"5","offset":2,"topic":"wz-test1","partition":3}
{"timestamp":1552341811303,"name":"producer_send_success","key":null,"value":"1","offset":1,"topic":"wz-test1","partition":0}
{"timestamp":1552341811303,"name":"producer_send_success","key":null,"value":"6","offset":2,"topic":"wz-test1","partition":0}
{"timestamp":1552341811312,"name":"shutdown_complete"}
{"timestamp":1552341811313,"name":"tool_data","sent":10,"acked":10,"target_throughput":-1,"avg_throughput":34.602076124567475}

Consume the messages under the same consumer group

[testuser@wz-vm1 kafka-test]$ cat test-consumer.sh
. set-kafka.env
kafka-verifiable-consumer --broker-list $KAFKABROKERS --topic $KAFKACURTOPIC --max-messages 3 --group-id $1

[testuser@wz-vm1 kafka-test]$ ./test-consumer.sh myconsumergrp1
{"timestamp":1552342329954,"name":"startup_complete"}
{"timestamp":1552342330191,"name":"partitions_revoked","partitions":[]}
{"timestamp":1552342330215,"name":"partitions_assigned","partitions":[{"topic":"wz-test1","partition":4},{"topic":"wz-test1","partition":3},{"topic":"wz-test1","partition":2},{"topic":"wz-test1","partition":1},{"topic":"wz-test1","partition":0}]}
^C{"timestamp":1552342343799,"name":"shutdown_complete"}

How to do tagging efficiently in Terraform?

I have done a lot of Terraform work on all three major clouds (AWS, Azure, GCP) over the past few months. If you need to build up a lot of resources in the cloud, Terraform is the way to go. However, it sometimes seems easier to copy a chunk of code from one place to another, and in a huge enterprise-level environment this kind of duplicated code is a nightmare for changes and maintenance. Tagging is one area where it is very easy to end up with duplicated code everywhere. In this blog, I am going to discuss how to do tagging efficiently in Terraform.

Here is an example of how tagging is done in an Azure environment.

resource "azurerm_resource_group" "default" {
   name     = "my-test-rg"
   location = "eastus"

   tags {
    environment = "Dev"
    project   = "MyLab"
    owner     = "weidong.zhou"
   }
}

resource "azurerm_virtual_network" "default" {
   name                = "mytest-vnet1"
   address_space       = ["10.1.0.0/16"]
   location            = "${azurerm_resource_group.default.location}"
   resource_group_name = "${azurerm_resource_group.default.name}"
  
   tags {
    environment = "Dev"
    project   = "MyLab"
    owner     = "weidong.zhou"
    network   = "vnet1"
    support   = "IT operation"
   }
}

You can see that the tags section is easily duplicated across almost every resource that needs to be tagged, which makes adding or changing tags in the future a nightmare. I remembered seeing some code like the following somewhere.

resource "aws_vpc" "this" {
  count = "${var.create_vpc ? 1 : 0}"
 
  cidr_block                       = "${var.cidr}"
  instance_tenancy                 = "${var.instance_tenancy}"
  enable_dns_hostnames             = "${var.enable_dns_hostnames}"
  enable_dns_support               = "${var.enable_dns_support}"
  assign_generated_ipv6_cidr_block = "${var.assign_generated_ipv6_cidr_block}"
 
  tags = "${merge(map("Name", format("%s", var.name)), var.tags, var.vpc_tags)}"
}

Pay attention to the last line: in one line, it does all the tagging the resource needs. This is the direction I was looking for. After some investigation, I figured out a similar approach for my implementation. Here is the sample code:

locals {
  common_tags = {
    environment  = "${var.environment}"
    project      = "${var.project}"  
    Owner        = "${var.owner}"
  }
  extra_tags  = {
    network = "${var.network1_name}"
    support = "${var.network_support_name}"
  }
}
 
resource "azurerm_resource_group" "default" {
   name     = "my-test-rg"
   location = "eastus"

   tags = "${merge( local.common_tags, local.extra_tags)}"
}

resource "azurerm_virtual_network" "default" {
   name                = "mytest-vnet1"
   address_space       = ["10.1.0.0/16"]
   location            = "${azurerm_resource_group.default.location}"
   resource_group_name = "${azurerm_resource_group.default.name}"
 
   tags = "${merge( local.common_tags, local.extra_tags)}"
}

For a project with many tags on every resource, this approach helps a lot with maintenance.
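
For completeness, the locals above reference input variables that would be declared along these lines (names and defaults are illustrative, matching the tag values used earlier):

```hcl
variable "environment" { default = "Dev" }
variable "project"     { default = "MyLab" }
variable "owner"       { default = "weidong.zhou" }

variable "network1_name"        { default = "vnet1" }
variable "network_support_name" { default = "IT operation" }
```

One detail worth knowing: merge gives precedence to later arguments, so if extra_tags ever defines a key that also exists in common_tags, the extra_tags value wins.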

Use Oven to Fix iMac Hardware Issue

Seriously? Yes, I am not joking. I like repairing stuff, not to save money, but just to see whether I can do it. It is fun to take on something that seems quite impossible. Ok, to make the story short: I have had an iMac for almost nine years. It is old, but it still functions well. Recently it showed the striped-screen issue shown below.

I know it is time to get a new computer to replace this one. I had done some repairs on my home Windows PC and laptop a long time back, but I had never seriously fixed a hardware issue on a Mac. From what I read on the internet, the symptom seemed related to the graphics card. A new graphics card would cost about $150, far exceeding what this iMac is worth right now. Ok, let me find out whether anyone has fixed a similar issue before. Then I found this one: 2010 iMac 27″ Graphics Card Replacement.

It is a nice video showing how to open an iMac and replace the graphics card with a new one, with very good instructions on opening the iMac and taking parts out. Mine was actually more complicated than the one in the video, and I had to take out the whole motherboard to get to the graphics card. Then I found another video, Burn the VGA and Fix your iMac – Don’t Pay Shitty Problems!!!. It is an insane approach to fixing this striped-screen issue: put the graphics card into an oven at 380°F for 8 minutes. This is probably the craziest idea for repairing a Mac I have ever heard. But from the comments, it seems it has indeed fixed many people's problems. Anyway, I was going to throw out this computer, so there was nothing to lose in trying to fix it.

So my journey began during the Thanksgiving holiday. First I bought the required tools from Amazon: OWC General Servicing Kit for all Apple iMacs 2007 & Later. Actually, the most useful tools are the T10 and T8 Torx drivers and the suction cups.

I followed the videos' instructions to remove the monitor screen and take out the monitor-related parts. Here is what it looks like with the monitor removed.

As you can see, Apple intentionally makes repair work on the iMac super painful. Everything is packed into this limited space with many wires and cables. I was not even sure I could put everything back after I disassembled the iMac. After I removed many screws, the whole motherboard was still stuck and could not be taken out. I reviewed the videos many times and still did not have a clue why it was stuck. Finally I remembered that I had upgraded the RAM before, and the RAM is located at the bottom of the iMac. Maybe it was the RAM. So I took out the RAM and tried again. Bingo! It was the RAM. Here is a photo of the motherboard:

I checked the clock: 2 hours had passed. Getting the graphics card off the motherboard is quite simple, just a few minutes' work.

As the video instructed, I put the card in the oven (380°F) for 8 minutes, then took it out and let it cool off completely. I thought it might be easier to put everything back. I was completely wrong. It took 200% to 300% of the effort to put everything back in place. As the space inside the iMac is very tight, any small variation in the positioning of the motherboard would cause an issue, and I had to redo the whole assembly process. Some of the issues were:

  • One time, I assembled everything and then realized I could not put the RAM in at the bottom. I managed to squeeze in one bank of RAM, but still had issues with the other 3 banks.
  • Sometimes I had a perfect fit for the RAM, then realized I had forgotten to plug in the data cable from the hard disk to the motherboard. The computer would show the Apple logo, but then a folder icon with a question mark.
  • A few times, I forgot to connect the monitor cable to the motherboard. During boot, it made the normal startup sound, but with a completely black screen.
  • Sometimes a few cables were stuck behind the motherboard and I couldn’t find them when I tried to plug them in.

Finally, after almost 10 rounds of the re-assembly process, I got it working. It shows the nice Apple logo.

I was exhausted after 9 hours of work, but really had fun fixing stuff.