GCP AlloyDB Blog Series Part 5 : Compare AlloyDB Omni with PostgreSQL

In Part 1 of the AlloyDB blog series, I discussed the introduction of AlloyDB for PostgreSQL. In Part 2, I covered the installation of AlloyDB Omni in a GCE VM. In Part 3, I showed how to connect to AlloyDB Omni using psql and pgAdmin. In Part 4, I loaded TPCH data into the AlloyDB Omni instance and checked out AlloyDB Omni’s columnar engine. In Part 5, I will create a new VM with the same vCPU and memory settings as the VM used for AlloyDB Omni in the previous blogs, install a PostgreSQL database server, load the same TPCH data, and measure query performance there. Time for a duel, AlloyDB Omni vs. PostgreSQL: which one is better?

Create VM for PostgreSQL Database Server

Create a GCP VM with the same configuration used in Part 2 for the installation of AlloyDB Omni in a GCE VM.

export VM_NAME=pgdb
export VM_DISK_SIZE=50
export VM_MACHINE_TYPE=n2-highmem-4
export VM_OS_IMAGE=projects/centos-cloud/global/images/centos-stream-9-v20230711
export GCP_PROJECT=<put your project id here>
export GCP_ZONE=us-west4-b 
export GCP_SUBNET=<put your subnet here>
export GCP_SERVICE_ACCOUNT=<put your GCP Service account here>

gcloud compute instances create $VM_NAME \
    --project=$GCP_PROJECT \
    --zone=$GCP_ZONE \
    --machine-type=$VM_MACHINE_TYPE \
    --network-interface=network-tier=PREMIUM,subnet=$GCP_SUBNET,no-address \
    --metadata=enable-os-login=true \
    --maintenance-policy=MIGRATE \
    --provisioning-model=STANDARD \
    --service-account=$GCP_SERVICE_ACCOUNT \
    --scopes=https://www.googleapis.com/auth/devstorage.read_write,https://www.googleapis.com/auth/logging.write,https://www.googleapis.com/auth/monitoring.write,https://www.googleapis.com/auth/servicecontrol,https://www.googleapis.com/auth/service.management.readonly,https://www.googleapis.com/auth/trace.append \
    --create-disk=auto-delete=yes,boot=yes,device-name=instance-1,image=$VM_OS_IMAGE,mode=rw,size=$VM_DISK_SIZE,type=projects/$GCP_PROJECT/zones/$GCP_ZONE/diskTypes/pd-ssd
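
As an optional sanity check (my own addition, not in the original steps), you can confirm the new VM ended up with the same machine type as the Omni VM before installing anything:

gcloud compute instances describe $VM_NAME \
    --project=$GCP_PROJECT \
    --zone=$GCP_ZONE \
    --format="value(name, machineType, status)"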

Configure VM and Install PostgreSQL v15 Database

Create a new user, pguser.

gcloud compute ssh --zone $GCP_ZONE $VM_NAME --tunnel-through-iap --project $GCP_PROJECT  

sudo yum install -y yum-utils wget unzip 

export PGDB_USER=pguser
sudo useradd -g adm -G google-sudoers ${PGDB_USER}
groups
id ${PGDB_USER}
sudo timedatectl set-timezone America/Chicago
ls -l /etc/localtime
sudo su ${PGDB_USER}
cd
pwd 

Install and configure PostgreSQL database

sudo su -
dnf module -y install postgresql:15
postgresql-setup --initdb
systemctl enable --now postgresql

Here is the output:

[pguser@pgdb weidong.zhou]$ sudo su -
[root@pgdb ~]# dnf module -y install postgresql:15
Last metadata expiration check: 0:01:24 ago on Mon 21 Aug 2023 06:45:28 PM CDT.
Dependencies resolved.
=======================================================================================================================================
 Package                               Architecture         Version                                      Repository               Size
=======================================================================================================================================
Installing group/module packages:
 postgresql-server                     x86_64               15.2-1.module_el9+264+92dde3f0               appstream               6.1 M
Installing dependencies:
 postgresql                            x86_64               15.2-1.module_el9+264+92dde3f0               appstream               1.7 M
 postgresql-private-libs               x86_64               15.2-1.module_el9+264+92dde3f0               appstream               138 k
Installing module profiles:
 postgresql/server                                                                                                                    
Enabling module streams:
 postgresql                                                 15                                                                        

Transaction Summary
=======================================================================================================================================
Install  3 Packages

Total download size: 8.0 M
Installed size: 31 M
Downloading Packages:
(1/3): postgresql-private-libs-15.2-1.module_el9+264+92dde3f0.x86_64.rpm                               333 kB/s | 138 kB     00:00    
(2/3): postgresql-15.2-1.module_el9+264+92dde3f0.x86_64.rpm                                            2.5 MB/s | 1.7 MB     00:00    
(3/3): postgresql-server-15.2-1.module_el9+264+92dde3f0.x86_64.rpm                                     6.0 MB/s | 6.1 MB     00:01    
---------------------------------------------------------------------------------------------------------------------------------------
Total                                                                                                  5.9 MB/s | 8.0 MB     00:01     
Running transaction check
Transaction check succeeded.
Running transaction test
Transaction test succeeded.
Running transaction
  Preparing        :                                                                                                               1/1 
  Installing       : postgresql-private-libs-15.2-1.module_el9+264+92dde3f0.x86_64                                                 1/3 
  Installing       : postgresql-15.2-1.module_el9+264+92dde3f0.x86_64                                                              2/3 
  Running scriptlet: postgresql-server-15.2-1.module_el9+264+92dde3f0.x86_64                                                       3/3 
  Installing       : postgresql-server-15.2-1.module_el9+264+92dde3f0.x86_64                                                       3/3 
  Running scriptlet: postgresql-server-15.2-1.module_el9+264+92dde3f0.x86_64                                                       3/3 
  Verifying        : postgresql-15.2-1.module_el9+264+92dde3f0.x86_64                                                              1/3 
  Verifying        : postgresql-private-libs-15.2-1.module_el9+264+92dde3f0.x86_64                                                 2/3 
  Verifying        : postgresql-server-15.2-1.module_el9+264+92dde3f0.x86_64                                                       3/3 

Installed:
  postgresql-15.2-1.module_el9+264+92dde3f0.x86_64                postgresql-private-libs-15.2-1.module_el9+264+92dde3f0.x86_64        
  postgresql-server-15.2-1.module_el9+264+92dde3f0.x86_64        

Complete!
[root@pgdb ~]# postgresql-setup --initdb
 * Initializing database in '/var/lib/pgsql/data'
 * Initialized, logs are in /var/lib/pgsql/initdb_postgresql.log
[root@pgdb ~]# systemctl enable --now postgresql
Created symlink /etc/systemd/system/multi-user.target.wants/postgresql.service → /usr/lib/systemd/system/postgresql.service.
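
Optionally, before moving on, verify the service is up and check the server version (a quick check I added, not part of the original flow):

systemctl status postgresql --no-pager
sudo -u postgres psql -c "select version();"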

Create Database/User and Copy Data/Scripts

Run the following to create the database and user, then change the passwords for both postgres and pguser.

su - postgres

export PGDB_USER=pguser
createuser ${PGDB_USER}
createdb tpch_db -O ${PGDB_USER}

psql -c "select usename from pg_user;"
psql -l

psql
show server_version;
\conninfo
alter user postgres with password 'postgres123';
alter user pguser with password 'pguser123';
\q
exit 

Here is what the result looks like. Note: the database is the same version, v15.2, as my AlloyDB Omni database in the last blog, Load TPCH Data into AlloyDB Omni.

[root@pgdb data]# su - postgres
[postgres@pgdb ~]$ export PGDB_USER=pguser
[postgres@pgdb ~]$ createuser ${PGDB_USER}
[postgres@pgdb ~]$ createdb tpch_db -O ${PGDB_USER}
[postgres@pgdb ~]$ psql -c "select usename from pg_user;"
 usename  
----------
 postgres
 pguser
(2 rows)

[postgres@pgdb ~]$ psql -l
                                                 List of databases
   Name    |  Owner   | Encoding |   Collate   |    Ctype    | ICU Locale | Locale Provider |   Access privileges   
-----------+----------+----------+-------------+-------------+------------+-----------------+-----------------------
 postgres  | postgres | UTF8     | en_US.UTF-8 | en_US.UTF-8 |            | libc            | 
 template0 | postgres | UTF8     | en_US.UTF-8 | en_US.UTF-8 |            | libc            | =c/postgres          +
           |          |          |             |             |            |                 | postgres=CTc/postgres
 template1 | postgres | UTF8     | en_US.UTF-8 | en_US.UTF-8 |            | libc            | =c/postgres          +
           |          |          |             |             |            |                 | postgres=CTc/postgres
 tpch_db   | pguser   | UTF8     | en_US.UTF-8 | en_US.UTF-8 |            | libc            | 
(4 rows)

[postgres@pgdb ~]$ psql
psql (15.2)
Type "help" for help.

postgres=# show server_version;
 server_version 
----------------
 15.2
(1 row)
postgres=# \conninfo
You are connected to database "postgres" as user "postgres" via socket in "/var/run/postgresql" at port "5432".
postgres=# alter user postgres with password 'postgres123';
ALTER ROLE
postgres=# alter user pguser with password 'pguser123';
ALTER ROLE
postgres=# \q
[postgres@pgdb ~]$ exit
logout
[root@pgdb data]# 

Copy TPCH data and related SQL scripts

I have already copied the TPCH data and related SQL scripts to a GCS bucket, so now I just need to copy the data and scripts to the right folders.

su - pguser
cd
mkdir tpch
gsutil cp -r gs://<your GCS bucket>/database/tpch .
ls -l  tpch/tpch1g
ls -l  tpch/ddl
sed -i  's/alloyuser/pguser/' tpch/ddl/tpch_load_1g_data.sql
cat tpch/ddl/tpch_load_1g_data.sql
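
A quick way to confirm the copy and the sed substitution worked (my own checks, not in the original steps):

wc -l tpch/tpch1g/lineitem.tbl                          # expect 6001215 lines for the 1GB scale
grep -c '/home/pguser' tpch/ddl/tpch_load_1g_data.sql   # expect 8, one per \copy line

If the \copy paths inside tpch_load_1g_data.sql still point at .../tpch/data/tpch1g (as in Part 4) while the files sit under tpch/tpch1g, adjust either the paths or the directory layout before loading.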

Create Database/Schema/Tables and Load TPCH Data

Run the following to create the schema and tables and then load the TPCH data. Make sure you are still logged in as the pguser OS user.

psql -d tpch_db 
\conninfo
create schema tpch1g;
SET search_path TO tpch1g, public;

-- Set the variables used by the DDL and data load scripts
\set cur_schema_name 'tpch1g'
\set cur_user 'pguser'
\set root_path '/home/pguser/tpch'

select :'root_path' "root_path";
select :'cur_schema_name' "cur_schema_name";
select :'cur_user' "cur_user";

GRANT SELECT, INSERT, UPDATE, DELETE, TRUNCATE, REFERENCES, TRIGGER ON ALL TABLES IN SCHEMA :"cur_schema_name" TO :"cur_user";
ALTER DEFAULT PRIVILEGES IN SCHEMA :"cur_schema_name" GRANT SELECT, INSERT, UPDATE, DELETE, TRUNCATE, REFERENCES, TRIGGER ON TABLES TO :"cur_user";

\i :root_path/ddl/tpch_create_table.sql
\i :root_path/ddl/tpch_create_index.sql 
\i :root_path/ddl/tpch_load_1g_data.sql

\q
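
Before timing anything, I would also run ANALYZE so the PostgreSQL planner has fresh statistics, and spot-check the load. This is my own addition; the original steps go straight to the queries.

psql -d tpch_db
SET search_path TO tpch1g, public;
ANALYZE;                                         -- refresh planner statistics after the bulk load
SELECT count(*) AS lineitem_rows FROM lineitem;  -- expect 6001215 at the 1GB scale
\q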

Compare TPCH Query Performance Between AlloyDB Omni and Postgres

Ok, now I have all of the data ready. Let’s do the comparison. The first query is Query 6, which I used in the last blog, Load TPCH Data into AlloyDB Omni. On AlloyDB Omni, the query took about 280 ms with the enable_columnar_scan parameter set to off and about 27 ms after the parameter was set back to on and google_columnar_engine_recommend was executed. Let’s see how PostgreSQL performs. The result is shown below.

tpch_db=> \timing on
Timing is on.
tpch_db=> SET search_path TO tpch1g, public;
SET
Time: 0.210 ms
tpch_db=> select                            
        sum(l_extendedprice * l_discount) as revenue
from
        lineitem
where
        l_shipdate >= date '1995-01-01'
        and l_shipdate < date '1995-01-01' + interval '1' year
        and l_discount between 0.09 - 0.01 and 0.09 + 0.01
        and l_quantity < 24
limit 1;
    revenue     
----------------
 185724059.4734
(1 row)

Time: 622.051 ms
tpch_db=> select
        sum(l_extendedprice * l_discount) as revenue
from
        lineitem
where
        l_shipdate >= date '1995-01-01'
        and l_shipdate < date '1995-01-01' + interval '1' year
        and l_discount between 0.09 - 0.01 and 0.09 + 0.01
        and l_quantity < 24
limit 1;
    revenue     
----------------
 185724059.4734
(1 row)

Time: 579.368 ms
tpch_db=> select
        sum(l_extendedprice * l_discount) as revenue
from
        lineitem
where
        l_shipdate >= date '1995-01-01'
        and l_shipdate < date '1995-01-01' + interval '1' year
        and l_discount between 0.09 - 0.01 and 0.09 + 0.01
        and l_quantity < 24
limit 1;
    revenue     
----------------
 185724059.4734
(1 row)

Time: 591.576 ms

I executed the query three times and got results ranging from 579 to 622 ms. So AlloyDB Omni is about 2 times faster without the columnar engine and about 20 times faster with the columnar engine. That is great performance.
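
If you want to see where PostgreSQL spends that time, an EXPLAIN (ANALYZE, BUFFERS) of the same query shows the chosen plan. This is a diagnostic I added; the exact plan and timings will vary by machine.

EXPLAIN (ANALYZE, BUFFERS)
select
        sum(l_extendedprice * l_discount) as revenue
from
        lineitem
where
        l_shipdate >= date '1995-01-01'
        and l_shipdate < date '1995-01-01' + interval '1' year
        and l_discount between 0.09 - 0.01 and 0.09 + 0.01
        and l_quantity < 24;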

Let me try another query, Query 2 : Minimum Cost Supplier Query. This query finds which supplier should be selected to place an order for a given part in a given region. The query is shown below:

select
	s_acctbal,
	s_name,
	n_name,
	p_partkey,
	p_mfgr,
	s_address,
	s_phone,
	s_comment
from
	part,
	supplier,
	partsupp,
	nation,
	region
where
	p_partkey = ps_partkey
	and s_suppkey = ps_suppkey
	and p_size = 25
	and p_type like '%STEEL'
	and s_nationkey = n_nationkey
	and n_regionkey = r_regionkey
	and r_name = 'EUROPE'
	and ps_supplycost = (
		select
			min(ps_supplycost)
		from
			partsupp,
			supplier,
			nation,
			region
		where
			p_partkey = ps_partkey
			and s_suppkey = ps_suppkey
			and s_nationkey = n_nationkey
			and n_regionkey = r_regionkey
			and r_name = 'EUROPE'
	)
order by
	s_acctbal desc,
	n_name,
	s_name,
	p_partkey
LIMIT 100;

Here is what the result looks like:

Database Type         | Columnar Engine | Execution SeqNo | Execution Time
----------------------+-----------------+-----------------+---------------
AlloyDB Omni (v15.2)  | Off             | 1               | 300.949 ms
AlloyDB Omni (v15.2)  | Off             | 2               | 296.547 ms
AlloyDB Omni (v15.2)  | On              | 1               | 207.195 ms
AlloyDB Omni (v15.2)  | On              | 2               | 170.969 ms
PostgreSQL (v15.2)    | N/A             | 1               | 430.005 ms
PostgreSQL (v15.2)    | N/A             | 2               | 394.203 ms

So the results above show that AlloyDB Omni is slightly better, about 30% faster, without the columnar engine, and more than 2 times faster with the columnar engine on.

So in my test cases AlloyDB Omni is indeed faster than PostgreSQL. Please note that AlloyDB Omni is a standalone version of AlloyDB that can run anywhere. An AlloyDB cluster running on GCP infrastructure will be much faster than a single-instance AlloyDB Omni. AlloyDB Omni is definitely a good way to test out the AlloyDB columnar store anywhere you want. As many people are concerned about the rising cost of cloud-based resources, AlloyDB Omni is a good, cheaper option for DEV and TEST environments, while an AlloyDB cluster can be used in pre-production or production environments.

In the next blog, I will discuss how to create an AlloyDB cluster and access it from a laptop.

GCP AlloyDB Blog Series Part 4 : Load TPCH Data into AlloyDB Omni

In Part 1 of the AlloyDB blog series, I discussed the introduction of AlloyDB for PostgreSQL. In Part 2, I covered the installation of AlloyDB Omni in a GCE VM. In Part 3, I showed how to connect to AlloyDB Omni using psql and pgAdmin. In Part 4, I will discuss how to load TPCH data into the AlloyDB Omni instance and check out AlloyDB Omni‘s columnar engine.

Generate TPCH Data

TPCH is a popular benchmark consisting of a dataset generation tool and a set of associated queries. Many organizations use it for benchmarking database performance. The dataset can be generated at different sizes, such as 1GB, 100GB, or much bigger. For my little VM, a 1GB dataset is fine. The largest table, lineitem, contains about 6 million rows in the 1GB dataset. The following shows the row count for each table.
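
(Row counts at the 1GB scale, taken from the COPY output later in this post.)

Table     | Rows (1GB scale)
----------+-----------------
region    |                5
nation    |               25
supplier  |           10,000
customer  |          150,000
part      |          200,000
partsupp  |          800,000
orders    |        1,500,000
lineitem  |        6,001,215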

So you can see that a 1000GB dataset would have about 6 billion rows in the lineitem table and 1.5 billion rows in the orders table. It is possible to download a 1GB dataset somewhere on the internet, but I prefer to build the dataset myself. The tool that builds this dataset is called dbgen. Before I can use it, I need a few packages installed so I can compile the dbgen source code. Those packages were taken care of in Step 2 of the Part 2 blog, so I do not need to install them again. Just execute the following to download and compile the dbgen code.
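
If you skipped Part 2, the compiler toolchain can be installed first with something like this (my assumption of the minimal packages dbgen needs to build):

sudo dnf install -y gcc make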

cd
mkdir tpch
cd tpch
wget https://github.com/electrum/tpch-dbgen/archive/refs/heads/master.zip
unzip master.zip
cd tpch-dbgen-master/
echo "#define EOL_HANDLING 1" >> config.h # remove the tail '|'
make

You may see some redefine warning messages; just ignore them. Ok, let me generate the 1GB dataset with one file per table. For a much larger dataset, you could improve load performance by generating multiple chunks per table and loading them in parallel (see the sketch after the output below).

./dbgen -s 1 -C 1 -f 
mkdir -p ../data/tpch1g
mv *.tbl ../data/tpch1g
ls -l ../data/tpch1g

Here is what the result looks like:

[alloyuser@omnidb tpch-dbgen-master]$ ./dbgen -s 1 -C 1 -f 
TPC-H Population Generator (Version 2.14.0)
Copyright Transaction Processing Performance Council 1994 - 2010
[alloyuser@omnidb tpch-dbgen-master]$ mkdir -p ../data/tpch1g
[alloyuser@omnidb tpch-dbgen-master]$ mv *.tbl ../data/tpch1g
[alloyuser@omnidb tpch-dbgen-master]$ ls -l ../data/tpch1g
total 1066456
-rw-r--r--. 1 alloyuser adm  24196144 Aug 20 20:31 customer.tbl
-rw-r--r--. 1 alloyuser adm 753862072 Aug 20 20:31 lineitem.tbl
-rw-r--r--. 1 alloyuser adm      2199 Aug 20 20:31 nation.tbl
-rw-r--r--. 1 alloyuser adm 170452161 Aug 20 20:31 orders.tbl
-rw-r--r--. 1 alloyuser adm 118184616 Aug 20 20:31 partsupp.tbl
-rw-r--r--. 1 alloyuser adm  23935125 Aug 20 20:31 part.tbl
-rw-r--r--. 1 alloyuser adm       384 Aug 20 20:31 region.tbl
-rw-r--r--. 1 alloyuser adm   1399184 Aug 20 20:31 supplier.tbl
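
For a much larger scale factor, you could generate each table in several chunks and run the chunks in parallel. Here is a rough sketch; check ./dbgen -h for the exact flags in this fork, as I am going from memory of the standard dbgen options:

# generate scale factor 100 in 4 chunks; each chunk writes its own *.tbl.<n> files
./dbgen -s 100 -C 4 -S 1 -f &
./dbgen -s 100 -C 4 -S 2 -f &
./dbgen -s 100 -C 4 -S 3 -f &
./dbgen -s 100 -C 4 -S 4 -f &
wait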

Create TPCH Loading Scripts

Ok, the data files are ready. Let me work on the table DDL scripts.

cd /home/alloyuser/tpch
mkdir ddl
cd ddl
vi tpch_create_table.sql
vi tpch_create_index.sql
vi tpch_load_1g_data.sql

Create the tpch_create_table.sql, tpch_create_index.sql, and tpch_load_1g_data.sql scripts with the following contents.

tpch_create_table.sql

-- This script requires the definition of cur_schema_name variable
select :'cur_schema_name' "Create Table : cur_schema_name";

-- nation
CREATE TABLE IF NOT EXISTS :"cur_schema_name".nation (
  "n_nationkey"  INT,
  "n_name"       CHAR(25),
  "n_regionkey"  INT,
  "n_comment"    VARCHAR(152)
  );

-- region
CREATE TABLE IF NOT EXISTS :"cur_schema_name".region (
  "r_regionkey"  INT,
  "r_name"       CHAR(25),
  "r_comment"    VARCHAR(152)
  );

-- supplier
CREATE TABLE IF NOT EXISTS :"cur_schema_name".supplier (
  "s_suppkey"     INT,
  "s_name"        CHAR(25),
  "s_address"     VARCHAR(40),
  "s_nationkey"   INT,
  "s_phone"       CHAR(15),
  "s_acctbal"     DECIMAL(15,2),
  "s_comment"     VARCHAR(101)
  );

-- customer
CREATE TABLE IF NOT EXISTS :"cur_schema_name".customer (
  "c_custkey"     INT,
  "c_name"        VARCHAR(25),
  "c_address"     VARCHAR(40),
  "c_nationkey"   INT,
  "c_phone"       CHAR(15),
  "c_acctbal"     DECIMAL(15,2),
  "c_mktsegment"  CHAR(10),
  "c_comment"     VARCHAR(117)
  );

-- part
CREATE TABLE IF NOT EXISTS :"cur_schema_name".part (
  "p_partkey"     INT,
  "p_name"        VARCHAR(55),
  "p_mfgr"        CHAR(25),
  "p_brand"       CHAR(10),
  "p_type"        VARCHAR(25),
  "p_size"        INT,
  "p_container"   CHAR(10),
  "p_retailprice" DECIMAL(15,2) ,
  "p_comment"     VARCHAR(23) 
);

-- partsupp
CREATE TABLE IF NOT EXISTS :"cur_schema_name".partsupp (
  "ps_partkey"     INT,
  "ps_suppkey"     INT,
  "ps_availqty"    INT,
  "ps_supplycost"  DECIMAL(15,2),
  "ps_comment"     VARCHAR(199)
  );

-- orders
CREATE TABLE IF NOT EXISTS :"cur_schema_name".orders (
  "o_orderkey"       INT,
  "o_custkey"        INT,
  "o_orderstatus"    CHAR(1),
  "o_totalprice"     DECIMAL(15,2),
  "o_orderdate"      DATE,
  "o_orderpriority"  CHAR(15),
  "o_clerk"          CHAR(15),
  "o_shippriority"   INT,
  "o_comment"        VARCHAR(79)
  );

-- lineitem
CREATE TABLE IF NOT EXISTS :"cur_schema_name".lineitem (
  "l_orderkey"          INT,
  "l_partkey"           INT,
  "l_suppkey"           INT,
  "l_linenumber"        INT,
  "l_quantity"          DECIMAL(15,2),
  "l_extendedprice"     DECIMAL(15,2),
  "l_discount"          DECIMAL(15,2),
  "l_tax"               DECIMAL(15,2),
  "l_returnflag"        CHAR(1),
  "l_linestatus"        CHAR(1),
  "l_shipdate"          DATE,
  "l_commitdate"        DATE,
  "l_receiptdate"       DATE,
  "l_shipinstruct"      CHAR(25),
  "l_shipmode"          CHAR(10),
  "l_comment"           VARCHAR(44)
  );

tpch_create_index.sql

-- This script requires the definition of cur_schema_name variable

select :'cur_schema_name' "Create Table : cur_schema_name";

create unique index c_ck on :"cur_schema_name".customer (c_custkey asc) ;
create index c_nk on :"cur_schema_name".customer (c_nationkey asc) ;
create unique index p_pk on :"cur_schema_name".part (p_partkey asc) ;
create unique index s_sk on :"cur_schema_name".supplier (s_suppkey asc) ;
create index s_nk on :"cur_schema_name".supplier (s_nationkey asc) ;
create index ps_pk on :"cur_schema_name".partsupp (ps_partkey asc) ;
create index ps_sk on :"cur_schema_name".partsupp (ps_suppkey asc) ;
create unique index ps_pk_sk on :"cur_schema_name".partsupp (ps_partkey asc, ps_suppkey asc) ;
create unique index ps_sk_pk on :"cur_schema_name".partsupp (ps_suppkey asc, ps_partkey asc) ;
create unique index o_ok on :"cur_schema_name".orders (o_orderkey asc) ;
create index o_ck on :"cur_schema_name".orders (o_custkey asc) ;
create index o_od on :"cur_schema_name".orders (o_orderdate asc) ;
create index l_ok on :"cur_schema_name".lineitem (l_orderkey asc) ;
create index l_pk on :"cur_schema_name".lineitem (l_partkey asc) ;
create index l_sk on :"cur_schema_name".lineitem (l_suppkey asc) ;
--create index l_ln on :"cur_schema_name".lineitem (l_linenumber asc) ;
create index l_sd on :"cur_schema_name".lineitem (l_shipdate asc) ;
create index l_cd on :"cur_schema_name".lineitem (l_commitdate asc) ;
create index l_rd on :"cur_schema_name".lineitem (l_receiptdate asc) ;
--create unique index l_ok_ln on :"cur_schema_name".lineitem (l_orderkey asc, l_linenumber asc) ;
--create unique index l_ln_ok on :"cur_schema_name".lineitem (l_linenumber asc, l_orderkey asc) ;
create index l_pk_sk on :"cur_schema_name".lineitem (l_partkey asc, l_suppkey asc) ;
create index l_sk_pk on :"cur_schema_name".lineitem (l_suppkey asc, l_partkey asc) ;
create unique index n_nk on :"cur_schema_name".nation (n_nationkey asc) ;
create index n_rk on :"cur_schema_name".nation (n_regionkey asc) ;
create unique index r_rk on :"cur_schema_name".region (r_regionkey asc) ;

tpch_load_1g_data.sql

\copy :"cur_schema_name".region     from '/home/alloyuser/tpch/data/tpch1g/region.tbl'        DELIMITER '|' CSV;
\copy :"cur_schema_name".nation     from '/home/alloyuser/tpch/data/tpch1g/nation.tbl'        DELIMITER '|' CSV;
\copy :"cur_schema_name".customer   from '/home/alloyuser/tpch/data/tpch1g/customer.tbl'    DELIMITER '|' CSV;
\copy :"cur_schema_name".supplier   from '/home/alloyuser/tpch/data/tpch1g/supplier.tbl'    DELIMITER '|' CSV;
\copy :"cur_schema_name".part       from '/home/alloyuser/tpch/data/tpch1g/part.tbl'            DELIMITER '|' CSV;
\copy :"cur_schema_name".partsupp   from '/home/alloyuser/tpch/data/tpch1g/partsupp.tbl'    DELIMITER '|' CSV;
\copy :"cur_schema_name".orders     from '/home/alloyuser/tpch/data/tpch1g/orders.tbl'        DELIMITER '|' CSV;
\copy :"cur_schema_name".lineitem   from '/home/alloyuser/tpch/data/tpch1g/lineitem.tbl'    DELIMITER '|' CSV;

Create Tables for TPCH and Load Data

Right now we have both the data and the SQL scripts ready. We can create the database, schema, and tables using the commands below. Note: do not execute these scripts and commands from psql inside the Omni container, as the scripts and data live on the hosting VM, not in the container.

Create DB/Schema/Tables and Load Data

psql -h localhost -U postgres

CREATE USER tpchuser WITH PASSWORD 'alloy123' CREATEDB;
CREATE DATABASE tpch_db OWNER tpchuser;

\c tpch_db
CREATE EXTENSION IF NOT EXISTS google_columnar_engine;
\c tpch_db tpchuser
\conninfo
CREATE SCHEMA tpch1g;
set search_path to tpch1g;

\set cur_schema_name 'tpch1g'
\set cur_user 'tpchuser'
\set root_path '/home/alloyuser/tpch'

select :'root_path' "root_path";
select :'cur_schema_name' "cur_schema_name";
select :'cur_user' "cur_user";

GRANT SELECT, INSERT, UPDATE, DELETE, TRUNCATE, REFERENCES, TRIGGER ON ALL TABLES IN SCHEMA :"cur_schema_name" TO :"cur_user";
ALTER DEFAULT PRIVILEGES IN SCHEMA :"cur_schema_name" GRANT SELECT, INSERT, UPDATE, DELETE, TRUNCATE, REFERENCES, TRIGGER ON TABLES TO :"cur_user";

\i :root_path/ddl/tpch_create_table.sql
\i :root_path/ddl/tpch_create_index.sql
\i :root_path/ddl/tpch_load_1g_data.sql
\q

The execution result is shown below:

postgres=# CREATE USER tpchuser WITH PASSWORD 'alloy123' CREATEDB;
CREATE ROLE
postgres=# CREATE DATABASE tpch_db OWNER tpchuser;
CREATE DATABASE
postgres=# \c tpch_db
psql (13.11, server 15.2)
WARNING: psql major version 13, server major version 15.
         Some psql features might not work.
You are now connected to database "tpch_db" as user "postgres".
tpch_db=# CREATE EXTENSION IF NOT EXISTS google_columnar_engine;
NOTICE:  extension "google_columnar_engine" already exists, skipping
CREATE EXTENSION
tpch_db=# \c tpch_db tpchuser
psql (13.11, server 15.2)
WARNING: psql major version 13, server major version 15.
         Some psql features might not work.
You are now connected to database "tpch_db" as user "tpchuser".
tpch_db=> \conninfo
You are connected to database "tpch_db" as user "tpchuser" on host "localhost" (address "::1") at port "5432".
tpch_db=> CREATE SCHEMA tpch1g;
CREATE SCHEMA
tpch_db=> set search_path to tpch1g;
SET
tpch_db=> \set cur_schema_name 'tpch1g'
tpch_db=> \set cur_user 'tpchuser'
tpch_db=> \set root_path '/home/alloyuser/tpch'
tpch_db=> select :'root_path' "root_path";
      root_path       
----------------------
 /home/alloyuser/tpch
(1 row)

tpch_db=> select :'cur_schema_name' "cur_schema_name";
 cur_schema_name 
-----------------
 tpch1g
(1 row)

tpch_db=> select :'cur_user' "cur_user";
 cur_user 
----------
 tpchuser
(1 row)

tpch_db=> GRANT SELECT, INSERT, UPDATE, DELETE, TRUNCATE, REFERENCES, TRIGGER ON ALL TABLES IN SCHEMA :"cur_schema_name" TO :"cur_user";
GRANT
tpch_db=> ALTER DEFAULT PRIVILEGES IN SCHEMA :"cur_schema_name" GRANT SELECT, INSERT, UPDATE, DELETE, TRUNCATE, REFERENCES, TRIGGER ON TABLES TO :"cur_user";
ALTER DEFAULT PRIVILEGES
tpch_db=> \i :root_path/ddl/tpch_create_table.sql
 Create Table : cur_schema_name 
--------------------------------
 tpch1g
(1 row)

CREATE TABLE
CREATE TABLE
CREATE TABLE
CREATE TABLE
CREATE TABLE
CREATE TABLE
CREATE TABLE
CREATE TABLE
tpch_db=> \i :root_path/ddl/tpch_create_index.sql 
 Create Table : cur_schema_name 
--------------------------------
 tpch1g
(1 row)

CREATE INDEX
CREATE INDEX
CREATE INDEX
CREATE INDEX
CREATE INDEX
CREATE INDEX
CREATE INDEX
CREATE INDEX
CREATE INDEX
CREATE INDEX
CREATE INDEX
CREATE INDEX
CREATE INDEX
CREATE INDEX
CREATE INDEX
CREATE INDEX
CREATE INDEX
CREATE INDEX
CREATE INDEX
CREATE INDEX
CREATE INDEX
CREATE INDEX
CREATE INDEX

tpch_db=> \i :root_path/ddl/tpch_load_1g_data.sql
COPY 5
COPY 25
COPY 150000
COPY 10000
COPY 200000
COPY 800000
COPY 1500000
COPY 6001215
tpch_db=> \q
[alloyuser@omnidb ~]$ 
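
Optionally, a one-liner from the host VM to confirm the biggest table loaded completely (my own addition):

psql -h localhost -U tpchuser -d tpch_db -c "select count(*) from tpch1g.lineitem;"
# should return 6001215 for the 1GB scale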

If for some reason you want to assign the tables to a different owner, you can run the following commands to generate and execute the SQL statements that change the table owner.

psql -h localhost -U postgres
\set cur_schema_name 'tpch1g'
\set cur_user 'anotheruser'
\set root_path '/home/alloyuser/tpch'

select :'root_path' "root_path";
select :'cur_schema_name' "cur_schema_name";
select :'cur_user' "cur_user";

\c tpch_db
\pset tuples_only
\o :root_path/ddl/tpch_change_table_owner.sql
select 'ALTER TABLE ' || schemaname || '.' || tablename || ' OWNER TO ' || :'cur_user' || ';' 
from pg_tables
where schemaname = :'cur_schema_name';
\o 
\i :root_path/ddl/tpch_change_table_owner.sql
\q
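
The spooled file should end up with one statement per table, roughly like this (illustrative):

ALTER TABLE tpch1g.customer OWNER TO anotheruser;
ALTER TABLE tpch1g.lineitem OWNER TO anotheruser;
...
ALTER TABLE tpch1g.supplier OWNER TO anotheruser;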

Execute TPCH Queries

Right now we have the data ready. Let’s do some checking with one TPCH query, Query 6 : Forecasting Revenue Change Query. This query quantifies the amount of revenue increase that would have resulted from eliminating certain company-wide discounts in a given percentage range in a given year. Asking this type of “what if” question can be used to look for ways to increase revenue. I will start with the enable_columnar_scan parameter set to off and run the query twice to make sure caching has no major impact.

[alloyuser@omnidb ddl]$ psql -h localhost -U tpchuser -d tpch_db
psql (13.11, server 15.2)
WARNING: psql major version 13, server major version 15.
         Some psql features might not work.
Type "help" for help.

tpch_db=> show google_columnar_engine.enable_columnar_scan;
 google_columnar_engine.enable_columnar_scan 
---------------------------------------------
 on
(1 row)

tpch_db=> SET google_columnar_engine.enable_columnar_scan=off;
SET
tpch_db=> show google_columnar_engine.enable_columnar_scan;
 google_columnar_engine.enable_columnar_scan 
---------------------------------------------
 off
(1 row)

tpch_db=> set search_path to tpch1g,public;
SET
tpch_db=> show search_path;
  search_path   
----------------
 tpch1g, public
(1 row)

tpch_db=> \timing on
Timing is on.
tpch_db=> select
        sum(l_extendedprice * l_discount) as revenue
from
        lineitem
where
        l_shipdate >= date '1995-01-01'
        and l_shipdate < date '1995-01-01' + interval '1' year
        and l_discount between 0.09 - 0.01 and 0.09 + 0.01
        and l_quantity < 24
limit 1;
    revenue     
----------------
 185724059.4734
(1 row)

Time: 284.483 ms
tpch_db=> select
        sum(l_extendedprice * l_discount) as revenue
from
        lineitem
where
        l_shipdate >= date '1995-01-01'
        and l_shipdate < date '1995-01-01' + interval '1' year
        and l_discount between 0.09 - 0.01 and 0.09 + 0.01
        and l_quantity < 24
limit 1;
    revenue     
----------------
 185724059.4734
(1 row)

Time: 274.554 ms

I ran the query twice, and the timing is in the 270~280 ms range with enable_columnar_scan off. Ok, let’s run it again with enable_columnar_scan on.

tpch_db=> SET google_columnar_engine.enable_columnar_scan=on;
SET
Time: 0.200 ms
tpch_db=> select                                             
        sum(l_extendedprice * l_discount) as revenue
from
        lineitem
where
        l_shipdate >= date '1995-01-01'
        and l_shipdate < date '1995-01-01' + interval '1' year
        and l_discount between 0.09 - 0.01 and 0.09 + 0.01
        and l_quantity < 24
limit 1;
    revenue     
----------------
 185724059.4734
(1 row)

Time: 278.193 ms

Interesting. The timing is similar to what I got with the parameter off. Let me try another trick with google_columnar_engine_recommend and see the result.

tpch_db=> SELECT * FROM g_columnar_columns;
 database_name | schema_name | relation_name | column_name | column_type | column_format | compression_level | status | avg_dictionary_
count | avg_encoded_values_count | dictionary_size_in_bytes | encoded_values_size_in_bytes | size_in_bytes | uncompressed_dictionary_si
ze_in_bytes | uncompressed_encoded_values_size_in_bytes | uncompressed_size_in_bytes | last_accessed_time | num_times_accessed 
---------------+-------------+---------------+-------------+-------------+---------------+-------------------+--------+----------------
------+--------------------------+--------------------------+------------------------------+---------------+---------------------------
------------+-------------------------------------------+----------------------------+--------------------+--------------------
(0 rows)

Time: 0.420 ms
tpch_db=> SELECT google_columnar_engine_recommend();

                          google_columnar_engine_recommend                          
------------------------------------------------------------------------------------
 (3000,"tpch_db.tpch1g.lineitem(l_discount,l_extendedprice,l_quantity,l_shipdate)")
(1 row)

Time: 26422.687 ms (00:26.423)
tpch_db=> 
tpch_db=> SELECT * FROM g_columnar_columns;
 database_name | schema_name | relation_name |   column_name   | column_type | column_format | compression_level | status | avg_diction
ary_count | avg_encoded_values_count | dictionary_size_in_bytes | encoded_values_size_in_bytes | size_in_bytes | uncompressed_dictionar
y_size_in_bytes | uncompressed_encoded_values_size_in_bytes | uncompressed_size_in_bytes | last_accessed_time | num_times_accessed 
---------------+-------------+---------------+-----------------+-------------+---------------+-------------------+--------+------------
----------+--------------------------+--------------------------+------------------------------+---------------+-----------------------
----------------+-------------------------------------------+----------------------------+--------------------+--------------------
 tpch_db       | tpch1g      | lineitem      | l_quantity      | numeric     |             0 |                 1 | Usable |            
       50 |                   214329 |                     7000 |                      6005695 |       6019863 |                       
           7000 |                                   6005695 |                    6019863 |                    |                  0
 tpch_db       | tpch1g      | lineitem      | l_extendedprice | numeric     |             0 |                 1 | Usable |            
   189204 |                   214329 |                 47679597 |                     24020092 |      71707065 |                       
       47679597 |                                  24020092 |                   71707065 |                    |                  0
 tpch_db       | tpch1g      | lineitem      | l_discount      | numeric     |             0 |                 1 | Usable |            
       11 |                   214329 |                     1540 |                      6005695 |       6014403 |                       
           1540 |                                   6005695 |                    6014403 |                    |                  0
 tpch_db       | tpch1g      | lineitem      | l_shipdate      | date        |             0 |                 1 | Usable |            
     2523 |                   214329 |                   282676 |                     12010494 |      12299890 |                       
         282676 |                                  12010494 |                   12299890 |                    |                  0
(4 rows)

Time: 1.787 ms
tpch_db=> 
tpch_db=> select                           
        sum(l_extendedprice * l_discount) as revenue
from
        lineitem
where
        l_shipdate >= date '1995-01-01'
        and l_shipdate < date '1995-01-01' + interval '1' year
        and l_discount between 0.09 - 0.01 and 0.09 + 0.01
        and l_quantity < 24
limit 1;
    revenue     
----------------
 185724059.4734
(1 row)

Time: 28.820 ms
tpch_db=> select
        sum(l_extendedprice * l_discount) as revenue
from
        lineitem
where
        l_shipdate >= date '1995-01-01'
        and l_shipdate < date '1995-01-01' + interval '1' year
        and l_discount between 0.09 - 0.01 and 0.09 + 0.01
        and l_quantity < 24
limit 1;
    revenue     
----------------
 185724059.4734
(1 row)

Time: 26.608 ms

Wow, the result is much better and drops to less than 30 ms, which is about 10 times faster. I think the key reason is that I ran the SELECT google_columnar_engine_recommend() statement, which does some magic to put certain data in the columnar store. You may notice that I use the g_columnar_columns view. This view is one of 30 views providing stats about the AlloyDB columnar engine and the index advisor. The g_columnar_columns view provides detailed information about the columns that are in the columnar store. If a column is frequently used, it is likely to stay in the columnar store, which has a significant impact on query performance. As you can see above, the view had no rows at first; after I ran the google_columnar_engine_recommend statement, it had four rows. This is what caused the performance improvement for the same query. I will have a separate blog to discuss the columnar views in AlloyDB.
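
As a small convenience, here is a filtered version of the same g_columnar_columns query that only lists the lineitem columns and pretty-prints the sizes. This is my own variation and uses only columns visible in the output above:

SELECT relation_name, column_name, status,
       pg_size_pretty(size_in_bytes::bigint)              AS columnar_size,
       pg_size_pretty(uncompressed_size_in_bytes::bigint) AS uncompressed_size
FROM g_columnar_columns
WHERE relation_name = 'lineitem'
ORDER BY column_name;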

So far I have shown the basics of AlloyDB Omni. One common question is: how does this compare with the same configuration of a PostgreSQL database? In the next blog, I will create a GCE VM with an identical configuration and load the same TPCH data. Let’s see what the performance difference is between AlloyDB Omni and PostgreSQL. Stay tuned.

GCP AlloyDB Blog Series Part 1 : The Introduction of AlloyDB for PostgreSQL

I haven’t written blogs for quite some time, as I have been quite busy with projects and other activities for the past few years. However, databases are still my favorite technology, and I love to check out new database offerings whenever I have time.

I used to work for a database startup called Calpont (later renamed InfiniDB), starting as a senior software developer and then as team lead for the database storage engine team. We literally built a column-based MPP database from scratch. InfiniDB is currently released under the name MariaDB ColumnStore. My stay at Calpont was short, but I really enjoyed working with a group of very smart developers there. One habit I started at Calpont is reading all kinds of blogs and articles about database internals, various database architectures, and database performance tuning. My job at Enkitec, which was later bought by Accenture and is now the Accenture Enkitec Group (AEG), gave me the opportunity to work with many top database experts in the group and perform migrations of some of the largest databases in the world. It’s very challenging, but a lot of fun. My experience at Calpont helps me think from the perspective of the technology stack, while my AEG experience helps me think from the client’s perspective: what is the benefit of using a new database technology and how do we get there?

For the past two years, the open source database PostgreSQL has seen a rise in popularity. More and more companies are migrating their on-prem databases to PostgreSQL-based databases in the cloud. AlloyDB is the latest PostgreSQL-based database offering from Google Cloud. AlloyDB is a fully managed, PostgreSQL-compatible database for both transactional and analytic workloads. It provides enterprise-grade performance and extensive HA and DR support, with an industry-leading 99.99% availability SLA. A more detailed introduction to the product can be found at Introducing AlloyDB for PostgreSQL: Free yourself from expensive, legacy databases.

At its core, drawing from Google’s extensive experience with cloud technologies and the underlying supporting infrastructure, AlloyDB takes database management to new heights by combining the best of both worlds. It inherits the well-established features of PostgreSQL while integrating with Google Cloud’s infrastructure, providing scalability and reliability to customers.

There are some great articles about AlloyDB’s internals and why it can deliver impressive performance.

By migrating to AlloyDB, organizations can achieve the following benefits:

  • Reduce cost : One of the main reasons people want to move away from traditional databases, such as Oracle/DB2/SQL Server, is the license and support cost. No more cost hikes due to changing license policies and cloud-hostile licensing.
  • Avoid license headaches : traditional database vendors discourage customers from deploying their databases on other clouds. Customers have to engage external license consultants and attorneys just to figure out whether they can deploy traditional databases on other clouds.
  • Support both transactional and analytic workloads : AlloyDB goes beyond the regular row format in PostgreSQL and introduces a dual format: both row format and columnar format. In this hybrid model, the workload is served by the store that fits it best: transactional workloads go to the row format while analytic workloads go to the columnar format.
  • Separate compute and storage : This is similar to the design used by other GCP databases such as Spanner, BigTable, and BigQuery. It separates compute and storage at every layer of the stack, based on the same Google infrastructure that supports Gmail, YouTube, and Google Maps.
  • Scalability : AlloyDB is designed with scale-out storage, horizontal and linear read scalability up to 1,000+ vCPUs, and vertically scalable writes.
  • High SLA : Customers require highly reliable databases for their mission-critical workloads. AlloyDB can detect and recover from most database failures automatically, independent of database size and workload. It offers a high availability SLA of 99.99%.
  • No cloud vendor lock-in : As the database is PostgreSQL based, the data can be migrated to any other PostgreSQL platform in the future.
  • Simple pricing : One of the headaches of dealing with a cloud-based database is that you not only pay for the database service itself but also face complicated calculations based on I/O usage. AlloyDB is simpler and more transparent. Storage is automatically provisioned and customers are only charged for what they use, with no additional storage cost for read replicas.
  • Full PostgreSQL compatibility : it uses standard PostgreSQL drivers and provides full compatibility with PostgreSQL, so you can use the standard tools for migration and operation.
  • Autopilot with ML : it includes automatic vacuum management, memory management, storage tiering, data columnarization, and query rewrite.

There are two offerings in the AlloyDB product family: AlloyDB running on GCP and AlloyDB Omni running anywhere you like. You can think of AlloyDB Omni as a self-managed version of AlloyDB that does not use Google Cloud’s infrastructure but runs the same underlying engine. You can deploy it on premise, at the edge, and on other public or private clouds. AlloyDB Omni also uses the same columnar engine for fast scans, query joins, and aggregations. Unlike AlloyDB, which is a fully managed service on GCP including backups and HA, backups and HA for AlloyDB Omni are managed by the customer. AlloyDB Omni can be used as a first step to test out the migration from traditional databases, providing a pathway for customers to move to AlloyDB Omni on-prem before moving to the cloud.

For the next few blogs in the series, I’ll explore how to install AlloyDB Omni, load TPCH benchmarking data, create an AlloyDB cluster, connect to the AlloyDB environment using pgAdmin from your laptop, and compare performance between AlloyDB Omni and PostgreSQL.