Moving SQL Profile during Exadata Migration

My last post discussed the steps to move stats to an Exadata environment. During the same cutover window, we actually did another task: moving certain SQL Profiles from the UAT environment to the PROD environment. Just like the photo below, let's do all the moving in one run.
[Image: moving_stuff]

For SQL Profiles, I usually treat them as a last resort for resolving production performance issues. I always recommend that my clients first change application code to tune poorly performing queries. Only when no code change is allowed and the production database is suffering significantly from bad queries do we turn to SQL Profiles. For a large legacy database that has suffered from poor performance in the past, it is no surprise to find many SQL Profiles in place. Ten or twenty SQL Profiles are manageable, but there is no way to maintain hundreds or even thousands of them, especially during a migration from a non-Exadata 11.2.0.3 environment to an 11.2.0.4 Exadata environment. So after we restored the legacy database into a UAT environment on Exadata, we disabled all SQL Profiles in the database. During testing, if query performance was not good in the Exadata environment, we implemented a new SQL Profile, or re-enabled the old one if it performed better. Using this strategy, we cut the SQL Profiles down from 256 before the migration to 9 during the migration. During the cutover window, we only needed to move these 9 SQL Profiles from UAT to PROD.
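To size the cleanup effort before settling on a strategy like this, a quick inventory of SQL Profiles by status helps. A minimal sketch against the standard dba_sql_profiles view:

-- Inventory SQL Profiles by status to size the cleanup effort
select status, count(*) profile_count
  from dba_sql_profiles
 group by status
 order by status;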

The following are the sample steps for moving SQL Profiles. They use the same db_my_stats schema created in the previous post. Here is a summary of the steps.
1. Copy the enabled SQL Profiles to a staging table under the same db_my_stats schema created in the last post.
2. Export the staging table.
3. Copy the export data pump files to PROD database on Exadata.
4. Disable all SQL Profiles in the PROD database.
5. Import the enabled SQL Profiles in PROD database.

Steps 2 and 3 are discussed in my last post, so I am not going to repeat them here. Only steps 4 and 5 are performed during the cutover window. Here are the sample scripts to illustrate the steps:

Copy the enabled SQL Profile to a staging table
Generate necessary sql scripts.
vi gen_exp_imp_sqlprofile.sql

set echo off
set feedback off
set lines 150
set pages 0

prompt ---------------- Generate SQL Profile Export and Import -----------------

prompt
prompt -- generate SQL to export SQL profile
prompt
spool EXPORT_TO_SQLPROFILE_TABLE.SQL
select 'spool EXPORT_TO_SQLPROFILE_TABLE.log' from dual;
select
  'EXEC DBMS_SQLTUNE.PACK_STGTAB_SQLPROF ( ' ||
  'staging_table_name => ''PROFILE_STG'', profile_name=>''' || name || '''); ' content
  from dba_sql_profiles
 where status = 'ENABLED'
order by name;
select 'spool off' from dual;

prompt
prompt -- generate SQL to get total rows in the SQL Profile table
prompt
spool GET_EXPORT_SQL_PROFILE_ROW_COUNT.SQL
select 'spool GET_EXPORT_SQL_PROFILE_ROW_COUNT.log' from dual;
select 'prompt SQL profiles qualifed for export ' from dual;
select 'prompt SQL profiles qualified for export ' from dual;
select 'SELECT name, status FROM dba_sql_profiles where status = ''ENABLED'' order by name;' from dual;

select 'prompt SQL profiles exported ' from dual;
select 'SELECT count(*) FROM DB_MY_STATS.PROFILE_STG;' from dual;
select 'select profile_name, category, status from DB_MY_STATS.PROFILE_STG order by profile_name;' from dual;
select 'spool off' from dual;

prompt
prompt -- generate SQL to import SQL profile
prompt
spool IMPORT_TO_SQLPROFILE_TABLE.SQL
select 'spool IMPORT_TO_SQLPROFILE_TABLE.log' from dual;
select
  'EXEC DBMS_SQLTUNE.UNPACK_STGTAB_SQLPROF(REPLACE => TRUE, ' ||
  'staging_table_name => ''PROFILE_STG'', profile_name=>''' || name || '''); ' content
  from dba_sql_profiles
 where status = 'ENABLED'
order by name;
select 'spool off' from dual;

spool off

Run the gen script.
@gen_exp_imp_sqlprofile

Export SQL Profile
sqlplus / as sysdba
connect db_my_stats/db_my_stats

Create SQL profile staging table
EXEC DBMS_SQLTUNE.CREATE_STGTAB_SQLPROF (table_name => 'PROFILE_STG', schema_name=>'DB_MY_STATS');
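Before packing anything, it does not hurt to confirm the staging table exists and is empty. A simple check, assuming the same DB_MY_STATS schema:

-- Staging table should exist and be empty at this point
select count(*) from DB_MY_STATS.PROFILE_STG;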

Run the script to export SQL Profile to the staging table and get the row count for the exported profiles.
@EXPORT_TO_SQLPROFILE_TABLE.SQL
@GET_EXPORT_SQL_PROFILE_ROW_COUNT.SQL

At this moment, the export is done. Then follow the export and import steps from the last post to populate the staging table PROFILE_STG in the target database.
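For reference, a table-mode Data Pump export of the staging table could look like the sketch below. The par file name and dump/log file names are hypothetical; the directory is the DB_MY_STATS_DIR created in the stats post.

vi exp_profile_stg.par
# hypothetical file names; directory from the stats post
USERID=db_my_stats/db_my_stats
DIRECTORY=DB_MY_STATS_DIR
TABLES=DB_MY_STATS.PROFILE_STG
DUMPFILE=profile_stg.dmp
LOGFILE=profile_stg_exp.log

expdp parfile=exp_profile_stg.par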

Disable all SQL Profiles
vi disable_all_sqlprofile.sql

BEGIN
for r in
( select name, category, status, sql_text, force_matching
    from dba_sql_profiles
   order by category, name)
  loop
     DBMS_SQLTUNE.ALTER_SQL_PROFILE (name => r.name, attribute_name => 'STATUS', value =>  'DISABLED');
  end loop;
END;
/

Run the script.
@disable_all_sqlprofile.sql
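If a disabled legacy profile later turns out to perform better, it can be re-enabled individually with the same ALTER_SQL_PROFILE call. A sketch with a hypothetical profile name:

-- 'SYS_SQLPROF_LEGACY_01' is a hypothetical profile name
EXEC DBMS_SQLTUNE.ALTER_SQL_PROFILE (name => 'SYS_SQLPROF_LEGACY_01', attribute_name => 'STATUS', value => 'ENABLED');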

Import SQL Profile
@IMPORT_TO_SQLPROFILE_TABLE.SQL

Verify the enabled SQL Profiles; in our migration, this returned the nine profiles packed on UAT.
select name, category, type, status from dba_sql_profiles
where status = 'ENABLED';

Ok, we are done with moving the SQL Profiles here.

Moving STATS for Large Database during Exadata Migration

I have done many Exadata migrations over the past two and a half years. For the majority of them, my clients have enough downtime to complete statistics gathering during the cutover window. However, this strategy is not going to work for a large database. Recently we migrated a 110+ TB database to Exadata. There were many challenges involved in this migration, like STATS gathering, a tablespace reaching the 32 TB limit and the 1022-datafile limit, an upgrade from 11.2.0.3 to 11.2.0.4 during the cutover window, and critical query tuning with only a few weeks before cutover. Anyway, there were many moving parts during the migration. In this blog, I am focusing only on the STATS strategy during the cutover window.

This database had issues gathering stats in the past; gathering could run for days without completing in the legacy environment. As the stats gathering had a significant impact on database performance, the client had to turn off stats gathering and lock the stats. As a result, many tables' stats were at least two years old. With the database moving from a non-Exadata 11.2.0.3 environment to an Exadata 11.2.0.4 environment, we needed to have new stats available before the database could be released to business users. The question was how we were going to do it. Just like the drawing below, we needed to find the fastest window to get our work done.

[Image: fast_stats]
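To quantify how stale the stats actually are, a query like the sketch below against dba_tab_statistics works; the two-year threshold simply matches our case.

-- Tables whose stats are missing or older than two years
select owner, count(*) stale_tables
  from dba_tab_statistics
 where object_type = 'TABLE'
   and (last_analyzed is null or last_analyzed < sysdate - 730)
 group by owner
 order by owner;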

Both the UAT and PROD environments use X4 full rack Exadata. Even with eight database servers, a full stats gathering at parallelism 384 could still take a few days, and even an incremental stats gathering could take 10+ hours. That is definitely not going to work during a cutover window with limited downtime available.
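For reference, the kind of full gathering we timed looks roughly like the sketch below; the degree matches the 384 parallelism mentioned above, and everything else is left at defaults.

-- Full database stats gather at degree 384 (illustrative; not our exact script)
exec DBMS_STATS.GATHER_DATABASE_STATS(degree => 384);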

For this migration we used Data Guard, and our physical standby on Exadata was an exact physical copy of the legacy primary database. We also built the UAT database with an RMAN restore, and the last build of UAT was just two weeks before the cutover date. So we used the following strategy to work around the stats gathering problem during the cutover window.
1. Gather stats on the UAT database. It doesn't matter whether it takes one day or two; as long as it is outside the cutover window, we were fine with that.
2. Copy the stats in the UAT database to stats staging tables.
3. Export the staging tables.
4. Copy the export Data Pump files to the PROD database on Exadata.
5. Import the stats into the PROD DB.

Steps 1 to 4 have no downtime requirement and can be done ahead of time. Only the last step needs to be performed during the cutover window. In this blog, I am not going to discuss Step 1 and Step 4, focusing only on exporting and importing the stats. I used our lab to demonstrate the steps and scripts involved.

Export Stats from Source Database

cd /dbfs/work/wzhou/test_stat

sqlplus / as sysdba
create user db_my_stats identified by db_my_stats default tablespace users temporary tablespace temp; 
grant connect, resource, dba to db_my_stats;  
create directory DB_MY_STATS_DIR as '/dbfs/work/wzhou/test_stat'; 
grant read, write on directory DB_MY_STATS_DIR to system;

Generate necessary sql scripts.
vi gen_exp_imp_stats.sql

-- Generate all the necessary scripts for export and import schema stats
--set time on
--set timing on
set echo off
set feedback off
set lines 150
set pages 0

prompt
prompt generate SQL to create schema stat table
prompt
spool CREATE_SCHEMA_STATS_TABLE.SQL
select 'set timing on' from dual;
select 'spool CREATE_SCHEMA_STATS_TABLE.log' from dual;
select 'prompt '||owner||chr(10)||
  'exec DBMS_STATS.CREATE_STAT_TABLE(''DB_MY_STATS'', ''STATS_' || owner || ''');' content
  from dba_segments
 where owner not in ( 'DB_MY_STATS', 'ANONYMOUS','APPQOSSYS','DBSNMP','ORACLE_OCM','OUTLN','SYS','SYSTEM','WMSYS','XDB','XS$NULL', 'ORDSYS','ORDDATA')
group by owner
order by owner;
select 'spool off' from dual;
spool off

prompt
prompt -- generate SQL to export schema stat to stat table
prompt
spool EXPORT_TO_SCHEMA_STATS_TABLE.SQL
select 'set timing on' from dual;
select 'spool EXPORT_TO_SCHEMA_STATS_TABLE.log' from dual;
select 'prompt '||owner||chr(10)||
  'exec DBMS_STATS.EXPORT_SCHEMA_STATS( ' ||
  'ownname => ''' || owner || ''', ' ||
  'stattab => ''STATS_' || owner || ''', ' ||
  'statid => ''' || owner || ''', ' ||
  'statown => ''DB_MY_STATS'');' content
  from dba_segments
 where owner not in ( 'DB_MY_STATS', 'ANONYMOUS','APPQOSSYS','DBSNMP','ORACLE_OCM','OUTLN','SYS','SYSTEM','WMSYS','XDB','XS$NULL', 'ORDSYS','ORDDATA')
group by owner
order by owner;
select 'spool off' from dual;
spool off


prompt
prompt -- generate SQL to import schema stat to stat table
prompt
spool IMPORT_TO_SCHEMA_STATS_TABLE.SQL
select 'set timing on' from dual;
select 'spool IMPORT_TO_SCHEMA_STATS_TABLE.log' from dual;
select 'prompt '|| owner || chr(10) ||
  --'--exec dbms_stats.delete_schema_stats (ownname => '''||owner||''');' || chr(10)||
  'exec DBMS_STATS.IMPORT_SCHEMA_STATS( ' ||
  'ownname => ''' || owner || ''', ' ||
  'stattab => ''STATS_' || owner || ''', ' ||
  'statid => ''' || owner || ''', ' ||
  'statown => ''DB_MY_STATS'');' content
  from dba_segments
 where owner not in ( 'DB_MY_STATS', 'ANONYMOUS','APPQOSSYS','DBSNMP','ORACLE_OCM','OUTLN','SYS','SYSTEM','WMSYS','XDB','XS$NULL', 'ORDSYS','ORDDATA')
group by owner
order by owner;
select 'spool off' from dual;
spool off

Run the gen script.
@gen_exp_imp_stats

Create stats staging table.
@CREATE_SCHEMA_STATS_TABLE.SQL

Export schema stats to the stats staging table
@EXPORT_TO_SCHEMA_STATS_TABLE.SQL
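Before running the Data Pump export, it is worth sanity-checking that the staging tables actually received rows. A sketch that generates one row-count query per STATS_ staging table:

-- Generate one row-count query per STATS_ staging table
select 'select ''' || table_name || ''' stats_table, count(*) from DB_MY_STATS.' || table_name || ';' cmd
  from dba_tables
 where owner = 'DB_MY_STATS'
   and table_name like 'STATS\_%' escape '\'
 order by table_name;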

Prepare the export script
vi run_exp_stats
expdp parfile=exp_schema_stats.par

vi exp_schema_stats.par
USERID=db_my_stats/db_my_stats
PARALLEL=4
DIRECTORY=DB_MY_STATS_DIR
DUMPFILE=stats_%U.dmp
LOGFILE=stats_exp.log
METRICS=Y
FILESIZE=4G
SCHEMAS=DB_MY_STATS
JOB_NAME=stats_exp

Run the script to export the schema
run_exp_stats

Ok, at this moment, the stats export is done. Copy the Data Pump files to the target system.

Import Stats to Target Database
On the target database, perform similar setup steps as above.

create user db_my_stats identified by db_my_stats default tablespace users temporary tablespace temp; 
grant connect, resource, dba to db_my_stats;  
create directory DB_MY_STATS_DIR as '/dbfs/work/wzhou/test_stat'; 
grant read, write on directory DB_MY_STATS_DIR to system;

Prepare the import script.
vi run_imp_stats
impdp parfile=imp_schema_stats.par

vi imp_schema_stats.par
USERID=db_my_stats/db_my_stats
PARALLEL=4
DIRECTORY=DB_MY_STATS_DIR
DUMPFILE=stats_%U.dmp
LOGFILE=stats_imp.log
METRICS=Y
TABLE_EXISTS_ACTION=REPLACE
SCHEMAS=DB_MY_STATS
JOB_NAME=stats_imp

Run the script to import the schema
run_imp_stats

Import the stats from stats staging tables.
@IMPORT_TO_SCHEMA_STATS_TABLE.SQL
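After the import, a quick look at last_analyzed confirms the fresh UAT stats landed in the target database. A sketch:

-- Newest last_analyzed per schema should reflect the recent UAT gather
select owner, count(*) tables_with_stats, max(last_analyzed) newest_stats
  from dba_tab_statistics
 where object_type = 'TABLE' and last_analyzed is not null
 group by owner
 order by owner;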

Ok, we’re done with the stats import.

The above strategy worked pretty well during the cutover window. The export of STATS took about half an hour, and the import of STATS took slightly less time than the export. We are happy that the STATS work did not take a significant amount of time for this 110+ TB database during the cutover.