Since I grew up in a small seaside village, I have always loved fishing. But when I was young it was hard to wait for a fish, so I think I preferred diving; instead of waiting, I went after the fish :) After years of OLTP performance testing, OLAP performance testing was a similar experience: I had to train myself for patience again. Here in Türkiye there is an old saying; “Only a patient dervish attains his wish.”
Before getting into the details: I have mentioned the importance of testing several times before, and again this time the results proved that even things written in the documentation may not be optimal for your case.
We are one of the largest telecom operators in Europe and our data warehouse database is, of course, a big one. We are on Solaris 10, using Hitachi storage devices with Oracle 9i Release 2. With the company’s aggressive marketing needs, our data warehouse had to be restructured from bottom to top. So this was a great opportunity for me to advise both moving on to 10g Release 2 and using Oracle’s OLAP features like star transformation and materialized views, which the development team had tested once upon an Oracle release and had not liked the outcomes.
There was no upgrade needed for the new 10g Release 2 environment; we had the luxury of setting up a completely separate database. This was a big advantage, of course: we avoided upgrading a xx TB database, got rid of the old fragmented dictionary and started with a fresh one.
Hüsnü, one of our two DBAs, ran lots of interesting load tests with Orion, and I hope he will also be blogging about some of the interesting results he got. Below are some of the important resources that helped us during our tests;
– Oracle’s Data Warehousing Documentation
– Oracle Database 10g: The Top 20 Features for DBAs by Arup Nanda
– Oracle By Example Series – Business Intelligence and Data Warehousing
– Using Basic Database Functionality for Data Warehousing
– Using Oracle’s Materialized Views Capabilities
So the initial step was to set up a test environment for a specific end user reporting need. After getting the example queries for two big fact tables and several dimension tables, we immediately started to load several partitions of the facts and all of the data for the dimensions. Our old friend SQL*Loader, with its direct path and nologging options, was there for us; I believe it is one of the best tools Oracle has ever developed. A rough sketch of such a load is below, and after it come the next steps of the tests, briefly, for your information and comments;
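For reference, a direct path load in that style looks roughly like this sketch. The connect string, control file, table, partition and column names are all hypothetical; the point is only direct=true on the command line and UNRECOVERABLE in the control file, which together skip the buffer cache and redo generation for the load.

himalaya@oracle $ sqlldr userid=../.. control=sales_load.ctl log=sales_load.log direct=true

-- sales_load.ctl
UNRECOVERABLE
LOAD DATA
INFILE 'sales_p200605.dat'
APPEND
INTO TABLE sales PARTITION (p200605)
FIELDS TERMINATED BY '|'
(time_id, cust_id, amount)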
a. Collecting Object and System Statistics after the load
With 10g the CBO is the only path; the RBO is not supported anymore. So we gathered object statistics at the schema, dictionary and fixed table levels;
-- schema level
exec dbms_stats.gather_schema_stats(ownname=>'ABC', degree=>16, options=>'GATHER AUTO', estimate_percent=>dbms_stats.auto_sample_size, cascade=>TRUE, method_opt=>'FOR ALL COLUMNS SIZE AUTO', granularity=>'ALL');

-- dictionary level
exec dbms_stats.gather_dictionary_stats(degree=>16, options=>'GATHER AUTO', cascade=>TRUE);

-- fixed table level
exec dbms_stats.gather_fixed_objects_stats;
We also collected system statistics;
SELECT pname, pval1 FROM sys.aux_stats$ WHERE sname = 'SYSSTATS_MAIN';

exec dbms_stats.gather_system_stats('start');
-- produce some load, for both parallel multi-block and single-block reads
exec dbms_stats.gather_system_stats('stop');

SELECT pname, pval1 FROM sys.aux_stats$ WHERE sname = 'SYSSTATS_MAIN';
With 10g there is an automated scheduler job to gather statistics, since the RBO is unsupported. To be more in control, we disabled this job;
SELECT owner, job_name, enabled FROM dba_scheduler_jobs WHERE job_name = 'GATHER_STATS_JOB';

BEGIN
  DBMS_SCHEDULER.DISABLE('GATHER_STATS_JOB');
  COMMIT;
END;
/
b. Automatic Storage Management (ASM) and optimizer parameters
Not only does it eliminate the performance and budget cost of a file system layer, but especially because of its management advantages, ASM on 10g Release 2 was another of our test topics. Hüsnü may blog about his Orion test results comparing ASM with the Veritas file system; here I will only mention that we used external redundancy and set two important ASM parameters as Note:368055.1 advised;
Subject: Deployment of very large databases (10TB to PB range) with Automatic Storage Management (ASM)

_asm_ausize=16777216
_asm_stripesize=1048576
ASM disks are divided into allocation units of 1 MB by default, but for data warehouses setting _asm_ausize higher, to 16 MB for example, is important.
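A quick sanity check for this, a minimal sketch to be run on the ASM instance, is to verify the effective allocation unit size per disk group:

-- run on the ASM instance
SELECT name, allocation_unit_size/1024/1024 AS au_mb, total_mb, free_mb
  FROM v$asm_diskgroup;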
Also, like statistics for the optimizer, there are several important OLAP-related settings which have to be tuned on 10g Release 2;

SELECT name, value
  FROM v$system_parameter a
 WHERE a.name IN ('compatible', 'optimizer_features_enable', 'optimizer_mode',
                  'parallel_execution_message_size', 'pga_aggregate_target',
                  'workarea_size_policy', 'query_rewrite_enabled',
                  'query_rewrite_integrity', 'parallel_max_servers',
                  'disk_asynch_io', 'db_file_multiblock_read_count',
                  'star_transformation_enabled');
Here we left our 9iR2 parameters behind and decided to move on with fresh parameters on 10gR2; our strategy was to leave a parameter at its default unless some test proved that it had to be changed. Hüsnü again made several detailed tests for parameter settings like db_file_multiblock_read_count and parallel_execution_message_size, and we chose the optimum values from those results; the session-level testing pattern for such a parameter is sketched below.
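A minimal sketch of that testing pattern, with candidate values that are purely illustrative:

-- try one candidate at session level only, leaving the spfile untouched
alter session set db_file_multiblock_read_count = 64;
-- run the representative test query, record elapsed time and waits from the trace
-- then repeat with the next candidate and compare
alter session set db_file_multiblock_read_count = 128;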
c. Bitmap indexes and star transformation
After all the initial steps above come the segment-level setups. As we know, bitmap indexes fit OLAP usage best because of their bitmap-level (not row-level) locking behavior. Also, for a better bitmap index the cardinality of the column, that is the number of distinct values, is important; lower-cardinality columns benefit the most.
For Oracle to do star transformation (a minimal DDL sketch follows this list),
1- We created single-column bitmap indexes on all of the fact table’s dimension key columns. The query scenarios’ filters all went through these bitmap-indexed columns joining the fact table to the dimensions.
2- We created unique key constraints on the dimension tables’ join columns.
3- We also created appropriate indexes on the dimensions’ filtered columns.
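Here is a minimal DDL sketch of these three steps; the sales fact, customers dimension and their columns are hypothetical stand-ins for our real schema:

-- 1- single-column bitmap index on a fact table dimension key (the fact is partitioned)
CREATE BITMAP INDEX sales_cust_bix ON sales (cust_id) LOCAL NOLOGGING PARALLEL 16;

-- 2- unique key constraint on the dimension's join column
ALTER TABLE customers ADD CONSTRAINT customers_pk PRIMARY KEY (cust_id);

-- 3- b-tree index on the dimension column used in the filters
CREATE INDEX customers_city_ix ON customers (city);

-- and star transformation has to be enabled for the session or instance
ALTER SESSION SET star_transformation_enabled = TRUE;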
Here we were excited to see whether the second large fact table’s id column would be used or not; because of its high cardinality, it was expected to have higher costs compared to the full parallel table scan and hash join alternative. Depending on the filtering factors, when at most 5% to 10% of the fact was selected, star transformation always performed better than the full parallel table scan and hash join alternative, and the bitmap index on the second fact table’s reference column was indeed used by the optimizer. When the filtering was not selective, the full parallel table scan and hash join took its multi-block read advantage and performed much better.
The SQL scripts below were run several times and we took the average values; we needed to write some shell scripts in order to keep the tests going at night :)
himalaya@oracle $ nohup sqlplus ../.. @...sql > ...log 2> ...err &

-- inside sql
-- full-parallel table scan and hash join test script
spool ...log
conn ../..
SET LINESIZE 2000
SET AUTOTRACE TRACEONLY
SET TIME ON
SET TIMING ON
ALTER SESSION ENABLE PARALLEL DML ;
alter session set max_dump_file_size = unlimited ;
alter session set timed_statistics = true ;
-- alter session set STATISTICS_LEVEL = ALL ;
-- alter session set "_rowsource_execution_statistics" = true ;
alter session set tracefile_identifier = senaryo1_full ;
-- be sure that this test is not doing star trans. or query rewr.
ALTER SESSION SET star_transformation_enabled = FALSE ;
SELECT /*+ norewrite use_hash(cs,ct,tr,ta,pa,sr,dc) parallel(cs) parallel(ct) parallel(tr) parallel(ta) parallel(pa) parallel(sr) parallel(dc) */ /* SENARYO 1 FULL */ ..

-- star transformation test script
spool ...log
conn ../..
SET LINESIZE 2000
SET AUTOTRACE TRACEONLY
SET TIME ON
SET TIMING ON
ALTER SESSION ENABLE PARALLEL DML ;
alter session set max_dump_file_size = unlimited ;
alter session set timed_statistics = true ;
-- alter session set STATISTICS_LEVEL = ALL ;
-- alter session set "_rowsource_execution_statistics" = true ;
alter session set tracefile_identifier = senaryo1_bi ;
-- be sure that this test is not doing query rewr. and star trans. is enabled
ALTER SESSION SET star_transformation_enabled = TRUE ;
SELECT /*+ norewrite STAR_TRANSFORMATION */ /* SENARYO 1 BI */ ...
d. Materialized views and query rewrite
The second option we wanted to test for read performance was materialized views. Of course, since these are already-calculated summaries, they performed the best over both the star transformation and full access alternatives.
-- materialized view test script
spool ...log
conn ../..
SET LINESIZE 2000
SET AUTOTRACE TRACEONLY
SET TIME ON
SET TIMING ON
ALTER SESSION ENABLE PARALLEL DML ;
alter session set max_dump_file_size = unlimited ;
alter session set timed_statistics = true ;
-- alter session set STATISTICS_LEVEL = ALL ;
-- alter session set "_rowsource_execution_statistics" = true ;
alter session set tracefile_identifier = senaryo1_mv ;
-- force to use mat.view
ALTER SESSION SET star_transformation_enabled = FALSE ;
SELECT /*+ rewrite_or_error */ /* SENARYO 1 MV */ ..
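For reference, a materialized view that can serve such rewrites is created along these lines; a minimal sketch where sales_monthly_mv and the sales/times/customers names are hypothetical:

CREATE MATERIALIZED VIEW sales_monthly_mv
  NOLOGGING PARALLEL 16
  BUILD IMMEDIATE
  ENABLE QUERY REWRITE
AS
SELECT t.month_id, c.city, SUM(s.amount) AS amount
  FROM sales s, times t, customers c
 WHERE s.time_id = t.time_id
   AND s.cust_id = c.cust_id
 GROUP BY t.month_id, c.city;

With query_rewrite_enabled set to TRUE, the rewrite_or_error hint in the script above then fails fast whenever the optimizer cannot rewrite the query to the summary.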
e. MicroStrategy reporting tool
We use MicroStrategy (MSTR) for the end user reporting needs, so we needed to be sure of MSTR’s behavior on 10gR2. Some questions were whether its driver was up to date and, more importantly, since MSTR also does some query rewriting and caching of its own, whether the same queries coming from MSTR would get the same execution plans as in our SQL*Plus tests; this was our primary concern.
Here we created a database logon trigger for the MSTR database user to capture the queries it produced, so we could be sure what was going on;
CREATE OR REPLACE TRIGGER trace_trig
  AFTER LOGON ON DATABASE
DECLARE
  sqlstr VARCHAR2(200) := 'ALTER SESSION SET EVENTS ''10046 TRACE NAME CONTEXT FOREVER, LEVEL 8''';
BEGIN
  IF (USER = 'MSTR') THEN
    EXECUTE IMMEDIATE sqlstr;
  END IF;
END trace_trig;
/
And we saw that MSTR was adding some dimension key columns to the original queries, which affected the materialized view query rewrite option. Also, MSTR had many more waits compared to the simple SQL*Plus client, which will be another area to study more deeply later.
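When rewrite breaks like this, DBMS_MVIEW.EXPLAIN_REWRITE tells you why. A minimal sketch, assuming the hypothetical sales_monthly_mv from above and that REWRITE_TABLE was created with $ORACLE_HOME/rdbms/admin/utlxrw.sql; note the extra c.cust_id column, imitating what MSTR did:

BEGIN
  DBMS_MVIEW.EXPLAIN_REWRITE(
    query        => 'SELECT t.month_id, c.city, c.cust_id, SUM(s.amount)
                       FROM sales s, times t, customers c
                      WHERE s.time_id = t.time_id AND s.cust_id = c.cust_id
                      GROUP BY t.month_id, c.city, c.cust_id',
    mv           => 'SALES_MONTHLY_MV',
    statement_id => 'mstr_q1');
END;
/

SELECT message FROM rewrite_table WHERE statement_id = 'mstr_q1';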
In this first part I wanted to briefly explain our testing needs, the setups and the read performance tests. In the second part the topic will be the more important issues: the write cost of each option, because of the daily ETL and data quality updates.
Especially the incremental refresh results over MATERIALIZED VIEW LOGs and DBMS_MVIEW.REFRESH will be important, since materialized views gave the best results during the read tests in part 1. There we will also be getting the most out of 10gR2, with the DBMS_ADVISOR.TUNE_MVIEW, DBMS_ADVISOR.CREATE_FILE and DBMS_MVIEW.EXPLAIN_MVIEW supplied functionalities.
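As a teaser, the basic shape of that incremental refresh setup is sketched below, again over the hypothetical sales fact and sales_monthly_mv summary; keep in mind that fast refresh of an aggregate materialized view has extra requirements (COUNT columns and so on), which is exactly what DBMS_ADVISOR.TUNE_MVIEW reports on:

-- log the changes on the fact so the summary can be refreshed incrementally
CREATE MATERIALIZED VIEW LOG ON sales
  WITH ROWID, SEQUENCE (time_id, cust_id, amount)
  INCLUDING NEW VALUES;

-- 'F' asks for a fast (incremental) refresh from the log after each ETL run
exec dbms_mview.refresh('SALES_MONTHLY_MV', method => 'F');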
PS: I also want to recommend these three blogs, which I think are really good starting points for data warehousing with Oracle;
– The Oracle Sponge – Oracle Data Warehouse Design and Architecture by David Aldridge – http://oraclesponge.wordpress.com/
– Pete Scott’s random notes – The rantings of a middle-aged computer consultant and generally nice person – http://pjsrandom.wordpress.com/
– Rittman Mead Consulting – “Delivering Oracle Business Intelligence” – http://www.rittmanmead.com/blog/