Eddie Awad’s http://oraclecommunity.net – I like twitter, here is a small demo :)

oraclebase oraclebase @eddieawad I’ve blogged about the site and I’ve put some links from my blog and blog aggregator. That must be worth at least one member. :)
oraclebase oraclebase @TongucY I’m going to quit my job so I can concentrate on social networking. :)
H.Tonguc YILMAZ TongucY but it is getting so hard; follow blogs, otn oracle-l and local forums, wiki, mix, twitter, facebook, technorati / need one to rule them all
oraclebase oraclebase @TongucY (@eddieawad ) Mix is more LinkedIn. OracleCommunity is more FaceBook. :)
Dan Norris dannorris I’ll be soooo happy when the elections are over and media goes back to providing (mostly) useful information instead of dirt on candidates.
Eddie Awad eddieawad @oraclebase 8 is a very lucky number, at least for the Chinese people.
Eddie Awad eddieawad @TongucY Sure not. mix and oraclecommunity.net are different even though they share a few common things.
oraclebase oraclebase I’m member number 8 in the Oracle Community…
H.Tonguc YILMAZ TongucY @eddieawad thank you Eddie, so this is not an alternative for mix.oracle.com right? :)
Eddie Awad eddieawad @TongucY have fun, meet new people, anything you and the community want it to be.
H.Tonguc YILMAZ TongucY my first impressions, http://oraclecommunity.net is really awesome! but what will be its mission Eddie? :)
Eddie Awad eddieawad http://oraclecommunity.net is here. DM me your email or go to http://awads.net/wp/contact to request an invite to join.

“It is all about understanding.”

These words belong to Tom Kyte, written in the preface of Jonathan Lewis’s latest book, Cost-Based Oracle Fundamentals. If you are paying your bills as an Oracle database developer or DBA, it is without doubt all about understanding the optimizer and why it does what it does, as Kyte mentions.

Last year we were very excited about this book and immediately organized a weekend summit :) to study it as a group. The members were our internship program students and the weekend was a real success. We prepared a one-hour presentation for each chapter, and the group discussions were very effective and educational.

So this year we planned to repeat the event with the support of last summer’s new interns. More good news: my company, Turkcell Teknoloji, will be our sponsor this year, providing the venue and the food :)

Opening Preface Never say “never again” – H.Tonguç Yılmaz
Chapter 1 What do you mean by cost? – Merve Olamlı
Chapter 2 Tablescans – Merve Olamlı
Chapter 3 Single table selectivity – Ersin Ünkar
Chapter 4 Simple B-tree access – Özay Akdora
Chapter 5 The Clustering Factor – Engin Zorlu
Chapter 6 Selectivity Issues – Mert İnan
Chapter 7 Histograms – Özgür Macit
Chapter 8 Bitmap Indexes – Aykut Öztürk
Chapter 9 Query Transformation – Baran İpek
Chapter 10 Join Cardinality – Engin Kurtoğlu
Chapter 11 Nested Loops – Övünç Bozcan
Chapter 12 Hash Joins – Ertürk Diriksoy
Chapter 13 Sorting and Merge Joins – Ertürk Diriksoy
Chapter 14 The 10053 trace file – Hüsnü Şensoy
Appendix A Upgrade Headaches – Hüsnü Şensoy
Appendix B Optimizer Parameters – Hüsnü Şensoy
Additional topic : Oracle 11g OCI New Features – Ogün Heper

Here are two additional resources to accompany the book that I can recommend:
Jonathan Lewis’s comments on each chapter
Wolfgang Breitling’s presentations and articles

Thanks to Jonathan Lewis for his efforts in making this book possible; it is of great value to the Oracle community and we are waiting for the trilogy to continue :)

Also, as we have Jonathan’s permission to share the presentations we prepare for the weekend studies, I will be updating this post with the material in about 10 days, see you then.

Oracle OLAP features performance testing Part 2

In Part 1 I shared our testing environment setup, the read performance tests and their results, and some important resources for data warehousing with Oracle. Before continuing with the write cost of the options in this post, let me mention two things. First, I am sorry that I cannot share all the details of the table structures, queries, etc. in these posts, so please accept them as non-technical. Second, I think in both worlds, OLAP or OLTP, the first thing to do is to govern your end users’ needs and wants; if you let them govern you, the result will most probably be a very complex (they will call it flexible :)) model which will inevitably end up with lots of challenging performance issues. When it comes to OLAP, this complexity at the transformation step of your ETL may cost you a lot, especially if we are talking about some XX TBs of daily change.

Throughout this post I will use “BI” for Bitmap Index, “MV” for Materialized View and “BJI” for Bitmap Join Index.

a. Cost of Bitmap Indexes

We planned seven BIs on our main fact table for star transformation tests, with different kinds of queries having different filtering levels. During the tests below there were no MV logs or BJIs in the test schema:

Listing 192a – Cost of Bitmap Indexes

-- a simple local bitmap index creation template
CREATE BITMAP INDEX BI.. ON FACT(..) LOCAL NOLOGGING PARALLEL COMPUTE STATISTICS ;

-- 2% and 10% sampled data inserted to simulate ETL cost
create table FACT_SMP2 nologging parallel pctused 99 pctfree 1 as
select /*+ PARALLEL */ * from FACT SAMPLE(2);

insert /*+ append parallel(a) */ into FACT a
select /*+ PARALLEL(b) */ * from FACT_SMP2 b ;
commit ;
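
For reference, here is a minimal sketch of how a star transformation read test can be driven once the BIs are in place; the session parameter and hint are standard, but the dimension, column and filter names below (DIM_DATE, DIM_CUSTOMER, etc.) are placeholders rather than our real schema:

-- enable star transformation for the session; the bitmap indexes on the
-- fact table's foreign key columns are what make the transformation possible
ALTER SESSION SET star_transformation_enabled = TRUE;

-- a typical star query shape: filter on small dimensions, aggregate the fact
SELECT /*+ STAR_TRANSFORMATION */
       d.month_id, c.segment_id, SUM(f.amount)
  FROM FACT f, DIM_DATE d, DIM_CUSTOMER c
 WHERE f.date_id     = d.date_id
   AND f.customer_id = c.customer_id
   AND d.month_id    = 200801
   AND c.segment_id  IN (1, 2)
 GROUP BY d.month_id, c.segment_id;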

b. Cost of Materialized Views

For our reporting needs we created three different kinds of summaries. During the refresh tests:
– all BIs were dropped,
– all MVs were created on a single partition of the main fact table because of the test system’s limited resources,
– because of our ETL character, we tested 2% (2,642,727 rows) and 10% (13,210,698 rows) sample inserts before the MVs’ incremental refreshes.

Listing 192b – Cost of Materialized Views

alter session set query_rewrite_enabled=true;
alter session set query_rewrite_integrity=enforced;

-- a simple template of MV log for incremental refresh of MVs
DROP MATERIALIZED VIEW LOG ON FACT ;
CREATE MATERIALIZED VIEW LOG ON FACT
WITH ROWID, SEQUENCE(all_fact_columns_joined_with_dimensions, ..)
INCLUDING NEW VALUES;

-- a simple template to create a MV
DROP MATERIALIZED VIEW MV ;
CREATE MATERIALIZED VIEW MV
PCTFREE 1 PCTUSED 99 INITRANS 7 MAXTRANS 255
STORAGE ( INITIAL 50M NEXT 50M MINEXTENTS 5 MAXEXTENTS 512 PCTINCREASE 0 FREELISTS 7)
NOLOGGING PARALLEL
BUILD DEFERRED
REFRESH FORCE ON DEMAND
WITH ROWID
ENABLE QUERY REWRITE AS
SELECT ...

-- a simple template to FULL refresh of MV and gather statistics
exec dbms_mview.refresh('MV', method => 'C', parallelism => 4, atomic_refresh=>false);
exec dbms_stats.gather_table_stats( USER, tabname=>'MV', degree=>4, estimate_percent=>dbms_stats.auto_sample_size,
cascade=>TRUE, method_opt=>'FOR ALL COLUMNS SIZE AUTO', granularity=>'ALL');
commit ;

-- a simple template to INCREMENTAL refresh of MV and gather statistics
exec dbms_mview.refresh('MV', method => 'F', parallelism => 4);
exec dbms_stats.gather_table_stats( USER, tabname=>'MV', degree=>4, estimate_percent=>dbms_stats.auto_sample_size,
cascade=>TRUE, method_opt=>'FOR ALL COLUMNS SIZE AUTO', granularity=>'ALL');
commit ;

MVs bring significant performance costs in order to be kept fresh; compared to the read performance advantage MVs have over star transformation with BIs, this cost may be something you want to avoid.
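
Since the whole point of paying this refresh cost is query rewrite, it may also be worth checking whether a given report actually rewrites against the MV. A minimal sketch with DBMS_MVIEW.EXPLAIN_REWRITE is below; the query, MV and dimension names are placeholders, and REWRITE_TABLE is created by the utlxrw.sql script:

-- create the REWRITE_TABLE used by EXPLAIN_REWRITE (run once per schema)
@?/rdbms/admin/utlxrw.sql

BEGIN
  DBMS_MVIEW.EXPLAIN_REWRITE(
    query        => 'SELECT d.month_id, SUM(f.amount)
                       FROM FACT f, DIM_DATE d
                      WHERE f.date_id = d.date_id
                      GROUP BY d.month_id',
    mv           => 'MV',
    statement_id => 'rewrite_check_1');
END;
/

-- the messages explain why the query did or did not rewrite
SELECT message FROM rewrite_table WHERE statement_id = 'rewrite_check_1';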

c. Cost of Bitmap Join Indexes

The advantages and disadvantages of BJIs are summarized in the documentation roughly as follows:

Advantages –
Good performance for join queries, space efficient
Especially useful for large dimension tables in star schemas

Disadvantages –
Maintenance costs are higher; building or refreshing a bitmap join index requires a join
More indexes are required: up to one index per dimension-table column, rather than one index per dimension table

During our tests, BJIs appropriate for our reporting needs were nearly impossible to create. Additionally, to create a BJI you need a primary or unique key constraint on the referenced dimension table.

-- BJI requires a PRIMARY or UNIQUE key constraint
CREATE UNIQUE INDEX UI.. ON REFERENCED_DIMENSION(..) NOLOGGING PARALLEL COMPUTE STATISTICS ;   

-- will use above index by default
ALTER TABLE REFERENCED_DIMENSION
ADD CONSTRAINT PK_REFERENCED_DIMENSION
PRIMARY KEY (..) ;

-- a simple BJI creation template
CREATE BITMAP INDEX BJI..
ON FACT(columns_will_be_used_on_referenced_dimension, ..)
FROM FACT, REFERENCED_DIMENSION
WHERE ..
NOLOGGING PARALLEL COMPUTE STATISTICS LOCAL ;

BJI 1 creation time and size
	Elapsed: 04:56:25.04
	15.625 GB

BJI 2 (an alternative to BJI 1) creation time and size
	Elapsed: 04:59:28.05
	15.625 GB

For this series the question was when to choose which Oracle OLAP option:
– Star Transformation with Bitmap Indexes or
– Summary Management with Materialized Views or
– Bitmap Join Indexes or
– of course full and parallel partitioned table scans with hash joins

After finishing the first round of performance tests on the test system, I can easily say that it will all depend on your own reporting needs (read) and ETL character (write). So you need to test both the read and write ends and decide on the best fit for each condition. Usually you will end up with a mixture of these solutions, I guess. And remember that some of the performance gains may be so attractive that the results might guide you to modify your model, so it is always better to have these kinds of performance test results during the development phase.

The last word goes to 10g Release 2 (10.2.0.3): you have my word that all the mentioned options are very stable, so they are really worth your testing efforts on this release; truncate all your past experiences :)

This one is for old C Lovers

I was reading Bilal Hatipoğlu‘s

Whence C? Why C? Whither C?

Differences between C and C++

posts and I remembered how much I loved, and how much I miss, programming in C.

Nowadays PL/SQL is my favorite development environment, but one can never forget his first love, right? Also, I believe C is still the only platform you may choose when you decide to implement a million-dollar piece of software :)

I will be attending a presentation by Ogün Heper on 11g Oracle Call Interface (OCI) New Features next weekend, so I plan to blog a little more about it later.

Good SQL Practices Videos by Stephane Faroult

This email is from the Oracle-L list. I really loved the videos; they are educational and use funny examples and images, so I wanted to share them:

On Jan 27, 2008 4:03 PM, Stephane Faroult wrote:

> I have uploaded to Youtube (in 3 parts) a video derived from a
> presentation I did to some IT managers several months ago about “Good
> SQL practices”. It’s a kind of experiment for me, there may be others in
> the future.
> Enjoy.
>
> Part 1: http://www.youtube.com/watch?v=40Lnoyv-sXg
> Part 2: http://www.youtube.com/watch?v=GbZgnAINjUw
> Part 3: http://www.youtube.com/watch?v=y70FmugnhPU
>
> Stephane Faroult

Thank you, Mr. Stephane Faroult, for your efforts :)

External table’s performance for a row and undocumented Oracle functions

I wanted to share two OTN forum threads I found interesting this weekend. The first one is this:

http://forums.oracle.com/forums/thread.jspa?messageID=2313216

Here Daniel Morgan advises a function called WM_CONCAT for the original poster’s need. There are several interesting undocumented Oracle functions which you may test carefully and decide whether or not to use; here are some of them listed at psoug:

http://www.psoug.org/reference/undocumented.html
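
As a quick illustration of WM_CONCAT, here is a minimal sketch using the standard SCOTT demo schema; keep in mind it is undocumented, so its behavior (and existence) may change between releases:

-- string-aggregate employee names per department (SCOTT demo schema)
SELECT deptno,
       wm_concat(ename) AS employees
  FROM scott.emp
 GROUP BY deptno;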

The second thread is about using an external table for retrieving a row several times. External tables are great tools when you need to access a huge flat file on the operating system. You can process this bulk data, and with functions on the SELECT side also transform it, into a custom Oracle table with CREATE TABLE AS SELECT, which can be done NOLOGGING and in PARALLEL.
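
As a sketch of that pattern (the directory, file name and column list below are made up for illustration, chosen only to match the EXT_OPS_SEC / TAB_OPS_SEC names used in the listing that follows):

-- hypothetical directory and flat file; adjust to your environment
CREATE OR REPLACE DIRECTORY ext_dir AS '/data/feeds';

CREATE TABLE ext_ops_sec (
  enckey   VARCHAR2(64),
  op_date  VARCHAR2(8)
)
ORGANIZATION EXTERNAL (
  TYPE ORACLE_LOADER
  DEFAULT DIRECTORY ext_dir
  ACCESS PARAMETERS (
    RECORDS DELIMITED BY NEWLINE
    FIELDS TERMINATED BY ','
  )
  LOCATION ('ops_sec.dat')
)
REJECT LIMIT UNLIMITED;

-- transform the flat file into a regular heap table in one pass,
-- nologging and in parallel, applying any SELECT-side functions needed
CREATE TABLE tab_ops_sec NOLOGGING PARALLEL AS
SELECT UPPER(enckey) AS enckey, TO_DATE(op_date, 'YYYYMMDD') AS op_date
  FROM ext_ops_sec;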

But as mentioned in this thread, selecting a row from an external table costs 48 consistent gets, whereas for a permanent heap table it is 3 consistent gets:

SQL> set autotrace traceonly statistics
SQL> SELECT enckey FROM ext_ops_sec;

Statistics
----------------------------------------------------------
         17  recursive calls
          0  db block gets
         48  consistent gets
          0  physical reads
          0  redo size
        417  bytes sent via SQL*Net to client
        384  bytes received via SQL*Net from client
          2  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
          1  rows processed

SQL> SELECT enckey FROM tab_ops_sec;

Statistics
----------------------------------------------------------
          0  recursive calls
          0  db block gets
          3  consistent gets
          0  physical reads
          0  redo size
        417  bytes sent via SQL*Net to client
        384  bytes received via SQL*Net from client
          2  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
          1  rows processed

The details of this cost are listed in the thread in a tkprof report of an event 10046 level 8 trace, with the sys=yes and waits=yes options enabled. So if you do not really have to use an external table for a single row, you may test and choose one of the alternatives mentioned in the thread:

http://forums.oracle.com/forums/thread.jspa?threadID=610678

Performance Tuning Protocol

Performance problem posts are maybe among the most popular posts at forums. Usually what I see is that the original poster does not provide any Oracle release information or any statistics related to his/her problem. And since we are usually after quick and dirty solutions, answers come immediately, based of course on guesses and previous similar experiences. This is a natural result of the way the question is asked.

I think any forum should force the original poster to provide this information, like Metalink does. When I am commenting on these kinds of forum threads I point to the brief introduction below by Bilal Hatipoğlu:

Oracle performance analysis – Tracing and performance evaluation

A very efficient summary; he covers Oracle SQL and PL/SQL performance analysis, its aim, and the tools.

With this post I will try to summarize these tools in two categories. When your users know where the performance problem is, you are really lucky, but usually it is just this sentence you face: “The database is so slow today..” In that case it is better to start with an instance-level snapshot of the problematic period; no time-wasting guesses are needed, especially if this is a production database. Catch the one or two highest-priority SQL or PL/SQL statements from the report and continue to focus on them with an appropriate session-based tuning tool. When each tool is appropriate is another question; below you will find some of my experiences.

A. Instance Level Tools –

1. Statspack –

It needs a setup and configuration step; its biggest advantage is that it is free to use on every edition. The top waits and the top consumers (SQLs ordered by buffer gets, usually) are the starting points of your report. A level 7 report additionally includes segment statistics and execution plans; usually these are not needed, but for specific tasks this additional information may be worth its gathering cost.
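
As a reminder of the basic workflow, a minimal sketch is below; it assumes Statspack is already installed with spcreate.sql and you are connected as the PERFSTAT user:

-- take a snapshot before and after the problematic period
EXEC statspack.snap(i_snap_level => 7);
-- ... let the workload run ...
EXEC statspack.snap(i_snap_level => 7);

-- generate the report between the two snapshot ids you are prompted for
@?/rdbms/admin/spreport.sql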

2. AWR / ADDM(Advisors) / ASH after 10g –

If you are working for a company rich enough to afford their extra license costs, these toys are addictive. AWR is like a performance statistics data warehouse of your database, and the information inside AWR is used by Oracle’s automatic advisors. Database or Grid Control is the complementary tool for these two; under the Performance tab, Top Activity is the best place to get a quick snapshot of the problem, and each of these tools can be started with a click. You may also choose to configure and produce these reports with their supplied packages from SQL*Plus.
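
From SQL*Plus the equivalents look roughly like this (remember that AWR and ADDM require the extra Diagnostics Pack license mentioned above):

-- take a manual AWR snapshot
EXEC DBMS_WORKLOAD_REPOSITORY.CREATE_SNAPSHOT;

-- generate an AWR report between two snapshots
@?/rdbms/admin/awrrpt.sql

-- generate an ADDM report for the same interval
@?/rdbms/admin/addmrpt.sql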

B. Session Level Tools –

1. SQL*Plus’s timing –

The easiest way to compare two things, I guess, but it just shows you the elapsed time of the database call you made. Also, dbms_utility.get_cpu_time inside your PL/SQL applications returns the current CPU time in hundredths of a second. Be careful to run your SQL query or PL/SQL call several times to avoid the caching cost of the first run.
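
A tiny sketch of both approaches (get_time for elapsed, get_cpu_time for CPU, both in hundredths of a second; my_batch_procedure is just a placeholder for whatever you are measuring):

SET TIMING ON
SELECT COUNT(*) FROM all_objects;  -- SQL*Plus prints the elapsed time

SET SERVEROUTPUT ON
DECLARE
  l_start_ela NUMBER := dbms_utility.get_time;      -- elapsed, 1/100 s
  l_start_cpu NUMBER := dbms_utility.get_cpu_time;  -- CPU, 1/100 s (10g+)
BEGIN
  my_batch_procedure;  -- placeholder for the code being measured
  dbms_output.put_line('Elapsed: ' || (dbms_utility.get_time     - l_start_ela));
  dbms_output.put_line('CPU    : ' || (dbms_utility.get_cpu_time - l_start_cpu));
END;
/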

2. Explain plan and dbms_xplan –

dbms_xplan formats the plan_table rows produced with EXPLAIN PLAN. Especially with 10g there is a lot of useful information in the output report. Be careful: this is an estimate, and at runtime things may change. You may also take advantage of producing plans from V$SQL_PLAN or the AWR repository with dbms_xplan.
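
The basic pattern, plus the cursor cache variant, looks like this (the table and the SQL_ID are placeholders):

-- estimated plan
EXPLAIN PLAN FOR
SELECT * FROM employees WHERE department_id = 10;

SELECT * FROM TABLE(dbms_xplan.display);

-- runtime plan of a cursor already in the shared pool (10g and later)
SELECT * FROM TABLE(dbms_xplan.display_cursor('0h6b2sajwb74n', NULL, 'ALLSTATS LAST'));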

3. SQL*Plus’s autotrace –

In my opinion, especially after 10g, the optimum tool: it is easy to create the output, and the output can show both the execution plan and the statistics. I use the SET AUTOTRACE TRACEONLY option to avoid the long output of the query and see only the statistics and/or the execution plan.

4. Tom Kyte’s runstats –

Another handy tool; it additionally shows you detailed latching information based on V$ dynamic performance views, which may be critical for a heavy OLTP environment. runstats needs some simple configuration steps, and you may need DBA assistance since several V$ grants are required. Also, this package can be customized easily.
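
Usage follows Tom Kyte’s standard pattern; approach_one and approach_two below are placeholders for the two implementations being compared:

EXEC runstats_pkg.rs_start;
EXEC approach_one;                -- first implementation (placeholder)
EXEC runstats_pkg.rs_middle;
EXEC approach_two;                -- second implementation (placeholder)
EXEC runstats_pkg.rs_stop(1000);  -- print statistics/latches differing by more than 1000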

5. DBMS_PROFILER –

A handy PL/SQL profiler; I do not know the reason, but this one doesn’t get the attention it ought to have. Easy to set up and use, and some development IDEs also provide special plugins based on dbms_profiler.
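
A minimal profiling run looks like this (the profiler tables come from proftab.sql; my_batch_procedure is a placeholder for the PL/SQL being profiled):

EXEC DBMS_PROFILER.START_PROFILER('my_batch_run');
EXEC my_batch_procedure;   -- placeholder
EXEC DBMS_PROFILER.STOP_PROFILER;

-- time spent per source line, worst first
SELECT u.unit_name, d.line#, d.total_time, d.total_occur
  FROM plsql_profiler_units u, plsql_profiler_data d
 WHERE u.runid = d.runid
   AND u.unit_number = d.unit_number
 ORDER BY d.total_time DESC;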

6. Event 10046, SQL Tracing and tkprof –

Since Event 10046 is my favorite, I wrote a dedicated introduction post before, and that post is up to now the most visited post of this blog. With level 8 you have waits included in your trace file. After 9i, tkprof, Oracle’s free SQL trace formatting utility, also has a waits option.
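
A minimal way to produce and format such a trace for your own session (the trace file name below is illustrative; the real one ends up in user_dump_dest):

ALTER SESSION SET tracefile_identifier = 'tuning_demo';
ALTER SESSION SET events '10046 trace name context forever, level 8';

-- run the problematic SQL or PL/SQL here

ALTER SESSION SET events '10046 trace name context off';

-- then, from the OS prompt, something like:
-- tkprof orcl_ora_12345_tuning_demo.trc report.txt sys=yes waits=yes sort=exeela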

The Event 10046 level 8 trace file has the most detailed information compared to the other tools above; there are several steps to be careful about while producing the trace, which are mentioned in my post. Also, tkprof has its own limitations, so usually I prefer to use itrprof and produce more manager-friendly reports after my tuning studies.

Also, 10046 can be used for other purposes, like reverse engineering an Oracle DDL or a supplied package, especially if an ORA-xxxxx error is raised.

7. Event 10053, Optimizer Debugging

When you really do not understand the optimizer’s decision on an access path, join order or join method, this trace file will guide you to the reasons behind it. If you have several tables joined it may be really hard to follow the output of this event, and the trace file does not have a tkprof-like utility to turn it into a more human-readable format. If there is one, please comment and let us know :)
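
Producing the trace itself is simple; it is only the reading that is hard. A sketch (the query is a placeholder, and the statement must be hard parsed while the event is set, which EXPLAIN PLAN forces):

ALTER SESSION SET events '10053 trace name context forever, level 1';

EXPLAIN PLAN FOR
SELECT o.order_id, c.customer_name
  FROM orders o, customers c
 WHERE o.customer_id = c.customer_id
   AND c.region = 'EMEA';

ALTER SESSION SET events '10053 trace name context off';
-- the trace file appears in user_dump_dest, next to the 10046 traces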

10046 and 10053 event traces are important with Oracle support interactions when you have performance SRs.

I know a lot of senior DBAs and developers, like Tanel Poder, have their own home-grown monitoring and tuning scripts, so please feel free to comment on this post about yours.

Continue reading with Part 2

Let’s get (more) connected :)

After the recent tagging storm I feel much closer to some of the bloggers. It is a strange feeling, I know, especially towards someone you have never met or talked to before. But I accept that this is life nowadays; we connect more and more to each other globally through these cables. So I thought of the places where I am active and tried to put them together to get “more” connected :)

My Friendfeed

My Google Profile

My Linkedin Profile

H.Tonguç Yılmaz – Oracle Blog

H.Tonguç Yılmaz – Private Blog

Turkish Oracle Users Forum – OracleTURK

My Eksi Sozluk Profile

My Email / Gtalk : tonguc DOT yilmaz AT gmail DOT com

My Oraclecommunity.net profile

My videos on Vimeo

My Technorati profile

My Twitter profile

My OTN forums profile

My Oracle Mix Profile

My Oracle ACE profile

My Ceturk forum profile

My Google Reader Shared Items

Oracle Database Job Trends – “oracle pl/sql” vs. “oracle java” vs. “oracle apex” ..

I was informed by one of my colleagues about this post on Spring Overtakes EJB as a Skills Requirement, and of course I immediately wanted to try the searches I am interested in :) Here are the results of some of my searches:

No need to lie: for a 10g, SQL and PL/SQL lover like me, I was very happy to see the trends above :)

_optimizer_ignore_hints; please optimizer ignore all hints inside my applications

After 10g the Rule Based Optimizer (RBO) is no longer supported, so DBMS_STATS must be one of your best friends now; you even collect statistics on Oracle’s dictionary and fixed tables. Hints (optimizer directives may be a better name, as Jonathan Lewis advises) can be very dangerous sometimes. They may behave differently from release to release, they do not produce error messages if something is wrong, and, much more importantly, they may stay hard-coded in your code forever. As a result you may never benefit from new optimizer features.

Applications tend to get hinted over time because of some quick and dirty performance fixing need, of course; you will easily find the hints inside your views and packages when you query the dictionary. We said that after 10g the RBO is not supported, but it can still be used as an option with the old RULE hint, and Oracle even uses this hint for its own dictionary access from time to time. You can catch these calls inside a 10046 event trace file. There are really interesting undocumented hints after 10g when you study the trace files.
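
A quick, illustrative way to locate hard-coded hints in your own schema’s stored code (views keep their text in the LONG column USER_VIEWS.TEXT, so they need a small PL/SQL loop or DBMS_METADATA instead of a simple LIKE):

-- stored PL/SQL (packages, procedures, triggers) containing hint-style comments
SELECT name, type, line, text
  FROM user_source
 WHERE INSTR(text, '/*+') > 0
 ORDER BY name, type, line;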

So what if you want to get the most out of the optimizer but do not want to go over each line of your code to clean up the hard-coded hints? Okay, this one is like a dream come true:

alter session set "_optimizer_ignore_hints" = TRUE ;

I first learned about and blogged this at Jonathan Lewis’s seminar. Normally underscore parameters (they are also called hidden, unsupported or undocumented parameters) should be used only with Oracle Support’s guidance, but there are several exceptions, like _trace_files_public, which may cause a security problem as a side effect; you may still use them without experiencing any problems.

It is always better to test this kind of parameter first at the session level and on your test database, of course; monitor the results, and then, if you are ~100% sure about the side effects, take the risk for the advantages. I have used some of them several times before, even when Oracle Support didn’t advise me to, because my testing results were so dramatic.

So here is a small demo for this one, I hope it helps you too: Code Listing 186 – _optimizer_ignore_hints; please optimizer ignore all hints inside my applications

Testing Information : the scripts mentioned are tested on Oracle Database 11g Enterprise Edition Release 11.1.0.6.0