Extract(Read) > Transform(Test&Apply) > Load(Learn) Blog

January 26, 2008

Performance Tuning Protocol

Filed under: Oracle Performance — H.Tonguç Yılmaz @ 10:18 am

Performance problem posts are maybe the most popular kind of post at forums. Usually what I see is that the poster does not provide any Oracle release information or any statistics related to his/her problem. And since we are usually after quick and dirty solutions, answers come immediately, based of course on guesses and previous similar experiences. This is a natural result of the way the question is asked.

I think any forum should force the poster to provide this information, like Metalink does. When I comment on these kinds of forum threads I use the brief introduction below by Bilal Hatipoğlu;

Oracle performance analysis – Tracing and performance evaluation

A very efficient summary of Oracle SQL and PL/SQL performance analysis, covering both the aim and the tools.

With this post I will try to summarize these tools in two categories. When your users know where the performance problem is you are really lucky, but usually all you face is this sentence: “Database is so slow today..” In that case it is better to start with an instance-level snapshot of the problematic period; no time-wasting guesses are needed, especially if this is a production database. Catch the one or two highest-priority SQL or PL/SQL statements from the report and continue to focus on them with an appropriate session-based tuning tool. When which tool is appropriate is another question; below you will find some of my experiences.

A. Instance Level Tools –

1. Statspack -

It needs a setup and configuration step; its biggest advantage is that it is free to use with every edition. The top waits and the top consumers (SQLs ordered by buffer gets, usually) are the starting points of your report. A level 7 report additionally includes segment statistics and execution plans; this is usually not needed, but for specific tasks the extra information may be worth its gathering cost.
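A minimal usage sketch, assuming the standard Statspack scripts under $ORACLE_HOME/rdbms/admin and a configured PERFSTAT schema:

    @?/rdbms/admin/spcreate.sql              -- one-time setup, creates the PERFSTAT schema

    exec statspack.snap;                     -- snapshot before the problematic period
    -- ... let the workload run ...
    exec statspack.snap(i_snap_level => 7);  -- level 7 adds segment statistics and plans
    @?/rdbms/admin/spreport.sql              -- prompts for begin/end snap ids and a report name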

2. AWR / ADDM(Advisors) / ASH after 10g –

If you work for a company rich enough to pay their extra licensing costs, these toys are addictive. AWR is like a performance-statistics data warehouse of your database, and the information inside AWR is used by Oracle’s automatic advisors. Database or Grid Control is the complementary tool for these two; under the Performance tab, Top Activity in particular is the best place to get a quick snapshot of the problem and to start each of these tools with a click. You may also choose to configure and produce these reports with their supplied packages from SQL*Plus.
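A minimal sketch of the SQL*Plus route, assuming the Diagnostic Pack is licensed and the standard scripts are available:

    exec dbms_workload_repository.create_snapshot;   -- force an AWR snapshot now
    @?/rdbms/admin/awrrpt.sql     -- AWR report, prompts for format and snap ids
    @?/rdbms/admin/addmrpt.sql    -- ADDM findings for the chosen snapshot range
    @?/rdbms/admin/ashrpt.sql     -- ASH report for a chosen time window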

B. Session Level Tools –

1. SQL*Plus’s timing -

The easiest way to compare two things, I guess, but it only shows you the elapsed time of the database call you made. Also, dbms_utility.get_cpu_time inside your PL/SQL applications returns the current CPU time in hundredths of a second. Be careful to run your SQL query or PL/SQL call several times to avoid measuring the caching cost of the first run.
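A small sketch of both approaches; big_table and the body of the anonymous block are just placeholders for whatever you want to measure:

    set timing on
    select count(*) from big_table;
    -- Elapsed: 00:00:01.25

    set serveroutput on
    declare
      l_cpu number;
    begin
      l_cpu := dbms_utility.get_cpu_time;    -- hundredths of a second
      -- ... the code you want to measure ...
      dbms_output.put_line('cpu hsecs: ' || (dbms_utility.get_cpu_time - l_cpu));
    end;
    /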

2. Explain plan and dbms_xplan –

dbms_xplan formats the plan_table rows produced by explain plan. Especially with 10g there is a lot of useful information in the output report. Be careful: this is an estimate, and at runtime things may change. You may also take advantage of producing plans from v$sql_plan or the AWR repository with dbms_xplan.
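A minimal sketch, with t as a placeholder table; the display_cursor and display_awr calls are the 10g v$sql_plan and AWR repository variants mentioned above:

    explain plan for
      select * from t where object_id = :b1;

    select * from table(dbms_xplan.display);                      -- formats PLAN_TABLE

    select * from table(dbms_xplan.display_cursor(null, null));   -- last statement's real plan from v$sql_plan
    select * from table(dbms_xplan.display_awr('&sql_id'));       -- a plan stored in the AWR repository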

3. SQL*Plus’s autotrace -

In my opinion the optimum tool, especially after 10g: it is easy to create the output, and the output can show both the execution plan and the statistics. I use the set autotrace traceonly option to avoid the long output of the query and to see only the statistics and/or the execution plan.
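A short sketch of a typical autotrace session; t and :b1 are placeholders, and the PLUSTRACE role (plustrce.sql) plus a PLAN_TABLE are assumed to be in place:

    variable b1 number
    exec :b1 := 42
    set autotrace traceonly explain statistics
    select * from t where object_id = :b1;
    -- only the plan and the statistics (consistent gets, physical reads, sorts, rows processed) are printed
    set autotrace off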

4. Tom Kyte’s runstats -

Another handy tool; it additionally shows you detailed latching information based on the v$ dynamic performance views, which may be critical for a heavy OLTP environment. runstats needs some simple configuration steps, and you may need DBA assistance since several v$ grants are required. This package can also be customized easily.
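A minimal sketch of a typical comparison run, assuming Tom Kyte's runstats_pkg is already installed with the needed v$ grants; the two approaches being compared are placeholders:

    exec runstats_pkg.rs_start;
    -- ... run approach 1 ...
    exec runstats_pkg.rs_middle;
    -- ... run approach 2 ...
    exec runstats_pkg.rs_stop(1000);   -- report only statistics and latches differing by more than 1000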

5. DBMS_PROFILER –

A handy PL/SQL profiler; I do not know the reason, but this one does not get the attention it ought to have. It is easy to set up and use, and some development IDEs also provide special plugins based on dbms_profiler.
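A minimal sketch, assuming proftab.sql has created the plsql_profiler_* tables; my_package.my_procedure is just a placeholder for the code being profiled:

    declare
      l_ret binary_integer;
    begin
      l_ret := dbms_profiler.start_profiler('tuning run 1');
      my_package.my_procedure;
      l_ret := dbms_profiler.stop_profiler;
    end;
    /

    select u.unit_name, d.line#, d.total_occur, d.total_time
      from plsql_profiler_units u, plsql_profiler_data d
     where u.runid = d.runid
       and u.unit_number = d.unit_number
     order by d.total_time desc;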

6. Event 10046, SQL Tracing and tkprof -

Since event 10046 is my favorite, I wrote a dedicated introduction post before, and that post is up to now the most visited post of this blog. With level 8 you have the waits included in your trace file. After 9i tkprof, Oracle’s free SQL trace formatting utility, also has a waits option.

An event 10046 level 8 trace file has the most detailed information compared to the other tools above. There are several steps to be careful about while producing the trace, which are mentioned in my post. tkprof also has its own limitations, so I usually prefer to use itrprof and produce more manager-friendly reports after my tuning studies.
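A minimal sketch of producing and formatting a level 8 trace for your own session; the trace file name is hypothetical and will differ on your system:

    alter session set tracefile_identifier = 'perf_test';
    alter session set events '10046 trace name context forever, level 8';
    -- ... run the problematic SQL or PL/SQL ...
    alter session set events '10046 trace name context off';

    $ tkprof orcl_ora_12345_perf_test.trc report.txt sys=no waits=yes sort=exeela,fchela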

10046 can also be used for other purposes, like reverse engineering an Oracle DDL or a supplied package, especially if an ORA-xxxxx error is raised.

7. Event 10053, Optimizer Debugging

When you really do not understand the optimizer’s decision on an access path, join order or join method, this trace file will guide you to the reasons behind it. If several tables are joined it may be really hard to follow the output of this event, and the trace file does not have a tkprof-like utility to turn it into a more human-readable format. If there is one, please comment and let us know :)
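A minimal sketch, with t1 and t2 as placeholder tables; the statement has to be hard parsed while the event is set, which is why explain plan on a slightly altered statement text is a convenient trigger:

    alter session set events '10053 trace name context forever, level 1';
    explain plan for
      select /* force a fresh parse */ *
        from t1, t2
       where t1.id = t2.id;
    alter session set events '10053 trace name context off';
    -- the resulting trace under user_dump_dest has to be read by hand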

10046 and 10053 event traces are also important in Oracle Support interactions when you have performance SRs.

I know a lot of senior DBAs and developers, like Tanel Poder, have their own home-grown monitoring and tuning scripts, so please feel free to comment on this post with yours.

Continue reading with Part 2


2 Comments »

  1. some more additional notes and tools that should not be forgotten :)

    dbms_sqltune (10g) : http://www.psoug.org/reference/dbms_sqltune.html
    the automatic SQL tuning API; suggestions (yet limited) are provided by Oracle.

    dbms_monitor (10g) : http://www.psoug.org/reference/dbms_monitor.html
    tracing is problematic with three-tier web applications, parallel queries and the shared (MTS) server architecture: because the session changes, more than one trace file is produced. this supplied package helps to identify and join the trace files.

    _trace_files_public : if you do not have a Unix account that is a member of the dba group, this hidden parameter may help you read the trace files generated under user_dump_dest. but it may cause a security problem, since anyone with Unix access can then read the trace files you produce.
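    a minimal sketch of the dbms_monitor + trcsess flow described above, assuming the application tags its sessions with a client identifier; the identifier value and the file names are placeholders:

        -- enable tracing (with waits) for every session carrying this client identifier
        exec dbms_monitor.client_id_trace_enable(client_id => 'APP_USER', waits => true, binds => false);
        -- ... reproduce the problem from the application ...
        exec dbms_monitor.client_id_trace_disable(client_id => 'APP_USER');

        $ trcsess output=app_user.trc clientid=APP_USER *.trc     -- merge the per-session trace files
        $ tkprof app_user.trc app_user.txt sys=no waits=yes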

    Comment by H.Tonguç Yılmaz — January 27, 2008 @ 6:38 pm

  2. […] Protocol – Part 2 Filed under: Oracle Performance — H.Tonguç Yılmaz @ 7:19 am In Part 1 I tried to summaries why we need these tools and when to use which one. Also additional to the post […]

    Pingback by Performance Tuning Protocol - Part 2 « H.Tonguç YILMAZ Oracle Blog — February 6, 2008 @ 7:19 am

