Extract(Read) > Transform(Test&Apply) > Load(Learn) Blog

August 25, 2009

Materialized views and sys.sumdelta$ UPDATE-DELETE performance

Filed under: Oracle Business Intelligence,Oracle Performance — H.Tonguç Yılmaz @ 5:08 pm

On our ODS environment (HP-UX IA 64-bit & Oracle Database EE 11.1.0.7.0 64-bit) we have several materialized views whose base tables are loaded with direct path insert (nologging segments + APPEND hint), and these materialized views are refreshed with the following options:


dbms_mview.refresh( v_schema_name || '.' || v_mv_name,
		method => 'C',
		atomic_refresh => FALSE,
		parallelism => pin_parallelism );

Lately I started to see the sys.sumdelta$ table and its index in our hottest segments list in AWR reports and v$segstat.


Segments by DB Blocks Changes                 DB/Inst: ODS/ods  Snaps: 9159-9166
-> % of Capture shows % of DB Block Changes for each top segment compared
-> with total DB Block Changes for all segments captured by the Snapshot

           Tablespace                      Subobject  Obj.      DB Block    % of
Owner      Name       Object Name          Name       Type       Changes Capture
---------- ---------- -------------------- ---------- ----- ------------ -------
SYS        SYSTEM     I_SUMDELTA$                     INDEX    4,386,512   21.95
SYS        SYSTEM     SUMDELTA$                       TABLE    2,921,600   14.62
ODS        SCRATCH_BT I_SNAP$_SBL_CONTACT             INDEX    2,555,296   12.78
ODS        SCRATCH_BT I_SNAP$_SBL_PARTY_PE            INDEX    1,994,592    9.98
ODS        SCRATCH_BT I_SNAP$_MV_SBL_ACCOU            INDEX    1,924,304    9.63
          -------------------------------------------------------------

Segments by Buffer Busy Waits                 DB/Inst: ODS/ods  Snaps: 9159-9166
-> % of Capture shows % of Buffer Busy Waits for each top segment compared
-> with total Buffer Busy Waits for all segments captured by the Snapshot

                                                                  Buffer
           Tablespace                      Subobject  Obj.          Busy    % of
Owner      Name       Object Name          Name       Type         Waits Capture
---------- ---------- -------------------- ---------- ----- ------------ -------
SYS        SYSTEM     SUMDELTA$                       TABLE      128,275   49.75
SYS        SYSTEM     I_SUMDELTA$                     INDEX       86,186   33.43
SYS        SYSTEM     JOB$                            TABLE       11,130    4.32
ODS        SCRATCH_BT I_SNAP$_SBL_CONTACT             INDEX        8,112    3.15
ODS        SCRATCH_BT I_SNAP$_MV_SBL_ACCOU            INDEX        3,968    1.54
          -------------------------------------------------------------

This table was getting larger and larger after each base table load for each materialized view, and the two queries below were called recursively each time:


SQL> select sum(bytes)/(1024*1024) MB from dba_segments
2 where owner = 'SYS' and segment_name = upper('sumdelta$') ;

MB
----------
811

SQL> select count(*) from sys.sumdelta$ t ;

COUNT(*)
----------
9345556

SQL> SELECT plan_table_output
2 FROM TABLE(dbms_xplan.display_cursor('9sbsudzr87ztj')) ;

-------------------------------------
update sys.sumdelta$ s set s.timestamp = :1, s.scn = :2 where
s.tableobj# = :3 and s.timestamp >=
to_date('4000-01-01:00:00:00','YYYY-MM-DD:HH24:MI:SS')
--------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)|Time
--------------------------------------------------------------------------------
| 0 | UPDATE STATEMENT | | | | 3 (100)|
| 1 | UPDATE | SUMDELTA$ | | | |
|* 2 | INDEX RANGE SCAN| I_SUMDELTA$ | 15758 | 1107K| 2 (0)|00:00:01
--------------------------------------------------------------------------------

SQL> SELECT plan_table_output
2 FROM TABLE(dbms_xplan.display_cursor('9wb7u7x708yvr')) ;

-------------------------------------
DELETE FROM sys.sumdelta$ sd WHERE sd.tableobj# = :1 AND sd.timestamp < :2
--------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)|Time
--------------------------------------------------------------------------------
| 0 | DELETE STATEMENT | | | | 6189 (100)|
| 1 | DELETE | SUMDELTA$ | | | |
|* 2 | INDEX RANGE SCAN| I_SUMDELTA$ | 1669 | 98471 | 6189 (1)|00:01:27
--------------------------------------------------------------------------------

After a small investigation, since the execution plans showed that the two queries used dynamic sampling, I found that this sys table is excluded from gather_stats_job. Since the expected behavior is that the number of rows in this dictionary table should decrease after refreshes, we decided to open an SR. After some more tests I saw that neither fast refresh nor complete refresh made any difference to the number of rows in sumdelta$ in our environment.

For the SR we provided the usual 10046 aspirin, and in all the trace files produced from the 10046 event output (since the refresh runs in parallel there are both QC and slave trace files) there was no sumdelta$ reference at all. This led us to believe that this dictionary table not getting purged may be a bug.


ALTER SESSION ENABLE PARALLEL DDL ;
ALTER SESSION ENABLE PARALLEL DML ;

-- create table sr_test nologging parallel 8 as select * from src_siebel.s_contact where 1=2;
-- create materialized view mv_sr_test enable query rewrite AS SELECT EMP_NUM, CREATED FROM sr_test ;

Select /*+ parallel(sd,8) parallel_index(sd,8) */ count(*) from sys.sumdelta$ sd ;

COUNT(*)
----------
2601080

insert /*+ append */ into sr_test select * from src_siebel.s_contact where rownum <= ... ;

exec dbms_mview.refresh('mv_sr_test', method => 'C', parallelism => 8, atomic_refresh => false);

PL/SQL procedure successfully completed

commit ;

select sysdate, user from dual;
ALTER SESSION SET EVENTS '10046 TRACE NAME CONTEXT OFF';

Select /*+ parallel(sd,8) parallel_index(sd,8) */ count(*) from sys.sumdelta$ sd ;

COUNT(*)
----------
2601188

We started to look for immediate workaround options; dropping and recreating the materialized views did not reduce the number of rows in the dictionary table. So we decided to stop using materialized views and replace them with create table as select statements. We started with the longest-refreshing ones, and 3-4 hour refresh times dropped to minutes with this workaround. But of course losing query rewrite was the cost of this action: we had to go over all report queries and change them to use the new tables.
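
As a rough sketch of this workaround (hypothetical table names, not our actual report tables), each load cycle rebuilds the table instead of refreshing a materialized view:

-- hedged sketch of the CTAS replacement; query rewrite is lost,
-- so report queries must reference rep_sbl_contact directly
alter session enable parallel ddl;
alter session enable parallel dml;

drop table rep_sbl_contact purge;

create table rep_sbl_contact nologging parallel 8 as
select /*+ parallel(c, 8) */ c.*
  from src_siebel.s_contact c;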

We also tried truncating the dictionary table, first on dummy clone databases and then in our development environment; so far no problems have been experienced, so this truncate can also be scheduled, weekly for example. As a result, until this bug is fixed we feel lucky to have at least two different workaround options; sometimes things get much uglier :)
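
If the truncate is to be scheduled, a minimal sketch could look like the following (hypothetical job name, and as described above only after testing on clone and development databases first):

-- hypothetical weekly purge job for sys.sumdelta$, run by a suitably privileged user
begin
  dbms_scheduler.create_job(
    job_name        => 'PURGE_SUMDELTA_WEEKLY',
    job_type        => 'PLSQL_BLOCK',
    job_action      => 'begin execute immediate ''truncate table sys.sumdelta$''; end;',
    repeat_interval => 'FREQ=WEEKLY;BYDAY=SUN;BYHOUR=4',
    enabled         => TRUE);
end;
/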

July 1, 2009

Instrumented Code is Better: An Example on How to Generate Session Level ASH Reports

Filed under: Oracle How To,Oracle Performance — H.Tonguç Yılmaz @ 9:08 pm

After 10g, if you have the license for it (the Diagnostics Pack), ASH based performance monitoring is in my opinion an alternative to traditional SQL Tracing. You may either use EM to generate ASH reports or query the ASH dictionary directly. The handy additional hint here is that you can set filters on ASH data in order to focus on a specific session; the filter options on the ASH report screen in EM can be used for this purpose, or below is an example of monitoring the waits from the v$active_session_history view.


-- at session 1
begin
	dbms_session.set_identifier('your_identifier');
	/* other useful options may be */
	dbms_application_info.set_action('your_action');
	dbms_application_info.set_client_info('your_client_info');
	dbms_application_info.set_module('your_module', 'your_action');
	/* */
end;
/

select ... ;

-- while the query at session 1 continues to execute, switch to session 2
-- note that no commit is needed to see the client_identifier set from the first session (it is session metadata, not part of any transaction)
SELECT sid,
       client_identifier,
       module,
       action,
       client_info
  FROM v$session
 WHERE client_identifier = 'your_identifier';
 
       SID CLIENT_IDENTIFIER                                                MODULE                                           ACTION                           CLIENT_INFO
---------- ---------------------------------------------------------------- ------------------------------------------------ -------------------------------- ----------------------------------------------------------------
       273 your_identifier                                                  your_module                                      your_action                      your_client_info

-- filtered ASH information
--
-- now query the v$active_session_history columns you are interested in,
-- filtering on client_id = 'your_identifier' plus any additional predicates you like
--
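
For example, a minimal sketch along those lines (any extra predicates are up to you):

SELECT sample_time, session_id, sql_id, session_state, event, wait_class
  FROM v$active_session_history
 WHERE client_id = 'your_identifier'
   AND sample_time > SYSDATE - 1/24   -- last hour of samples
 ORDER BY sample_time;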

The set options of the dbms_session and dbms_application_info supplied packages can be very handy, as in the above example; credit goes to Mark Rittman for this hint.

[Update on July 2 2009 ]
After Doug Burns’s comments(check out the pingback below) on this post I updated the header and the example.

With this short post I wanted to share the idea that setting a client identifier in an application may help, especially during monitoring and troubleshooting, and to give an example on the ASH dictionary. Using database services, even when you are running on a single node environment, also helps as a filter for instance activity reports. Additionally I wanted to highlight the effort needed to get ASH information compared to 10046 SQL Tracing for a database developer: with 10046 tracing a typical developer will need a DBA or unix admin to ftp the produced trace files, whereas simple SQL queries against the ASH dictionary will guide you to understand, for example, the waits specific to a session.

So anyway, thanks to Doug Burns for taking the time to correct the ASH example in this post and for providing details on his blog.

March 21, 2009

Hot discussion: SQL or PL/SQL

Filed under: Oracle Performance — H.Tonguç Yılmaz @ 9:50 am

I read Rob, Laurent and Chen on this topic. As far as I follow, the OTN SQL and PL/SQL forum is the place for SQL madness: any kind of question there is discussed in a race to find a more performant single-SQL solution. Oracle provides very rich SQL options for this kind of race, like analytic, XML and data mining functions or the MODEL clause.

Additionally, we know by experience that any data intensive need will have several different solutions, and *usually* a set based SQL solution will be faster than its alternatives, especially row based (cursor looping) PL/SQL ones. Still, depending on the need there can be exceptions; you may check Nicolas Gasparotto's approach on this example with the pipelined PL/SQL function option.
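
As a rough illustration with hypothetical tables (the conversion factor is arbitrary), here is the same transformation written both ways:

-- row based (cursor looping) version: one context switch and one INSERT per row
begin
  for r in (select id, amount from src_orders) loop
    insert into tgt_orders (id, amount_usd) values (r.id, r.amount * 1.3);
  end loop;
  commit;
end;
/

-- set based version: a single SQL statement does the whole transformation
insert /*+ append */ into tgt_orders (id, amount_usd)
select id, amount * 1.3 from src_orders;

commit;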

IMHO, if you are not solving a puzzle but developing an application in a team, PL/SQL based solutions are easier to understand for all team members, and as a result it is easier to maintain the application in the long run. After all, performance is not the only metric for an application's success; PL/SQL provides exception handling and is also rich with options like bulk processing and pipelined functions for the related performance needs.

March 5, 2009

How constraints may affect the cost based optimizer's choices

Filed under: Oracle Performance — H.Tonguç Yılmaz @ 12:01 pm

Especially in data warehouse environments constraints are perceived as evil because of their ETL load costs, whereas their reporting performance gains may motivate you to use constraints even with these known costs.

One of the most common examples is the NOT NULL constraint with ANTI JOINs: for joining massive amounts of data a HASH JOIN is something you are always after, but if you do not let the CBO know that there are no NULL values in the columns you are joining, you may end up with a NESTED LOOPS join method which usually lasts for an eternity. Please check this paper for the details of this case: Speeding Up Queries with Semi-Joins and Anti-Joins: How Oracle Evaluates EXISTS, NOT EXISTS, IN and NOT IN
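
A hedged sketch of the idea with hypothetical tables: declaring the join columns NOT NULL (or adding explicit IS NOT NULL predicates) is what lets the optimizer unnest the NOT IN into an anti-join and use HASH JOIN ANTI instead of a filter-style nested loop:

create table customers (cust_id number not null, name varchar2(30));
create table big_fact  (cust_id number not null, amount number);

-- with both cust_id columns declared NOT NULL the optimizer can turn this
-- NOT IN into a HASH JOIN ANTI; with nullable columns it cannot
select count(*)
  from big_fact f
 where f.cust_id not in (select c.cust_id from customers c);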

So, as always, we need to test, calculate and decide on the alternative cost of using or not using any option available with Oracle for our own scenarios; after investigating the examples I shared below you may choose to go with VALIDATED constraints or not.

Not Null constraints and the optimizer after 10g

Check constraints and the optimizer after 10g

Telling the database that one-to-one mandatory relationship exists on the join columns with a foreign key, you can eliminate the unnecessary table’s join cost over a view after 10g Release 2

Having a Novalidate constraint hides valuable information from the CBO

SQL*Loader direct path load disables the foreign key constraint and ignores the trigger, whereas direct path insert just ignores the foreign key constraint

December 29, 2008

Using Pipelined Table Functions for Siebel’s Audit_Read Data Transformation

It has been a while; I thought it was time I wrote on something? :)

25-12-2008 was my second anniversary on this blog: 233 posts in 18 categories resulted in 383 approved comments, ~330,000 page views and lots of good friendships so far. So many thanks to all who contributed!

I was very surprised that I hadn't written until today about one of my favorite features of Oracle, Pipelined Table Functions. Since the 9i days I have benefited from Pipelined Functions for lots of complex transformation needs.

With this post I will share a recent example to demonstrate this feature. Siebel, as a source system for our warehouse, has a logging table called S_AUDIT_READ; this table has a CLOB column AUDIT_LOG which holds a string to be parsed with the rules mentioned in the link below.

How to parse audit log value (Doc ID 555553.1)

As usual the string can be parsed in several ways, even with a single SQL statement of course. But PL/SQL is more readable and maintainable compared to SQL, so Pipelined Table Functions are always a good alternative for transforming massive data, in terms of performance as well. In the demonstration below I also departed from the algorithm mentioned above because of the cases in our data.

Siebel’s Audit_Read Data Transformation with Oracle’s Pipelined Table Functions
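
The full demonstration is behind the link above; as a rough outline of the shape such a function takes (hypothetical object types, a hypothetical target table and a trivial split-on-';' rule, not the actual Siebel parsing logic):

create type t_audit_row as object (field_value varchar2(4000));
/
create type t_audit_tab as table of t_audit_row;
/

create or replace function parse_audit(p_cur sys_refcursor)
  return t_audit_tab pipelined
  parallel_enable (partition p_cur by any)
as
  v_log  clob;
  v_item varchar2(4000);
  v_pos  pls_integer;
begin
  loop
    fetch p_cur into v_log;
    exit when p_cur%notfound;
    -- split the CLOB on ';' and pipe each piece out as a row
    while v_log is not null loop
      v_pos  := instr(v_log, ';');
      v_item := case when v_pos > 0 then substr(v_log, 1, v_pos - 1) else v_log end;
      v_log  := case when v_pos > 0 then substr(v_log, v_pos + 1) end;
      pipe row (t_audit_row(v_item));
    end loop;
  end loop;
  close p_cur;
  return;
end;
/

-- usage: read the source in parallel and load the (hypothetical) target through the pipe
insert /*+ append */ into audit_read_parsed
select *
  from table(parse_audit(cursor(select /*+ parallel(s,8) */ s.audit_log from s_audit_read s)));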

With the help of Pipelined Table Functions, while reading the source data in parallel it is possible to transform and insert into the target table through pipes. For more examples please visit:

http://www.oracle-developer.net/display.php?id=207

http://psoug.org/reference/pipelined.html

http://www.oracle.com/technology/products/oracle9i/daily/may30.html

Just before closing, I wish a very happy new year to all of us! :)

September 30, 2008

On Formatting Treasures of Execution Plan Interpretation

Filed under: Oracle Performance — H.Tonguç Yılmaz @ 4:37 pm

I usually use SQL*Plus's AUTOTRACE option for a quick understanding of a query's execution steps. But with DBMS_XPLAN's formatting options you can find much more detailed information, which may shorten a tuning workshop.


set serveroutput off        
set linesize 2500
set autotrace traceonly explain

select /*+ gather_plan_statistics */ 
       first_name, salary, department_name
  from employees e, departments d
 where e.department_id = d.department_id
   and d.department_name like 'A%' ;

Execution Plan
----------------------------------------------------------
Plan hash value: 1021246405

--------------------------------------------------------------------------------------------------
| Id  | Operation                    | Name              | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT             |                   |    10 |   300 |     4   (0)| 00:00:01 |
|   1 |  NESTED LOOPS                |                   |       |       |            |          |
|   2 |   NESTED LOOPS               |                   |    10 |   300 |     4   (0)| 00:00:01 |
|*  3 |    TABLE ACCESS FULL         | DEPARTMENTS       |     1 |    16 |     3   (0)| 00:00:01 |
|*  4 |    INDEX RANGE SCAN          | EMP_DEPARTMENT_IX |    10 |       |     0   (0)| 00:00:01 |
|   5 |   TABLE ACCESS BY INDEX ROWID| EMPLOYEES         |    10 |   140 |     1   (0)| 00:00:01 |
--------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   3 - filter("D"."DEPARTMENT_NAME" LIKE 'A%')
   4 - access("E"."DEPARTMENT_ID"="D"."DEPARTMENT_ID")

set autot off
 
select /*+ gather_plan_statistics */ 
       first_name, salary, department_name
  from employees e, departments d
 where e.department_id = d.department_id
   and d.department_name like 'A%' ;

select * from table(dbms_xplan.display_cursor(null, null, 'TYPICAL IOSTATS LAST')) ; 

PLAN_TABLE_OUTPUT
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
SQL_ID  134hnsg0n3tr6, child number 0
-------------------------------------
select /*+ gather_plan_statistics */        first_name, salary,
department_name   from employees e, departments d  where
e.department_id = d.department_id    and d.department_name like 'A%'

Plan hash value: 1021246405

-----------------------------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                    | Name              | Starts | E-Rows |E-Bytes| Cost (%CPU)| E-Time   | A-Rows |   A-Time   | Buffers | Reads  |
-----------------------------------------------------------------------------------------------------------------------------------------------------

PLAN_TABLE_OUTPUT
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
|   1 |  NESTED LOOPS                |                   |      1 |        |       |            |          |      3 |00:00:00.08 |      13 |      8 |
|   2 |   NESTED LOOPS               |                   |      1 |     10 |   300 |     4   (0)| 00:00:01 |      3 |00:00:00.08 |      11 |      7 |
|*  3 |    TABLE ACCESS FULL         | DEPARTMENTS       |      1 |      1 |    16 |     3   (0)| 00:00:01 |      2 |00:00:00.06 |       8 |      6 |
|*  4 |    INDEX RANGE SCAN          | EMP_DEPARTMENT_IX |      2 |     10 |       |     0   (0)|          |      3 |00:00:00.02 |       3 |      1 |
|   5 |   TABLE ACCESS BY INDEX ROWID| EMPLOYEES         |      3 |     10 |   140 |     1   (0)| 00:00:01 |      3 |00:00:00.01 |       2 |      1 |
-----------------------------------------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   3 - filter("D"."DEPARTMENT_NAME" LIKE 'A%')

PLAN_TABLE_OUTPUT
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
   4 - access("E"."DEPARTMENT_ID"="D"."DEPARTMENT_ID")


24 rows selected.

In the above example I used the GATHER_PLAN_STATISTICS hint to force generation of row source execution statistics (check V$STATISTICS_LEVEL); the alternative is to set the STATISTICS_LEVEL parameter to ALL at session level (sketched below). As a result you now have very important additional information compared to the standard AUTOTRACE output, like:
– “A-Rows” – the number of rows actually produced by the corresponding row source (remember the ‘tuning by cardinality’ paper by Mr. Wolfgang Breitling)
– “Buffers” – the number of consistent reads done by the row source
– “Starts” – how many times the corresponding operation was repeated
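
A minimal sketch of that session-level alternative (the ALLSTATS LAST format focuses the report on the statistics of the last execution):

-- collect row source statistics for the whole session instead of hinting each query
alter session set statistics_level = all;

select first_name, salary, department_name
  from employees e, departments d
 where e.department_id = d.department_id
   and d.department_name like 'A%' ;

select * from table(dbms_xplan.display_cursor(null, null, 'ALLSTATS LAST')) ;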

Unfortunately, in my opinion, information on these options is limited in the documentation here, so for a more detailed explanation Mr. Jonathan Lewis's post here may be a good starting point.

September 28, 2008

11g Enhancement for ALTER TABLE .. ADD COLUMN Functionality

Filed under: Oracle 11g New Features,Oracle Performance — H.Tonguç Yılmaz @ 10:45 pm

Before Oracle 11g, adding new columns with DEFAULT values and a NOT NULL constraint required both an exclusive lock on the table and the default value to be stored in all existing records.

Now in Oracle 11g the database can optimize the resource usage and storage requirements for this operation: the default value is stored in the data dictionary instead of being written into the table column, so especially for large tables the execution time is reduced and space is saved.

In addition, the following ADD COLUMN operations can now run concurrently with DML operations:
* Add a NOT NULL column with a default value
* Add a nullable column without a default value
* Add a virtual column

release 10.2.0.3.0 -


drop table tst_source purge ;
create table tst_source nologging as
select rownum id, text
  from dba_source;

set timing on

ALTER TABLE tst_source ADD (name VARCHAR2(16) DEFAULT 'N' NOT NULL);

Elapsed: 00:00:30.43

set timing off

exec dbms_stats.gather_table_stats(user, 'tst_source');

select count(*) from tst_source where name is NULL ;

  COUNT(*)
----------
         0

release 11.1.0.6.0 –


drop table tst_source purge ;
create table tst_source nologging as
select rownum id, text
  from dba_source;

set timing on

ALTER TABLE tst_source ADD (name VARCHAR2(16) DEFAULT 'N' NOT NULL);

Elapsed: 00:00:00.10

set timing off

exec dbms_stats.gather_table_stats(user, 'tst_source');

select count(*) from tst_source where name is NULL ;

  COUNT(*)
----------
         0

On SAGE and Oracle’s new 11g SQL Tuning Workshop Education Content

Filed under: Oracle 11g New Features,Oracle Performance — H.Tonguç Yılmaz @ 5:48 pm

Most probably you think SAGE (this was its project name before Larry's announcement; much more compact naming, so I prefer it :) - the Oracle Exadata Storage Server and the HP Oracle Database Machine - will not make a change in your daily life, and you may be unsatisfied with the announcement, but with this post I promise to give you something cool whether you are a massive OLAP developer or an XE developer. :)

But first let me share the first response to SAGE from one of my friends, a warehouse developer: “Oh, I knew Oracle should have been working much faster for years; the fault was not ours, you see!” :) So do these new features promise to reduce the importance of warehouse development best practices? I really don't think so..

On my 22-hour way back home I had a chance to look at this new Oracle University education, and I strongly think this is the best content you may get from a tuning education. First, with this post, let me briefly go over the table of contents; then I think I will be doing a series of blog posts parallel to this content.

If you took 10g's SQL Tuning Workshop, there are not only new features but new chapters in the 11g content, like a whole chapter dedicated to Star Transformation and a very long chapter dedicated to optimizer operations like the access path options of the Oracle database. I have tried to motivate the developers around me not to do SQL Tuning afterwards but to tune as they develop, and I am very happy to see this chapter as it is a part I always dreamed of. :)

Let me mention another interesting observation of mine: if you attended Jonathan Lewis's optimizer seminars I think you will also find that the content and even some of the pictures in this education are very similar to that material; to my understanding this is a kind of approval for people like Mr. Lewis who are trying to improve the community's understanding of complex technologies like the optimizer. No pain no gain, right, so I do not think this education will now be easily consumed by starters, as it should be.

By the way, as we own one of the largest warehouses in the world there is a very real possibility that we will be testing and using Exadata, so I may end up blogging something more than “reading billions of rows in seconds”. At the end of the day, ETL (I mean lots of writes), sort operations, parallel scans and hash joins are what will make the difference in total, since lots of reports (I mean reads) are already tuned to be fast enough by logical design and access path options like partitioning, bitmap indexes and materialized views in today's optimized warehouses.

September 13, 2008

Top Ten Mistakes Found in Oracle Systems – Bad connection management -

Filed under: Oracle Performance — H.Tonguç Yılmaz @ 9:14 am

In the documentation this is mentioned as: The application connects and disconnects for each database interaction. This problem is common with stateless middleware in application servers. Additionally, simultaneous connections from the same client are also a waste of system and network resources.

Until now I had never experienced a performance problem similar to this case, but last week one of my friends asked me a question and, yes, as mentioned above, I saw how much difference it can make. He was doing nearly 20-30 SQL calls over JDBC without connection pooling for each of the business actions, and I advised him to package these calls inside a database function for each business action and call these PL/SQL packages as Callable Statements.

Below is a simple example I set up in order to demonstrate this kind of case: in test1 we do 1000 DUAL calls inside a single PL/SQL block, whereas in test2 we open a connection for each of the 1000 DUAL calls.
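
The actual scripts are linked at the end of this post; roughly, test1 keeps all calls inside one session with a PL/SQL block like the sketch below, while test2 runs the same single-row query 1000 times but connects and disconnects a fresh SQL*Plus session for each call from the shell script.

-- rough sketch of the test1 idea: one connection, 1000 calls inside a single PL/SQL block
set timing on

declare
  v_dummy varchar2(1);
begin
  for i in 1 .. 1000 loop
    select dummy into v_dummy from dual;
  end loop;
end;
/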


[oracle@tonghost tmp]$ date ; ./test1.ksh ; date ;
Sat Sep 13 11:40:18 EEST 2008
Sat Sep 13 11:40:19 EEST 2008
[oracle@tonghost tmp]$ cat test1.log

Session altered.

Elapsed: 00:00:00.00

Session altered.

Elapsed: 00:00:00.01

PL/SQL procedure successfully completed.

Elapsed: 00:00:00.42

in test1 the 1000 DUAL calls are done in under a second inside the database,


[oracle@tonghost tmp]$ date ; ./test2.ksh ; date ;
Sat Sep 13 11:50:29 EEST 2008
Sat Sep 13 11:52:27 EEST 2008
[oracle@tonghost tmp]$ cat test2.log

Session altered.

Elapsed: 00:00:00.00

Session altered.

Elapsed: 00:00:00.00

PL/SQL procedure successfully completed.

Elapsed: 00:00:00.00

whereas in test2 it takes nearly two minutes to do the same task.

The above tests were done on an OEL5 11.1 instance: Bad Connection Management Cost Demo Source Codes

September 3, 2008

On Extended Transaction Monitoring Statistics After 10g

Filed under: Oracle 10g New Features,Oracle Performance — H.Tonguç Yılmaz @ 5:13 am

After 10g, when a long-running transaction is rolling back and the process takes more than six seconds, this event is recorded in the view V$SESSION_LONGOPS.


-- to estimate when the monitored rollback process will finish
SELECT TIME_REMAINING, SOFAR / TOTALWORK * 100 PCT
  FROM V$SESSION_LONGOPS
 WHERE SID = :sid
   AND OPNAME = 'Transaction Rollback' ;

Also, after 10g, in addition to user-issued rollback statements, transactions recovered by SMON and parallel instance recovery sessions are monitored. You can view historical information about transaction recovery and transaction rollback, and you can calculate the average rollback duration to better estimate transaction recovery time and set the FAST_START_PARALLEL_ROLLBACK initialization parameter more appropriately to optimize system performance. Historical information is kept in V$FAST_START_TRANSACTIONS until the next instance shutdown.
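
For example (a hedged illustration; the parameter accepts FALSE, LOW and HIGH):

-- HIGH allows up to 4 * CPU_COUNT parallel recovery servers for fast-start rollback
ALTER SYSTEM SET fast_start_parallel_rollback = 'HIGH';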

V$FAST_START_TRANSACTIONS contains information both for transactions that the Oracle server is recovering (the STATE is RECOVERING) and for transactions that the Oracle server has recovered (the STATE is RECOVERED). The columns added to this view include:

• XID: Transaction ID of this transaction
• PXID: Transaction ID of the parent transaction
• RCVSERVERS: Number of servers working on this transaction including the coordinator server(It can be 1 if only SMON is doing the recovery).

For example, this statement can be used to track transaction recovery after instance startup:


SELECT state, undoblocksdone, undoblockstotal, cputime
  FROM v$fast_start_transactions;

STATE      UNDOBLOCKSDONE UNDOBLOCKSTOTAL   CPUTIME
---------- -------------- --------------- ---------
RECOVERING            574            1945        16

SQL> /

STATE      UNDOBLOCKSDONE UNDOBLOCKSTOTAL   CPUTIME
---------- -------------- --------------- ---------
RECOVERING           1300            1945        34

SQL> /

STATE      UNDOBLOCKSDONE UNDOBLOCKSTOTAL   CPUTIME
---------- -------------- --------------- ---------
RECOVERED            1945            1945        65

SQL> SELECT message FROM v$session_longops;

MESSAGE
---------------------------------------------
Transaction Rollback: xid:0x0001.00a.00000812 : 1945 out of 1945 Blocks done

For more reading on this topic please visit the article Oracle Database 10g: The Top 20 Features for DBAs by Arup Nanda on OTN; Week 2 – How Much Longer?: Rollback Monitoring
