Extract(Read) > Transform(Test&Apply) > Load(Learn) Blog

July 5, 2009

OracleTurk Hints: Running Huge SQL Files from SQL*Plus and Deleting Huge Amounts of Data Best Practices

Filed under: Oracle How To — H.Tonguç Yılmaz @ 4:02 pm

Last week there were several good hints on the local user group's mailing list. I chose to mention two of them here for public interest. Let's assume a vendor provides you with a SQL script of roughly 500 MB; with Unix FIFOs you may decrease the memory needed to run this kind of script from SQL*Plus.


-- Posted by Ogün Heper; Using FIFO to Run Huge SQL Files from SQL*Plus 

oracle@camel # cat s.sql 
select * from dual;
exit
oracle@camel # mkfifo f.sql
oracle@camel # cat s.sql >f.sql &
[1] 1224
oracle@camel # sqlplus / as sysdba @f.sql

SQL*Plus: Release 10.2.0.4.0 - Production on Wed Jun 24 14:38:07 2009

Copyright (c) 1982, 2007, Oracle.  All Rights Reserved.


Connected to:
Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - 64bit Production
With the Partitioning, Real Application Clusters, OLAP, Data Mining
and Real Application Testing options


D
-
X

Disconnected from Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - 64bit Production
With the Partitioning, Real Application Clusters, OLAP, Data Mining
and Real Application Testing options
[1]+  Done                    cat s.sql >f.sql
oracle@camel # 

Huge INSERTs can be done via direct path with the APPEND hint and by taking the related segments to NOLOGGING during the load, but huge DELETEs and UPDATEs are problematic with Oracle since they produce lots of redo and undo I/O, so a very common question is how to handle these tasks. If you have purchased the Partitioning option, dropping old date-ranged partitions is a good workaround for DELETEs. For both huge DELETEs and UPDATEs the common advice is a CTAS (Create Table As Select), since it can be done NOLOGGING: instead of deleting 50 million rows out of 250 million, creating a new table with the remaining 200 million rows with the NOLOGGING option is usually preferred, and a similar path is always an option for UPDATEs. But the original table may have lots of grants, triggers and so on, and if you do not want to deal with migrating this metadata to a new table, the actions below can be a workaround.
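For reference, a minimal sketch of that plain CTAS-and-rename path could look like the one below; ori_tab, the retention predicate and the degree of parallelism are made up for illustration, so adapt them to your own case.

-- hypothetical sketch of the plain CTAS path; names, predicate and DOP are examples only
create table ori_tab_new nologging parallel 8 as
select /*+ parallel(t, 8) */ * from ori_tab t
where load_date >= date '2009-01-01' ; -- keep only the rows you want to retain
-- swap the tables; grants, triggers, indexes and constraints do NOT follow the rename
drop table ori_tab ;
rename ori_tab_new to ori_tab ;
-- indexes, constraints, grants and triggers must be recreated on the new table

The workaround below, posted to the list, keeps the original table and its metadata in place instead.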


-- Posted by Fatih Özerol
-- let's assume we want to delete 41 mil. out of 50 mil. 
-- PQ may decrease the response time if the system is not heavy-loaded already
ALTER SESSION ENABLE PARALLEL DML;
ALTER SESSION ENABLE PARALLEL DDL;
-- at this point you have to make the table read-only so you do not lose any transactions
-- initially we create a temp table with other 9 mil.
create table tab_tmp nologging parallel  ... as 
select /*+ parallel(t, some_dop) */ .... from ori_tab t where ... ;
-- if the table is an OLTP table and you are running on ARCHIVELOG mode 
-- better to take it to NOLOGGING and PARALLEL
alter table ori_tab nologging parallel some_dop ;
-- at this point setting only the table to nologging is not enough if you have indexes;
-- you may prefer to set each of them unusable and rebuild them after the load
-- (a sketch of this index handling follows after this listing)
-- truncate and insert the other 9 mil. rows to the target table
truncate table ori_tab reuse storage ;
insert /*+ append parallel(a, some_dop) */ into ori_tab a 
select /*+ parallel(b, some_dop) */ * from tab_tmp b ;
commit;
-- if the table is an OLTP table revert back the settings
alter table ori_tab logging noparallel ;
-- at this point you can alter the table to read-write again
-- if any indexes were set unusable you must rebuild them now
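
The comments above only mention the index handling, so here is a hedged sketch of that step as well; ori_tab_ix1 and ori_tab_ix2 are hypothetical non-unique index names and the DOP is arbitrary.

-- before the truncate/insert: mark the non-unique indexes unusable so they are not maintained
-- (an index enforcing a unique or primary key constraint can not be skipped this way)
alter index ori_tab_ix1 unusable ;
alter index ori_tab_ix2 unusable ;
alter session set skip_unusable_indexes = true ;
-- ... run the truncate and the insert /*+ append */ shown above ...
-- after the load: rebuild, optionally nologging and in parallel, then revert the attributes
alter index ori_tab_ix1 rebuild nologging parallel 8 ;
alter index ori_tab_ix2 rebuild nologging parallel 8 ;
alter index ori_tab_ix1 logging noparallel ;
alter index ori_tab_ix2 logging noparallel ;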

Some additional reading I can recommend:

Direct path inserts, nologging option and index cost (setting indexes to unusable during load) – http://tonguc.wordpress.com/2007/01/20/direct-path-inserts-nologging-option-and-index-cost/

Preventing DDL operations on a table (Read-Only Table prior to 11gR1) – http://tonguc.wordpress.com/2007/09/25/preventing-ddl-operations-on-a-table/
