Extract(Read) > Transform(Test&Apply) > Load(Learn) Blog

June 4, 2013

TROUG 2013 DW/BI SIG

Filed under: Oracle How To — H.Tonguç Yılmaz @ 9:21 am

Hello, our second Turkish Oracle Users Group (TROUG) BI/DW Special Interest Group meeting will take place on June 21 at İTÜ Maslak. The draft plan is as follows:

09:00 – 09:30 Registration and opening
09:30 – 10:15 Ersin İhsan Ünkar / Oracle Big Data Appliance & Oracle Big Data Connectors – Hadoop Introduction
10:30 – 11:15 Ferhat Şengönül / Exadata TBD
11:30 – 12:15 Sponsor presentation
12:30 – 13:30 Lunch break
13:30 – 14:15 Ahmet Selahattin Güngörmüş / OBIEE TBD
14:30 – 15:15 Sanem Seren Sever / ODI TBD
15:30 – 16:15 TBD / DW SQL Tuning
16:30 – 17:15 TBD / ODM TBD

I am sending this in advance so you can mark your calendars in case you would like to attend; we will make a general announcement once the content is finalized. Since the room has spare capacity, feel free to pass the word to colleagues you think might be interested.

http://troug.org


July 5, 2009

OracleTurk Hints: Running Huge SQL Files from SQL*Plus and Best Practices for Deleting Huge Amounts of Data

Filed under: Oracle How To — H.Tonguç Yılmaz @ 4:02 pm

Last week there were several good hints on the local user group’s mailing list. I chose to mention two of them here for public interest. Let’s assume a vendor provides you a SQL script of ~500 MB; with Unix FIFOs you can reduce the resources needed to run this kind of script.


-- Posted by Ogün Heper; Using FIFO to Run Huge SQL Files from SQL*Plus 

oracle@camel # cat s.sql 
select * from dual;
exit
oracle@camel # mkfifo f.sql
oracle@camel # cat s.sql >f.sql &
[1] 1224
oracle@camel # sqlplus / as sysdba @f.sql

SQL*Plus: Release 10.2.0.4.0 - Production on Wed Jun 24 14:38:07 2009

Copyright (c) 1982, 2007, Oracle.  All Rights Reserved.


Connected to:
Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - 64bit Production
With the Partitioning, Real Application Clusters, OLAP, Data Mining
and Real Application Testing options


D
-
X

Disconnected from Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - 64bit Production
With the Partitioning, Real Application Clusters, OLAP, Data Mining
and Real Application Testing options
[1]+  Done                    cat s.sql >f.sql
oracle@camel # 
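The same trick also lets you feed a compressed script to SQL*Plus without ever materializing the uncompressed file on disk; a minimal sketch (plain `cat` stands in for sqlplus here so the sketch runs anywhere):

```shell
set -e
dir=$(mktemp -d)

# pretend the vendor shipped the huge script gzip-compressed
printf 'select * from dual;\nexit\n' > "$dir/s.sql"
gzip "$dir/s.sql"                              # leaves only s.sql.gz

mkfifo "$dir/f.sql"                            # named pipe, zero bytes on disk
gunzip -c "$dir/s.sql.gz" > "$dir/f.sql" &     # writer blocks until a reader attaches
out=$(cat "$dir/f.sql")                        # real life: sqlplus / as sysdba @"$dir/f.sql"
wait
echo "$out"
rm -rf "$dir"
```

The FIFO never holds the whole script, so you need neither the disk space for the expanded file nor a large buffer in memory.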

Huge INSERTs can be done via direct path with the APPEND hint, taking the related segments to NOLOGGING during the load, but huge DELETEs and UPDATEs are problematic with Oracle since they produce lots of redo and undo I/O. So a very common question is how to handle these tasks. If you have purchased the Partitioning option, dropping old date-ranged partitions is a good workaround for DELETEs. Another very common piece of advice for both huge DELETEs and UPDATEs is CTAS (Create Table As Select), since it can be done NOLOGGING. Instead of deleting 50 million rows out of 250 million, creating a new table with the remaining 200 million rows with the NOLOGGING option is usually preferred; a similar path is always an option for UPDATEs. But the table holding the data may have lots of grants, triggers etc., and if you do not want to deal with migrating this metadata to a new table, the actions below can be a workaround.


-- Posted by Fatih Özerol
-- let's assume we want to delete 41 mil. rows out of 50 mil.
-- PQ may decrease the response time if the system is not heavily loaded already
ALTER SESSION ENABLE PARALLEL DML;
ALTER SESSION ENABLE PARALLEL DDL;
-- at this point you have to make the table read-only so as not to lose any transaction
-- initially we create a temp table with the other 9 mil. rows
create table tab_tmp nologging parallel ... as
select /*+ parallel(t, some_dop) */ .... from ori_tab t where ... ;
-- if the table is an OLTP table and you are running in ARCHIVELOG mode,
-- better to take it to NOLOGGING and PARALLEL
alter table ori_tab nologging parallel some_dop ;
-- setting only the table to nologging is not enough if you have indexes;
-- you may prefer to set each of them UNUSABLE and rebuild them after the load
-- truncate and insert the other 9 mil. rows back into the target table
truncate table ori_tab reuse storage ;
-- note: NOLOGGING is a segment attribute (set above), not an INSERT keyword;
-- the APPEND hint is what makes this a direct-path insert
insert /*+ append parallel(a, some_dop) */ into ori_tab a
select /*+ parallel(b, some_dop) */ * from tab_tmp b ;
commit;
-- if the table is an OLTP table, revert the settings
alter table ori_tab NOPARALLEL LOGGING ;
-- at this point you can alter the table back to read-write
-- if any index was set unusable you must rebuild it now

Some additional reading I can recommend:

Direct path inserts, nologging option and index cost(setting indexes to unusable during load) – https://tonguc.wordpress.com/2007/01/20/direct-path-inserts-nologging-option-and-index-cost/

Preventing DDL operations on a table(Read-Only Table prior 11gR1) – https://tonguc.wordpress.com/2007/09/25/preventing-ddl-operations-on-a-table/

July 1, 2009

Instrumented Code is Better: An Example on How to Generate Session Level ASH Reports

Filed under: Oracle How To,Oracle Performance — H.Tonguç Yılmaz @ 9:08 pm

As of 10g, if you have the required license, ASH-based performance monitoring is in my opinion an alternative to traditional SQL tracing. You may either use EM to generate ASH reports or query the ASH dictionary directly. The handy additional hint here is that you can set filters on the ASH data in order to focus on a specific session; the filter options on the ASH report screen in EM can be used for this purpose, or below is an example of monitoring waits from the v$active_session_history view.


-- at session 1
begin
	dbms_session.set_identifier('your_identifier');
	/* other useful options may be */
	dbms_application_info.set_action('your_action');
	dbms_application_info.set_client_info('your_client_info');
	dbms_application_info.set_module('your_module', 'your_action');
	/* */
end;
/

select ... ;

-- while the query at session 1 continues to execute, switch to session 2
-- note that no commit is needed to see the client_identifier set from the first session (autonomous transaction)
SELECT sid,
       client_identifier,
       module,
       action,
       client_info
  FROM v$session
 WHERE client_identifier = 'your_identifier';
 
       SID CLIENT_IDENTIFIER                                                MODULE                                           ACTION                           CLIENT_INFO
---------- ---------------------------------------------------------------- ------------------------------------------------ -------------------------------- ----------------------------------------------------------------
       273 your_identifier                                                  your_module                                      your_action                      your_client_info

-- filtered ASH information
--
-- now query the v$active_session_history columns you are interested in,
-- with client_id = 'your_identifier' plus any additional filters you like
--
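As a sketch of that last step, a query like the following (10gR2 column names; the one-hour window is an arbitrary choice) summarizes the tagged session's top wait events:

```sql
-- top wait events among the ASH samples tagged with our identifier
-- (event is NULL for samples where the session was on CPU)
SELECT event, COUNT(*) AS samples
  FROM v$active_session_history
 WHERE client_id = 'your_identifier'
   AND sample_time > SYSDATE - 1/24
 GROUP BY event
 ORDER BY samples DESC;
```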

The set options of the dbms_session and dbms_application_info supplied packages can be very handy, as in the above example; credit goes to Mark Rittman for this hint.

[Update on July 2 2009 ]
After Doug Burns’s comments(check out the pingback below) on this post I updated the header and the example.

With this short post I wanted to share the idea that setting the client identifier in an application may help especially during monitoring/troubleshooting, and to give an example on the ASH dictionary. Using database services, even in a single-node environment, similarly helps as a filter in instance activity reports. Additionally I wanted to contrast the effort of getting ASH information with that of 10046 SQL tracing for a database developer: with 10046 tracing a typical developer will need a DBA or Unix admin to ftp the produced trace files, whereas simple SQL queries against the ASH dictionary will guide you to understand the waits specific to, for example, a single session.

So anyway, thanks to Doug Burns for taking the time to correct the ASH example in this post and for providing details on his blog.

April 14, 2009

Embedded Software License For Oracle Products

Filed under: Oracle How To — H.Tonguç Yılmaz @ 6:29 pm

Let’s say you are a product development company and you want to develop your products on Oracle technologies, but of course for a competitive product pricing strategy Oracle’s list prices may frighten you :)

Here is a solution you may be interested in if you are a member of the Oracle PartnerNetwork: the Embedded Software License (ESL). This is a very restrictive license type, available as of 10g, for Independent Software Vendors (ISVs) who embed Oracle technology into their product. With this option the end user you sell your product to may not even be aware of the Oracle technology behind it, and should not be able to access it directly either as a developer or as an administrator.

You choose the Oracle edition and options you need for your product, and if your application is accepted by Oracle you receive an 80% discount off the list price. You then pay only 19% for support, calculated over the discounted 20% amount. But is pricing the only benefit here? No, I don’t think so; embedding an Oracle Database and/or Oracle Application Server directly into an application can significantly shorten your time-to-market if you already have a team with Oracle development experience, and lower your operational costs in the long run.

Here are some more details on this very attractive Oracle licensing option, but for much more information I advise you to contact your local Oracle sales representative.

ps: Oracle Product Family and Pricing Highlights may be helpful additional reading

March 21, 2009

Force RMAN to ignore corrupted blocks with COPY command

Filed under: Oracle How To — H.Tonguç Yılmaz @ 2:40 pm

To tell RMAN to permit corrupt blocks to be backed up you must use the SET MAXCORRUPT command (note that this goes inside the RUN command):

SET MAXCORRUPT FOR DATAFILE ‘foo’ TO n;

where ‘n’ is the number of corrupt blocks which will be allowed in the backup. However, the SET MAXCORRUPT command only applies to the RMAN BACKUP command, not the RMAN COPY command. To work around this problem for the RMAN COPY command you must modify “recover.bsq”.
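Inside a RUN block, the supported BACKUP usage looks like this (the datafile number and the corruption limit below are illustrative):

```
RUN {
  SET MAXCORRUPT FOR DATAFILE 5 TO 10;
  BACKUP DATAFILE 5;
}
```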

Find the line in “$ORACLE_HOME/rdbms/admin/recover.bsq” which looks like this:

sys.dbms_backup_restore.copyDataFile(full_name => full_name,

and add the following line immediately after that line:

max_corrupt => 1,

For more information please see Metalink note 1069093.6.

March 10, 2009

Another Support Case Study: ORA-00354 ORA-00353 ORA-00312: Redolog Block Corruption

Filed under: Oracle How To — H.Tonguç Yılmaz @ 6:46 am

Here is another interesting one you might want to check.

ps: you may use this link for the older support case studies.

February 20, 2009

How to find the parallelism degree of a query that has finished? (on 10.2)

Filed under: Oracle How To — H.Tonguç Yılmaz @ 6:50 am

This was the question of one of my colleague DBAs yesterday. Since he was on 10.2 and had AWR licensed as a cost option, we worked on DBA_HIST_ACTIVE_SESS_HISTORY first. After a while I decided to get the opinions of the experts at oracle-l, where Deepak Sharma advised a query based on DBA_HIST_SQLSTAT and Niall Litchfield advised DBMS_XPLAN.
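A sketch of the DBA_HIST_SQLSTAT direction (the actual query from the thread is not reproduced here; the SQL_ID literal is a placeholder):

```sql
-- PX server activity recorded per AWR snapshot for one statement;
-- px_servers_execs_delta / executions_delta approximates how many
-- PX slaves served each execution (roughly DOP x number of slave sets)
SELECT snap_id,
       executions_delta,
       px_servers_execs_delta
  FROM dba_hist_sqlstat
 WHERE sql_id = 'your_sql_id'
 ORDER BY snap_id;
```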

So there is a user who ran his query and insists that it took too long because he couldn’t get the DOP he requested that morning, and he thinks this is the DBA’s fault for not configuring the Resource Manager appropriately. As a result my DBA colleague wants to show that the user really got the DOP he requested for that query, and he needs an accurate method to prove this on 10.2.

For the details of the discussion, which is still ongoing, see here; if you have any comments please let me know :)

February 3, 2009

Another Support Case Study : ‘direct path read temp’ hangs on read() system call when ASMLIB in use

Filed under: Oracle How To — H.Tonguç Yılmaz @ 6:52 am

I once shared some support case studies, and here is another interesting one you might want to check.

January 29, 2009

OraPub’s Free Technical Papers and Presentations

Filed under: Oracle How To — H.Tonguç Yılmaz @ 5:03 pm

Craig Shallahamer is someone we always hear about from Danişment Gazi Ünal, who wrote the Microstate Response-time Performance Profiling paper in January 2003 and who is one of the most knowledgeable Oracle experts in Türkiye, and the world, that I have ever seen.

I recently saw Mr. Shallahamer’s Oracle Server Consolidation presentation, so here are the free publications of OraPub you might want to study.

December 29, 2008

Using Pipelined Table Functions for Siebel’s Audit_Read Data Transformation

It has been a while; I thought it was time I wrote about something. :)

25-12-2008 was my second anniversary on this blog: 233 posts in 18 categories have resulted in 383 approved comments, ~330,000 page views and lots of good friendships so far. Many thanks to all who contributed!

I was very surprised that I hadn’t yet written about one of my favorite Oracle features, Pipelined Table Functions. Since the 9i days I have benefited from pipelined functions for lots of complex transformation needs.

With this post I will share a recent example to demonstrate this feature. Siebel, as a source system for our warehouse, has a logging table called S_AUDIT_READ; this table has a CLOB column AUDIT_LOG which holds a string to be parsed according to the rules mentioned in the link below.

How to parse audit log value (Doc ID 555553.1)

As usual, the string can be parsed in several ways, even with a single SQL statement of course. But PL/SQL is more readable/maintainable compared to SQL, so Pipelined Table Functions are always a good alternative for transforming massive amounts of data, also in terms of performance. In the demonstration below I also deviated from the algorithm mentioned above because of some cases in our data.
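A minimal sketch of the pipelined pattern (the object types, the function body and the parsing step are placeholders, not the actual transformation linked below):

```sql
CREATE OR REPLACE TYPE audit_row_t AS OBJECT (
  record_id  VARCHAR2(15),
  field_name VARCHAR2(100),
  field_val  VARCHAR2(4000)
);
/
CREATE OR REPLACE TYPE audit_tab_t AS TABLE OF audit_row_t;
/
CREATE OR REPLACE FUNCTION parse_audit_log(p_src SYS_REFCURSOR)
  RETURN audit_tab_t
  PIPELINED
  PARALLEL_ENABLE (PARTITION p_src BY ANY)  -- lets PX slaves share the source cursor
IS
  l_id  VARCHAR2(15);
  l_log CLOB;
BEGIN
  LOOP
    FETCH p_src INTO l_id, l_log;
    EXIT WHEN p_src%NOTFOUND;
    -- parse l_log here per Doc ID 555553.1 and pipe one row per parsed field
    PIPE ROW (audit_row_t(l_id, 'some_field', 'some_value'));
  END LOOP;
  RETURN;
END;
/

-- consumed like a table, e.g. as a direct-path load:
-- INSERT /*+ append */ INTO audit_target
-- SELECT * FROM TABLE(parse_audit_log(
--          CURSOR(SELECT row_id, audit_log FROM s_audit_read)));
```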

Siebel’s Audit_Read Data Transformation with Oracle’s Pipelined Table Functions

With the help of pipelined table functions, while the source data is read in parallel it is possible to transform it and insert it into the target table through pipes. For more examples please visit:

http://www.oracle-developer.net/display.php?id=207

http://psoug.org/reference/pipelined.html

http://www.oracle.com/technology/products/oracle9i/daily/may30.html

Just before closing, I wish a very happy new year to all of us! :)
