Some Best Practices of Upgrading Oracle Database

I will be upgrading my experimental mouse 10g Release 2 database (an emotional period for me, it is hard to part :) to 11g Release 1 soon, so I thought remembering, consolidating and sharing some of my past experiences on upgrading an Oracle database could be useful. You may comment on these depending on your own experiences.

If an upgrade is again at my doorstep (usually within 3 years at most :) –

a. I first read the new features, concepts and upgrade guides of the new release. I also check the platform-specific release notes, especially for special cases.

b. I test the upgrade process carefully, load and test my applications on the upgraded environment and of course test the restore/recovery plan for unexpected situations. Here do not skip testing the third-party tools you purchased separately (if you are using any), like data extraction (unloader) or performance monitoring tools. Also test your backup and disaster recovery strategies, cron jobs and, if there is an active-passive cluster, your switchover scripts.

c. If you also need and believe in control over and performance of the upgrade steps, you may plan to do it manually like me. For example I plan to;
– take the database temporarily to NOARCHIVELOG mode,
– close the other instances and increase the SGA and PGA for the database to be upgraded,
– create large redo log files,
– spool each step to a log file during the migration and
– carefully monitor OS CPU and memory usage, for example with the top utility and tail -f alert_SID.log, and plan and guarantee sufficient space in your SYSTEM, UNDO and TEMP tablespaces.
Also there is always a well detailed Metalink note for this manual upgrade path :) A small sketch of these preparations is below.
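
Just to illustrate, here is a minimal SQL*Plus sketch of these preparations; the file names, sizes and tablespace names are only placeholders for your own system.

SPOOL /tmp/upgrade_prep.log
-- temporarily disable archiving to reduce redo overhead during the upgrade
SHUTDOWN IMMEDIATE
STARTUP MOUNT
ALTER DATABASE NOARCHIVELOG;
ALTER DATABASE OPEN;
-- larger redo log group so the upgrade does not wait on frequent log switches
ALTER DATABASE ADD LOGFILE GROUP 4 ('/u01/oradata/ORCL/redo04.log') SIZE 512M;
-- more memory for the instance being upgraded (values are just examples)
ALTER SYSTEM SET sga_target = 2G SCOPE=SPFILE;
ALTER SYSTEM SET pga_aggregate_target = 1G SCOPE=SPFILE;
-- check free space in SYSTEM and UNDO (temp space can be checked from v$temp_space_header)
SELECT tablespace_name, ROUND(SUM(bytes)/1024/1024) free_mb
  FROM dba_free_space
 WHERE tablespace_name IN ('SYSTEM', 'UNDOTBS1')
 GROUP BY tablespace_name;
SPOOL OFF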

Of course, when I have the tolerance for downtime, a cold (cleanly closed) backup prior to the migration and after a successful migration always feels good. During the migration, restricting database connections from users and stopping the listener, unix cron and the Oracle Scheduler jobs against unwanted changes or interrupts is very important; a small sketch follows.
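
Again just a sketch, assuming the default listener and that silencing cron entries is handled at the OS level; adapt it to your own site.

-- $ lsnrctl stop                          -- stop the listener from the OS prompt
-- comment out or remove the relevant unix crontab entries as well
ALTER SYSTEM ENABLE RESTRICTED SESSION;    -- only users with RESTRICTED SESSION privilege can connect
ALTER SYSTEM SET job_queue_processes = 0;  -- stop DBMS_JOB jobs from running
-- prevent Oracle Scheduler jobs from starting during the window
EXEC DBMS_SCHEDULER.SET_SCHEDULER_ATTRIBUTE('SCHEDULER_DISABLED', 'TRUE');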

Since I believe and trust in the Cost Based Optimizer - CBO (hints are always last resorts, as Mr. Lewis advises), I re-collect both object and system statistics as a post-migration step (staying in NOARCHIVELOG may still be useful at this point). Also, setting the COMPATIBLE and OPTIMIZER_FEATURES_ENABLE init.ora parameters to enable the new optimizer features is important for the quality of your tests. One of the last steps is always checking and compiling the invalid objects after upgrading; here a report of the invalid objects taken prior to the migration will be useful for comparison. In fact I create lots of temporary tables with CREATE TABLE AS SELECT for copies of important dictionary views like dba_tables/indexes/tab_partitions/constraints/triggers etc. A sketch of these post-migration steps is below.
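
Here is a minimal post-migration sketch; the parameter values, parallelism and table names are assumptions to be tuned for your own environment.

-- COMPATIBLE is not dynamic and needs a restart; OPTIMIZER_FEATURES_ENABLE is dynamic
ALTER SYSTEM SET compatible = '11.1.0' SCOPE=SPFILE;
ALTER SYSTEM SET optimizer_features_enable = '11.1.0.6' SCOPE=BOTH;
-- snapshot the invalid objects (taken before and again after the upgrade) for comparison
CREATE TABLE invalid_objs_snapshot AS
  SELECT owner, object_name, object_type FROM dba_objects WHERE status = 'INVALID';
-- recompile invalid objects ($ORACLE_HOME/rdbms/admin/utlrp.sql does the same)
EXEC UTL_RECOMP.RECOMP_PARALLEL(4);
-- re-gather system, dictionary and object statistics for the new optimizer
EXEC DBMS_STATS.GATHER_SYSTEM_STATS('NOWORKLOAD');
EXEC DBMS_STATS.GATHER_DICTIONARY_STATS;
EXEC DBMS_STATS.GATHER_DATABASE_STATS(estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE);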

d. For a meaningful test (the best test strategy, let me say), I clone my production database onto a machine with similar resources, which of course means budget (I linked a reference guide for cloning a database below). I run my applications under a load similar to production and monitor the database with OEM, V$ views, events like 10046-10053 and AWR reports. Here the CBO enhancements need the primary attention, and you will need some reports, prior execution plans and reference metrics taken before the upgrade for detailed comparisons in order to catch a problematic change in CBO behavior; a small tracing sketch follows.
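
The tracing part can be as simple as the sketch below; the tracefile identifier is just a placeholder and the statements to trace are of course your own.

ALTER SESSION SET tracefile_identifier = 'upg_test';
ALTER SESSION SET EVENTS '10046 trace name context forever, level 12';  -- SQL trace with binds and waits
-- run the critical application statements here
ALTER SESSION SET EVENTS '10046 trace name context off';
ALTER SESSION SET EVENTS '10053 trace name context forever, level 1';   -- CBO decision trace
-- hard parse the suspect statement here
ALTER SESSION SET EVENTS '10053 trace name context off';
-- an AWR report between two snapshots can be produced with @?/rdbms/admin/awrrpt.sql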

e. Of course, no matter what the deadline is, I do not accept upgrading the real production database until I have successfully upgraded and finished my planned tests on the cloned test environment.

I will also share some notes from the 11g new features guide which I think you may also be interested in –

i. 11g Release 1 deprecated features;
– Oracle Ultra Search,
– Java Development Kit(JDK) 1.4,
– CTXXPATH index

ii. Important initialization parameter changes;
– USER_DUMP_DEST/BACKGROUND_DUMP_DEST/CORE_DUMP_DEST are replaced with the DIAGNOSTIC_DEST parameter, which defaults to $ORACLE_BASE/diag.
– If there is a null value for the UNDO_MANAGEMENT initialization parameter, which means UNDO_MANAGEMENT is not set, after 11g this implies AUTO mode.
– So, to migrate to automatic undo management if you are not already using it somehow :)
1. Set UNDO_MANAGEMENT=MANUAL,
2. create a load comparable to your production,
3. and execute the DBMS_UNDO_ADV.RBU_MIGRATION function to get the advised size for the undo tablespace;

SET SERVEROUTPUT ON
DECLARE
   undo_tablespace_MB NUMBER;
BEGIN
   -- returns the advised undo tablespace size in MB for the migration
   undo_tablespace_MB := DBMS_UNDO_ADV.RBU_MIGRATION;
   DBMS_OUTPUT.PUT_LINE('Advised undo tablespace size: ' || undo_tablespace_MB || ' MB');
END;
/

4. create the new undo tablespace with the advised size,
5. and the last step will be setting UNDO_MANAGEMENT=AUTO; a short sketch of these two steps follows.
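
Just a minimal sketch again; the tablespace name, datafile path and size are assumptions to be replaced by the advisor output and your own storage layout.

CREATE UNDO TABLESPACE undotbs2
  DATAFILE '/u01/oradata/ORCL/undotbs2_01.dbf' SIZE 2048M;
ALTER SYSTEM SET undo_tablespace = 'UNDOTBS2' SCOPE=SPFILE;
-- UNDO_MANAGEMENT is not dynamic, the instance must be restarted for it to take effect
ALTER SYSTEM SET undo_management = 'AUTO' SCOPE=SPFILE;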

And some extra related readings –

Some migration experiences to share
Defensive upgrade methods, but still no pain no gain
Take the risk and migrate to 10gR2
Cloning a Database by Howard Rogers
