Oracle Concepts and Architecture – Part 1

Recently I posted three entries in this category:

1- Oracle 10g Flashback Versions Query, Drop Table and Recyclebin Management
2- How shared pool works in Oracle and binding example
3- The Storage Hierarchy Summary in an Oracle Database

4- Concurrency and Isolation Levels Concepts

Databases use locking to ensure transactional integrity and database consistency. Locking prevents users from reading data being changed by other users, and prevents multiple users from changing the same data at the same time. If locking is not used, data within the database may become logically incorrect, and queries executed against that data may produce unexpected results.

If locking is not available and several users access a database concurrently, problems may occur if their transactions use the same data at the same time.

* Dirty Reads; A dirty read occurs when a transaction can see uncommitted changes to a row. If another transaction changes a value, your transaction can read that changed value, but the other transaction might then roll back, making the value you read invalid, or dirty.

* Non-repeatable Reads; A non-repeatable read occurs when a transaction reads the same row twice and sees different results from the two reads. If your transaction reads a value, and another transaction commits a change to that value (or deletes the row), then your transaction could read the changed value (or find the row missing) even though your transaction has not yet committed or rolled back.

* Phantom Reads; A phantom read occurs when a transaction re-executes a query and sees rows that another transaction inserted and committed after the first read. If your transaction queries a table twice with the same condition, the second query can return new "phantom" rows that did not exist when the first query ran.

The ANSI/ISO SQL standard SQL92 defines three possible kinds of transaction interaction, and four levels of isolation that provide increasing protection against these interactions. These interactions and isolation levels are:

Isolation Level    Dirty Read     Non-Repeatable Read   Phantom Read
READ UNCOMMITTED   Possible       Possible              Possible
READ COMMITTED     Not possible   Possible              Possible
REPEATABLE READ    Not possible   Not possible          Possible
SERIALIZABLE       Not possible   Not possible          Not possible
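A hypothetical two-session sketch (the table name T and its columns are assumed for illustration) of the non-repeatable read that READ COMMITTED allows and SERIALIZABLE prevents:

```sql
-- Session 1 (default READ COMMITTED), inside one transaction
SELECT val FROM t WHERE id = 1;   -- returns the current value

-- Session 2
UPDATE t SET val = val + 1 WHERE id = 1;
COMMIT;

-- Session 1, same transaction
SELECT val FROM t WHERE id = 1;   -- sees the new value: a non-repeatable read
-- Had Session 1 started with SET TRANSACTION ISOLATION LEVEL SERIALIZABLE,
-- the second read would still return the original value.
```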

The behavior of Oracle is:

READ UNCOMMITTED; Oracle never permits “dirty reads.” Although some other database products use this undesirable technique to improve throughput, it is not required for high throughput with Oracle.

READ COMMITTED; Oracle meets the READ COMMITTED isolation standard. This is the default mode for all Oracle applications. Because an Oracle query only sees data that was committed at the beginning of the query (the snapshot time), Oracle actually offers more consistency than is required by the ANSI/ISO SQL92 standards for READ COMMITTED isolation.

REPEATABLE READ; Oracle does not normally support this isolation level, except as provided by SERIALIZABLE.

SERIALIZABLE; You can set this isolation level using the SET TRANSACTION command or the ALTER SESSION command.
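For example, serializable isolation can be requested per transaction or per session; a minimal sketch:

```sql
-- Per transaction: must be the first statement of the transaction
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;

-- Per session: every subsequent transaction in this session is serializable
ALTER SESSION SET ISOLATION_LEVEL = SERIALIZABLE;

-- Revert the session to the default
ALTER SESSION SET ISOLATION_LEVEL = READ COMMITTED;
```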

References Used:
Oracle® Database Concepts 10g Release 2 (10.2) Chapter 13 Data Concurrency and Consistency
Akadia Information Technology Publications

5- Automated Checkpoint Tuning and Mean time to recover(MTTR) Concepts

Checkpointing is an important Oracle activity that records the highest system change number (SCN) such that all data blocks with changes at or below that SCN are known to have been written out to the data files. An SCN is a stamp that defines a committed version of the database at a point in time; Oracle assigns every committed transaction a unique SCN. If there is a failure and then subsequent cache recovery, only the redo records containing changes at SCNs higher than the checkpoint need to be applied during recovery.
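The current SCN and the checkpoint position can be observed from the data dictionary; for example:

```sql
-- CURRENT_SCN: the SCN now; CHECKPOINT_CHANGE#: the SCN up to which
-- all changed blocks are known to be on disk
SELECT current_scn, checkpoint_change#
  FROM v$database;
```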

Instance and crash recovery happens in two steps – cache recovery followed by transaction recovery. During the cache recovery phase, also known as the rolling forward stage, Oracle applies all committed and uncommitted changes in the redo log files to the affected data blocks. The work required for cache recovery processing is proportional to the rate of change to the database and the time between checkpoints.

Fast-start recovery can greatly reduce the mean time to recover (MTTR), the desired time required to perform instance or media recovery on the database, with minimal effect on online application performance. Oracle continuously estimates the recovery time and automatically adjusts the checkpointing rate to meet the target recovery time.

With 10g, the Oracle database can now self-tune check-pointing to achieve good recovery times with low impact on normal throughput. You no longer have to set any checkpoint-related parameters. This method reduces the time required for cache recovery and makes the recovery bounded and predictable by limiting the number of dirty buffers and the number of redo records generated between the most recent redo record and the last checkpoint. Administrators specify a target (bounded) time to complete the cache recovery phase of recovery with the FAST_START_MTTR_TARGET initialization parameter, and Oracle automatically varies the incremental checkpoint writes to meet that target.

The TARGET_MTTR field of V$INSTANCE_RECOVERY contains the MTTR target in effect. The ESTIMATED_MTTR field of V$INSTANCE_RECOVERY contains the estimated MTTR should a crash happen right away.


SELECT TARGET_MTTR, ESTIMATED_MTTR, CKPT_BLOCK_WRITES
  FROM V$INSTANCE_RECOVERY;

TARGET_MTTR ESTIMATED_MTTR CKPT_BLOCK_WRITES
----------- -------------- -----------------
          0             16           1611201

Whenever you set FAST_START_MTTR_TARGET to a nonzero value, set the following parameters to 0:

LOG_CHECKPOINT_TIMEOUT = 0
LOG_CHECKPOINT_INTERVAL = 0
FAST_START_IO_TARGET = 0

Because these initialization parameters either override FAST_START_MTTR_TARGET or potentially drive checkpoints more aggressively than FAST_START_MTTR_TARGET does, they can interfere with the simulation.
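Setting the target is a single parameter change; a sketch (90 seconds is an arbitrary example value):

```sql
-- Target at most ~90 seconds of cache recovery after a crash
ALTER SYSTEM SET FAST_START_MTTR_TARGET = 90 SCOPE = BOTH;

-- Clear the parameters that would override or interfere with the target
ALTER SYSTEM SET LOG_CHECKPOINT_TIMEOUT = 0;
ALTER SYSTEM SET LOG_CHECKPOINT_INTERVAL = 0;
ALTER SYSTEM SET FAST_START_IO_TARGET = 0;
```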

References Used:
Oracle® Database Concepts 10g Release 2 (10.2) Chapter 14 Manageability
Akadia Information Technology Publications

6- Concurrency and Consistency Concepts

Performance is concerned with avoiding unnecessary work on limited system resources; scalability is concerned with avoiding blocking others doing the same operations (serialization). These need different kinds of attention, and the second is especially important for heavily loaded OLTP systems. Sometimes you may sacrifice one to gain in the other.

In a single-user database, the user can modify data in the database without concern for other users modifying the same data at the same time. However, in a multiuser database, the statements within multiple simultaneous transactions can update the same data. Transactions executing at the same time need to produce meaningful and consistent results. Therefore, control of data concurrency and data consistency is vital in a multiuser database.

  • Data concurrency means that many users can access data at the same time.
  • Data consistency means that each user sees a consistent view of the data, including visible changes made by the user’s own transactions and transactions of other users.

In Oracle, locking is row level by default, which makes Oracle a highly concurrent database system. With its undo mechanism, readers do not block writers and writers do not block readers. Writers only wait for other writers if they attempt to update identical rows in concurrent transactions. Even a simple update in Oracle does a lot of work behind the scenes to guarantee these properties, and in such a system your primary performance target is I/O.
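A hypothetical two-session sketch of readers not blocking writers (the ORDERS table and its columns are assumed for illustration):

```sql
-- Session 1: long-running query; it takes no locks on the rows it reads
SELECT COUNT(*) FROM orders;

-- Session 2: the update proceeds immediately; the reader does not block it
UPDATE orders SET status = 'SHIPPED' WHERE order_id = 42;
COMMIT;

-- Session 1's query still returns a result consistent with the time the
-- query started, with pre-update row versions reconstructed from undo.
```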

Oracle also gives you a high degree of read consistency: if you start a long-running query, you do not see the changes of other users until your query ends. The system likewise protects you from reading uncommitted data changes. This, too, is handled with the undo mechanism.

In Oracle you learn that concurrency and consistency can be achieved quickly and correctly, every time.

References Used:
Oracle® Database Concepts 10g Release 2 (10.2) Chapter 13 Data Concurrency and Consistency

7- Latch and Lock Concepts

Recently I bought this book:
Oracle Wait Interface: A Practical Guide to Performance Diagnostics & Tuning

Really a nice handbook that makes the life of Oracle performance tuning people easier. Chapter 6, Interpreting Locks-Related Wait Events, discusses Concurrency vs. Integrity. In DBMS terms this is the “I” of the ACID properties (Atomicity, Consistency, Isolation, and Durability). In real-world applications, concurrency must sometimes be sacrificed for the sake of integrity. The Oracle SGA architecture could be a lot simpler if concurrent access was the only goal :)

A latch is a form of lock. Although both are locking mechanisms, latches differ from locks (enqueues) in several ways, as summarized in Table 6-1, Latches vs. Locks:

* Purpose: Latches serve a single purpose: to provide exclusive access to memory structures. Locks serve two purposes: to allow multiple processes to share the same resource when lock modes are compatible, and to enforce exclusive access to the resource when the lock modes are incompatible.
* Jurisdiction: Latches apply only to data structures in the SGA; they protect memory objects, which are temporary, and control access to a memory structure for a single operation. They are not transactional. Locks protect database objects such as tables, data blocks, and state objects; they are application driven, control access to data or metadata in the database, and are transactional.
* Acquisition: Latches can be requested in two modes: willing-to-wait or no-wait. Locks can be requested in six different modes: null, row share, row exclusive, share, share row exclusive, or exclusive.
* Scope: Latch information is kept in memory and is only visible to the local instance; latches operate at the instance level. Lock information is kept in the database and is visible to all instances accessing the database; locks operate at the database level.
* Complexity: Latches are implemented using simple instructions, typically test-and-set, compare-and-swap, or a few CPU instructions; the implementation is port specific because the CPU instructions are machine dependent. Lightweight. Locks are implemented using a series of instructions with context switches. Heavyweight.
* Duration: Latches are held briefly (in microseconds). Locks are normally held for an extended period of time (transactional duration).
* Queue: When a process goes to sleep after failing to acquire a latch, its request is not queued and serviced in order; latches are fair game and up for grabs. When a process fails to get a lock, its request is queued and serviced in order, unless the NOWAIT option is specified.
* Deadlock: Latches are implemented in such a way that they are not subject to deadlocks. Locks support queueing and are subject to deadlocks; a trace file is generated each time a deadlock occurs.
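Both mechanisms can be observed from the dynamic performance views; for example:

```sql
-- Latch activity: misses and sleeps indicate contention
SELECT name, gets, misses, sleeps
  FROM v$latch
 ORDER BY sleeps DESC;

-- Current enqueue (lock) holders and waiters
SELECT sid, type, lmode, request
  FROM v$lock;
```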

Continue reading with Part 2