Day: May 3, 2020

FRA Space Full in Oracle

Make it simple

As DBAs, we often run into trouble when the flash recovery area (FRA) of a critical database gets full due to bulk DML generated by business transactions.

This causes the database to hang and impacts the applications. It is not entirely in our hands, since we cannot control the transaction volume at the database level; that is an architecture decision.

Hence, a good monitoring system should be implemented to catch these situations early.

The parameter that sets the FRA size is db_recovery_file_dest_size.

If this parameter is not sized to keep up with archivelog generation, the database runs into issues.

We get “ORA-19809: limit exceeded for recovery files” and “ORA-00257: archiver error”.

SQL>show parameter db_recovery_file_dest_size;

db_recovery_file_dest_size 100G

Check the reason for the FRA filling up, and the free space available, using the views below:

SQL>select * from v$flash_recovery_area_usage;

SQL>select * from v$recovery_file_dest;

To resolve this error, we can either increase the FRA size or delete older archivelogs from disk.

Solution 1:

SQL>alter system set db_recovery_file_dest_size=200G;

Solution 2:

Connect to RMAN; from there we can delete the older archivelogs. If backups are taken every 12 hours, we can consider deleting archivelogs generated more than a day ago.
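For instance, a quick one-off cleanup along those lines could look like this; the 'sysdate-1' cutoff is just the assumption above (backups every 12 hours, so day-old archivelogs are already backed up):

~]$ rman target /

RMAN> crosscheck archivelog all;
RMAN> delete noprompt archivelog all completed before 'sysdate-1';

The CROSSCHECK first marks archivelogs already removed from disk as expired, so the DELETE only touches logs that really exist.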

You can delete obsolete and expired backups, which are no longer needed for recovery, to clean up space. You can also write a shell script to monitor the FRA space and clear it automatically, which we will see in a future post.
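A minimal sketch of that cleanup, relying on the retention policy already configured in RMAN (report first, so you see what would be removed):

RMAN> report obsolete;
RMAN> crosscheck backup;
RMAN> delete noprompt expired backup;
RMAN> delete noprompt obsolete;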

In case you have a standby database, an archivelog deletion policy can be enabled in the RMAN configuration.

~]$ rman target /

configure archivelog deletion policy to applied on all standby;

run {
allocate channel c1 device type disk;
allocate channel c2 device type disk;
crosscheck backup;
crosscheck archivelog all;
delete noprompt expired backup;
crosscheck archivelog all;
delete noprompt expired archivelog all;
delete noprompt obsolete recovery window of 30 days;
backup archivelog all delete input;
crosscheck archivelog all;
delete noprompt archivelog until time 'sysdate-1' backed up 1 times to device type disk;
}

You can also run periodic archivelog backups and deletions. Put the backup script in crontab to run every hour, something like this:

~]$ crontab -e

0 */1 * * * /home/oracle/ >/dev/null 2>&1

The solutions above are the most you can do to clear the FRA. You can also ask your application users to schedule batch jobs during off-peak hours to avoid concurrent bulk transactions.

Database Blocking Sessions

It is easy to fool people than to convince them that they are fools

In Oracle, we often run into performance issues caused by sessions that block each other.

For a DBA, blocking is a problem that should be identified and resolved with confirmation from the application team.

Blocking happens because a DML transaction locks the rows it modifies, preventing other sessions from changing those rows until the first session issues a commit. Readers, however, are never blocked: Oracle has a very good feature called multiversion concurrency control (MVCC), which lets queries see a consistent, older version of data that is being modified.
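As a quick illustration of MVCC (using a hypothetical table t, where row id=1 holds the committed value val=1): a second session reading the row is not blocked by an uncommitted update, and it sees the last committed value.

--Session A modifies the row but does not commit
SQL>update t set val=2 where id=1;

--Session B is not blocked; it reads the last committed version (val = 1)
SQL>select val from t where id=1;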

Sounds confusing? Let's walk through an example.

Say there are two users test1 and dev1.

I create a new table called emp and insert some values into it.

SQL>create table emp( emp_id INTEGER NOT NULL PRIMARY KEY, emp_name VARCHAR2(100) NOT NULL, sur_name VARCHAR2(100) NOT NULL, role VARCHAR2(20), dob DATE );

SQL>insert into emp values(1, 'abc', 'cba', 'b', DATE '1990-04-10');

SQL>insert into emp values(2, 'xyz', 'zyx', 'y', DATE '1999-09-18');

Then select the rows from the table; the two rows appear under emp.

SQL>select * from emp;

Create two test users.

SQL>create user test1 identified by password;

SQL>create user dev1 identified by password;

Grant access on the emp table to the users test1 and dev1. Note that roles (connect, resource) and object privileges must be granted in separate statements:

SQL>grant connect,resource to test1;

SQL>grant select,insert,update on sys.emp to test1;

SQL>grant connect,resource to dev1;

SQL>grant select,insert,update on sys.emp to dev1;

Connect as the test1 user:

SQL>conn test1

Update the role column on the emp table:

SQL>update sys.emp set role='eng' where emp_id=2;

Connect as the dev1 user and update the same column with a different value:

SQL>conn dev1

SQL>update sys.emp set role='eng1' where emp_id=2;

Now two users are trying to modify the same row and column.

The test1 user has modified the row but not committed the transaction.

This holds a row-level lock until the user issues a commit or rollback.

When the dev1 user also modifies the role column where emp_id=2, that session hangs until the test1 user releases the lock with a commit. This causes performance issues when multiple DML transactions pile up behind the lock.
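While dev1 is hanging, the underlying row-lock wait is visible in v$lock: the blocking session shows BLOCK=1 on its TX lock, and the waiting session shows a non-zero REQUEST for the same lock.

SQL>select sid,type,lmode,request,block from v$lock where type='TX';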

Identify the blocking sessions with this query:

SQL>select sid,serial#,username,blocking_session,wait_class,seconds_in_wait from v$session where blocking_session is not null;

Warning!!!: In production databases, kill sessions only once there is confirmation from the application team. Don't kill a session without proper clarity.

Get the sid and serial# of the blocking session from v$session and plug them in:

SQL>alter system kill session '&sid,&serial#';

Once the test1 user's session is killed, the dev1 user's session can modify the row and should issue a commit.

“This post is for learning purposes; don’t blindly implement it in production.”