error: %pre() scriptlet failed, exit status 4
error: install: %pre scriptlet failed (2), skipping cell-11.2.3.2.1_LINUX.X64_130109-1

Solution:

[root@EXDBADM001 cellbits]# rpm -qa|grep jdk

jdk-1.5.0_15-fcs.x86_64 <<<<<<<

There is a mismatch in the format of the rpm installation package.

Create a macros file in the configuration directory, specify the correct format, and retry the rpm install with the --nodeps option.
eg: rpm -Uvh xxx.rpm --nodeps

[root@EXDBADM001 cellbits]# cd ~/.

[root@EXDBADM001 cellbits]# cat ~/.rpmmacros
%_query_all_fmt %%{name}-%%{version}-%%{release}
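
A minimal sketch of the full fix (the rpm filename here is assumed from the error message above):

[root@EXDBADM001 cellbits]# echo '%_query_all_fmt %%{name}-%%{version}-%%{release}' > ~/.rpmmacros
[root@EXDBADM001 cellbits]# rpm -Uvh cell-11.2.3.2.1_LINUX.X64_130109-1.rpm --nodeps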

ORA-16038 – ORA-19809 – ORA-00312 – ORA-03113 – ORA-00257: archiver error

Archivelogs fill up the FRA fast, and soon there will be a space crunch on the disk unless you get rid of the archives.

This issue is a nightmare for very high-transaction databases such as telecom or banking applications.

ORA-16038: log 2 sequence# 113 cannot be archived
ORA-19809: limit exceeded for recovery files
ORA-00312: online log 2 thread 1: '/app01/base/oradata/orcl12x/redo02.log'
Sun Oct 11 04:21:13 2020

I get the above errors in my alert log file.

We can increase the FRA to a higher value and back up all the archivelog files with the DELETE INPUT option.

I could not open my database due to the space crunch (the redo logs could not be archived), but I could mount it successfully.

SQL> startup;
ORACLE instance started.

Total System Global Area 1644167168 bytes
Fixed Size                  2925024 bytes
Variable Size             973082144 bytes
Database Buffers          654311424 bytes
Redo Buffers               13848576 bytes
Database mounted.
ORA-03113: end-of-file on communication channel
Process ID: 3906
Session ID: 1 Serial number: 7537

When I checked which files occupy the most space, I found backup pieces and archivelog files filling up the recovery area:

FILE_TYPE               PERCENT_SPACE_USED PERCENT_SPACE_RECLAIMABLE NUMBER_OF_FILES     CON_ID
----------------------- ------------------ ------------------------- --------------- ----------
ARCHIVED LOG                      53655639                         0             115          0
BACKUP PIECE                    19701765.1                         0               5          0
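
These usage figures come from the standard v$recovery_area_usage view:

SQL> select file_type, percent_space_used, percent_space_reclaimable,
  2         number_of_files, con_id
  3  from v$recovery_area_usage;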

I removed obsolete backups and deleted old backup pieces to release some space.

RMAN> delete noprompt obsolete;

RMAN retention policy will be applied to the command
RMAN retention policy is set to redundancy 1
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=7 device type=DISK
Deleting the following obsolete backups and copies:
Type                 Key    Completion Time    Filename/Handle
-------------------- ------ ------------------ --------------------
Backup Set           4      29-SEP-20
  Backup Piece       4      29-SEP-20          /home/oracle/FRA/ORCL12X/backupset/2020_09_29/o1_mf_annnn_TAG20200929T070113_hq64zsf0_.bkp

The FRA size is also not sufficient to hold the archives.

In the alert log I can see a lot of ORA errors related to archiving:

ORA-19809: limit exceeded for recovery files
ORA-19804: cannot reclaim 50577408 bytes disk space from 10000 limit
Sun Oct 11 04:21:13 2020
ARC1: Error 19809 Creating archive log file to '/home/oracle/FRA/ORCL12X/archivelog/2020_10_11/o1_mf_1_114_%u_.arc'
Sun Oct 11 04:21:13 2020
Errors in file /app01/base/diag/rdbms/orcl12x/orcl12x/trace/orcl12x_ora_4441.trc:
ORA-16038: log 2 sequence# 113 cannot be archived
ORA-19809: limit exceeded for recovery files
ORA-00312: online log 2 thread 1: '/app01/base/oradata/orcl12x/redo02.log'

SQL> show parameter db_recovery

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
db_recovery_file_dest                string      /data01
db_recovery_file_dest_size           big integer 7000M
SQL> alter system set db_recovery_file_dest_size=10000;

System altered.

Note that a value without a unit suffix is interpreted in bytes, so the 10000 above set a 10000-byte limit, which matches the "from 10000 limit" seen in the alert log. I could not take an archivelog backup either, so I had to increase the FRA size and move the destination to a mountpoint with more space than the current FRA directory.

RMAN-03009: failure of backup command on ORA_DISK_1 channel at 10/11/2020 04:53:35
ORA-19809: limit exceeded for recovery files
ORA-19804: cannot reclaim 2143025664 bytes disk space from 8388608000 limit

Filesystem      Size  Used Avail Use% Mounted on
/dev/sda1        20G   11G  7.2G  61% /
tmpfs           2.0G  1.1G  953M  52% /dev/shm
/dev/sda2        15G   11G  3.5G  75% /app01
/dev/sda5        12G  6.7G  4.2G  62% /data01
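
The destination change itself is not shown in the transcript; based on the later show parameter output, it would have been:

SQL> alter system set db_recovery_file_dest='/home/oracle/FRA';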

SQL> alter system set db_recovery_file_dest_size=12000m;

System altered.

SQL> show parameter db_recovery

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
db_recovery_file_dest                string      /home/oracle/FRA
db_recovery_file_dest_size           big integer 12000M

So I moved the FRA to a mountpoint with more space and increased the FRA size.

After increasing the FRA size, the archivelog backup was successful and the issue was resolved!!!

RMAN> crosscheck archivelog all;

released channel: ORA_DISK_1
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=48 device type=DISK
validation succeeded for archived log
archived log file name=/home/oracle/FRA/ORCL12X/archivelog/2020_10_11/o1_mf_1_151_hr5ljhlb_.arc RECID=160 STAMP=1053493328
validation succeeded for archived log

RMAN> backup archivelog all delete input;

Starting backup at 11-OCT-20
current log archived
using target database control file instead of recovery catalog
..........................
Starting Control File and SPFILE Autobackup at 11-OCT-20
piece handle=/home/oracle/FRA/ORCL12X/autobackup/2020_10_11/o1_mf_s_1053493634_hr5lt4nt_.bkp comment=NONE
Finished Control File and SPFILE Autobackup at 11-OCT-20

Identify duplicate rows in a table and delete them

I create a table and insert some rows into the table t11.

SQL> create table t11 (name varchar2(20),id number(10) unique,salary decimal(10),address varchar2(10));

Table created.

SQL> BEGIN
  2  FOR I IN 20000 .. 300000 LOOP
  3  INSERT INTO T11 VALUES('XYZ',I,I,'ABC');
  4  END LOOP;
  5  COMMIT;
  6  END;
  7  /

PL/SQL procedure successfully completed.

SQL> BEGIN
  2  FOR I IN 20000 .. 300000 LOOP
  3  INSERT /* +append */ INTO T11 VALUES('XYZ',I,I,'ABC');
  4  END LOOP;
  5   COMMIT;
  6  END;
  7  /

BEGIN
*
ERROR at line 1:
ORA-00001: unique constraint (SYS.SYS_C009998) violated
ORA-06512: at line 3

Since the id column has a unique constraint, the database does not allow duplicate values to be inserted into the table.

So we can either drop or disable the constraint on the id column.

SQL> alter table t11 drop constraint SYS_C009998;

Table altered.
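
For reference, the disable variant (hypothetical here, since the constraint was dropped rather than disabled) would be:

SQL> alter table t11 disable constraint SYS_C009998;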

SQL> BEGIN
  2  FOR I IN 20000 .. 300000 LOOP
  3  INSERT /* +append */ into T11 VALUES('XYZ',I,I,'ABC');
  4  END LOOP;
  5  COMMIT;
  6  END;
  7  /

PL/SQL procedure successfully completed.

I dropped the constraint and inserted the rows again by mistake, without checking what data was already stored in the table.

Now the end user is complaining about duplicate values while checking the application.

First, we have to identify the duplicate values in the table.

For that, we use the column that is supposed to be unique. Since we know id should be unique, we group by id to find values with more than one row, limiting the output to 5 rows.

SQL> select id,count(id) from t11
  2  group by id
  3  having count(id) > 1
  4  fetch next 5 rows only;

        ID  COUNT(ID)
---------- ----------
     20001          2
     20003          2
     20009          2
     20010          2
     20012          2

SQL> select count(*) FROM t11
WHERE rowid not in
(SELECT MAX(rowid)
FROM t11
GROUP BY id);

  COUNT(*)
----------
    280001

I identified a total of 280001 duplicate rows out of 560002, which makes sense: each id from 20000 to 300000 (280001 values) was inserted twice, so one copy of each must go.

Total row count in the table:

SQL> select count(*) from t11;

  COUNT(*)
----------
    560002

Now that we have found the duplicate rows, we can remove them efficiently with a CTAS (create table as select) method.

I create an intermediate table that keeps exactly one row per id.

SQL> create table t11_temp as
  2  select * from t11
  3  WHERE rowid in
  4  (SELECT MAX(rowid)
  5  FROM t11
  6  GROUP BY id);

Table created.

SQL> select count(*) from t11_temp;

  COUNT(*)
----------
    280001

Drop the original table

SQL> drop table t11;

Table dropped.

Recreate the original table with the deduplicated data from the intermediate table

SQL> create table t11 as select * from t11_temp;

Table created.

Check the row count on the new table

SQL> select count(*) from t11;

  COUNT(*)
----------
    280001

Drop the temporary table

SQL> drop table t11_temp;

Table dropped.

We can also use a DELETE statement to remove the duplicate rows, but that method is slower and takes much longer to complete on large tables; see the sketch below.
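
A minimal sketch of that DELETE approach, using the same MAX(rowid) rule as the CTAS above:

SQL> delete from t11
  2  where rowid not in
  3  (select max(rowid)
  4   from t11
  5   group by id);

SQL> commit;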

At the end, we can recreate the constraint on the id column.

SQL> alter table t11 add constraint SYS_C009998 unique(id);

Table altered.

To ensure uniqueness going forward, cross-verify the table.

SQL> select * FROM t11
WHERE rowid not in
(SELECT MAX(rowid)
FROM t11
GROUP BY id);

no rows selected

How many log switches per hour is normal?

We had a production database for one of our Cloudera applications. The application users started scheduled and batch jobs on the database, and the application team complained about slow performance. When we diagnosed the alert log, we found frequent log switches and heavy archive generation that filled the FRA very fast. So we had to increase the number of redo groups and members and increase the size of the redo log files.

Redo logs record the changes happening in a database, stamped with SCN values. LGWR is the process that writes (flushes) redo from the log buffer to the online redo log files every 3 seconds, when the log buffer is one-third full, and when a user issues a commit.

If a production database performs a lot of transactions under peak application workload, the instance generates a high volume of redo, and that redo has to be switched between the groups.
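
To gauge how much redo the instance generates, we can sample the cumulative redo statistics (a sketch; 'redo size' is bytes of redo generated since instance startup):

SQL> select name, value from v$sysstat
  2  where name in ('redo size','redo writes');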

Hence the redo log files have to be sized properly for a production database.
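
As a sizing sanity check, v$instance_recovery reports a suggested redo log size in MB; note the column is only populated when FAST_START_MTTR_TARGET is set:

SQL> select optimal_logfile_size from v$instance_recovery;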

I create a sample table

SQL> create table t1 (emp varchar2(10),emp_id number(10) not null,company varchar2(10),salary decimal(10));

Table created.

I insert a large volume of rows into the table

SQL> set timing on
SQL> set time on
07:46:32 SQL> BEGIN
07:46:34   2  FOR I IN 10000 .. 200000 LOOP
07:46:37   3  INSERT INTO T1 values('xyz',i,'abc',i);
07:46:49   4  END LOOP;
07:46:52   5  COMMIT;
07:46:55   6  END;
07:46:57   7  /


The alert log shows the redo log switching between groups with new sequence numbers, and the redo being archived to physical files.

[oracle@orcl12x ~]$ tail -f /app01/base/diag/rdbms/orcl12x/orcl12x/trace/alert_orcl12x.log

Wed Sep 23 07:43:40 2020
Resize operation completed for file# 3, old size 706560K, new size 716800K
Wed Sep 23 07:47:26 2020
Resize operation completed for file# 1, old size 921600K, new size 931840K
Wed Sep 23 07:47:34 2020
Thread 1 advanced to log sequence 69 (LGWR switch)
  Current log# 3 seq# 69 mem# 0: /app01/base/oradata/orcl12x/redo03.log
Wed Sep 23 07:47:35 2020
Archived Log entry 57 added for thread 1 sequence 68 ID 0x741b2754 dest 1:
Wed Sep 23 07:50:58 2020
Thread 1 advanced to log sequence 70 (LGWR switch)
  Current log# 1 seq# 70 mem# 0: /app01/base/oradata/orcl12x/redo01.log
Wed Sep 23 07:51:01 2020
Archived Log entry 58 added for thread 1 sequence 69 ID 0x741b2754 dest 1:
Wed Sep 23 07:51:08 2020
Thread 1 cannot allocate new log, sequence 71
Checkpoint not complete
  Current log# 1 seq# 70 mem# 0: /app01/base/oradata/orcl12x/redo01.log
Wed Sep 23 07:51:08 2020
Resize operation completed for file# 1, old size 931840K, new size 942080K
Wed Sep 23 07:51:11 2020
Thread 1 advanced to log sequence 71 (LGWR switch)
  Current log# 2 seq# 71 mem# 0: /app01/base/oradata/orcl12x/redo02.log
Wed Sep 23 07:51:15 2020
Archived Log entry 59 added for thread 1 sequence 70 ID 0x741b2754 dest 1:
Wed Sep 23 07:51:21 2020
Thread 1 cannot allocate new log, sequence 72
Checkpoint not complete
  Current log# 2 seq# 71 mem# 0: /app01/base/oradata/orcl12x/redo02.log
Wed Sep 23 07:51:24 2020
Thread 1 advanced to log sequence 72 (LGWR switch)
  Current log# 3 seq# 72 mem# 0: /app01/base/oradata/orcl12x/redo03.log
Wed Sep 23 07:51:27 2020
Archived Log entry 60 added for thread 1 sequence 71 ID 0x741b2754 dest 1:
Wed Sep 23 07:51:36 2020
Thread 1 advanced to log sequence 73 (LGWR switch)
  Current log# 1 seq# 73 mem# 0: /app01/base/oradata/orcl12x/redo01.log
Wed Sep 23 07:51:40 2020
Archived Log entry 61 added for thread 1 sequence 72 ID 0x741b2754 dest 1:
Wed Sep 23 07:51:40 2020
Resize operation completed for file# 1, old size 942080K, new size 952320K
Wed Sep 23 07:51:49 2020
Thread 1 advanced to log sequence 74 (LGWR switch)
  Current log# 2 seq# 74 mem# 0: /app01/base/oradata/orcl12x/redo02.log
Wed Sep 23 07:51:53 2020
Archived Log entry 62 added for thread 1 sequence 73 ID 0x741b2754 dest 1:

The database has three redo log groups, which switch frequently when transactions are running.

08:01:07 SQL> select * from v$log;

    GROUP#    THREAD#  SEQUENCE#      BYTES  BLOCKSIZE    MEMBERS ARC STATUS   FIRST_CHANGE# FIRST_TIM NEXT_CHANGE# NEXT_TIME  CON_ID
---------- ---------- ---------- ---------- ---------- ---------- --- -------- ------------- --------- ------------ --------- -------
         1          1         73   52428800        512          1 YES INACTIVE       2153831 23-SEP-20      2154410 23-SEP-20       0
         2          1         74   52428800        512          1 NO  CURRENT        2154410 23-SEP-20   2.8147E+14                 0
         3          1         72   52428800        512          1 YES INACTIVE       2153252 23-SEP-20      2153831 23-SEP-20       0


Elapsed: 00:00:00.04

The redo logs are too small to absorb the incoming flood of redo, and the frequent log switches delay workload processing and cause performance issues in the application.

DAY                  00  01  02  03  04  05  06  07  08  09  10  11  12  13  14  15  16  17  18  19  20  21  22  23
-------------------- --- --- --- --- --- --- --- --- --- --- --- --- --- --- --- --- --- --- --- --- --- --- --- ---
2020-SEP-23           0   0   0   0   0   0  17  33   0   0   0   0   0   0   0   0   0   0   0   0   0   0   0   0
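
These hourly counts come from v$log_history; a compact, unpivoted sketch of the same report:

SQL> select to_char(first_time,'YYYY-MON-DD HH24') hr, count(*) switches
  2  from v$log_history
  3  group by to_char(first_time,'YYYY-MON-DD HH24')
  4  order by 1;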

We can observe a high number of log switches caused by the undersized redo log groups, with a clear peak from hour 06 to 08. Plotting the counts as a graph makes the pattern simple to visualize; a picture is worth a thousand words.

As a rule of thumb, redo log switches should be at most about 10 per hour.

If there are fewer than 10 log switches, say 5 per hour, then the redo logs are sized well for the application workload.

When we create a database, the default logfile size is 50MB. This is not sufficient for a production database with peak workloads.

SQL> select a.group#,a.bytes/1024/1024 from v$log a
  2  inner join v$logfile b on a.group# = b.group#;

    GROUP# A.BYTES/1024/1024
---------- -----------------
         3                50
         2                50
         1                50

So we can increase the size of the redo logs to accommodate the incoming redo and reduce the log switch frequency for better database performance.

SQL> select session_id,sample_time,session_state,event,wait_time,time_waited from v$active_session_history where event like '%log%' order by time_waited desc;

SESSION_ID SAMPLE_TIME               SESSION EVENT                           WAIT_TIME TIME_WAITED
---------- ------------------------- ------- ------------------------------ ---------- -----------
        13 23-SEP-20 07.51.53.002 AM WAITING log file parallel write                0      3764935
        13 23-SEP-20 07.51.15.007 AM WAITING log file parallel write                0      3390574
        13 23-SEP-20 07.51.39.007 AM WAITING log file parallel write                0      3366487
         1 23-SEP-20 07.51.53.002 AM WAITING log file sync                          0      2953591
        13 23-SEP-20 07.51.26.007 AM WAITING log file parallel write                0      1747222
        13 23-SEP-20 07.47.34.949 AM WAITING log file parallel write                0      1452470
        13 23-SEP-20 07.51.00.997 AM WAITING log file parallel write                0      1179508
        48 23-SEP-20 10.33.53.221 AM WAITING log file sync                          0      1166026
        13 23-SEP-20 10.33.53.221 AM WAITING log file parallel write                0      1164236

Multiple log file sync and log file parallel write wait events occurred during the DML transaction, delaying the acknowledgement to user sessions that their redo had been written to disk.

SQL> select * from v$log;

    GROUP#    THREAD#  SEQUENCE#      BYTES  BLOCKSIZE    MEMBERS ARC STATUS   FIRST_CHANGE# FIRST_TIM NEXT_CHANGE# NEXT_TIME  CON_ID
---------- ---------- ---------- ---------- ---------- ---------- --- -------- ------------- --------- ------------ --------- -------
         1          1         73   52428800        512          1 YES INACTIVE       2153831 23-SEP-20      2154410 23-SEP-20       0
         2          1         74   52428800        512          1 NO  CURRENT        2154410 23-SEP-20   2.8147E+14                 0
         3          1         72   52428800        512          1 YES INACTIVE       2153252 23-SEP-20      2153831 23-SEP-20       0


SQL> alter database drop logfile group 1;

Database altered.

SQL> alter database add logfile group 1 '/app01/base/oradata/orcl12x/redo01.log';

Database altered.

SQL> alter database drop logfile group 2;
alter database drop logfile group 2
*
ERROR at line 1:
ORA-01623: log 2 is current log for instance orcl12x (thread 1) - cannot drop
ORA-00312: online log 2 thread 1: '/app01/base/oradata/orcl12x/redo02.log'


SQL> alter database drop logfile group 3;

Database altered.

SQL> alter database add logfile group 3 '/app01/base/oradata/orcl12x/redo03.log' size 500M;

Database altered.

Switch the logfile to bring the current redo log group to the inactive state; a group must be inactive before it can be dropped.

SQL> alter system switch logfile;

System altered.

SQL> /
/

System altered.

SQL> alter database drop logfile group 2;

Database altered.

SQL> alter database add logfile group 2 '/app01/base/oradata/orcl12x/redo02'  size 500M;

Database altered.

All the redo log groups are resized, and more logfile groups are added.

SQL> alter database add logfile group 4 '/app01/base/oradata/orcl12x/redo04'  size 500M;

Database altered.

SQL> alter database add logfile group 5 '/app01/base/oradata/orcl12x/redo05'  size 500M;

Database altered.

SQL> alter database add logfile group 6 '/app01/base/oradata/orcl12x/redo06'  size 500M;

Database altered.

We insert bulk rows into the table t1 again.

SQL> BEGIN
  2  FOR I IN 30000 .. 400000 LOOP
  3  INSERT INTO T1 values('xyz',i,'abc',i);
  4  END LOOP;
  5  COMMIT;
  6  END;
  7  /

PL/SQL procedure successfully completed.

We can see the reduction in log switch frequency: there is only one log switch, during hour 12.

Wed Sep 23 12:56:20 2020
Thread 1 cannot allocate new log, sequence 80
Private strand flush not complete
  Current log# 3 seq# 79 mem# 0: /app01/base/oradata/orcl12x/redo03
Wed Sep 23 12:56:23 2020
Thread 1 advanced to log sequence 80 (LGWR switch)
  Current log# 2 seq# 80 mem# 0: /app01/base/oradata/orcl12x/redo02
Wed Sep 23 12:56:23 2020
DAY                  00  01  02  03  04  05  06  07  08  09  10  11  12  13  14  15  16  17  18  19  20  21  22  23
-------------------- --- --- --- --- --- --- --- --- --- --- --- --- --- --- --- --- --- --- --- --- --- --- --- ---
2020-SEP-23           0   0   0   0   0   0  17  33   0   0   0   0  1<-  0   0   0   0   0   0   0   0   0   0   0


The above scenario is just for testing purposes and not from a production database.

Calculate the bandwidth of the IO subsystem from the Linux OS

We can measure the IO rate of the disk from the OS to get an estimate of read/write throughput to and from disk.

Locate a particular datafile on any of the storage partitions

[oracle@orcl12x ~]$ locate *.dbf
/app01/oradata/orcl12x/temp01.dbf
/app01/product/12.1.0/dbhome_1/dbs/cntrlorcl12x.dbf
/data01/oracle/oradata/orcl12x/datafile/ORCL12X/datafile/o1_mf_sysaux_hkg3odmm_.dbf
/data01/oracle/oradata/orcl12x/datafile/ORCL12X/datafile/o1_mf_system_hkg3k7kk_.dbf
/data01/oracle/oradata/orcl12x/datafile/ORCL12X/datafile/o1_mf_undotbs1_hkg3pxkh_.dbf
/data01/oracle/oradata/orcl12x/datafile/ORCL12X/datafile/o1_mf_users_hkg3v57d_.dbf
/data01/oracle/oradata/orcl12x/datafile/ORCL12X/datafile/testencry.dbf

Use dd to read the datafile and write it to an output file, and use time to measure the time taken for the round trip.

[oracle@orcl12x ~]$ time dd if=/data01/oracle/oradata/orcl12x/datafile/ORCL12X/datafile/o1_mf_sysaux_hkg3odmm_.dbf of=IORATE.out
1679376+0 records in
1679376+0 records out
859840512 bytes (860 MB) copied, 48.0447 s, 17.9 MB/s

real    0m48.070s
user    0m0.682s
sys     0m8.571s
[oracle@orcl12x ~]$

Here we get a round-trip time of ~48 seconds

IO rate = filesize/total time taken for I/O

Calculating the filesize:

[oracle@orcl12x ~]$ du -sh /data01/oracle/oradata/orcl12x/datafile/ORCL12X/datafile/o1_mf_sysaux_hkg3odmm_.dbf
821M    /data01/oracle/oradata/orcl12x/datafile/ORCL12X/datafile/o1_mf_sysaux_hkg3odmm_.dbf

filesize = 821M, so 821/48 ≈ 17 MB/sec

Thus we get a bandwidth estimate of the IO subsystem.
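
Note that this simple copy goes through the Linux page cache, so repeated runs can look faster than the disk really is. A read-only variant that bypasses the cache (a sketch; GNU dd, with bs and iflag illustrative):

[oracle@orcl12x ~]$ time dd if=/data01/oracle/oradata/orcl12x/datafile/ORCL12X/datafile/o1_mf_sysaux_hkg3odmm_.dbf of=/dev/null bs=1M iflag=direct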

Cloning Oracle home

When we migrate a database to a new server, for example because the old physical server is out of vendor support, it is a good approach to clone the Oracle home from source to destination and then recover the database. Whether to install fresh database software or to clone is up to the individual.

Check the oracle home of the database

[oracle@orcl12x ~]$ echo $ORACLE_HOME
/app01/product/12.1.0/dbhome_1/

Navigate to the Oracle home directory

[oracle@orcl12x ~]$ cd $ORACLE_HOME
[oracle@orcl12x dbhome_1]$ cd ..
[oracle@orcl12x 12.1.0]$

Check space on the FS

[oracle@orcl12x 12.1.0]$ df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/sda1        20G   10G  8.2G  55% /
tmpfs           2.0G  491M  1.5G  25% /dev/shm
/dev/sda2        15G  7.2G  6.4G  54% /app01
/dev/sda5        12G  6.3G  4.7G  58% /data01

Tar the oracle home

[oracle@orcl12x 12.1.0]$ tar -cvf /data01/dbclone.tar ./dbhome_1 >/dev/null 2>/dev/null

Copy the tar file to the destination host

[oracle@orcl12x 12.1.0]$ scp /data01/dbclone.tar oracle@192.168.56.36:/data01
The authenticity of host 'xxx (xxx)' can't be established.
RSA key fingerprint is 8b:ad:8c:a1:be:23:91:28:ca:5a:8c:4e:1c:c9:00:1b.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'xxx' (RSA) to the list of known hosts.
oracle@192.168.56.36's password:
dbclone.tar                                    63% 2995MB   7.2MB/s   03:58 ETA

Create the Oracle home directory on the destination

[root@exdbadm02 bin]# mkdir -p /app01/product/12.1.0/

Extract the tar file

[root@exdbadm02 bin]# tar -xvf dbclone.tar

Check the inventory file; if the path in it does not exist, the clone will fail

[oracle@exdbadm02 bin]$ cat /etc/oraInst.loc
inventory_loc=/app01/orainventory
install_group=oinstall
inst_group=oinstall

Create the Oracle base directory on the destination

[root@exdbadm02 app01]# mkdir base

Give appropriate permissions

[root@exdbadm02 app01]# chmod -R 777 base/

Go to the clone.pl script directory

[root@exdbadm02 app01]# su - oracle
[oracle@exdbadm02 ~]$ cd /app01/product/12.1.0/dbhome_1/clone/bin/

Clone the Oracle 12c home

[oracle@exdbadm02 bin]$ perl clone.pl "ORACLE_BASE=/app01/base" "ORACLE_HOME=/app01/product/12.1.0/dbhome_1" "ORACLE_HOME_NAME=orahome"
./runInstaller -clone -waitForCompletion  "ORACLE_BASE=/app01/base" "ORACLE_HOME=/app01/product/12.1.0/dbhome_1" "ORACLE_HOME_NAME=orahome" -silent -paramFile /app01/product/12.1.0/dbhome_1/clone/clone_oraparam.ini
Starting Oracle Universal Installer...

Checking Temp space: must be greater than 500 MB.   Actual 3130 MB    Passed
Checking swap space: must be greater than 500 MB.   Actual 2047 MB    Passed
Preparing to launch Oracle Universal Installer from /tmp/OraInstall2020-09-08_09-37-21AM. Please wait ...You can find the log of this install session at:
 /apps01/orainventory/logs/cloneActions2020-09-08_09-37-21AM.log
..................................................   5% Done.
..................................................   10% Done.
..................................................   15% Done.
..................................................   20% Done.
..................................................   25% Done.
..................................................   30% Done.
..................................................   35% Done.
..................................................   40% Done.
..................................................   45% Done.
..................................................   50% Done.
..................................................   55% Done.
..................................................   60% Done.
..................................................   65% Done.
..................................................   70% Done.
..................................................   75% Done.
..................................................   80% Done.
..................................................   85% Done.
..........
Copy files in progress.

Copy files successful.

Link binaries in progress.

Link binaries successful.

Setup files in progress.
WARNING:This install was unsuccessful because: The response filename you have specified for silent operation of the Net8 Configuration Assistant (/app01/product/12.1.0/dbhome_1/network/install/netca_typ.rsp) does not exist on the system.  Please re-check the value you entered for "s_responseFileName" in the [oracle.assistants.netca.client] section of the OUI response file and try again.
Would you like to end this session now?

Setup files successful.

Setup Inventory in progress.

Setup Inventory successful.

Finish Setup successful.
The cloning of orahome was successful.
Please check '/apps01/orainventory/logs/cloneActions2020-09-08_09-37-21AM.log' for more details.

Setup Oracle Base in progress.

Setup Oracle Base successful.
..................................................   95% Done.

As a root user, execute the following script(s):
        1. /app01/product/12.1.0/dbhome_1/root.sh



..................................................   100% Done.

Run the root.sh script as root to complete the post-clone setup

[root@exdbadm02 bin]# /app01/product/12.1.0/dbhome_1/root.sh
Check /app01/product/12.1.0/dbhome_1/install/root_exdbadm02_2020-09-08_09-46-47.log for the output of root script

[root@exdbadm02 bin]# cat /app01/product/12.1.0/dbhome_1/install/root_exdbadm02_2020-09-08_09-46-47.log
Performing root user operation.

The following environment variables are set as:
    ORACLE_OWNER= oracle
    ORACLE_HOME=  /app01/product/12.1.0/dbhome_1
   Copying dbhome to /usr/local/bin ...
   Copying oraenv to /usr/local/bin ...
   Copying coraenv to /usr/local/bin ...

Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.

We are good to install a new database on the cloned home!!!

./runInstaller: line 106: /: No such file or directory

Junior DBAs sometimes struggle and scratch their heads over simple errors without knowing the cause.

Instead of searching the internet for the error, it is better to read the error itself and identify the cause.

I tried to execute runInstaller to install the grid software from the directory where the software files actually exist, but the directory name contained spaces (it was copied from Windows using WinSCP), like this:

[root@exadb2 ~]# cd /cell/GRIDDATABASE\ 11G\ 64\ BIT/

The installation zip should be fully extracted before the files are copied to Linux. runInstaller treats the spaces in the directory name as separators and cannot resolve the absolute path from the root folder.

Other points to keep in mind are appropriate permissions and ownership on the folders, along with a correct Oracle inventory file.

Once I renamed the directory to a single word, runInstaller was able to locate OUI via the absolute path.

[root@exadb2 GRIDDATABASE 11G 64 BIT]# cd ..
[root@exadb2 cell]# mv GRIDDATABASE\ 11G\ 64\ BIT/ software
[root@exadb2 cell]# cd software/

[root@exadb2 software]# chmod -R 777 *
[root@exadb2 software]# cd grid/
[root@exadb2 grid]# ./runInstaller

The user is root. Oracle Universal Installer cannot continue installation if the user is root.
: No such file or directory
[root@exadb2 grid]# su - oracle
[oracle@exadb2 ~]$
[oracle@exadb2 ~]$ cd /cell/software/grid/
[oracle@exadb2 grid]$ ./runInstaller
You do not have sufficient permissions to access the inventory '/oracle/orainventory'. Installation cannot continue. It is required that the primary group of the install user is same as the inventory owner group. Make sure that the install user is part of the inventory owner group and restart the installer.: No such file or directory
[oracle@exadb2 grid]$ vi /etc/oraInst.loc
[oracle@exadb2 grid]$
[oracle@exadb2 grid]$ su
Password:
[root@exadb2 grid]#  vi /etc/oraInst.loc
[root@exadb2 grid]#
[root@exadb2 grid]# su - oracle
[oracle@exadb2 ~]$ cd /cell/
base/         home/         lost+found/   orainventory/ software/
[oracle@exadb2 ~]$ cd /cell/software/grid/
[oracle@exadb2 grid]$ ./runInstaller
Starting Oracle Universal Installer...

Checking Temp space: must be greater than 120 MB.   Actual 9496 MB    Passed
Checking swap space: must be greater than 150 MB.   Actual 2047 MB    Passed
Checking monitor: must be configured to display at least 256 colors

Does hard parsing lead to sort in memory???

I was running a simple SQL SELECT query to fetch some details from a table, and I generated an execution plan to check the optimizer's plan for the query processing.

I observed some interesting details in the execution plan statistics.

SQL> select * from t2 where id between 10000 and 20000;

20002 rows selected.


Execution Plan
----------------------------------------------------------
Plan hash value: 1513984157

--------------------------------------------------------------------------
| Id  | Operation         | Name | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |      | 10102 |   256K| 11723   (1)| 00:00:01 |
|*  1 |  TABLE ACCESS FULL| T2   | 10102 |   256K| 11723   (1)| 00:00:01 |
--------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - filter("ID"<=20000 AND "ID">=10000)


Statistics
----------------------------------------------------------
         47  recursive calls
          0  db block gets
      44387  consistent gets
      43008  physical reads
          0  redo size
     455310  bytes sent via SQL*Net to client
      15214  bytes received via SQL*Net from client
       1335  SQL*Net roundtrips to/from client
          3  sorts (memory)  <--------
          0  sorts (disk)
      20002  rows processed

I can observe 3 sorts in memory (PGA) when I ran the query for the first time.

I was looking for the reason behind the sorts, since the query has no ORDER BY or GROUP BY clause.

When I ran the query the second time, there was no sorting in memory!! And there were no recursive calls, which means the statement was soft parsed and no recursive data dictionary queries were needed!!

SQL> /

20002 rows selected.


Execution Plan
----------------------------------------------------------
Plan hash value: 1513984157

--------------------------------------------------------------------------
| Id  | Operation         | Name | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |      | 10102 |   256K| 11723   (1)| 00:00:01 |
|*  1 |  TABLE ACCESS FULL| T2   | 10102 |   256K| 11723   (1)| 00:00:01 |
--------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - filter("ID"<=20000 AND "ID">=10000)


Statistics
----------------------------------------------------------
          0  recursive calls
          0  db block gets
      44341  consistent gets
      43007  physical reads
          0  redo size
     455310  bytes sent via SQL*Net to client
      15214  bytes received via SQL*Net from client
       1335  SQL*Net roundtrips to/from client
          0  sorts (memory) <-----
          0  sorts (disk)
      20002  rows processed

This shows that whenever a SQL statement runs, Oracle searches the library cache for an existing cursor matching the statement's hash value. If the hash value does not exist in the library cache, the statement has to be hard parsed, and the recursive data dictionary queries issued during that hard parse are what produced the recursive calls and the sorts (memory) on the first run.
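
To confirm the parse behaviour, we can check the cursor's counters in v$sql (a sketch; the sql_text filter is illustrative):

SQL> select sql_id, parse_calls, loads, executions
  2  from v$sql
  3  where sql_text like 'select * from t2 where id between%';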

Drop online redolog group

It is risky to deal with the online redo log files of a database: they record the active transaction changes and, depending on their state, are needed for media, block, and crash recovery.

A redo log group in a typical Oracle database is in one of four states:

1) active 2) inactive 3) current 4) unused

active – still required for instance crash recovery or block recovery in case of failure

inactive – no longer required for recovery; LGWR writes no logs to it

current – the group LGWR is currently writing to

unused – a freshly added group that has never been written to

If we want to drop an online redo log group, we first have to bring that group to the inactive state.

SQL> select a.bytes/1024/1024,a.thread#,a.sequence#,a.thread#,a.status,b.member from v$log a,v$logfile b where a.GROUP# = b.GROUP#;

A.BYTES/1024/1024    THREAD#  SEQUENCE#    THREAD# STATUS           MEMBER
----------------- ---------- ---------- ---------- ---------------- ----------------------
               50          1        213          1 INACTIVE         /app01/oradata/orcl12x/redo03.log
               50          1        215          1 CURRENT          /app01/oradata/orcl12x/redo02.log
               50          1        214          1 INACTIVE         /app01/oradata/orcl12x/redo01.log

If the redo logs are switching frequently due to heavy transactions, issue a checkpoint to flush the changed buffers to the datafiles.

SQL> alter system checkpoint;

System altered.

Oracle forces a checkpoint on the instance and writes the dirty buffers to disk.

We can drop the logfiles once the checkpoint completes.

SQL> select * from v$log;

    GROUP#    THREAD#  SEQUENCE#      BYTES  BLOCKSIZE    MEMBERS ARC STATUS   FIRST_CHANGE# FIRST_TIM NEXT_CHANGE# NEXT_TIME  CON_ID
---------- ---------- ---------- ---------- ---------- ---------- --- -------- ------------- --------- ------------ --------- -------
         1          1        214   52428800        512          1 NO  INACTIVE       3922477 26-AUG-20      3931944 26-AUG-20       0
         2          1        215   52428800        512          1 NO  CURRENT        3931944 26-AUG-20   2.8147E+14                 0
         3          1        213   52428800        512          1 NO  INACTIVE       3811949 26-AUG-20      3922477 26-AUG-20       0

We can drop either group 1 or group 3, but not both, since the instance needs at least two redo log groups for logging; then we can add new groups.

If frequent log switches happen again, we can add two more groups first and then drop the old ones, as sketched below.
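
A sketch of that add-first approach (group numbers, paths, and sizes are illustrative):

SQL> alter database add logfile group 4 '/app01/oradata/orcl12x/redo04.log' size 200M;
SQL> alter database add logfile group 5 '/app01/oradata/orcl12x/redo05.log' size 200M;
SQL> alter database drop logfile group 1;
SQL> alter database drop logfile group 3;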

SQL> alter database drop logfile group 1;

Database altered.

SQL> alter database drop logfile group 3;
alter database drop logfile group 3
*
ERROR at line 1:
ORA-01567: dropping log 3 would leave less than 2 log files for instance
orcl12x (thread 1)
ORA-00312: online log 3 thread 1: '/app01/oradata/orcl12x/redo03.log'

Once we drop a logfile group, we can add it back with a larger size.

SQL> alter database add logfile group 1 '/app01/oradata/orcl12x/redo01.log' size 200M;

Database altered.

Now verify the newly added logfile.

SQL> select * from v$log;

    GROUP#    THREAD#  SEQUENCE#      BYTES  BLOCKSIZE    MEMBERS ARC STATUS   FIRST_CHANGE# FIRST_TIM NEXT_CHANGE# NEXT_TIME  CON_ID
---------- ---------- ---------- ---------- ---------- ---------- --- -------- ------------- --------- ------------ --------- -------
         1          1          0  209715200        512          1 YES UNUSED               0                      0                 0
         2          1        215   52428800        512          1 NO  CURRENT        3931944 26-AUG-20   2.8147E+14                 0
         3          1        213   52428800        512          1 NO  INACTIVE       3811949 26-AUG-20      3922477 26-AUG-20       0