Archives November 2020

[INS-41208] None of the available network subnets is marked for use by Oracle Automatic Storage Management (ASM)

Cause – At least one of the available network subnets should be configured for use by Oracle Automatic Storage Management (ASM).

Action – Configure one of the available network subnets for Oracle Automatic Storage Management (ASM) use if you intend to configure Near Oracle ASM or Client Oracle ASM

This error occurs when installing a Flex Cluster during Oracle RAC setup.

Solution:

In the installer's network interface usage screen, choose the interface for both ASM and the private interconnect (mark it as "ASM & Private").

[INS-41107] The interface (eth1) chosen as Public or Private is not on a shared subnet. Nodes not defining are: [rac2]

The network interface subnets are not the same on both RAC nodes:

Cause – The Public or Private interface on the nodes indicated are on a different subnet than the ones used on other nodes.
Action – Ensure that the specified Public or Private interface share the same subnet across the nodes.

NODE 1:

[root@rac1 oracle]# ifconfig

eth1      Link encap:Ethernet  HWaddr 08:00:27:80:C4:46
   -----> inet addr:192.168.57.110  <-------  Bcast:192.168.57.255  Mask:255.255.255.0
          inet6 addr: fe80::a00:27ff:fe80:c446/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:824 errors:0 dropped:0 overruns:0 frame:0
          TX packets:77 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:185320 (180.9 KiB)  TX bytes:13507 (13.1 KiB)

NODE 2:

[oracle@rac2 ~]$ ifconfig


eth2      Link encap:Ethernet  HWaddr 08:00:27:71:E8:1A
         -----> inet addr:192.168.56.194 <------- Bcast:192.168.56.255  Mask:255.255.255.0
          inet6 addr: fe80::a00:27ff:fe71:e81a/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:71 errors:0 dropped:0 overruns:0 frame:0
          TX packets:51 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:12448 (12.1 KiB)  TX bytes:15858 (15.4 KiB)
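The mismatch is visible once each address is ANDed with its netmask — a quick sketch in bash, using the IPs from the ifconfig output above:

```shell
# Compute the network address (IP AND netmask) for each node's interface.
net() {
  local IFS=.
  read -r i1 i2 i3 i4 <<< "$1"
  read -r m1 m2 m3 m4 <<< "$2"
  echo "$((i1 & m1)).$((i2 & m2)).$((i3 & m3)).$((i4 & m4))"
}
net 192.168.57.110 255.255.255.0   # rac1 eth1 -> 192.168.57.0
net 192.168.56.194 255.255.255.0   # rac2 eth2 -> 192.168.56.0, a different subnet
```

Since 192.168.57.0 and 192.168.56.0 differ, the installer correctly reports that the interfaces are not on a shared subnet.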

Solution:

Contact your network admin.

Identify the MAC addresses of both nodes' NIC interface cards.

Configure the Ethernet interfaces on the same subnet.

Reconfigure the IP address of the mismatched node.
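On RHEL/OEL 6, the change is made persistent in the interface's ifcfg file — a hypothetical sketch that edits a scratch copy (the new address 192.168.57.111 is an assumption; on the real node the file is /etc/sysconfig/network-scripts/ifcfg-eth2):

```shell
# Demonstrate the edit against a scratch copy of rac2's ifcfg-eth2.
cfg=/tmp/ifcfg-eth2
printf 'DEVICE=eth2\nBOOTPROTO=static\nIPADDR=192.168.56.194\nNETMASK=255.255.255.0\nONBOOT=yes\n' > "$cfg"
# Move the interface onto the same subnet as rac1 (192.168.57.0/24).
sed -i 's/^IPADDR=.*/IPADDR=192.168.57.111/' "$cfg"
grep '^IPADDR' "$cfg"
# On the real node, apply with: ifdown eth2 && ifup eth2 (or service network restart)
```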

[INS-40401] The Installer has detected a configured Oracle Clusterware home on the system.

The RAC installation failed due to a misconfiguration.

When we reinstall RAC, we see the error above.

It is better to deconfigure and clean up the partially configured debris in the Grid home.

Deconfigure Oracle High Availability Services using the Perl script in the Grid home:

[root@rac1 grid]# cd /apps01/home/12.1.0/grid/crs/install/
[root@rac1 install]# ./roothas.pl -deconfig -force
Using configuration parameter file: ./crsconfig_params
CRS-4535: Cannot communicate with Cluster Ready Services
CRS-4000: Command Stop failed, or completed with errors.
CRS-4535: Cannot communicate with Cluster Ready Services
CRS-4000: Command Delete failed, or completed with errors.
CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'rac1'
CRS-2673: Attempting to stop 'ora.drivers.acfs' on 'rac1'
CRS-2673: Attempting to stop 'ora.mdnsd' on 'rac1'
CRS-2673: Attempting to stop 'ora.gpnpd' on 'rac1'
CRS-2673: Attempting to stop 'ora.evmd' on 'rac1'
CRS-2673: Attempting to stop 'ora.gipcd' on 'rac1'
CRS-2673: Attempting to stop 'ora.cssdmonitor' on 'rac1'
CRS-2677: Stop of 'ora.drivers.acfs' on 'rac1' succeeded
CRS-2677: Stop of 'ora.cssdmonitor' on 'rac1' succeeded
CRS-2677: Stop of 'ora.mdnsd' on 'rac1' succeeded
CRS-2677: Stop of 'ora.gpnpd' on 'rac1' succeeded
CRS-2677: Stop of 'ora.evmd' on 'rac1' succeeded
CRS-2677: Stop of 'ora.gipcd' on 'rac1' succeeded
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'rac1' has completed
CRS-4133: Oracle High Availability Services has been stopped.
2020/11/29 12:53:46 CLSRSC-337: Successfully deconfigured Oracle Restart stack

Run the deinstall script to clean up the failed inventory:

[oracle@rac1 ~]$ cd /apps01/home/12.1.0/grid/deinstall/
[oracle@rac1 deinstall]$ ./deinstall
Checking for required files and bootstrapping ...
Please wait ...
Location of logs /tmp/deinstall2020-11-29_00-56-24PM/logs/

############ ORACLE DECONFIG TOOL START ############


######################### DECONFIG CHECK OPERATION START #########################
## [START] Install check configuration ##


Checking for existence of the Oracle home location /apps01/home/12.1.0/grid
Oracle Home type selected for deinstall is: Oracle Grid Infrastructure for a Cluster
Oracle Base selected for deinstall is: /apps01/base
Checking for existence of central inventory location /apps01/orainventory
Checking for existence of the Oracle Grid Infrastructure home
The following nodes are part of this cluster: rac1,rac2
Active Remote Nodes are rac2
Checking for sufficient temp space availability on node(s) : 'rac1,rac2'

## [END] Install check configuration ##

Traces log file: /tmp/deinstall2020-11-29_00-56-24PM/logs//crsdc_2020-11-29_12-57-45PM.log

Network Configuration check config START

Network de-configuration trace file location: /tmp/deinstall2020-11-29_00-56-24PM/logs/netdc_check2020-11-29_12-57-46-PM.log

Specify all RAC listeners (do not include SCAN listener) that are to be de-configured. Enter .(dot) to deselect all. [LISTENER,LISTENER_SCAN3,LISTENER_SCAN2,LISTENER_SCAN1]:

Network Configuration check config END

Asm Check Configuration START

ASM de-configuration trace file location: /tmp/deinstall2020-11-29_00-56-24PM/logs/asmcadc_check2020-11-29_12-58-06-PM.log

ASM configuration was not detected in this Oracle home. Was ASM configured in this Oracle home (y|n) [n]: y
Automatic Storage Management (ASM) instance is detected in this Oracle home /apps01/home/12.1.0/grid.
ASM Diagnostic Destination : /apps01/base
ASM Diskgroups :
ASM diskstring : /dev/oracleasm/disks/*
Diskgroups will not be dropped
 If you want to retain the existing diskgroups or if any of the information detected is incorrect, you can modify by entering 'y'. Do you  want to modify above information (y|n) [n]: y
Is Grid Infrastructure Storage(OCR) in ASM diskgroup (y|n) [n]: y
Enter the OCR/Voting Disk diskgroup name []: OCR
Specify the ASM Diagnostic Destination [/apps01/base]:
Specify the diskstring [/dev/oracleasm/disks/*]:
Specify the diskgroups that are managed by this ASM instance []:

Database Check Configuration START

Database de-configuration trace file location: /tmp/deinstall2020-11-29_00-56-24PM/logs/databasedc_check2020-11-29_12-58-32-PM.log

Oracle Grid Management database was not found in this Grid Infrastructure home

Database Check Configuration END

######################### DECONFIG CHECK OPERATION END #########################


####################### DECONFIG CHECK OPERATION SUMMARY #######################
Oracle Grid Infrastructure Home is:
The following nodes are part of this cluster: rac1,rac2
Active Remote Nodes are rac2
The cluster node(s) on which the Oracle home deinstallation will be performed are:rac1,rac2
Oracle Home selected for deinstall is: /apps01/home/12.1.0/grid
Inventory Location where the Oracle home registered is: /apps01/orainventory
Following RAC listener(s) will be de-configured: LISTENER,LISTENER_SCAN3,LISTENER_SCAN2,LISTENER_SCAN1
ASM instance will be de-configured from this Oracle home
Oracle Grid Management database was not found in this Grid Infrastructure home
Do you want to continue (y - yes, n - no)? [n]: y
A log of this session will be written to: '/tmp/deinstall2020-11-29_00-56-24PM/logs/deinstall_deconfig2020-11-29_00-57-38-PM.out'
Any error messages from this session will be written to: '/tmp/deinstall2020-11-29_00-56-24PM/logs/deinstall_deconfig2020-11-29_00-57-38-PM.err'

######################## DECONFIG CLEAN OPERATION START ########################
Database de-configuration trace file location: /tmp/deinstall2020-11-29_00-56-24PM/logs/databasedc_clean2020-11-29_12-58-35-PM.log
ASM de-configuration trace file location: /tmp/deinstall2020-11-29_00-56-24PM/logs/asmcadc_clean2020-11-29_12-58-35-PM.log
ASM Clean Configuration START
ASM Clean Configuration END

Network Configuration clean config START

Network de-configuration trace file location: /tmp/deinstall2020-11-29_00-56-24PM/logs/netdc_clean2020-11-29_12-58-36-PM.log

De-configuring RAC listener(s): LISTENER,LISTENER_SCAN3,LISTENER_SCAN2,LISTENER_SCAN1

De-configuring listener: LISTENER
    Stopping listener: LISTENER
    Warning: Failed to stop listener. Listener may not be running.
Listener de-configured successfully.

De-configuring listener: LISTENER_SCAN3
    Stopping listener: LISTENER_SCAN3
    Warning: Failed to stop listener. Listener may not be running.
Listener de-configured successfully.

De-configuring listener: LISTENER_SCAN2
    Stopping listener: LISTENER_SCAN2
    Warning: Failed to stop listener. Listener may not be running.
Listener de-configured successfully.

De-configuring listener: LISTENER_SCAN1
    Stopping listener: LISTENER_SCAN1
    Warning: Failed to stop listener. Listener may not be running.
Listener de-configured successfully.

De-configuring Naming Methods configuration file on all nodes...
Naming Methods configuration file de-configured successfully.

De-configuring Local Net Service Names configuration file on all nodes...
Local Net Service Names configuration file de-configured successfully.

De-configuring Directory Usage configuration file on all nodes...
Directory Usage configuration file de-configured successfully.

De-configuring backup files on all nodes...
Backup files de-configured successfully.

The network configuration has been cleaned up successfully.

Network Configuration clean config END


---------------------------------------->

The deconfig command below can be executed in parallel on all the remote nodes. Execute the command on  the local node after the execution completes on all the remote nodes.

Run the following command as the root user or the administrator on node "rac2".

/tmp/deinstall2020-11-29_00-56-24PM/perl/bin/perl -I/tmp/deinstall2020-11-29_00-56-24PM/perl/lib -I/tmp/deinstall2020-11-29_00-56-24PM/crs/install /tmp/deinstall2020-11-29_00-56-24PM/crs/install/rootcrs.pl -force  -deconfig -paramfile "/tmp/deinstall2020-11-29_00-56-24PM/response/deinstall_OraGI12Home1.rsp"

Press Enter after you finish running the above commands

<----------------------------------------

Run the rootcrs.pl deconfig script on each node:

[root@rac1 oracle]# /tmp/deinstall2020-11-29_00-56-24PM/perl/bin/perl -I/tmp/deinstall2020-11-29_00-56-24PM/perl/lib -I/tmp/deinstall2020-11-29_00-56-24PM/crs/install /tmp/deinstall2020-11-29_00-56-24PM/crs/install/rootcrs.pl -force  -deconfig -paramfile "/tmp/deinstall2020-11-29_00-56-24PM/response/deinstall_OraGI12Home1.rsp"
Using configuration parameter file: /tmp/deinstall2020-11-29_00-56-24PM/response/deinstall_OraGI12Home1.rsp
Usage: srvctl <command> <object> [<options>]
    commands: enable|disable|start|stop|status|add|remove|modify|update|getenv|setenv|unsetenv|config|upgrade|downgrade
    objects: database|service|asm|diskgroup|listener|home|ons
For detailed help on each command and object and its options use:
  srvctl <command> -help [-compatible] or
  srvctl <command> <object> -help [-compatible]
PRKO-2012 : nodeapps object is not supported in Oracle Restart
CRS-4047: No Oracle Clusterware components configured.
CRS-4000: Command Stop failed, or completed with errors.
2020/11/29 13:01:12 CLSRSC-463: The deconfiguration or downgrade script could not stop current Oracle Clusterware stack.

2020/11/29 13:01:12 CLSRSC-4006: Removing Oracle Trace File Analyzer (TFA) Collector.

2020/11/29 13:01:42 CLSRSC-4007: Successfully removed Oracle Trace File Analyzer (TFA) Collector.

Failure in execution (rc=-1, 0, 2) for command /etc/init.d/ohasd deinstall
2020/11/29 13:01:46 CLSRSC-557: Oracle Clusterware stack on this node has been successfully deconfigured. There were some errors which can be ignored.

2020/11/29 13:01:49 CLSRSC-46: Error: '/apps01/base/crsdata/rac1/crsconfig/cluutil2.log' does not exist

2020/11/29 13:01:49 CLSRSC-46: Error: '/apps01/base/crsdata/rac1/crsconfig/cluutil2.log' does not exist

Remove all the Clusterware resources and files under the Grid home on node 2:

[root@rac2 oracle]# /tmp/deinstall2020-11-29_00-56-24PM/perl/bin/perl -I/tmp/deinstall2020-11-29_00-56-24PM/perl/lib -I/tmp/deinstall2020-11-29_00-56-24PM/crs/install /tmp/deinstall2020-11-29_00-56-24PM/crs/install/rootcrs.pl -force  -deconfig -paramfile "/tmp/deinstall2020-11-29_00-56-24PM/response/deinstall_OraGI12Home1.rsp"
Using configuration parameter file: /tmp/deinstall2020-11-29_00-56-24PM/response/deinstall_OraGI12Home1.rsp
PRCR-1070 : Failed to check if resource ora.net1.network is registered
CRS-0184 : Cannot communicate with the CRS daemon.
PRCR-1070 : Failed to check if resource ora.helper is registered
CRS-0184 : Cannot communicate with the CRS daemon.
PRCR-1070 : Failed to check if resource ora.ons is registered
CRS-0184 : Cannot communicate with the CRS daemon.

2020/11/29 13:04:01 CLSRSC-336: Successfully deconfigured Oracle Clusterware stack on this node

2020/11/29 13:04:02 CLSRSC-46: Error: '/apps01/base/crsdata/rac2/crsconfig/cluutil1.log' does not exist

2020/11/29 13:04:02 CLSRSC-46: Error: '/apps01/base/crsdata/rac2/crsconfig/cluutil1.log' does not exist

2020/11/29 13:04:03 CLSRSC-46: Error: '/apps01/base/crsdata/rac2/crsconfig/cluutil2.log' does not exist

2020/11/29 13:04:03 CLSRSC-46: Error: '/apps01/base/crsdata/rac2/crsconfig/cluutil2.log' does not exist

######################### DECONFIG CLEAN OPERATION END #########################


####################### DECONFIG CLEAN OPERATION SUMMARY #######################
There is no Oracle Grid Management database to de-configure in this Grid Infrastructure home
ASM instance was de-configured successfully from the Oracle home
Following RAC listener(s) were de-configured successfully: LISTENER,LISTENER_SCAN3,LISTENER_SCAN2,LISTENER_SCAN1
Oracle Clusterware was already stopped and de-configured on node "rac1"
Oracle Clusterware is stopped and successfully de-configured on node "rac2"
Oracle Clusterware is stopped and de-configured successfully.
#######################################################################


############# ORACLE DECONFIG TOOL END #############

Using properties file /tmp/deinstall2020-11-29_00-56-24PM/response/deinstall_2020-11-29_00-57-38-PM.rsp
Location of logs /tmp/deinstall2020-11-29_00-56-24PM/logs/

############ ORACLE DEINSTALL TOOL START ############





####################### DEINSTALL CHECK OPERATION SUMMARY #######################
A log of this session will be written to: '/tmp/deinstall2020-11-29_00-56-24PM/logs/deinstall_deconfig2020-11-29_00-57-38-PM.out'
Any error messages from this session will be written to: '/tmp/deinstall2020-11-29_00-56-24PM/logs/deinstall_deconfig2020-11-29_00-57-38-PM.err'

######################## DEINSTALL CLEAN OPERATION START ########################
## [START] Preparing for Deinstall ##
Setting LOCAL_NODE to rac1
Setting REMOTE_NODES to rac2
Setting CLUSTER_NODES to rac1,rac2
Setting CRS_HOME to false
Setting oracle.installer.invPtrLoc to /tmp/deinstall2020-11-29_00-56-24PM/oraInst.loc
Setting oracle.installer.local to false

## [END] Preparing for Deinstall ##

Setting the force flag to false
Setting the force flag to cleanup the Oracle Base
Oracle Universal Installer clean START

Detach Oracle home '/apps01/home/12.1.0/grid' from the central inventory on the local node : Done

Delete directory '/apps01/home/12.1.0/grid' on the local node : Done

Delete directory '/apps01/orainventory' on the local node : Done

Delete directory '/apps01/base' on the local node : Done

Detach Oracle home '/apps01/home/12.1.0/grid' from the central inventory on the remote nodes 'rac2' : Done

Delete directory '/apps01/home/12.1.0/grid' on the remote nodes 'rac2' : Done

Delete directory '/apps01/orainventory' on the remote nodes 'rac2' : Failed <<<<

The directory '/apps01/orainventory' could not be deleted on the nodes 'rac2'.
The Oracle Base directory '/apps01/base' will not be removed on node 'rac2'. The directory is not empty.

Oracle Universal Installer cleanup completed with errors.

Oracle Universal Installer clean END


## [START] Oracle install clean ##

Clean install operation removing temporary directory '/tmp/deinstall2020-11-29_00-56-24PM' on node 'rac1'
Clean install operation removing temporary directory '/tmp/deinstall2020-11-29_00-56-24PM' on node 'rac2'

## [END] Oracle install clean ##


######################### DEINSTALL CLEAN OPERATION END #########################


####################### DEINSTALL CLEAN OPERATION SUMMARY #######################
Successfully detached Oracle home '/apps01/home/12.1.0/grid' from the central inventory on the local node.
Successfully deleted directory '/apps01/home/12.1.0/grid' on the local node.
Successfully deleted directory '/apps01/orainventory' on the local node.
Successfully deleted directory '/apps01/base' on the local node.
Successfully detached Oracle home '/apps01/home/12.1.0/grid' from the central inventory on the remote nodes 'rac2'.
Successfully deleted directory '/apps01/home/12.1.0/grid' on the remote nodes 'rac2'.
Failed to delete directory '/apps01/orainventory' on the remote nodes 'rac2'.
Oracle Universal Installer cleanup completed with errors.


Run 'rm -r /etc/oraInst.loc' as root on node(s) 'rac1' at the end of the session.

Run 'rm -r /opt/ORCLfmap' as root on node(s) 'rac1,rac2' at the end of the session.
Oracle deinstall tool successfully cleaned up temporary directories.
#######################################################################


############# ORACLE DEINSTALL TOOL END #############

All good; we can now install a fresh Grid.

Wipe and reuse the ASM disk header!

Grid installation failed because the root.sh scripts could not be executed on both nodes.

It is always a challenge when an installation fails halfway through: if failed prerequisites were ignored, there is no option but to reinstall Grid, and the ASM disks must first be wiped before they can be reused.

[root@rac1 grid]# /apps01/home/12.1.0/grid/bin/kfod status=TRUE asm_diskstring='/dev/oracleasm/asm*' disks=all dscvgroup=TRUE
--------------------------------------------------------------------------------
ORACLE_SID ORACLE_HOME
================================================================================
     +ASM1 /apps01/home/12.1.0/grid
[root@rac1 grid]# /apps01/home/12.1.0/grid/bin/kfod status=TRUE asm_diskstring='/dev/oracleasm/disks/*' disks=all dscvgroup=TRUE
--------------------------------------------------------------------------------
 Disk          Size Header    Path                                     Disk Group   User     Group
================================================================================
   1:      12284 Mb CANDIDATE /dev/oracleasm/disks/DATA                #            oracle   oinstall
   2:      12284 Mb CANDIDATE /dev/oracleasm/disks/FRA                 #            oracle   oinstall
   3:       6142 Mb MEMBER    /dev/oracleasm/disks/OCR                 OCR          oracle   oinstall
--------------------------------------------------------------------------------
ORACLE_SID ORACLE_HOME
================================================================================
     +ASM1 /apps01/home/12.1.0/grid

Read the disk header to check the validity of the disk

[root@rac1 grid]# /apps01/home/12.1.0/grid/bin/kfed read /dev/oracleasm/disks/OCR AUNUM=0 BLKNUM=0
kfbh.endian:                          1 ; 0x000: 0x01
kfbh.hard:                          130 ; 0x001: 0x82
kfbh.type:                            1 ; 0x002: KFBTYP_DISKHEAD <<<<====

Clear the ASM disk headers using dd:

[root@rac1 grid]# dd if=/dev/zero of=/dev/oracleasm/disks/DATA bs=8192 count=12800
12800+0 records in
12800+0 records out
104857600 bytes (105 MB) copied, 3.36437 s, 31.2 MB/s
[root@rac1 grid]# dd if=/dev/zero of=/dev/oracleasm/disks/FRA bs=8192 count=12800
12800+0 records in
12800+0 records out
104857600 bytes (105 MB) copied, 2.39119 s, 43.9 MB/s
[root@rac1 grid]# dd if=/dev/zero of=/dev/oracleasm/disks/OCR bs=8192 count=12800
12800+0 records in
12800+0 records out
104857600 bytes (105 MB) copied, 0.0719423 s, 1.5 GB/s
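The byte counts above check out: each dd pass zeroes bs × count bytes from the start of the disk — only the first 100 MiB, which is enough to destroy the ASM header in allocation unit 0. A quick sanity check:

```shell
# bs=8192 bytes * count=12800 blocks = bytes wiped per disk
echo $((8192 * 12800))                # 104857600, matching the dd output above
echo $((8192 * 12800 / 1024 / 1024)) # 100 MiB
```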

Query the disks to verify that their headers are now unmarked:

[root@rac1 grid]# oracleasm querydisk DATA
Disk "DATA" defines an unmarked device
[root@rac1 grid]# oracleasm querydisk FRA
Disk "FRA" defines an unmarked device
[root@rac1 grid]# oracleasm querydisk OCR
Disk "OCR" defines an unmarked device

Read the header again; kfbh.type should now be KFBTYP_INVALID:

[root@rac1 grid]# /apps01/home/12.1.0/grid/bin/kfed read /dev/oracleasm/disks/OCR AUNUM=0 BLKNUM=0
kfbh.endian:                          0 ; 0x000: 0x00
kfbh.hard:                            0 ; 0x001: 0x00
kfbh.type:                            0 ; 0x002: KFBTYP_INVALID <<<<====
kfbh.datfmt:                          0 ; 0x003: 0x00
kfbh.block.blk:                       0 ; 0x004: blk=0
kfbh.block.obj:                       0 ; 0x008: file=0
kfbh.check:                           0 ; 0x00c: 0x00000000
kfbh.fcn.base:                        0 ; 0x010: 0x00000000
kfbh.fcn.wrap:                        0 ; 0x014: 0x00000000
kfbh.spare1:                          0 ; 0x018: 0x00000000
kfbh.spare2:                          0 ; 0x01c: 0x00000000
000000000 00000000 00000000 00000000 00000000  [................]
  Repeat 255 times
KFED-00322: file not found; arguments: [kfbtTraverseBlock] [Invalid OSM block type] [] [0]

The ASM disk headers are cleared. Now we are good to reuse the ASM disks for a fresh Grid installation.

ORA-01722: invalid number [sqlldr]

When loading data from a CSV file into a table with SQL*Loader, improperly formatted numbers in a specific column raise ORA-01722:

"Record 2: Rejected - Error  on table sales, column Total_Profit.
ORA-01722: invalid number "

Solution:

Add the FILLER keyword to the column with the number-format mismatch; SQL*Loader then reads the field but does not load it:

load data
infile '/home/oracle/bigtable.csv'
badfile '/home/oracle/bigtable.bad'
discardfile '/home/oracle/bigtable.dsc'
into table sales
fields terminated by ","
(Country,Item_Type,Sales_Channel,Order_Priority,Order_Date,Order_ID,Ship_Date,Units_Sold,Unit_Price,Unit_Cost,Total_Revenue,Total_Cost,Total_Profit FILLER)

The error is bypassed!

Commit point reached - logical record count 1048576

Table SALES:
  1048576 Rows successfully loaded.

Check the log file:
  sale.log
for more information about the load.
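Note that FILLER simply skips the column, so Total_Profit ends up NULL in SALES. If the column must be loaded, a SQL expression in the control file can clean the value instead of discarding it — a hypothetical sketch, assuming the bad values contain a stray currency symbol:

```shell
# Hypothetical alternative control file: strip a '$' from Total_Profit and
# convert it with to_number(), rather than skipping the field with FILLER.
cat > /tmp/bigtable_expr.ctl <<'EOF'
load data
infile '/home/oracle/bigtable.csv'
badfile '/home/oracle/bigtable.bad'
discardfile '/home/oracle/bigtable.dsc'
into table sales
fields terminated by ","
(Country,Item_Type,Sales_Channel,Order_Priority,Order_Date,Order_ID,Ship_Date,
 Units_Sold,Unit_Price,Unit_Cost,Total_Revenue,Total_Cost,
 Total_Profit "to_number(replace(:Total_Profit, '$', ''))")
EOF
grep -c to_number /tmp/bigtable_expr.ctl
```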

Recovery catalog creation for your database

Create a catalog user

[oracle@orcl19x admin]$ sqlplus / as sysdba

SQL*Plus: Release 12.1.0.2.0 Production on Tue Nov 24 21:45:33 2020

Copyright (c) 1982, 2014, Oracle.  All rights reserved.


Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options

SQL> create user rcat identified by password;

User created.

Grant privileges to the user:

SQL> grant recovery_catalog_owner to rcat;

Grant succeeded.
SQL> grant unlimited tablespace to rcat;

Grant succeeded.

Create a tablespace for the catalog and make it the rcat user's default tablespace:

SQL> create tablespace rcat datafile '/apps01/oradata/orcl19x/rcattbs' size 500M;

Tablespace created.

SQL> alter user rcat default tablespace rcat;

User altered.

SQL> exit

Connect to the recovery catalog as the catalog user, then create the catalog and register the target database:

RMAN> connect catalog rcat/password;

connected to recovery catalog database

RMAN> create catalog;

recovery catalog created

RMAN> register database;

database registered in recovery catalog
starting full resync of recovery catalog
full resync complete

Crosscheck the database backups to verify the catalog in action:

RMAN> crosscheck backup;

allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=69 device type=DISK
crosschecked backup piece: found to be 'AVAILABLE'
backup piece handle=/data01/FRA/ORCL19X/backupset/2020_11_23/o1_mf_annnn_TAG20201123T184819_hvqfnw1r_.bkp RECID=1 STAMP=1057258100
crosschecked backup piece: found to be 'AVAILABLE'
backup piece handle=/data01/FRA/ORCL19X/backupset/2020_11_23/o1_mf_nnndf_TAG20201123T184835_hvqfod5s_.bkp RECID=2 STAMP=1057258116
crosschecked backup piece: found to be 'AVAILABLE'
backup piece handle=/data01/FRA/ORCL19X/backupset/2020_11_23/o1_mf_ncsnf_TAG20201123T184835_hvqfrg9h_.bkp RECID=3 STAMP=1057258214
crosschecked backup piece: found to be 'AVAILABLE'
backup piece handle=/data01/FRA/ORCL19X/backupset/2020_11_23/o1_mf_annnn_TAG20201123T185015_hvqfrhqh_.bkp RECID=4 STAMP=1057258215
crosschecked backup piece: found to be 'AVAILABLE'
backup piece handle=/data01/FRA/ORCL19X/backupset/2020_11_23/o1_mf_ncnnf_TAG20201123T185354_hvqfzf5d_.bkp RECID=5 STAMP=1057258437
Crosschecked 5 objects

Verify the databases registered in the recovery catalog:

SQL> conn rcat/password
Connected.

SQL> select * from rc_database;

    DB_KEY  DBINC_KEY       DBID NAME     RESETLOGS_CHANGE# RESETLOGS FINAL_CHANGE#
---------- ---------- ---------- -------- ----------------- --------- -------------
         1          2 4045524769 ORCL19X            1594143 22-NOV-20


ORA-06598: insufficient INHERIT PRIVILEGES privilege

I got the error below while generating SQLTXTRACT and SQLTXPLAIN analyses as the sqltxadmin user; the session was disconnected and kicked me out!

BEGIN SQLTXADMIN.sqlt$i.xtract(p_statement_id => :v_statement_id, p_sql_id_or_hash_value => '1rzgj3atcc8vd', p_password => 'Y'); END;
 
*
ERROR at line 1:
ORA-06598: insufficient INHERIT PRIVILEGES privilege
ORA-06512: at "SQLTXADMIN.SQLT$I", line 1
ORA-06512: at line 1
In case of a disconnect review log file in current directory
If running as SYS in 12c make sure to review sqlt_instructions.html first

Disconnected from Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options

Solution: Ensure that you are connected as the sqltxplain user and that the SQLT users have the following privileges to access the SQLT packages:


SQL>grant INHERIT ANY PRIVILEGES to sqltxadmin;

Grant succeeded.

SQL>grant INHERIT ANY PRIVILEGES to sqltxplain;

Grant succeeded.

SQL>grant SQLT_USER_ROLE to sqltxplain;
Grant succeeded.

SQL>grant SQLT_USER_ROLE to sqltxadmin;
Grant succeeded.

SQL> conn sqltxplain;
Enter password:
Connected.
SQL> START sqltxtract.sql 1rzgj3atcc8vd

Configure logical partitions for Oracle ASM

Use fdisk to list the physical disks available, so we can create logical partitions on the devices. Here we have external disks added with persistent storage.

[root@RACFLEX1 oracle]# fdisk -l

Disk /dev/sda: 5472 MB, 5472104448 bytes
255 heads, 63 sectors/track, 665 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x1195760c

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1               1         665     5341581   83  Linux

Disk /dev/sdc: 2147 MB, 2147483648 bytes
255 heads, 63 sectors/track, 261 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x4699c8c4

   Device Boot      Start         End      Blocks   Id  System
/dev/sdc1               1         261     2096451   83  Linux

Disk /dev/sdb: 5472 MB, 5472104448 bytes
255 heads, 63 sectors/track, 665 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x96f39711

   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1               1         665     5341581   83  Linux

Disk /dev/sdd: 5472 MB, 5472104448 bytes
255 heads, 63 sectors/track, 665 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000


Disk /dev/sde: 5472 MB, 5472104448 bytes
255 heads, 63 sectors/track, 665 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000


Disk /dev/sdg: 55.1 GB, 55053647872 bytes
255 heads, 63 sectors/track, 6693 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00031e11

   Device Boot      Start         End      Blocks   Id  System
/dev/sdg1   *           1        2550    20480000   83  Linux
/dev/sdg2            2550        4463    15360000   83  Linux
/dev/sdg3            4463        4724     2097152   82  Linux swap / Solaris
/dev/sdg4            4724        6694    15824896    5  Extended
/dev/sdg5            4724        6694    15823872   83  Linux

Disk /dev/sdf: 2147 MB, 2147483648 bytes
255 heads, 63 sectors/track, 261 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000
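The geometry line in each listing is consistent: 255 heads × 63 sectors/track × 512 bytes per sector gives the 8225280 bytes per cylinder that fdisk reports, and /dev/sda's 665 cylinders multiply out to roughly its reported 5472104448 bytes (cylinder counts are rounded):

```shell
cyl=$((255 * 63 * 512))   # bytes per cylinder: heads * sectors/track * sector size
echo "$cyl"               # 8225280, as fdisk reports
echo $((cyl * 665))       # ~5.47 GB for /dev/sda's 665 cylinders
```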

We add primary partitions on the remaining unpartitioned devices: /dev/sdd, /dev/sde, and /dev/sdf.

[root@RACFLEX1 oracle]# fdisk /dev/sdd
Device contains neither a valid DOS partition table, nor Sun, SGI or OSF disklabel
Building a new DOS disklabel with disk identifier 0x0b5c3711.
Changes will remain in memory only, until you decide to write them.
After that, of course, the previous content won't be recoverable.

Warning: invalid flag 0x0000 of partition table 4 will be corrected by w(rite)

WARNING: DOS-compatible mode is deprecated. It's strongly recommended to
         switch off the mode (command 'c') and change display units to
         sectors (command 'u').

Command (m for help): n
Command action
   e   extended
   p   primary partition (1-4)
p
Partition number (1-4): 1
First cylinder (1-665, default 1):
Using default value 1
Last cylinder, +cylinders or +size{K,M,G} (1-665, default 665):
Using default value 665

Command (m for help): w
The partition table has been altered!

Calling ioctl() to re-read partition table.
Syncing disks.
[root@RACFLEX1 oracle]# fdisk /dev/sde
Device contains neither a valid DOS partition table, nor Sun, SGI or OSF disklabel
Building a new DOS disklabel with disk identifier 0xc148f4a1.
Changes will remain in memory only, until you decide to write them.
After that, of course, the previous content won't be recoverable.

Warning: invalid flag 0x0000 of partition table 4 will be corrected by w(rite)

WARNING: DOS-compatible mode is deprecated. It's strongly recommended to
         switch off the mode (command 'c') and change display units to
         sectors (command 'u').

Command (m for help): n
Command action
   e   extended
   p   primary partition (1-4)
p
Partition number (1-4): 1
First cylinder (1-665, default 1):
Using default value 1
Last cylinder, +cylinders or +size{K,M,G} (1-665, default 665):
Using default value 665

Command (m for help): w
The partition table has been altered!

Calling ioctl() to re-read partition table.
Syncing disks.
[root@RACFLEX1 oracle]# fdisk /dev/sdg

WARNING: DOS-compatible mode is deprecated. It's strongly recommended to
         switch off the mode (command 'c') and change display units to
         sectors (command 'u').

Command (m for help): n
No free sectors available

Command (m for help): ^Z
[1]+  Stopped                 fdisk /dev/sdg
[root@RACFLEX1 oracle]# ^C
[root@RACFLEX1 oracle]# fdisk /dev/sdf
Device contains neither a valid DOS partition table, nor Sun, SGI or OSF disklabel
Building a new DOS disklabel with disk identifier 0xa10fefb6.
Changes will remain in memory only, until you decide to write them.
After that, of course, the previous content won't be recoverable.

Warning: invalid flag 0x0000 of partition table 4 will be corrected by w(rite)

WARNING: DOS-compatible mode is deprecated. It's strongly recommended to
         switch off the mode (command 'c') and change display units to
         sectors (command 'u').

Command (m for help): n
Command action
   e   extended
   p   primary partition (1-4)
p
Partition number (1-4): 1
First cylinder (1-261, default 1):
Using default value 1
Last cylinder, +cylinders or +size{K,M,G} (1-261, default 261):
Using default value 261

Command (m for help): w
The partition table has been altered!

Calling ioctl() to re-read partition table.
Syncing disks.
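The interactive answer sequence used above can also be scripted. This is a non-destructive sketch: the function only prints the answers we typed (it does not touch any disk), and `/dev/sdX` in the commented usage line is a placeholder for your actual shared disk.

```shell
# Sketch: the same fdisk answers used interactively above, captured in a
# function so they can be piped in non-interactively.
# n = new partition, p = primary, 1 = partition number,
# two blank lines = accept default first/last cylinder, w = write.
make_fdisk_input() {
  printf 'n\np\n1\n\n\nw\n'
}

# To apply (DESTRUCTIVE -- /dev/sdX is a placeholder, check the device twice!):
#   make_fdisk_input | fdisk /dev/sdX
make_fdisk_input
```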

Download the oracleasmlib and oracleasm-support RPMs with yum (or from the Oracle support website) to use the oracleasm utility.

[root@RACFLEX1 oracle]# yum install oracleasm-support
Loaded plugins: refresh-packagekit, security, ulninfo
Setting up Install Process
Resolving Dependencies
--> Running transaction check
---> Package oracleasm-support.x86_64 0:2.1.11-2.el6 will be installed
--> Finished Dependency Resolution
[root@RACFLEX1 oracle]# yum install kmod-oracleasm -y
Loaded plugins: refresh-packagekit, security, ulninfo
Setting up Install Process
Resolving Dependencies
--> Running transaction check
---> Package kmod-oracleasm.x86_64 0:2.0.8-16.1.el6_10 will be installed
--> Processing Dependency: kernel(__x86_indirect_thunk_rax) = 0x2ea2c95c for package: kmod-oracleasm-2.0.8-16.1.el6_10.x86_64
--> Processing Dependency: kernel >= 2.6.32-754.el6 for package: kmod-oracleasm-2.0.8-16.1.el6_10.x86_64
--> Running transaction check
---> Package kernel.x86_64 0:2.6.32-754.35.1.el6 will be installed
--> Processing Dependency: dracut-kernel >= 004-408.el6 for package: kernel-2.6.32-754.35.1.el6.x86_64
--> Running transaction check
---> Package dracut-kernel.noarch 0:004-388.0.1.el6 will be updated
---> Package dracut-kernel.noarch 0:004-411.0.9.el6 will be an update
--> Processing Dependency: dracut = 004-411.0.9.el6 for package: dracut-kernel-004-411.0.9.el6.noarch
--> Running transaction check
---> Package dracut.noarch 0:004-388.0.1.el6 will be updated
---> Package dracut.noarch 0:004-411.0.9.el6 will be an update
--> Finished Dependency Resolution

Dependencies Resolved

================================================================================
 Package           Arch      Version                 Repository            Size
================================================================================
Installing:
 kmod-oracleasm    x86_64    2.0.8-16.1.el6_10       public_ol6_latest     36 k
Installing for dependencies:
 kernel            x86_64    2.6.32-754.35.1.el6     public_ol6_latest     32 M
Updating for dependencies:
 dracut            noarch    004-411.0.9.el6         public_ol6_latest    130 k
 dracut-kernel     noarch    004-411.0.9.el6         public_ol6_latest     29 k

Transaction Summary
================================================================================
Install       2 Package(s)
Upgrade       2 Package(s)

Total download size: 33 M
Downloading Packages:
(1/4): dracut-004-411.0.9.el6.noarch.rpm                 | 130 kB     00:00
(2/4): dracut-kernel-004-411.0.9.el6.noarch.rpm          |  29 kB     00:00
(3/4): kernel-2.6.32-754.35.1.el6.x86_64.rpm             |  32 MB     00:50
(4/4): kmod-oracleasm-2.0.8-16.1.el6_10.x86_64.rpm       |  36 kB     00:00
--------------------------------------------------------------------------------
Total                                           654 kB/s |  33 MB     00:51
Running rpm_check_debug
Running Transaction Test
Transaction Test Succeeded
Running Transaction
  Updating   : dracut-004-411.0.9.el6.noarch                                1/6
  Updating   : dracut-kernel-004-411.0.9.el6.noarch                         2/6
  Installing : kernel-2.6.32-754.35.1.el6.x86_64                            3/6
  Installing : kmod-oracleasm-2.0.8-16.1.el6_10.x86_64                      4/6
  Cleanup    : dracut-kernel-004-388.0.1.el6.noarch                         5/6
  Cleanup    : dracut-004-388.0.1.el6.noarch                                6/6
  Verifying  : kernel-2.6.32-754.35.1.el6.x86_64                            1/6
  Verifying  : dracut-004-411.0.9.el6.noarch                                2/6
  Verifying  : dracut-kernel-004-411.0.9.el6.noarch                         3/6
  Verifying  : kmod-oracleasm-2.0.8-16.1.el6_10.x86_64                      4/6
  Verifying  : dracut-kernel-004-388.0.1.el6.noarch                         5/6
  Verifying  : dracut-004-388.0.1.el6.noarch                                6/6

Installed:
  kmod-oracleasm.x86_64 0:2.0.8-16.1.el6_10

Dependency Installed:
  kernel.x86_64 0:2.6.32-754.35.1.el6

Dependency Updated:
  dracut.noarch 0:004-411.0.9.el6     dracut-kernel.noarch 0:004-411.0.9.el6

Complete!

[root@RACFLEX1 oracle]# rpm -qa|grep oracleasm
oracleasm-support-2.1.11-2.el7.x86_64
oracleasmlib-2.0.12-1.el7.x86_64

First, configure oracleasm:

[root@RACFLEX1 oracle]# oracleasm configure -i
Configuring the Oracle ASM library driver.

This will configure the on-boot properties of the Oracle ASM library
driver.  The following questions will determine whether the driver is
loaded on boot and what permissions it will have.  The current values
will be shown in brackets ('[]').  Hitting <ENTER> without typing an
answer will keep that current value.  Ctrl-C will abort.

Default user to own the driver interface []: oracle
Default group to own the driver interface []: oinstall
Start Oracle ASM library driver on boot (y/n) [n]: y
Scan for Oracle ASM disks on boot (y/n) [y]: y
Writing Oracle ASM library driver configuration: done

Then, initialize oracleasm:

[root@RACFLEX1 oracle]# oracleasm init
Creating /dev/oracleasm mount point: /dev/oracleasm
Loading module "oracleasm": oracleasm
Configuring "oracleasm" to use device physical block size
Mounting ASMlib driver filesystem: /dev/oracleasm

Create the logical ASM disks on top of the partitions; these will back the ASM disk groups:

[root@RACFLEX1 oracle]# oracleasm createdisk DATA1 /dev/sdd1
Writing disk header: done
Instantiating disk: done
[root@RACFLEX1 oracle]# oracleasm createdisk FRA1 /dev/sde1
Writing disk header: done
Instantiating disk: done
[root@RACFLEX1 oracle]# oracleasm createdisk CRS1 /dev/sdf1
Writing disk header: done
Instantiating disk: done

Scan to detect stale or corrupted disks, then list each ASM disk:

[root@RACFLEX1 oracle]# oracleasm scandisks
Reloading disk partitions: done
Cleaning any stale ASM disks...
Scanning system for ASM disks...
[root@RACFLEX1 oracle]# oracleasm listdisks
CRS
CRS1
DATA
DATA1
FRA
FRA1
[root@RACFLEX1 oracle]#
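When there are more than a couple of disks, the createdisk calls can be generated from a list. This sketch only prints the commands for review before running them; the label:partition pairs match the session above and should be adjusted for your layout.

```shell
# Sketch: print (not run) one oracleasm createdisk command per partition.
# The label:partition pairs below mirror the session above.
for pair in DATA1:/dev/sdd1 FRA1:/dev/sde1 CRS1:/dev/sdf1; do
  label=${pair%%:*}   # text before the colon: the ASM disk label
  part=${pair#*:}     # text after the colon: the partition device
  echo "oracleasm createdisk $label $part"
done
```

Pipe the output through `sh` only after verifying every device name.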

While installing a database, we can specify these disks to create the ASM disk groups.

Why choose an optimizer goal? (Oracle)

The optimizer is the component of the Oracle database that decides the most efficient path to access the data requested by a SQL statement.

We can choose an optimizer mode that prioritizes resource utilization for the workload's own purpose.

There are two options for the cost-based optimizer mode:

  1. ALL_ROWS –> Better throughput –> Used for batch jobs –> Primary purpose is to complete a batch of multiple SQL statements and produce the final report in the shortest possible time –> Return-all-rows mode
  2. FIRST_ROWS_N –> Better response time –> Used for interactive applications –> Primary purpose is to get the first n rows of the result set of a SQL statement (n = 1, 10, 1000 …, any positive whole number) –> The optimizer estimates the number of rows that will be returned by completely analyzing the first join order –> Minimizes the resources required to return the fraction of the total dataset the query is supposed to fetch

Example:

SQL> show parameter optimizer_mode

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
optimizer_mode                       string      ALL_ROWS

By default, ALL_ROWS is used in every database. But we can set the optimizer mode at session level to first_rows_1000, first_rows_100, first_rows_10 or first_rows_1.

The FIRST_ROWS, RULE and CHOOSE modes were deprecated in earlier releases and are no longer used.

SQL> alter session set optimizer_mode = first_rows_100;

Session altered.

Elapsed: 00:00:00.00
SQL> alter session set optimizer_mode = first_rows_10000;
ERROR:
ORA-00096: invalid value FIRST_ROWS_10000 for parameter optimizer_mode, must be
from among first_rows_1000, first_rows_100, first_rows_10, first_rows_1,
first_rows, all_rows, choose, rule

FIRST_ROWS behaviour in a select query:

We provide the hint FIRST_ROWS(1000) to force the optimizer to give preference to retrieving the first 1000 rows as fast as possible; the remaining rows are retrieved afterwards.

The execution plan shows that the optimizer used an index range scan plus a batched table access by index rowid to satisfy this condition.

SQL> exec dbms_stats.gather_table_stats('','t');

PL/SQL procedure successfully completed.

SQL> select /*+ FIRST_ROWS(1000) */ name from t where id < 100000;

99999 rows selected.

Elapsed: 00:00:00.25

Execution Plan
----------------------------------------------------------
Plan hash value: 3571126032

----------------------------------------------------------------------------------------------------
| Id  | Operation                           | Name         | Rows  | Bytes | Cost (%CPU)| Time     |
----------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                    |              |  1000 |  8000 |     6   (0)| 00:00:01 |
|   1 |  TABLE ACCESS BY INDEX ROWID BATCHED| T            |  1000 |  8000 |     6   (0)| 00:00:01 |
|*  2 |   INDEX RANGE SCAN                  | SYS_C0010315 |       |       |     3   (0)| 00:00:01 |
----------------------------------------------------------------------------------------------------


Predicate Information (identified by operation id):
---------------------------------------------------

   2 - access("ID"<100000)


Statistics
----------------------------------------------------------
          1  recursive calls
          0  db block gets
      13754  consistent gets
          0  physical reads
          0  redo size
    3172665  bytes sent via SQL*Net to client
      73877  bytes received via SQL*Net from client
       6668  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
      99999  rows processed

ALL_ROWS hint behaviour in a select query: here the optimizer chose a full table scan to retrieve the complete result set as a whole, giving no preference to the first n rows.

SQL> select /*+ ALL_ROWS */ name from t where id < 100000;

99999 rows selected.

Elapsed: 00:00:00.29

Execution Plan
----------------------------------------------------------
Plan hash value: 1601196873

--------------------------------------------------------------------------
| Id  | Operation         | Name | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |      | 10001 | 80008 |     9   (0)| 00:00:01 |
|*  1 |  TABLE ACCESS FULL| T    | 10001 | 80008 |     9   (0)| 00:00:01 |
--------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - filter("ID"<100000)


Statistics
----------------------------------------------------------
          0  recursive calls
          0  db block gets
      11321  consistent gets
       2638  physical reads
          0  redo size
    1773824  bytes sent via SQL*Net to client
      73877  bytes received via SQL*Net from client
       6668  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
      99999  rows processed

From the above two behaviours of the optimizer, we observe different plan hash values for the same query, which is syntactically and semantically identical. The end goals of the two optimizer modes differ, and that changes the execution plan the optimizer produces.

We also see an execution-time difference between FIRST_ROWS (Elapsed: 00:00:00.25) and ALL_ROWS (Elapsed: 00:00:00.29) because the index used under FIRST_ROWS fetched the first 1000 rows faster.

ORA-39095: Dump file space has been exhausted: Unable to allocate 8192 bytes. Job "SYS"."SYS_EXPORT_FULL_02" stopped due to fatal error

While exporting a full database with multiple dump files per set and a degree of parallelism, we can encounter the ORA-39095 error:

[oracle@orcl12x base]$ oerr ora 39095
39095, 00000, "Dump file space has been exhausted: Unable to allocate %s bytes"
// *Cause:  The Export job ran out of dump file space before the job was
//          completed.
// *Action: Reattach to the job and add additional dump files to the job
//          restarting the job.

Solution:

1) FILESIZE
Specifies the maximum size of each dump file, in bytes by default (K, M or G units can be appended).

The parameter value should be sized against the total size of the objects or database being exported.

e.g. for a database of roughly 1 GB, FILESIZE=100M with a degree of parallelism of 10 gives each of the ten worker processes its own 100 MB dump file to fill.
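A quick sanity check on the sizing: the number of dump files %U must be able to generate is at least the total export size divided by FILESIZE, rounded up. A minimal shell sketch, assuming an estimated export of 1024 MB (substitute your own estimate, e.g. from an estimate-only export run):

```shell
# Minimum number of dump files needed: ceil(export_mb / filesize_mb).
# export_mb below is an assumed estimate, not a value from this article.
export_mb=1024
filesize_mb=100
min_files=$(( (export_mb + filesize_mb - 1) / filesize_mb ))
echo "$min_files"
```

If the %U naming cannot produce at least this many files, the job stops with ORA-39095.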

2) Ensure the %U substitution variable is specified in the DUMPFILE name; without it, Data Pump cannot generate additional dump files, one per parallel worker process.

e.g. dumpfile=orcl19x_%U.dmp
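Putting both fixes together, a hedged sketch of a full-export invocation. The credentials, directory object, file prefix and sizes are illustrative placeholders, not values from the failed job above; the command is built as a string so it can be reviewed before running.

```shell
# Build the Data Pump export command as a string for review before execution.
# system/***, DATA_PUMP_DIR and the sizes are placeholders -- adjust for
# your environment before actually running it.
cmd="expdp system/*** full=y directory=DATA_PUMP_DIR \
dumpfile=orcl19x_%U.dmp filesize=100M parallel=10 logfile=orcl19x_full.log"
echo "$cmd"
```

With FILESIZE and %U set together, each parallel worker writes to its own bounded dump file and the job no longer exhausts dump file space.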