Archives June 2022

ORA-13510: invalid RETENTION 7, must be in the range (1440, 52560000)

Cause:

The RETENTION value must be specified in minutes. The allowed range (1440, 52560000) corresponds to 1 day through 100 years.

kIsH@Xhydra<>EXEC DBMS_WORKLOAD_REPOSITORY.MODIFY_SNAPSHOT_SETTINGS(interval => 60, retention => 7);
BEGIN DBMS_WORKLOAD_REPOSITORY.MODIFY_SNAPSHOT_SETTINGS(interval => 60, retention => 7); END;

*
ERROR at line 1:
ORA-13510: invalid RETENTION 7, must be in the range (1440, 52560000)
ORA-06512: at "SYS.DBMS_WORKLOAD_REPOSITORY", line 213
ORA-06512: at "SYS.DBMS_WORKLOAD_REPOSITORY", line 265
ORA-06512: at line 1

Action:

Convert the desired 7 days of retention to minutes

kIsH@Xhydra<>select 7 * 24 * 60 minutes from dual;

   MINUTES
----------
     10080

kIsH@Xhydra<>EXEC DBMS_WORKLOAD_REPOSITORY.MODIFY_SNAPSHOT_SETTINGS(interval => 60, retention => 10080);

PL/SQL procedure successfully completed.
ORA-13541: system moving window baseline size (691200) greater than retention

Cause:

The retention must not be shorter than the system moving window baseline size.

The error is raised because the requested retention (86400 seconds = 1 day, i.e. 1440 minutes) is less than the baseline window (691200 seconds = 8 days).
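Before reproducing the error, the current AWR settings and the baseline window size can be cross-checked with the standard dictionary views (a quick sketch):

select snap_interval, retention from dba_hist_wr_control;
select baseline_type, moving_window_size from dba_hist_baseline where baseline_type = 'MOVING_WINDOW';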

kIsH@Xhydra<>EXEC DBMS_WORKLOAD_REPOSITORY.MODIFY_SNAPSHOT_SETTINGS(interval => 60, retention => 1440);
BEGIN DBMS_WORKLOAD_REPOSITORY.MODIFY_SNAPSHOT_SETTINGS(interval => 60, retention => 1440); END;

*
ERROR at line 1:
ORA-13541: system moving window baseline size (691200) greater than retention
(86400)
ORA-06512: at "SYS.DBMS_WORKLOAD_REPOSITORY", line 213
ORA-06512: at "SYS.DBMS_WORKLOAD_REPOSITORY", line 265
ORA-06512: at line 1

kIsH@Xhydra<>!oerr ora 13541
13541, 00000, "system moving window baseline size (%s) greater than retention (%s)"
// *Cause:  The system moving window baseline size must be less than the
//          retention setting.  The specified window size or retention
//          violate this.
// *Action: Check the moving window baseline size or retention.

Action:

The moving window baseline size is set to 8 days, which is the default. Convert 8 days to minutes and use that value for the retention

kIsH@Xhydra<>EXEC DBMS_WORKLOAD_REPOSITORY.MODIFY_BASELINE_WINDOW_SIZE(window_size => 8);

PL/SQL procedure successfully completed.

kIsH@Xhydra<>select BASELINE_TYPE,MOVING_WINDOW_SIZE from dba_hist_baseline;

BASELINE_TYPE MOVING_WINDOW_SIZE
------------- ------------------
MOVING_WINDOW                  8
MOVING_WINDOW                  8


kIsH@Xhydra<>select 8 * 24 * 60 day_in_minutes from dual;

DAY_IN_MINUTES
--------------
         11520

Use the value 11520 minutes for the retention

kIsH@Xhydra<>EXEC DBMS_WORKLOAD_REPOSITORY.MODIFY_SNAPSHOT_SETTINGS(interval => 60, retention => 11520);

PL/SQL procedure successfully completed.
ORA-14400: inserted partition key does not map to any partition

Cause:

kIsH@Xhydra<>INSERT INTO "KISH"."RP"
SELECT xid.nextval,
       dbms_random.string('a','10'),
           floor(dbms_random.value(1,1000000)),
           to_date(trunc(dbms_random.value(to_char(date '1979-01-01','J'),
                                           to_char(date '9999-12-31','J'))),'J')
from dual connect by level <= 1000;
SELECT xid.nextval,
       *
ERROR at line 2:
ORA-14400: inserted partition key does not map to any partition


kIsH@Xhydra<>
kIsH@Xhydra<>!oerr ora 14400
14400, 00000, "inserted partition key does not map to any partition"
// *Cause:  An attempt was made to insert a record into, a Range or Composite
//          Range object, with a concatenated partition key that is beyond
//          the concatenated partition bound list of the last partition -OR-
//          An attempt was made to insert a record into a List object with
//          a partition key that did not match the literal values specified
//          for any of the partitions.
// *Action: Do not insert the key. Or, add a partition capable of accepting
//          the key, Or add values matching the key to a partition specification

Action:

Either restrict the inserted partition key values so that they fall within the existing partition bounds, or add a partition capable of accepting the new keys (see the sketch after the DDL excerpt below). In this table the partition bounds only cover dates below MAY-2022, while the failing insert generated dates up to the year 9999.

--DDL 
(PARTITION "R_JAN2022"  VALUES LESS THAN (TO_DATE(' 2022-02-01 00:00:00', '
SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN'))  <<== partition bounds only cover dates below MAY-2022
PCTFREE 10 PCTUSED 40 INITRANS 1 MAXTRANS 255
NOCOMPRESS LOGGING
STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1
BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT)
TABLESPACE "TBS1" ,
PARTITION "R_FEB2022"  VALUES LESS THAN (TO_DATE(' 2022-03-01 00:00:00', '
SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN')) <<==
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
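Alternatively, as the oerr action suggests, a partition capable of accepting the higher keys could be added. A hedged sketch (the partition name R_MAX is only an illustration):

ALTER TABLE "KISH"."RP" ADD PARTITION "R_MAX" VALUES LESS THAN (MAXVALUE) TABLESPACE "TBS1";

In this post the insert itself is corrected instead, so that the generated dates stay within the existing partition bounds: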

kIsH@Xhydra<>INSERT INTO "KISH"."RP"
SELECT xid.nextval,
       dbms_random.string('a','10'),
           floor(dbms_random.value(1,1000000)),
           to_date(trunc(dbms_random.value(to_char(date '2022-01-01','J'),
                                           to_char(date '2022-04-30','J'))),'J')
from dual connect by level <= 1000;

1000 rows created.

kIsH@Xhydra<>commit;

Commit complete.
SHELL SCRIPT TO GATHER STATISTICS ONLY FOR STALE PARTITIONS IN TABLE ORACLE

This script helps when statistics must be gathered manually for a schema in which only some partitions of a table have stale statistics while the remaining partitions are current.

#!/bin/bash

trap 'rm -rf /home/oracle/oratab_new /home/oracle/DBMS_STATS.sql' EXIT
#For logging purpose
_LOG_0()
{
echo "*************************************$1"
}

#Set the environment variables
_SET_ENV_1()
{
DBMS_STATS=/home/oracle/DBMS_STATS.sql
chmod -R 775 /home/oracle/DBMS_STATS.sql
cat /etc/oratab|grep -v '#'|grep -v '^$' > /home/oracle/oratab_new
while read x
   do
     IFS=':' read -r -a array <<< "$x"
                ORACLE_SID="${array[0]}"
                ORACLE_HOME="${array[1]}"
                echo $ORACLE_SID
                echo $ORACLE_HOME
                export PATH=$PATH:$ORACLE_HOME/bin
   done < /home/oracle/oratab_new
}

#List the table partitions whose statistics are stale and spool them to an output file
_VIEW_PART_DETAILS_2()
{
$ORACLE_HOME/bin/sqlplus -S '/ as sysdba' << EOF >> log_for_reference
spool LAST_HOUR_AUTO_PART.dtl
--LAST_HOUR_AUTO_PART
set heading off
set feedback off
set lines 200 pages 500
col owner for a20
col object_name for a20
col subobject_name for a20
WITH xpart as
(
select OWNER,
       OBJECT_NAME,
       SUBOBJECT_NAME,
       CREATED
from dba_objects
where OWNER='SYS'
and
OBJECT_NAME='PARTITION'
and
SUBOBJECT_NAME IS NOT NULL
)
select ''||do.OWNER||'.'
       ||do.OBJECT_NAME||'.'
       ||do.SUBOBJECT_NAME||''
from xpart do
inner join dba_tab_statistics dts on ( do.SUBOBJECT_NAME = dts.PARTITION_NAME )
where dts.STALE_STATS='YES';
spool off
exit;
EOF
}

#Form a gather stats script using the last hour partitions
_COLLECT_PART_STAT_DETAILS_3()
{
while read x;
   do
     IFS='.' read -r -a array <<< "$x"
     echo "EXEC DBMS_STATS.GATHER_TABLE_STATS(\
                                              ownname=>'"${array[0]}"',\
                                              tabname=>'"${array[1]}"',\
                                              partname=>'"${array[2]}"',\
                                              estimate_percent=>'10',\
                                              method_opt=>'for all indexed columns size skewonly',\
                                              granularity=>'ALL',\
                                              degree=>8,\
                                              cascade=>TRUE)"
   done < LAST_HOUR_AUTO_PART.dtl > $DBMS_STATS
}


#Run the generated gather-stats script if it contains any statements
_GATHER_STATS_4()
{
while true
   do
     if [ -s $DBMS_STATS ];
     then
          echo exit | $ORACLE_HOME/bin/sqlplus -S '/ as sysdba' @$DBMS_STATS
         cat /dev/null > $DBMS_STATS
     else
         _LOG_0 "*"
 echo "***No partitions created in last hour***"
         _LOG_0 "*"
         break
     fi
   done
}

_SET_ENV_1
_VIEW_PART_DETAILS_2
_COLLECT_PART_STAT_DETAILS_3
_GATHER_STATS_4
CAN ORACLE MANAGED TABLESPACES BE DROPPED?

No, the default tablespaces cannot simply be dropped, because each serves a distinct purpose. For some of them (UNDO, TEMP, the default permanent tablespace) a replacement tablespace can be created and made the new default, after which the old one can be dropped. SYSTEM and SYSAUX, however, are critical to database operation; they are the brain and heart of an Oracle database.
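For example, a replacement TEMP or UNDO tablespace can be created, made the active one, and the old tablespace dropped afterwards. A hedged sketch (file names and sizes are only illustrative):

--temp: create a new default temporary tablespace, then drop the old one
create temporary tablespace temp2 tempfile '/u01/app/oracle/oradata/temp02.dbf' size 500m;
alter database default temporary tablespace temp2;
drop tablespace temp including contents and datafiles;

--undo: switch the active undo tablespace, then drop the old one once no transaction uses it
create undo tablespace undotbs2 datafile '/u01/app/oracle/oradata/undotbs02.dbf' size 1g;
alter system set undo_tablespace=UNDOTBS2;
drop tablespace undotbs1 including contents and datafiles;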

kIsH@Xhydra<>drop tablespace users including contents and datafiles;
drop tablespace users including contents and datafiles
*
ERROR at line 1:
ORA-12919: Can not drop the default permanent tablespace


kIsH@Xhydra<>drop tablespace system including contents and datafiles;
drop tablespace system including contents and datafiles
*
ERROR at line 1:
ORA-01550: cannot drop system tablespace


kIsH@Xhydra<>drop tablespace temp including contents and datafiles;
drop tablespace temp including contents and datafiles
*
ERROR at line 1:
ORA-12906: cannot drop default temporary tablespace


kIsH@Xhydra<>drop tablespace sysaux including contents and datafiles;
drop tablespace sysaux including contents and datafiles
*
ERROR at line 1:
ORA-13501: Cannot drop SYSAUX tablespace

kIsH@Xhydra<>drop tablespace undotbs1 including contents and datafiles;
drop tablespace undotbs1 including contents and datafiles
*
ERROR at line 1:
ORA-30013: undo tablespace 'UNDOTBS1' is currently in use

CHANGE PARTITION STORAGE TO NEW TABLESPACE WITH MODIFY ATTRIBUTE ORACLE

Partition maintenance is often a challenging task, especially when highly critical tables carry large numbers of partitions.

Consider a case where the current tablespace cannot hold any more partitions of a table, and new partitions must be redirected to a different tablespace to continue.

Create a sample interval partitioned table with default tablespace USERS

kIsH@Xhydra<>create table partition
(p_id number(10),
 c_id number(10),
 t_id date,
 ch_id char(1))
 tablespace USERS partition by range( t_id)
 interval (numtoyminterval(1, 'MONTH'))
 (partition p0 values less than (to_date('1-1-2022', 'DD-MM-YYYY')) tablespace USERS);

Table created.

Populate some data

kIsH@Xhydra<>begin
for i in 1 .. 10 loop
insert into partition values (i,i,to_date(trunc(dbms_random.value(to_char(date '1979-01-01','J'),to_char(date '9999-12-31','J'))),'J'),'a');
end loop;
commit;
end;
/

PL/SQL procedure successfully completed.

kIsH@Xhydra<>col TABLE_NAME for a20
col PARTITIONING_TYPE for a20
col PARTITION_COUNT for 999999999
col DEF_TABLESPACE_NAME for a20
select TABLE_NAME,
           PARTITIONING_TYPE,
           PARTITION_COUNT,
           DEF_TABLESPACE_NAME
  from dba_part_tables
  where TABLE_NAME='PARTITION';

TABLE_NAME           PARTITIONING_TYPE    PARTITION_COUNT DEF_TABLESPACE_NAME
-------------------- -------------------- --------------- --------------------
PARTITION            RANGE                        1048575 USERS

Assume the USERS tablespace can no longer hold any additional partitions. Check where the partitions are currently stored; here they all reside in the USERS tablespace.

kIsH@Xhydra<>select TABLESPACE_NAME,count(*) from dba_tab_partitions where TABLE_NAME='PARTITION' group by TABLESPACE_NAME;

TABLESPACE_NAME        COUNT(*)
-------------------- ----------
USERS                       147

To change the default tablespace attribute of the partitioned table to the new tablespace CORRUPT, use the DDL below

kIsH@Xhydra<>alter table partition modify default attributes tablespace CORRUPT;

Table altered.

Populate new data

kIsH@Xhydra<>begin
for i in 1 .. 2 loop
insert into partition values (i,i,to_date(trunc(dbms_random.value(to_char(date '1979-01-01','J'),to_char(date '9999-12-31','J'))),'J'),'a');
end loop;
commit;
end;
/

PL/SQL procedure successfully completed.

Now observe that the new data went into the new tablespace CORRUPT. Partitions generated from this point onward are stored in the CORRUPT tablespace instead of USERS.

kIsH@Xhydra<>col TABLE_NAME for a20
col PARTITIONING_TYPE for a20
col PARTITION_COUNT for 999999999
col DEF_TABLESPACE_NAME for a20
select TABLE_NAME,
           PARTITIONING_TYPE,
           PARTITION_COUNT,
           DEF_TABLESPACE_NAME
  from dba_part_tables
  where TABLE_NAME='PARTITION';

TABLE_NAME           PARTITIONING_TYPE    PARTITION_COUNT DEF_TABLESPACE_NAME
-------------------- -------------------- --------------- --------------------
PARTITION            RANGE                        1048575 CORRUPT

kIsH@Xhydra<>select TABLESPACE_NAME,count(*) from dba_tab_partitions where TABLE_NAME='PARTITION' group by TABLESPACE_NAME;

TABLESPACE_NAME                  COUNT(*)
------------------------------ ----------
CORRUPT                                 6
USERS                                 147
ORA-01658: unable to create INITIAL extent for segment in tablespace USERS

Error:

kIsH@Xhydra<>begin
  2  for i in 1 .. 1000 loop
  3  insert into partition values (i,i,to_date(trunc(dbms_random.value(to_char(date '1979-01-01','J'),to_char(date '9999-12-31','J'))),'J'),'a');
  4  end loop;
  5  commit;
  6  end;
  7  /
begin
*
ERROR at line 1:
ORA-01658: unable to create INITIAL extent for segment in tablespace USERS
ORA-06512: at line 3

kIsH@Xhydra<>!oerr ora 1658
01658, 00000, "unable to create INITIAL extent for segment in tablespace %s"
// *Cause:  Failed to find sufficient contiguous space to allocate INITIAL
//          extent for segment being created.
// *Action: Use ALTER TABLESPACE ADD DATAFILE to add additional space to the
//          tablespace or retry with a smaller value for INITIAL
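A hedged sketch of that action (the datafile paths are only illustrative):

--add space to the tablespace
alter tablespace users add datafile '/u01/app/oracle/oradata/users02.dbf' size 1g autoextend on next 100m maxsize unlimited;
--or enlarge an existing datafile
alter database datafile '/u01/app/oracle/oradata/users01.dbf' resize 5g;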
RESULT CACHE: RC LATCH

The result cache is a pool in the shared pool that caches the results of frequently accessed queries whose underlying data does not change. Its main benefit is to improve the performance of the cached query by returning the result set directly from memory, avoiding physical I/O.

Uses:

OLAP systems

Why:

  • Queries that scan large data sets but return only a few rows
  • The buffer cache scan is skipped for queries served from the result cache
  • Lower CPU consumption, because consistent and current reads are not needed

Not suitable for:

OLTP systems with a high volume of committed DML transactions. The moment the underlying data is modified, the result cache invalidates the stored result set, removing it from candidacy.

Solution:

This wait occurs mainly when the result cache memory is full.

Query results are held in the result cache for a short time, and only queries with highly selective filters that return few rows are good candidates. Queries that are not suitable for caching should therefore be excluded.

The first course of action is to check the result cache memory report for a space crunch

kish@x3z6zx9<^>set serveroutput on
kish@x3z6zx9<^>EXEC DBMS_RESULT_CACHE.MEMORY_REPORT;
R e s u l t   C a c h e   M e m o r y   R e p o r t
[Parameters]
Block Size          = 1K bytes
Maximum Cache Size  = 11552K bytes (11552 blocks)  <<== Available cache size(11 MB)
Maximum Result Size = 577K bytes (577 blocks)
[Memory]
Total Memory = 758336 bytes [0.161% of the Shared Pool]   <<== Used space (~740 KB)
... Fixed Memory = 5352 bytes [0.001% of the Shared Pool]
... Dynamic Memory = 752984 bytes [0.160% of the Shared Pool]
....... Overhead = 130392 bytes
....... Cache Memory = 608K bytes (608 blocks)
........... Unused Memory = 606 blocks
........... Used Memory = 2 blocks
............... Dependencies = 1 blocks (1 count)
............... Results = 1 blocks
................... Invalid = 1 blocks (1 count)

PL/SQL procedure successfully completed.

Check in dynamic view

kish@x3z6zx9<^>col ID for 999999
kish@x3z6zx9<^>col NAME for a20
kish@x3z6zx9<^>col VALUE for 999999999
kish@x3z6zx9<^>select * from v$result_cache_statistics where ID in (2,3);

     ID NAME                 VALUE
------- -------------------- ---------------------------------------------------------------------------------
      2 Block Count Maximum  11552 <<== 11 MB
      3 Block Count Current  608 <<== 608 KB
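If the report and the view confirm the cache is undersized, the pool can be enlarged (the 64M value below is only an illustration):

alter system set result_cache_max_size=64M scope=both;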

In the example below, all 5000000 rows are retrieved from a large table without any filter

total rows in table = 5000000 ; row processed = 5000000

kIsH@Xhydra<>set timing on
kIsH@Xhydra<>set autot traceonly
kIsH@Xhydra<>select * from kish.xtbl;

5000000 rows selected.

Elapsed: 00:00:16.07 <<<<<================

Execution Plan
----------------------------------------------------------
Plan hash value: 1420452506

--------------------------------------------------------------------------
| Id  | Operation         | Name | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |      |  5000K|   252M| 11230   (1)| 00:00:01 |
|   1 |  TABLE ACCESS FULL| XTBL |  5000K|   252M| 11230   (1)| 00:00:01 |
--------------------------------------------------------------------------

Using the result cache in this situation takes longer than running without the cache, because this query is not a suitable candidate for result caching. Such queries, and queries like them, should be removed from the cache to reduce result cache contention.

kIsH@Xhydra<>alter system flush buffer_cache;

System altered.

Elapsed: 00:00:00.03
kIsH@Xhydra<>alter system flush shared_pool;

System altered.

Elapsed: 00:00:00.08
kIsH@Xhydra<>select /*+ result_cache */ * from kish.xtbl;

5000000 rows selected.

Elapsed: 00:00:16.37 <<<<<================

Execution Plan
----------------------------------------------------------
Plan hash value: 1420452506

-------------------------------------------------------------------------------------------------
| Id  | Operation          | Name                       | Rows  | Bytes | Cost (%CPU)| Time     |
-------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT   |                            |  5000K|   252M| 11230   (1)| 00:00:01 |
|   1 |  RESULT CACHE      | grv29sc2j3kuc53cxvzdpknbqd |  5000K|   252M| 11230   (1)| 00:00:01 |
|   2 |   TABLE ACCESS FULL| XTBL                       |  5000K|   252M| 11230   (1)| 00:00:01 |
-------------------------------------------------------------------------------------------------

Result Cache Information (identified by operation id):
------------------------------------------------------

   1 - column-count=8; dependencies=(KISH.XTBL); name="select /*+ result_cache *
/ * from kish.xtbl"

By default, the result cache operates in MANUAL mode

kIsH@Xhydra<>show parameter result_cache_mo

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
result_cache_mode                    string      MANUAL

The parameter can be set to FORCE, which makes queries use the result cache regardless of whether it benefits them, or to AUTO, which lets the database decide automatically when to cache query results.

kIsH@Xhydra<>alter system set result_cache_mode=force;

System altered.

kIsH@Xhydra<>set timing on
kIsH@Xhydra<>set autot traceonly
kIsH@Xhydra<>select * from kish.xtbl;

5000000 rows selected.

Elapsed: 00:00:16.14

Execution Plan
----------------------------------------------------------
Plan hash value: 1420452506

-------------------------------------------------------------------------------------------------
| Id  | Operation          | Name                       | Rows  | Bytes | Cost (%CPU)| Time     |
-------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT   |                            |  5000K|   252M| 11230   (1)| 00:00:01 |
|   1 |  RESULT CACHE      | grv29sc2j3kuc53cxvzdpknbqd |  5000K|   252M| 11230   (1)| 00:00:01 |
|   2 |   TABLE ACCESS FULL| XTBL                       |  5000K|   252M| 11230   (1)| 00:00:01 |
-------------------------------------------------------------------------------------------------
Result Cache Information (identified by operation id):
------------------------------------------------------
   1 - column-count=8; dependencies=(KISH.XTBL); name="select * from kish.xtbl"

kIsH@Xhydra<>alter system set result_cache_mode=auto;

System altered.

Elapsed: 00:00:00.02
kIsH@Xhydra<>select * from kish.xtbl;


5000000 rows selected.

Elapsed: 00:00:15.67

Execution Plan
----------------------------------------------------------
Plan hash value: 1420452506

--------------------------------------------------------------------------
| Id  | Operation         | Name | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |      |  5000K|   252M| 11230   (1)| 00:00:01 |
|   1 |  TABLE ACCESS FULL| XTBL |  5000K|   252M| 11230   (1)| 00:00:01 |
--------------------------------------------------------------------------

The status of the cached queries can be seen with the query below. The result cache can also be bypassed if a particular query causes contention.

kIsH@Xhydra<>col name for a20
col namespace for a20
col status for a20
col rcount for 999999
select NAME,
       NAMESPACE,
           STATUS,
           count(*) rcount
from v$result_cache_objects
group by NAME,NAMESPACE,STATUS
order by rcount desc;
NAME                 NAMESPACE            STATUS                RCOUNT
-------------------- -------------------- -------------------- -------
select /*+ result_ca SQL                  Bypass                     1
che */ * from kish.x
tbl

How can a candidate that is not efficient for the result cache be removed?

set lines 200 pages 1000
col NAMESPACE for a20
col NAME for a20
col round(avg(SCAN_COUNT)) for a20
col round(max(SCAN_COUNT)) for a20
select ID,
       NAME,
       NAMESPACE,
	   count(*),
	   round(avg(SCAN_COUNT)),
	   round(max(SCAN_COUNT)),
	   round(sum(BLOCK_COUNT)) from v$result_cache_objects where type='Result' 
	   group by ID,NAME,NAMESPACE 
	   order by round(sum(BLOCK_COUNT)) desc;

To remove a specific object and its dependent results from the result cache, use the INVALIDATE option

DBMS_RESULT_CACHE.INVALIDATE (
   owner        IN  VARCHAR2, 
   name         IN  VARCHAR2) 
 RETURN NUMBER;
DBMS_RESULT_CACHE.INVALIDATE (
   object_id    IN BINARY_INTEGER);
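A hedged usage example with the owner and table used in this post (assuming the procedure overload of INVALIDATE):

exec dbms_result_cache.invalidate('KISH','XTBL');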

Cached results can also be blacklisted individually using BLACK_LIST_ADD, which prevents queries from reusing that cache entry

dbms_result_cache.black_list_add(
cache_id IN VARCHAR2,
global   IN BOOLEAN DEFAULT FALSE);

exec dbms_result_cache.black_list_add('5tkmw2hy1n5z48rfgn162kqjrv', TRUE);

If high latch free waits are observed on the Result Cache: RC Latch and a high number of invalidations are caused by adaptive dynamic sampling queries, then use the workaround below

alter system set "_optimizer_ads_use_result_cache" = FALSE;

In that case, the query below is also typically observed with high CPU usage

SELECT /* DS_SVC */ /*+ dynamic_sampling(0) no_sql_tune no_monitoring optimizer_features_enable(default) no_parallel

To flush all the objects from result cache, use the below procedure

kIsH@zxd00<^>EXEC DBMS_RESULT_CACHE.FLUSH();

PL/SQL procedure successfully completed.

To permanently blacklist a set of cache id’s from result cache, use the following parameter.

alter system set "_result_cache_black_list"='5tkmw2hy1n5z48rfgn162kqjrv','gymbn0n3vka176untmq57gwfzs';

The result cache can also be bypassed entirely, or disabled permanently

--Remove the cache result sets and further prevent queries from using it
EXEC DBMS_RESULT_CACHE.BYPASS(TRUE);
or
--disable result cache
alter system set "result_cache_max_size"=0

Below are the types of result cache latches in 21c

kIsH@zxd00<^>col NAME for a20
kIsH@zxd00<^>col DISPLAY_NAME for a20
kIsH@zxd00<^>select * from v$latchname where name like 'Result Cache%';

    LATCH# NAME                 DISPLAY_NAME               HASH TYPE     CON_ID
---------- -------------------- -------------------- ---------- ---- ----------
       846 Result Cache: RC Lat Result Cache: RC Lat 1054203712 SGA           0
           ch                   ch

       847 Result Cache: SO Lat Result Cache: SO Lat  986859868 SGA           0
           ch                   ch

       848 Result Cache: MB Lat Result Cache: MB Lat  995186388 SGA           0
           ch                   ch

       849 Result Cache: Heap Q Result Cache: Heap Q 2355791010 SGA           0
           ueue Latch           ueue Latch

       850 Result Cache; Flush  Result Cache; Flush   230283579 PDB           0
           View Latch           View Latch

       851 Result Cache: Set    Result Cache: Set     218554630 PDB           0

6 rows selected.
ENQ: JI – CONTENTION ORACLE

Definition:

Lock held during materialized view operations (like refresh, alter) to prevent concurrent
operations on the same materialized view

This enqueue is used to serialize refreshes of a materialized view. During a refresh, the lock is taken on the MVIEW base table, so all other sessions that need it are queued behind the holder.

ENQ : JI – contention

This enqueue is intended to ensure that two or more snapshot processes do not attempt to refresh the same item when a snapshot process intends to perform an aggregate join view (AJV) refresh.

AWR and ASH analysis:

kish@x3z6zx9<^>!oerr ora 10704
10704, 00000, "Print out information about what enqueues are being obtained"
// *Cause:  When enabled, prints out arguments to calls to ksqcmi and
//          ksqlrl and the return values.
// *Action: Level indicates details:
//   Level: 1-4: print out basic info for ksqlrl, ksqcmi
//          5-9: also print out stuff in callbacks:  ksqlac, ksqlop
//          10+: also print out time for each line

Enable enqueue trace to identify the JI activity
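A minimal sketch of switching the enqueue trace on and off at session level (the trace level is illustrative):

alter session set events '10704 trace name context forever, level 10';
--run the materialized view refresh, then disable the trace
alter session set events '10704 trace name context off';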

After tracing the refresh, JI enqueue is observed in lock mode 6 with the object id

*** 2022-06-05 11:58:44.942
ksqgtl *** JI-00015a30-00000000 mode=6 flags=0x10001 timeout=5 *** <<== Hold lock
ksqgtl: no transaction
ksqgtl: use existing ksusetxn DID
ksqgtl:
        ksqlkdid: 0001-001C-00000015 

*** 2022-06-05 11:58:44.985
ksqrcl: JI,15a30,0           <<== Release the lock
ksqrcl: returns 0

By converting the hexadecimal value to decimal, the object id behind the enqueue can be identified

ENQ type: JI

OBJECT ID IN HEXADECIMAL : 00015a30-00000000

Lock mode=6 (exclusive)
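The conversion can be done directly in SQL; 15a30 in hexadecimal is 88624 in decimal:

select to_number('15A30','XXXXX') object_id from dual;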

kish@x3z6zx9<^>select object_name from dba_objects where object_id=88624;

OBJECT_NAME
--------------------------------------------------------------------------------------------------------------------------------
MVSAL

These waits can be observed in v$session_wait

old   1: select &vswc
new   1: select case when state = 'ON CPU' THEN 'BURN_CPU' when state = 'WAITING' THEN 'LONG_WAIT' when state = 'WAITED SHORT TIME' THEN 'SHORT_WAIT' else 'KNOWN_WAIT' end as state,event,count(*) as counter
old   2:   from &vswv
new   2:   from v$session_wait
old   3:   &vswh
new   3:   where wait_class <> 'Idle'
old   4:   &vswg
new   4:   group by state,event
old   5:   &vswo
new   5:   order by counter desc

STATE      EVENT                                                                                                       COUNTER
---------- ---------------------------------------------------------------- --------------------------------------------------
SHORT_WAIT SQL*Net message to client                                                                                         1
LONG_WAIT  enq: JI - contention                       <<==                                                       1
SHORT_WAIT cell single block physical read                                                                                   1

Session 151 is blocked by session 14 on enq: JI - contention. Either session 14 has to complete its job, or it must be killed to release the JI lock on the MVIEW base table.

set lines 200 pages 1000
col TIME for a18
col SID for 9999
col "WHO_BLOCK_WHO" for a20
col W_CHAINS for a60
col EVENT for a25
col WHO for a7
col PROGRAM for a7
col username for a7
def acol="ash.session_id as sid,to_char(ash.sample_time,'dd-mm-yy hh24:mi:ss') as TIME,regexp_substr(PROGRAM,'[^@]+') as WHO,regexp_substr(ash.PROGRAM,'\((.*?)\)',1,1,null,1) as PROGRAM"
def ucol="du.username"
def aview="v$active_session_history ash"
def uview="dba_users du"
def aujoin="( ash.user_id = du.user_id )"
def awhere="ash.session_state like '%WAIT%'"
--def astime="to_char(ash.sample_time,'dd-mm-yy hh24:mi:ss') between 14-05-22 20:30:50 and 14-05-22 20:30:59"
def aord="order by to_char(ash.sample_time,'dd-mm-yy hh24:mi:ss')"
select &acol,
       &ucol,
       case when session_state = 'ON CPU' 
	   then 'BURN_CPU' 
	        when session_state = 'WAITING' 
			then 'LONG_WAIT' 
			     when session_state = 'WAITED SHORT TIME' 
				 then 'SHORT_WAIT' 
				 else 'KNOWN_WAIT' 
				 end as state,
       case when ash.BLOCKING_SESSION is NULL 
       then 'NULL'
	        when ash.BLOCKING_SESSION is NOT NULL 
		    then '|'||lag(ash.EVENT) over (order by sample_id)||' [|]<~~~ '||ash.EVENT||'|' 	
       end W_CHAINS,				
       '|'||nvl(to_char(ash.BLOCKING_SESSION),'NULL')||' blocks===> '||ash.SESSION_ID||'|' as "WHO_BLOCK_WHO"
from &aview
full outer join &uview on &aujoin
where &awhere;
   14 05-06-22 14:06:39  sqlplus TNS V1- KISH    LONG_WAIT  NULL                                                         |NULL blocks===> 14|
                                 V3

  151 05-06-22 14:06:40  sqlplus TNS V1- SYS     LONG_WAIT  |cell single block physical read [|]<~~~ enq: JI - contentio |14 blocks===> 151|
                                 V3                         n|

  151 05-06-22 14:06:41  sqlplus TNS V1- SYS     LONG_WAIT  |enq: JI - contention [|]<~~~ enq: JI - contention|          |14 blocks===> 151|
                                 V3

Next identify the objects using ASH

1246298118:name|mode 88990:view object # <<== the p1 column denotes the lock name|mode and the p2 column the object id

set lines 200 pages 1000
col TIME for a18
col SID for 9999
col "WHO_BLOCK_WHO" for a20
col W_CHAINS for a60
col EVENT for a25
col WHO for a7
col PROGRAM for a7
col username for a7
col p1obj for a12
col p2obj for a12
col p3obj for a10
def acol="ash.session_id as sid,to_char(ash.sample_time,'dd-mm-yy hh24:mi:ss') as TIME,''||p1||':'||p1text||'' as p1obj,''||p2||':'||p2text||'' as p2obj,''||p3||':'||p3text||'' as p3obj"
def aview="v$active_session_history ash"
def uview="dba_users du"
def aujoin="( ash.user_id = du.user_id )"
def awhere="ash.session_state like '%WAIT%'"
--def astime="to_char(ash.sample_time,'dd-mm-yy hh24:mi:ss') between 14-05-22 20:30:50 and 14-05-22 20:30:59"
def aord="order by to_char(ash.sample_time,'dd-mm-yy hh24:mi:ss')"
select &acol,
       case when ash.BLOCKING_SESSION is NULL 
       then 'NULL'
	        when ash.BLOCKING_SESSION is NOT NULL 
		    then '|'||lag(ash.EVENT) over (order by sample_id)||' [|]<~~~ '||ash.EVENT||'|' 	
       end W_CHAINS,				
       '|'||nvl(to_char(ash.BLOCKING_SESSION),'NULL')||' blocks===> '||ash.SESSION_ID||'|' as "WHO_BLOCK_WHO"
from &aview
full outer join &uview on &aujoin
where &awhere;
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
   14 05-06-22 14:06:39  1858131412:c 3234425683:d 8192:bytes NULL                                                         |NULL blocks===> 14|
                         ellhash#     iskhash#

  151 05-06-22 14:06:40  1246298118:n 88990:view o 0:0        |cell single block physical read [|]<~~~ enq: JI - contentio |14 blocks===> 151|
                         ame|mode     bject #                 n|

  151 05-06-22 14:06:41  1246298118:n 88990:view o 0:0        |enq: JI - contention [|]<~~~ enq: JI - contention|          |14 blocks===> 151|
                         ame|mode     bject #

The type of lock can be identified by below query

kish@x3z6zx9<^>col type for a5
col lmode for 99999
col description for a30
select l.sid,
       lt.type,
       l.lmode,
           lt.description
       from v$lock_type lt
       inner join v$lock l on lt.type = l.type
where l.lmode=(select bitand(1246298118,65535) "Mode" from dual);

  SID TYPE   LMODE DESCRIPTION
----- ----- ------ ------------------------------
   14 JI         6 Lock held during materialized
                   view operations (like refresh,
                    alter) to prevent concurrent
                   operations on the same materia
                   lized view

    6 RT         6 Thread locks held by LGWR, DBW
                   0, and RVWR to indicate mounte
                   d or open status

kish@x3z6zx9<^>col WAITING_SESSION for 99999999
kish@x3z6zx9<^>col HOLDING_SESSION for 99999999
kish@x3z6zx9<^>col LOCK_TYPE for a10
kish@x3z6zx9<^>col MODE_HELD for a10
kish@x3z6zx9<^>col MODE_REQUESTED for a10
kish@x3z6zx9<^>col LOCK_ID1 for 9999999
kish@x3z6zx9<^>select * from dba_waiters;

WAITING_SESSION HOLDING_SESSION LOCK_TYPE  MODE_HELD  MODE_REQUE LOCK_ID1                                           LOCK_ID2
--------------- --------------- ---------- ---------- ---------- -------- --------------------------------------------------
            151              14 JI         Exclusive  Exclusive     88990                                                  0

To identify the exact stage at which the session is blocked, use the oradebug hang analysis

Important points are highlighted with an arrow. Closely eyeball the hang analyzer chain

Even though the first session is simply doing its work, the situation can be misdiagnosed as a performance problem in the second job, which merely waits on JI contention.
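A minimal sketch of generating such a hang analysis dump from a SYSDBA SQL*Plus session (the level 3 is illustrative):

oradebug setmypid
oradebug unlimit
oradebug hanganalyze 3
oradebug tracefile_name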

-------------------------------------------------------------------------------
Chain 1:
-------------------------------------------------------------------------------
    Oracle session identified by:
    {
                instance: 1 (x3z6zx9.x3z6zx9)
                   os id: 16305
              process id: 43, oracle@x3dbzx36 (TNS V1-V3)
              session id: 151
        session serial #: 25
    }
    is waiting for 'enq: JI - contention' with wait info:  <========= 2nd session is blocked
    {
                      p1: 'name|mode'=0x4a490006
                      p2: 'view object #'=0x15b9e
                      p3: '0'=0x0
            time in wait: 2.386072 sec
      heur. time in wait: 1 min 32 sec
           timeout after: 2.613928 sec
                 wait id: 249
                blocking: 0 sessions
             current sql: BEGIN DBMS_MVIEW.REFRESH('KISH.MVSAL','F',PARALLELISM=>6,atomic_refresh=>true); END;
             short stack: ksedsts()+465<-ksdxfstk()+32<-ksdxcb()+1927<-sspuser()+112<-__sighandler()<-semtimedop()+10<-skgpwwait()+160<-ksliwat()+2022<-kslwaitctx()+163<-ksqcmi()+2848<-ksqgtlctx()+3501<-ksqgelctx()+557<-kkzfgenq()+162<-kkzfgenq_ord()+157<-kkzigenq()+449<-spefcmpa()+196<-spefmccallstd()+235<-pextproc()+41<-peftrusted()+150<-psdexsp()+255<-rpiswu2()+1776<-psdextp()+700<-pefccal()+726<-pefcal()+224<-pevm_FCAL()+169<-pfrinstr_FCAL()+75<-pfrrun_no_tool()+63<-pfrrun()+627<-plsql_run()+649<-peicnt()+302<-kkxexe()+525<-opiexe()+18
            wait history:
              * time between current wait and wait #1: 0.000100 sec
              1.       event: 'enq: JI - contention' <===========
                 time waited: 5.011015 sec
                     wait id: 248             p1: 'name|mode'=0x4a490006
                                              p2: 'view object #'=0x15b9e
                                              p3: '0'=0x0
              * time between wait #1 and #2: 0.000049 sec
              2.       event: 'enq: JI - contention'
                 time waited: 5.005329 sec
                     wait id: 247             p1: 'name|mode'=0x4a490006
                                              p2: 'view object #'=0x15b9e
                                              p3: '0'=0x0
              * time between wait #2 and #3: 0.000247 sec
              3.       event: 'enq: JI - contention'
                 time waited: 5.006765 sec
                     wait id: 246             p1: 'name|mode'=0x4a490006
                                              p2: 'view object #'=0x15b9e
                                              p3: '0'=0x0
    }
    and is blocked by
 => Oracle session identified by:
    {
                instance: 1 (x3z6zx9.x3z6zx9)
                   os id: 16269
              process id: 28, oracle@x3dbzx36 (TNS V1-V3)
              session id: 14
        session serial #: 287
    }
    which is not in a wait:
    {
               last wait: 0.000341 sec ago
                blocking: 1 session
             current sql:  delete from "KISH"."MVSAL"  <==============
             short stack: ksedsts()+465<-ksdxfstk()+32<-ksdxcb()+1927<-sspuser()+112<-__sighandler()<-times()+7<-kcbchg1_main()+4187<-kcbchg1()+205<-ktuchg2()+1417<-ktbchg2()+281<-kdimod0()+4144<-kaudelLoadIndexList()+1529<-kaudel()+153<-delrow()+1512<-qerdlFetch()+1774<-qerstFetch()+520<-delexe()+964<-opiexe()+10315<-opiall0()+1276<-opikpr()+524<-opiodr()+917<-rpidrus()+211<-skgmstack()+148<-rpiswu2()+690<-kprball()+1232<-kkzfrc()+1669<-kkzfrsh_execute()+7719<-kkzfrsh()+9022<-kkzifr3g()+1247<-spefcmpa()+874<-spefmccallstd()+235<-pextproc(

Solution:

Kill the first session to release the lock or wait for the session to complete
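A hedged example using the sid and serial# of the blocking session from the hang analysis above (14, 287); always verify the session before killing it:

alter system kill session '14,287' immediate;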

Check whether the MVIEW logs are purged, and compare the size of the log with the base table
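A quick sketch to review the materialized view log segment sizes (the default MLOG$ naming convention is assumed):

select owner, segment_name, round(bytes/1024/1024,2) size_mb
from dba_segments
where segment_name like 'MLOG$%'
order by bytes desc;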

Summary:

  • Multiple jobs cannot access the same MVIEW base table during a refresh
  • Hangs can occur, mainly during fast refresh, if other jobs are accessing the MVIEW base table
ARE HARD PARSED SQL GOOD FOR AN ORACLE DATABASE?

There are two types of parsing in an Oracle database

  • Soft parse
  • Hard parse

A hard parse means performing the full set of SQL checks for an executed SQL statement every time it enters the shared pool (library cache) and is treated as a new entrant.

A typical example will be,

  • Admission or onboarding of new employee in an organization
  • Admission or onboarding of new student in an university
  • Admission of new patient in a hospital

If a new employee joins an organization, the organization has to perform all the prechecks: background verification, a new employee account, a salary account, technical or non-technical training, induction, policies and so on.

There are a lot of procedures to complete for a new employee, which is an overhead in terms of cost and time. For an existing employee who has stayed a long time, none of this additional processing is needed. Hard parsing can be depicted as follows:

  • Similarly, consider database as an organization.
  • Shared pool as the employee onboard processing department or team in organization.
  • SQL processing as the employee background checks processing.
  • Newly entering SQL as a new employee.
  • All of these processing steps for a new employee correspond to a hard parse. If the processing steps have already been completed, as for a long-standing employee, it is a soft parse
HARD PARSE ILLUSION

Hard parsing stages include

  • Syntax check
  • Semantic check
  • Security check
  • Shared pool check

If a particular sql_id shows exactly one execution, the SQL statement was hard parsed

kish@x3z6zx9<^>select count(*) from v$sqlarea where executions=1;

                                          COUNT(*)
--------------------------------------------------
                                               515

The unique sql_ids with a single execution can be identified as follows

kish@x3z6zx9<^>select sql_id from v$sqlarea where executions=1 order by executions desc;

SQL_ID
-------------
dxg261d9ss0ck
55qsxrz72s0gd
1h7zt6jks80pa
9ngq09yhhc1qf
gvtpnhc5ww1rn
cfk4qsrg681sz

The depth of hard parsing can also be gauged from the number of SQL statements that share the same FORCE_MATCHING_SIGNATURE. If the count per signature is negligible, it can be ignored; but if hundreds, thousands or millions of statements share the same signature, consider the resource usage and its consequences. If one is prepared to keep adding memory, CPU and extra cash, this post is not for them. If one prefers to act preventively and conservatively, hard parsing needs to be eliminated to flatten the resource usage curve.

kish@x3z6zx9<^>col force_matching_signature for 999999999999999999999999999999999
col cntr for 9999999999
select * from
(
select force_matching_signature,
       count(*) cntr
from v$sqlarea
group by force_matching_signature
order by cntr desc
)
where rownum < 6;
          FORCE_MATCHING_SIGNATURE        CNTR
---------------------------------- -----------
              16425079020282941956         259
                                 0         164
               8758528229247075989           8
               6043408675031356560           2
              17192947727905198177           2

Each of these SQL statements has a different sql_id but the same force_matching_signature

kish@x3z6zx9<^>select sql_id,count(*) from v$sqlarea where force_matching_signature=16425079020282941956 group by sql_id;

SQL_ID                                                  COUNT(*)
------------- --------------------------------------------------
gvmftumnkcbd2                                                  1
g7uq5sdk7ccpr                                                  1
a93zsyanhsu8h                                                  1
3cufct3q88w7n                                                  1
53nqzmfyunywq                                                  1
dy6c511aqx1fk                                                  1

These details can also be checked in v$sysstat and v$sys_time_model

kish@x3z6zx9<^>select STAT_NAME,VALUE from v$sys_time_model where STAT_NAME like '%parse%';

STAT_NAME                                                                VALUE
---------------------------------------------------------------- -------------
parse time elapsed                                                    12410728
hard parse elapsed time                                                7309616
failed parse elapsed time                                                 2718
failed parse (out of shared memory) elapsed time                             0
hard parse (sharing criteria) elapsed time                               65909
hard parse (bind mismatch) elapsed time                                   2138

6 rows selected.

To mitigate heavy hard parsing caused by literals in SQL predicates, a typical workaround is to set the cursor_sharing parameter to FORCE and to increase the session_cached_cursors parameter so that frequently parsed (hot) cursors remain cached in the session.

kish@x3z6zx9<^>show parameter cursor

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
cursor_bind_capture_destination      string      memory+disk
cursor_sharing                       string      EXACT
cursor_space_for_time                boolean     FALSE
open_cursors                         integer     300
session_cached_cursors               integer     50

Diagnosis:

  • The AWR report shows high user calls and hard parses
  • The execute-to-parse ratio is negative, which implies the application is connecting and disconnecting frequently
  • %SQL with executions>1 is too low, which shows the database is handling SQL parsing inefficiently
  • There will be many shared pool waits whenever heavy hard parsing is required
  • Connection management call elapsed time will be high if the application disconnects frequently from the database

Below is the workaround for heavy hard parse

kish@x3z6zx9<^>alter system set cursor_sharing=force;

System altered.

kish@x3z6zx9<^>alter system set session_cached_cursors=300 scope=spfile;

System altered.

kish@x3z6zx9<^>startup force
ORACLE instance started.

Total System Global Area 4710043648 bytes
Fixed Size                  2261128 bytes
Variable Size            3321892728 bytes
Database Buffers         1375731712 bytes
Redo Buffers               10158080 bytes
Database mounted.
Database opened.
kish@x3z6zx9<^>show parameter cursor

NAME_COL_PLUS_SHOW_PARAM
------------------------------------------------------------------------------
TYPE
--------------------------------
VALUE_COL_PLUS_SHOW_PARAM
------------------------------------------------------------------------------
cursor_sharing
string
FORCE
session_cached_cursors
integer
300

In the results, bind variables have been substituted in place of literals because cursor sharing was forced. In this way the database applies bind variables automatically, as an alternative to changing the application to generate them.

kish@x3z6zx9<^>col VERSION_COUNT for 9999999
kish@x3z6zx9<^>col EXECUTIONS for 9999999
kish@x3z6zx9<^>col LOADS for 9999999
kish@x3z6zx9<^>col INVALIDATIONS for 999999999
kish@x3z6zx9<^>col PARSE_CALLS for 99999999
kish@x3z6zx9<^>col SQL_TEXT for a30
kish@x3z6zx9<^>select SQL_ID,
  2                   SQL_TEXT,
  3                   VERSION_COUNT,
  4                   EXECUTIONS,
  5                   LOADS,
  6                   INVALIDATIONS,
  7                   PARSE_CALLS
  8  from v$sqlarea
  9  where sql_text like '%kish.xtbl%';

SQL_ID        SQL_TEXT                       VERSION_COUNT EXECUTIONS    LOADS INVALIDATIONS PARSE_CALLS
------------- ------------------------------ ------------- ---------- -------- ------------- -----------
1qr1x0kzamay7 select country from kish.xtbl              1          2        1             0           2
              where :"SYS_B_0" > :"SYS_B_1"

gnjxgk41sztbk select * from kish.xtbl where              1        105        1             0         105
              xid between :"SYS_B_0" and :"S
              YS_B_1" * :"SYS_B_2"

cdfyxf3rk7ypn select * from kish.xtbl where              1          2        1             0           2
              xid between :"SYS_B_0" and :"S
              YS_B_1" +:"SYS_B_2"

Summary:

  • Hard parsing can be tolerated up to a point, but excessive hard parsing is bad for database health and performance.
  • The more efficient solution is to fix the application mechanics: pool the connections and use bind variables in place of literals, rather than adjusting the database (see the sketch below).
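A hedged illustration of a bind variable used from SQL*Plus in place of a literal, against the table used earlier in this post:

variable v_xid number
exec :v_xid := 100
select * from kish.xtbl where xid = :v_xid;
--repeated executions with different :v_xid values share a single cursor and are soft parsed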