SHELL SCRIPT TO CAPTURE THE SQL INFORMATION FROM CURSOR CACHE AND POPULATE INTO STS

If the database needs to be performance proof, certain steps must be taken proactively to prevent future performance issues or to analyze flaws from historic information. This shell script polls the cursor cache every 30 seconds, captures the SQL, populates the STS (SQL Tuning Set), packs the STS information into a staging table and exports the staging table to a dump file.

Warning: This script may impose a heavy load on the database. It is recommended to execute the script during quiet hours of the database.

#!/bin/bash

# Set the environment variables for multiple databases.
_set_env(){

# Build a clean copy of oratab, skipping comments and blank lines
cat /etc/oratab|grep -v '#'|grep -v '^$' > /home/oracle/oratab_new
while read x
   do
     IFS=':' read -r -a array <<< "$x"
                ORACLE_SID="${array[0]}"
                ORACLE_HOME="${array[1]}"
                # Export so that sqlplus picks up the environment
                export ORACLE_SID ORACLE_HOME
                echo "$ORACLE_SID"
                echo "$ORACLE_HOME"
                export PATH=$PATH:$ORACLE_HOME/bin
   done < /home/oracle/oratab_new


}


#Capture the high load sql to STS and pack them into staging table
_capture_sql_load(){
t=30
loop="true"
c=0
ela_s=$(date +%s)
while [ "$loop" == "true" ]
        do
        sleep $t

        $ORACLE_HOME/bin/sqlplus -S '/ as sysdba' <<EOF
        EXEC dbms_workload_repository.create_snapshot();
        conn c##hydra/hydra
        EXEC DBMS_SQLTUNE.CAPTURE_CURSOR_CACHE_SQLSET( -
                                                sqlset_name     => 'B4UPGLOAD', -
                                                time_limit      =>  30, -
                                        repeat_interval =>  5);
        conn / as sysdba
        BEGIN
            -- Creates the staging table; on iterations after the first this
            -- raises "table already exists", which can be ignored.
            DBMS_SQLTUNE.create_stgtab_sqlset(table_name => 'STS',
            schema_name => 'KISH',
            tablespace_name => 'HYDRA');
        END;
        /
        BEGIN
            DBMS_SQLTUNE.pack_stgtab_sqlset(sqlset_name => 'B4UPGLOAD',
            sqlset_owner => 'C##HYDRA',
            staging_table_name => 'STS',
            staging_schema_owner => 'KISH');
            END;
         /
exit;
EOF
ela_e=$(date +%s)
c=$((c+1))                      # arithmetic increment; c=$c+1 would build the string "0+1+1..."
ela_t=$(expr $ela_e - $ela_s)   # elapsed seconds since the loop started
echo "Iteration $c completed after ${ela_t}s"
if [[ $c -gt 30 ]]
    then
    loop="false"
elif [[ $c -eq 30 ]]
    then
    _exp_sqlset
    break
fi
done

}


#Export the STS to a dump file out of the database
_exp_sqlset(){
exp USERID=kish/password file=expsts.dmp log=stsbkp.log buffer=500 tables=kish.sts
}

_set_env
_capture_sql_load
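
On the target database, the dump file can be imported with the legacy imp utility and the SQL set unpacked from the staging table. A minimal sketch, assuming the same staging table (KISH.STS) and SQL set (B4UPGLOAD owned by C##HYDRA) used above:

imp USERID=kish/password file=expsts.dmp log=stsimp.log tables=kish.sts

BEGIN
    -- Unpack the staged SQL set back into the data dictionary
    DBMS_SQLTUNE.unpack_stgtab_sqlset(sqlset_name => 'B4UPGLOAD',
    sqlset_owner => 'C##HYDRA',
    replace => TRUE,
    staging_table_name => 'STS',
    staging_schema_owner => 'KISH');
END;
/
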
DOES ADDING NEW PARTITION TO A COMPRESSED PARTITIONED TABLE ENABLE COMPRESSION AUTOMATICALLY TO ITS LOCAL INDEX ?

The main advantage of Advanced Compression is that DML can be performed without decompressing the compressed data.

There is currently NO article or documented feature for automatically compressing a newly added local index partition when a partition is added to a compressed partitioned table. So, the DBA should manually rebuild the new index partition with compression. The newly added table partition, however, is compressed automatically.

There are three types of index for a partitioned table (each is sketched below):

  • Local prefixed (OLTP; fewer partitions; less downtime for maintenance)
  • Local non-prefixed (DSS; many partitions; more downtime for maintenance due to the large number of large partitions; faster full scans for warehouses due to parallelism)
  • Global prefixed (OLTP; fewer partitions; less downtime for maintenance)
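
A minimal sketch of each variant, assuming a hypothetical table T range partitioned on COL_D (all names here are illustrative):

-- Local prefixed: the partition key is the leading index column
CREATE INDEX t_lp ON t(col_d, col_x) LOCAL;

-- Local non-prefixed: the partition key is not the leading column
CREATE INDEX t_lnp ON t(col_x) LOCAL;

-- Global prefixed: partitioned independently of the table on its own
-- leading partition key; the last partition must be bounded by MAXVALUE
CREATE INDEX t_gp ON t(col_x) GLOBAL
PARTITION BY RANGE (col_x)
(PARTITION p1 VALUES LESS THAN (100000),
 PARTITION pmax VALUES LESS THAN (MAXVALUE));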

Let us test the behavior

Create a tablespace and a range partitioned table with advanced compression, and populate data into the table

kIsH@STAX<>create tablespace TBS1 datafile '/apps01/oradata/tbs1.dbf' size 5M autoextend on;

Tablespace created.

kIsH@STAX<>CREATE TABLE "KISH"."RP"
(    "R_ID" NUMBER(30,0),
        "R_NAME" VARCHAR2(30),
        "R_COST" NUMBER(30,0),
        "R_DATE" DATE
) PCTFREE 10 PCTUSED 40 INITRANS 1 MAXTRANS 255
STORAGE(
BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT)
TABLESPACE "TBS1"
PARTITION BY RANGE ("R_DATE")
(PARTITION "R_JAN2022"  VALUES LESS THAN (TO_DATE(' 2022-02-01 00:00:00', '
SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN'))
PCTFREE 10 PCTUSED 40 INITRANS 1 MAXTRANS 255
 ROW STORE COMPRESS ADVANCED LOGGING
STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1
BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT)
TABLESPACE "TBS1" ,
PARTITION "R_FEB2022"  VALUES LESS THAN (TO_DATE(' 2022-03-01 00:00:00' , '
SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN'))
PCTFREE 10 PCTUSED 40 INITRANS 1 MAXTRANS 255
ROW STORE COMPRESS ADVANCED LOGGING
STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1
BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT)
TABLESPACE "TBS1",
PARTITION "R_MAR2022"  VALUES LESS THAN (TO_DATE(' 2022-04-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN'))
PCTFREE 10 PCTUSED 40 INITRANS 1 MAXTRANS 255
ROW STORE COMPRESS ADVANCED LOGGING
STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1
BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT)
TABLESPACE "TBS1",
PARTITION "R_APR2022"  VALUES LESS THAN (TO_DATE(' 2022-05-01 00:00:00', '
SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN'))
PCTFREE 10 PCTUSED 40 INITRANS 1 MAXTRANS 255
LOGGING
STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1
BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT)
TABLESPACE "TBS1" ) ROW STORE COMPRESS ADVANCED; 16   17   18   19   20   21   22   23   24   25   26   27   28   29   30   31   32   33   34   35   36   37   38   39   40   41

Table created.

kIsH@STAX<>create sequence xid increment by 1 start with 1;

Sequence created.

kIsH@STAX<>INSERT INTO "KISH"."RP"
SELECT xid.nextval,
       dbms_random.string('a','10'),
           floor(dbms_random.value(1,1000000)),
           to_date(trunc(dbms_random.value(to_char(date '2022-01-01','J'),
                                           to_char(date '2022-04-30','J'))),'J')
from dual connect by level <= 1000;     2    3    4    5    6    7

1000 rows created.

kIsH@STAX<>commit;

Commit complete.

Check the segment size if needed for the compressed partitioned table

kIsH@STAX<>col SEGMENT_NAME for a20
kIsH@STAX<>SELECT segment_name,bytes/1048576 MB FROM dba_segments WHERE segment_name='RP';

SEGMENT_NAME                 MB
-------------------- ----------
RP                        .0625
RP                        .0625
RP                        .0625
RP                        .0625

Check the compression status of the partitions. Compression is enabled for all of them.

kIsH@STAX<>col TABLE_NAME for a20
kIsH@STAX<>SELECT compression,compress_for,table_name from dba_tab_partitions WHERE table_name = 'RP';

COMPRESS COMPRESS_FOR                   TABLE_NAME
-------- ------------------------------ --------------------
ENABLED  ADVANCED                       RP
ENABLED  ADVANCED                       RP
ENABLED  ADVANCED                       RP
ENABLED  ADVANCED                       RP


kIsH@STAX<>set lines 200 pages 1000
kIsH@STAX<>col PARTITION_NAME for a20
kIsH@STAX<>SELECT compression,compress_for,table_name,partition_name from dba_tab_partitions WHERE table_name = 'RP';

COMPRESS COMPRESS_FOR                   TABLE_NAME           PARTITION_NAME
-------- ------------------------------ -------------------- --------------------
ENABLED  ADVANCED                       RP                   R_APR2022
ENABLED  ADVANCED                       RP                   R_MAR2022
ENABLED  ADVANCED                       RP                   R_JAN2022
ENABLED  ADVANCED                       RP                   R_FEB2022

Create a local index with basic compression on the partitioned table. A global index is not equipartitioned with the table: it is either non-partitioned, and therefore centralized across all the table partitions, or partitioned independently of the table's partitioning scheme.

kIsH@STAX<>CREATE INDEX idx_rp on RP(r_id) LOCAL COMPRESS;

Index created.

All the inherited index partitions are compressed automatically with basic compression.

kIsH@STAX<>SELECT partition_name,index_name,compression FROM dba_ind_partitions WHERE index_name like '%IDX_RP%';

PARTITION_NAME       INDEX_NAME           COMPRESSION
-------------------- -------------------- -------------
R_APR2022            IDX_RP               ENABLED
R_FEB2022            IDX_RP               ENABLED
R_JAN2022            IDX_RP               ENABLED
R_MAR2022            IDX_RP               ENABLED

Add a new partition to the partitioned compressed table

kIsH@STAX<>ALTER TABLE rp ADD PARTITION "R_MAY2022" VALUES LESS THAN (TO_DATE(' 2022-06-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN')) TABLESPACE "TBS1";

Table altered.

Then check the index: the new partition R_MAY2022 was added with basic compression automatically

kIsH@STAX<>SELECT dip.partition_name,dip.index_name,dip.compression,di.partitioned
  FROM dba_ind_partitions dip
  INNER JOIN dba_indexes di on dip.index_name = di.index_name
  WHERE dip.index_name = 'IDX_RP'
  ORDER BY index_name;  2    3    4    5

PARTITION_NAME       INDEX_NAME           COMPRESSION   PAR
-------------------- -------------------- ------------- ---
R_APR2022            IDX_RP               ENABLED       YES
R_FEB2022            IDX_RP               ENABLED       YES
R_JAN2022            IDX_RP               ENABLED       YES
R_MAR2022            IDX_RP               ENABLED       YES
R_MAY2022            IDX_RP               ENABLED       YES

So far, basic compression happens automatically for the newly added index partition when a new partition is added to the table. Now, drop the index with basic compression and create an index with advanced compression. If a new partition is then added to the partitioned table, the respective ADVANCED LOW index partition is not compressed automatically; its compression shows DISABLED by default, but the table partition is compressed automatically as mentioned earlier.

kIsH@STAX<>DROP INDEX idx_rp;

Index dropped.
kIsH@STAX<>ALTER TABLE rp ROW STORE COMPRESS ADVANCED;

Table altered.

kIsH@STAX<>ALTER TABLE rp MODIFY DEFAULT ATTRIBUTES ROW STORE COMPRESS ADVANCED;

Table altered.

kIsH@STAX<>SELECT def_compression,def_compress_for FROM all_part_tables WHERE table_name='RP';

DEF_COMP DEF_COMPRESS_FOR
-------- ------------------------------
ENABLED  ADVANCED

kIsH@STAX<>CREATE INDEX idx_rp on RP(r_id) LOCAL
(PARTITION R_JAN2022 COMPRESS ADVANCED LOW,
 PARTITION R_FEB2022 COMPRESS ADVANCED LOW,
 PARTITION R_MAR2022 COMPRESS ADVANCED LOW,
 PARTITION R_APR2022 COMPRESS ADVANCED LOW,
 PARTITION R_MAY2022 COMPRESS ADVANCED LOW
);  2    3    4    5    6    7

Index created.

kIsH@STAX<>ALTER TABLE rp ADD PARTITION "R_JUN2022" VALUES LESS THAN (TO_DATE(' 2022-07-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN')) TABLESPACE "TBS1";

Table altered.

kIsH@STAX<>set lines 200 pages 1000
kIsH@STAX<>col TABLE_NAME for a20
kIsH@STAX<>col PARTITION_NAME for a20
kIsH@STAX<>SELECT compression,compress_for,table_name,partition_name from dba_tab_partitions WHERE table_name = 'RP';

COMPRESS COMPRESS_FOR                   TABLE_NAME           PARTITION_NAME
-------- ------------------------------ -------------------- --------------------
ENABLED  ADVANCED                       RP                   R_MAY2022
ENABLED  ADVANCED                       RP                   R_JUN2022
ENABLED  ADVANCED                       RP                   R_APR2022
ENABLED  ADVANCED                       RP                   R_FEB2022
ENABLED  ADVANCED                       RP                   R_JAN2022
ENABLED  ADVANCED                       RP                   R_MAR2022

6 rows selected.

kIsH@STAX<>SELECT dip.partition_name,dip.index_name,dip.compression,di.partitioned
  FROM dba_ind_partitions dip
  INNER JOIN dba_indexes di on dip.index_name = di.index_name
  WHERE dip.index_name = 'IDX_RP'
  ORDER BY index_name;   2    3    4    5

PARTITION_NAME       INDEX_NAME           COMPRESSION   PAR
-------------------- -------------------- ------------- ---
R_JUN2022            IDX_RP               DISABLED      YES
R_MAY2022            IDX_RP               ADVANCED LOW  YES
R_APR2022            IDX_RP               ADVANCED LOW  YES
R_JAN2022            IDX_RP               ADVANCED LOW  YES
R_FEB2022            IDX_RP               ADVANCED LOW  YES
R_MAR2022            IDX_RP               ADVANCED LOW  YES

6 rows selected.

kIsH@STAX<>ALTER INDEX idx_rp REBUILD PARTITION R_JUN2022 COMPRESS ADVANCED LOW;

Index altered.

kIsH@STAX<>col INDEX_NAME for a20
kIsH@STAX<>SELECT dip.partition_name,dip.index_name,dip.compression,di.partitioned
  FROM dba_ind_partitions dip
  INNER JOIN dba_indexes di on dip.index_name = di.index_name
  WHERE dip.index_name = 'IDX_RP'
  ORDER BY index_name;  2    3    4    5

PARTITION_NAME       INDEX_NAME           COMPRESSION   PAR
-------------------- -------------------- ------------- ---
R_MAY2022            IDX_RP               ADVANCED LOW  YES
R_JUN2022            IDX_RP               ADVANCED LOW  YES
R_APR2022            IDX_RP               ADVANCED LOW  YES
R_JAN2022            IDX_RP               ADVANCED LOW  YES
R_FEB2022            IDX_RP               ADVANCED LOW  YES
R_MAR2022            IDX_RP               ADVANCED LOW  YES

6 rows selected.

If the index is GLOBAL and non-partitioned, then no index partitions are created.

kIsH@STAX<>CREATE INDEX idx_rp on RP(r_id) GLOBAL COMPRESS;

Index created.

kIsH@STAX<>SELECT partition_name,index_name,compression FROM dba_ind_partitions WHERE index_name like '%IDX_RP%';

no rows selected

kIsH@STAX<>select index_name,partitioned from dba_indexes where index_name='IDX_RP';

INDEX_NAME           PAR
-------------------- ---
IDX_RP               NO

Summary:

  • Local indexes with basic compression are compressed automatically during new partition addition
  • Local indexes with advanced compression are NOT compressed automatically during new partition addition; the DBA needs to compress them manually with an index rebuild.

WHEN DO WE USE GV$SESSION_LONGOPS?

DBAs are often confused about using the longops view for long running operations. This view cannot be used to monitor all long running operations, but only large, resource intensive ones.

There are two main criteria for a record to appear:

  • The table scan exceeds 10,000 formatted blocks
  • The operation runs for more than 6 seconds

Scenarios during which Oracle considers operations as long running (a sketch for publishing custom entries follows this list):

  • RMAN backup and restore
  • Parallel query (large chunks)
  • Recovery (crash and media)
  • Large full table scans (not all full table scans)
  • Sorting
  • Stats job
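
Besides the operations Oracle instruments itself, applications can publish their own rows into GV$SESSION_LONGOPS through the documented DBMS_APPLICATION_INFO API. A minimal PL/SQL sketch (the operation name and loop bounds are illustrative):

DECLARE
    l_rindex BINARY_INTEGER := DBMS_APPLICATION_INFO.set_session_longops_nohint;
    l_slno   BINARY_INTEGER;
BEGIN
    FOR i IN 1 .. 10 LOOP
        -- Each call updates the same longops row, tracked via l_rindex
        DBMS_APPLICATION_INFO.set_session_longops(
            rindex      => l_rindex,
            slno        => l_slno,
            op_name     => 'Custom batch job',
            sofar       => i,
            totalwork   => 10,
            target_desc => 'batches',
            units       => 'batches');
        DBMS_SESSION.sleep(1); -- simulate work between updates
    END LOOP;
END;
/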

x$ksulop is the internal table which samples and populates the long running operation information in GV$SESSION_LONGOPS

SQL> select VIEW_DEFINITION from v$fixed_view_definition where VIEW_NAME='GV$SESSION_LONGOPS';

VIEW_DEFINITION
--------------------------------------------------------------------------------
select inst_id, ksulosno, ksulosrn, ksulopna,        ksulotna, ksulotde, ksulosf
r, ksulotot, ksulouni,        to_date(ksulostm,'MM/DD/RR HH24:MI:SS','NLS_CALEND
AR=Gregorian'),        to_date(ksulolut,'MM/DD/RR HH24:MI:SS','NLS_CALENDAR=Greg
orian'),        to_date(ksuloinft, 'MM/DD/RR HH24:MI:SS','NLS_CALENDAR=Gregorian
'),        decode(ksulopna, 'Advisor', ksuloif2,               decode(sign(ksulo
tot-ksulosfr),-1,to_number(NULL),                     decode(ksulosfr, 0, to_num
ber(NULL),                      round(ksuloetm*((ksulotot-ksulosfr)/ksulosfr))))
),       ksuloetm, ksuloctx, ksulomsg, ksulounm,        ksulosql, ksulosqh, ksul
osqi, ksulosqph, ksulosqesta,        decode(ksulosqeid, 0, to_number(NULL), ksul
osqeid),        decode(ksulosqplid, 0, to_number(NULL), ksulosqplid),        ksu
losqplop, ksulosqplnm, ksuloqid, con_id      from x$ksulop

Currently, there are 0 long running sessions in the database

kIsH@STAX<>select count(*) from v$session_longops;

  COUNT(*)
----------
         0

Note the TABLE ACCESS FULL step (Id 1) of the execution plan, which shows a full scan of a large table. How can one consider a table large? It depends on the available resources and the cost estimated by the optimizer, along with the conditions above (more than 10,000 formatted block scans and elapsed time > 6 seconds).

Example: if a table is 100TB in size and there are 1000 CPUs with 1000 cores, then the resource power is as aggressive as the table size. In this case, a long running session may be evaluated differently.

kIsH@STAX<>set autot traceonly explain
kIsH@STAX<>select * from dbobj;

Execution Plan
----------------------------------------------------------
Plan hash value: 1315669886

---------------------------------------------------------------------------
| Id  | Operation         | Name  | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |       |  6464K|   813M| 34311   (1)| 00:00:02 |
|   1 |  TABLE ACCESS FULL| DBOBJ |  6464K|   813M| 34311   (1)| 00:00:02 |
---------------------------------------------------------------------------

After 6 seconds, the output is printed in v$session_longops

kIsH@STAX<>set lines 200 pages 1000
col opname for a20
col target for a20
col message for a20
col sql_id for a20
SELECT sid,
           serial#,
           opname,
           target,
           sofar,
           100 * (sofar / totalwork) TotalWork,
           message,
           sql_id,
           elapsed_seconds,
           time_remaining
  from v$session_longops
  order by sid;kIsH@STAX<>kIsH@STAX<>kIsH@STAX<>kIsH@STAX<>kIsH@STAX<>  2    3    4    5    6    7    8    9   10   11   12

no rows selected

kIsH@STAX<>/

no rows selected

kIsH@STAX<>/

no rows selected

kIsH@STAX<>/

no rows selected

kIsH@STAX<>/

no rows selected

kIsH@STAX<>/

no rows selected

kIsH@STAX<>/

no rows selected

kIsH@STAX<>/

       SID    SERIAL# OPNAME               TARGET                    SOFAR  TOTALWORK MESSAGE              SQL_ID               ELAPSED_SECONDS TIME_REMAINING
---------- ---------- -------------------- -------------------- ---------- ---------- -------------------- -------------------- --------------- --------------
        34      40942 Table Scan           SYS.DBOBJ                    32   .0253614 Table Scan:  SYS.DBO afw2anq220ubg       7 <== ela > 6         27594
                                                                                      BJ: 32 out of 126176 <====== formatted blocks > 10000

                                                                                       Blocks done

Summary:

  • Operations like index scans may not be displayed in v$session_longops
  • Small full table scans that touch fewer than 10,000 blocks may also not be included

HOW TO GENERATE SQL TUNING ADVISOR REPORT IN ORACLE DATABASE

Choose the sql to be tuned

kIsH@Xhydra<>select sql_text from v$sql where sql_id='5qmm8vw5dvy8b';

SQL_TEXT
--------------------------------------------------------------------------------
select xdate from kish.xtbl where xid between 4001021 and 10000000

Create the tuning task using the procedure below

kIsH@Xhydra<>DECLARE
          v_tune_task_id varchar2(50);
          BEGIN
                  v_tune_task_id := DBMS_SQLTUNE.CREATE_TUNING_TASK (
                  sql_id => '5qmm8vw5dvy8b',
                  time_limit => 86400,
                  task_name => '5qmm8vw5dvy8b_tt',
                  description => 'Tuning advisory for 5qmm8vw5dvy8b');
                  DBMS_OUTPUT.PUT_LINE('v_tune_task_id: '|| v_tune_task_id);
          END;
/  2    3    4    5    6    7    8    9   10   11

PL/SQL procedure successfully completed.

Check the advisor parameters that can be used for configuration of the tuning task from dba_advisor_parameters

kIsH@Xhydra<>select PARAMETER_NAME,PARAMETER_VALUE from dba_advisor_parameters where TASK_NAME='5qmm8vw5dvy8b_tt';

PARAMETER_NAME       PARAMETER_VALUE
-------------------- --------------------
DAYS_TO_EXPIRE       30
END_SNAPSHOT         UNUSED
END_TIME             UNUSED
INSTANCE             UNUSED
JOURNALING           INFORMATION
MODE                 COMPREHENSIVE
START_SNAPSHOT       UNUSED
START_TIME           UNUSED
TARGET_OBJECTS       1
TIME_LIMIT           86400
DEFAULT_EXECUTION_TY TUNE SQL
PE

CON_DBID_MAPPING     UNUSED
ORA_EM_PARAM1        UNUSED
ORA_EM_PARAM2        UNUSED
ORA_EM_PARAM3        UNUSED
ORA_EM_PARAM4        UNUSED
ORA_EM_PARAM5        UNUSED
ORA_EM_PARAM6        UNUSED
ORA_EM_PARAM7        UNUSED
ORA_EM_PARAM8        UNUSED
ORA_EM_PARAM9        UNUSED
ORA_EM_PARAM10       UNUSED
EXECUTION_DAYS_TO_EX UNLIMITED
PIRE

REMOTE_SOURCE        UNUSED
USERNAME             UNUSED
BASIC_FILTER         UNUSED
PLAN_FILTER          UNUSED
RANK_MEASURE1        UNUSED
RANK_MEASURE2        UNUSED
RANK_MEASURE3        UNUSED
SQL_PERCENTAGE       1
SQL_LIMIT            -1
RESUME_FILTER        UNUSED
LOCAL_TIME_LIMIT     UNUSED
TEST_EXECUTE         AUTO
APPLY_CAPTURED_COMPI UNUSED
LENV

DATABASE_LINK_TO     UNUSED

37 rows selected.

Execute the tuning task which was created earlier

kIsH@Xhydra<>BEGIN
  2          DBMS_SQLTUNE.EXECUTE_TUNING_TASK( task_name => '5qmm8vw5dvy8b_tt');
  3          END;
  4          /

PL/SQL procedure successfully completed.

Typical views related to tuning tasks are described below, followed by a sample progress query

kIsH@Xhydra<>desc dba_advisor_tasks
 Name                                      Null?    Type
 ----------------------------------------- -------- ----------------------------
 OWNER                                              VARCHAR2(128)
 TASK_ID                                   NOT NULL NUMBER
 TASK_NAME                                          VARCHAR2(128)
 DESCRIPTION                                        VARCHAR2(256)
 ADVISOR_NAME                                       VARCHAR2(128)
 CREATED                                   NOT NULL DATE
 LAST_MODIFIED                             NOT NULL DATE
 PARENT_TASK_ID                                     NUMBER
 PARENT_RXEC_ID                                     NUMBER
 LAST_EXECUTION                                     VARCHAR2(128)
 EXECUTION_TYPE                                     VARCHAR2(128)
 EXECUTION_TYPE#                                    NUMBER
 EXECUTION_DESCRIPTION                              VARCHAR2(256)
 EXECUTION_START                                    DATE
 EXECUTION_END                                      DATE
 STATUS                                             VARCHAR2(11)
 STATUS_MESSAGE                                     VARCHAR2(4000)
 PCT_COMPLETION_TIME                                NUMBER
 PROGRESS_METRIC                                    NUMBER
 METRIC_UNITS                                       VARCHAR2(64)
 ACTIVITY_COUNTER                                   NUMBER
 RECOMMENDATION_COUNT                               NUMBER
 ERROR_MESSAGE                                      VARCHAR2(4000)
 SOURCE                                             VARCHAR2(128)
 HOW_CREATED                                        VARCHAR2(30)
 READ_ONLY                                          VARCHAR2(5)
 SYSTEM_TASK                                        VARCHAR2(5)
 ADVISOR_ID                                NOT NULL NUMBER
 STATUS#                                            NUMBER

kIsH@Xhydra<>desc v$advisor_progress
 Name                                      Null?    Type
 ----------------------------------------- -------- ----------------------------
 SID                                                NUMBER
 SERIAL#                                            NUMBER
 USERNAME                                           VARCHAR2(128)
 OPNAME                                             VARCHAR2(64)
 ADVISOR_NAME                                       VARCHAR2(64)
 TASK_ID                                            NUMBER
 TARGET_DESC                                        VARCHAR2(32)
 SOFAR                                              NUMBER
 TOTALWORK                                          NUMBER
 UNITS                                              VARCHAR2(32)
 BENEFIT_SOFAR                                      NUMBER
 BENEFIT_MAX                                        NUMBER
 FINDINGS                                           NUMBER
 RECOMMENDATIONS                                    NUMBER
 TIME_REMAINING                                     NUMBER
 START_TIME                                         DATE
 LAST_UPDATE_TIME                                   DATE
 ELAPSED_SECONDS                                    NUMBER
 ADVISOR_METRIC1                                    NUMBER
 METRIC1_DESC                                       VARCHAR2(64)
 EXECUTION_TYPE                                     VARCHAR2(64)
 CON_ID                                             NUMBER
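
While EXECUTE_TUNING_TASK is running, progress can be polled from v$advisor_progress. A minimal sketch using the columns described above (the ADVISOR_NAME filter value is an assumption; adjust as needed):

SELECT task_id, sofar, totalwork, findings, recommendations, time_remaining
FROM v$advisor_progress
WHERE advisor_name = 'SQL Tuning Advisor';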

Check the advisor log to confirm that the tuning task has completed

kIsH@Xhydra<>col STATUS_MESSAGE for a20
kIsH@Xhydra<>select OWNER,TASK_ID,TASK_NAME,STATUS,STATUS_MESSAGE from dba_advisor_log where TASK_NAME='5qmm8vw5dvy8b_tt';

OWNER                      TASK_ID TASK_NAME            STATUS               STATUS_MESSAGE
-------------------- ------------- -------------------- -------------------- --------------------
SYS                           1633 5qmm8vw5dvy8b_tt     COMPLETED

Generate the tuning advisor recommendations

kIsH@Xhydra<>set long 100000000
kIsH@Xhydra<>select DBMS_SQLTUNE.REPORT_TUNING_TASK('5qmm8vw5dvy8b_tt') from dual;

DBMS_SQLTUNE.REPORT_TUNING_TASK('5QMM8VW5DVY8B_TT')
----------------------------------------------------------------------------------------------------
GENERAL INFORMATION SECTION
-------------------------------------------------------------------------------
Tuning Task Name   : 5qmm8vw5dvy8b_tt
Tuning Task Owner  : SYS
Workload Type      : Single SQL Statement
Scope              : COMPREHENSIVE
Time Limit(seconds): 86400
Completion Status  : COMPLETED
Started at         : 10/26/2022 22:49:53
Completed at       : 10/26/2022 22:49:53

-------------------------------------------------------------------------------
Schema Name   : SYS
Container Name: CDB$ROOT
SQL ID        : 5qmm8vw5dvy8b
SQL Text      : select xdate from kish.xtbl where xid between 4001021 and
                10000000

-------------------------------------------------------------------------------
FINDINGS SECTION (1 finding)
-------------------------------------------------------------------------------

1- Index Finding (see explain plans section below)
--------------------------------------------------
  The execution plan of this statement can be improved by creating one or more
  indices.

  Recommendation (estimated benefit: 70.43%)
  ------------------------------------------
  - Consider running the Access Advisor to improve the physical schema design
    or creating the recommended index.
    create index KISH.IDX$$_06610001 on KISH.XTBL("XID","XDATE");

  Rationale
  ---------
    Creating the recommended indices significantly improves the execution plan
    of this statement. However, it might be preferable to run "Access Advisor"
    using a representative SQL workload as opposed to a single statement. This
    will allow to get comprehensive index recommendations which takes into
    account index maintenance overhead and additional space consumption.

-------------------------------------------------------------------------------
EXPLAIN PLANS SECTION
-------------------------------------------------------------------------------

1- Original
-----------
Plan hash value: 1420452506

--------------------------------------------------------------------------
| Id  | Operation         | Name | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |      |   425K|  5818K|  5485   (1)| 00:00:01 |
|*  1 |  TABLE ACCESS FULL| XTBL |   425K|  5818K|  5485   (1)| 00:00:01 |
--------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - filter("XID"<=10000000 AND "XID">=4001021)

2- Using New Indices
--------------------
Plan hash value: 1675568796

-----------------------------------------------------------------------------------
| Id  | Operation        | Name           | Rows  | Bytes | Cost (%CPU)| Time     |
-----------------------------------------------------------------------------------
|   0 | SELECT STATEMENT |                |   425K|  5818K|  1622   (1)| 00:00:01 |
|*  1 |  INDEX RANGE SCAN| IDX$$_06610001 |   425K|  5818K|  1622   (1)| 00:00:01 |
-----------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - access("XID">=4001021 AND "XID"<=10000000)

-------------------------------------------------------------------------------
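
If the recommendation is accepted, DBMS_SQLTUNE can also emit the recommended actions as an executable script. A minimal sketch using the same task name:

set long 100000000
select DBMS_SQLTUNE.SCRIPT_TUNING_TASK('5qmm8vw5dvy8b_tt') from dual;
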
VARIABLES IN PYTHON

Variables are buckets or holders which contain the objects assigned to them during declaration

Every programming language has the concept of a variable, which is essential for storing different types of objects in heap memory

In the example below,

Variable x is declared first and y is assigned from x. Python allocates an address (2367230667120) for x's object and reuses the same memory area for y in the heap. When x is reassigned to 6, a new area (2367230667216) is pointed to in memory.

x = 3
print(id(x))
print('*'*40)
y = x
x = 6

print(x,y,\
      id(x),\
      id(y))

O/P:
2367230667120
****************************************
6 3 2367230667216 2367230667120
Variable Declaration In Python
var = 'string'
var1 = 1
var2 = 1.0
var3 = True
var4 = None
var5 = [1,1.0,'list',True,None,[1,1.0,'list'],(),{}]
var6 = ('tuple',)
var7 = {'set'}
var8 = {'k':'v'}
tmpvar = var

print(var + '\n' \
      + str(var1) \
      + '\n' + str(var2) \
      + '\n' + str(var3) \
      + '\n' + str(var4)\
      + '\n' + str(var5)\
      + '\n' + str(var6)\
      + '\n' + str(var7)\
      + '\n' + str(var8))

print(type(var),\
      type(tmpvar),\
      type(var1), \
      type(var2), \
      type(var3), \
      type(var4), \
      type(var5), \
      type(var6), \
      type(var7), \
      type(var8))

print(id(var),\
      id(tmpvar),\
      id(var1), \
      id(var2), \
      id(var3), \
      id(var4), \
      id(var5), \
      id(var6), \
      id(var7), \
      id(var8))

O/P:

string
1
1.0
True
None
[1, 1.0, 'list', True, None, [1, 1.0, 'list'], (), {}]
('tuple',)
{'set'}
{'k': 'v'}

<class 'str'> <class 'str'> <class 'int'> <class 'float'> <class 'bool'> <class 'NoneType'> <class 'list'> <class 'tuple'> <class 'set'> <class 'dict'>

2036667053296 2036667053296 2036660922672 2036742039664 140732748421224 140732748471512 2036742276672 2036742155424 2036742978016 2036742258176

Variable declaration guidelines

  • An underscore symbol (_) or a letter can be used at the beginning of a variable name.
  • A variable name cannot begin with a number.
  • Variable names can only contain alphanumeric characters and underscores (A-Z, a-z, 0-9 and _).
  • Variable names are case-sensitive (var, Var and VAR are three different variables).
  • Reserved terms (keywords) cannot be used as variable names.

_var = 1
_var_ = 2
var1 = 3
1var = 4 # invalid declaration
@var = 5 # invalid declaration

There are two naming conventions, called snake case and camel case. These styles of names are used to represent variables.

CamelCase:

This term comes from a camel's hump, which looks like a slope. Similarly, the variable name slopes upward mid-word with an upper case letter.

Example: AbcDef, WxYz

Snake case:

These variables resemble a snake's sine-like curves, with an underscore in between words

Example: abc_efg, xyz_abc

CamelCase = "This is a camel case variable"

snake_case = "This is snake case variable"
DOES ONLINE INDEX REBUILD NEED DOWNTIME AND LOCK? WHAT IS THE DIFFERENCE BETWEEN OFFLINE AND ONLINE?

Does an index rebuild using the ONLINE clause work without any locks, or does it require a lock to process the rebuild? Many DBAs have the misconception that online index rebuilds execute without any blocking between DML. Online mode gives the database near-zero downtime: DML transactions can still acquire their locks on the table and proceed without interruption while the rebuild runs, but the DDL (online rebuild) itself cannot progress while data in the table is being modified, until that DML commits. So, if the index holds many records, calculate the maintenance duration accordingly. If the DBA is concerned about faster index maintenance rather than table downtime, then offline rebuild is efficient; if table downtime is the concern, then online rebuild is efficient.

Note: Table downtime means preventing the application from modifying rows in the table using DML (INSERT, UPDATE, DELETE)

The only freedom available in online mode is that the online rebuild operation waits for other DML transactions to complete their work, and then starts the rebuild when it gets a chance after the DML commits.

In offline mode, the error below is reported immediately if other DML holds the table lock.

ORA-00054: resource busy and acquire with NOWAIT specified or timeout expired

Both offline and online index rebuild require lock at some point but the difference is,

  • Online rebuild requires a “row share” table lock; offline rebuild requests a “share mode” lock with NOWAIT
  • Online rebuild is slower and offline rebuild is faster
  • Online rebuild may cause temporary performance issues in heavy-DML applications, while offline rebuild requires complete table downtime

The online method works through these steps:

  • Create a journal (snapshot) table in the form of “SYS_JOURNAL_N”
  • Create a unique IOT index for the “SYS_JOURNAL_N” table
  • Populate the newly cloned index with the defragmented records in sorted key order
  • Populate the original index with the defragmented records in sorted key order
  • Drop the “SYS_JOURNAL_N” table after the rebuild
  • Above all, if the PGA is not sufficient to sort the records, processing spills to the temporary tablespace.
[oracle@staxhost ~]$ grep -a "SYS_JOURNAL" staxdb_ora_30481_IRO_tkp.trc
--Table creation phase
create table "SYS"."SYS_JOURNAL_76683" (C0 NUMBER,  opcode char(1), partno
  number,  rid rowid, primary key( C0 , rid )) organization index TABLESPACE
  "SYSTEM"

  "SYS"."SYS_JOURNAL_76683"("C0","RID") INDEX ONLY TOPLEVEL TABLESPACE
         0          0          0    TABLE ACCESS FULL SYS_JOURNAL_76683 (cr=0 pr=0 pw=0 time=3 us starts=1)
--Index creation phase
CREATE UNIQUE INDEX "SYS"."SYS_IOT_TOP_76710" on
  "SYS"."SYS_JOURNAL_76683"("C0","RID") INDEX ONLY TOPLEVEL TABLESPACE
  "SYSTEM" STORAGE( BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE
  DEFAULT) NOPARALLEL
--Populate the clone index
insert into ind_online$ (obj#,type#,flags)
values
(:1,:2,:3)
--Populate the original index
INSERT INTO indrebuild$(obj#, dataobj#, ts#, file#, block#, pctfree$,
  initrans, maxtrans, compcols, flags)
VALUES
(:1,:2,:3,:4,:5,:6,:7,:8,:9,:10)
--Update the clone index

UPDATE ind_online$ SET flags=:2
WHERE
 obj#=:1
--Update the original index

UPDATE indrebuild$ set dataobj#=:2, ts#=:3, file#=:4, block#=:5, pctfree$=:6,
  initrans=:7, maxtrans=:8, compcols=:9, flags=:10
WHERE
 obj#=:1

--Drop table phase
drop table "SYS"."SYS_JOURNAL_76683" purge

First session does the following and does not commit the transaction.

kIsH@STAX<>select sid from v$mystat where rownum < 2;

       SID
----------
        94

kIsH@STAX<>create table dbobj as select * from dba_objects;

Table created.

kIsH@STAX<>insert into dbobj select * from dba_objects;

73462 rows created.

kIsH@STAX<>create index idxobj on dbobj(object_id);

Index created.

kIsH@STAX<>insert into dbobj select * from dba_objects;

73463 rows created.

The second session rebuilds the index online and waits for concurrent access to the table.

kIsH@STAX<>select sid from v$mystat where rownum < 2;

       SID
----------
         1
kIsH@STAX<>alter index idxobj rebuild online;

<======== waiting for lock on the table dbobj

The first session has acquired a TX lock. The user left the command line and went for a coffee.

Here session 2 waits on "enq: TX - row lock contention" due to the uncommitted INSERT statement

kIsH@STAX<>col sid for 99999
kIsH@STAX<>col event for a20
kIsH@STAX<>col blocking_session for 99999999
kIsH@STAX<>col sql_id for a20
kIsH@STAX<>select sid,event,sql_id,blocking_session from v$session where sid=1;

   SID EVENT                SQL_ID               BLOCKING_SESSION
------ -------------------- -------------------- ----------------
     1 enq: TX - row lock c ffnqahhcsy8g0                      94
       ontention


kIsH@STAX<>select sid,event,sql_id,blocking_session from v$session where sid=94;

   SID EVENT                SQL_ID               BLOCKING_SESSION
------ -------------------- -------------------- ----------------
    94 SQL*Net message to c 2nshzv6a0h4p2
       lient

Since the DML statement INSERT acquired a Row-X (subexclusive) lock on the table, concurrent sessions are prevented from locking the object in share or exclusive mode. Though the column BLOCKING_OTHERS shows "Not Blocking", DDL commands are still not allowed to proceed because they change the structure of the index.

kIsH@STAX<>set lines 200 pages 1000
kIsH@STAX<>col OWNER for a20
kIsH@STAX<>col NAME for a20
kIsH@STAX<>select SESSION_ID,OWNER,NAME,MODE_HELD,MODE_REQUESTED,BLOCKING_OTHERS from dba_dml_locks;

SESSION_ID OWNER                NAME                 MODE_HELD     MODE_REQUESTE BLOCKING_OTHERS
---------- -------------------- -------------------- ------------- ------------- ----------------------------------------
         1 SYS                  DBOBJ                Row-S (SS)    None          Not Blocking
         1 SYS                  SYS_JOURNAL_76683    Share         None          Not Blocking
        94 SYS                  DBOBJ                Row-X (SX)    None          Not Blocking

The MODE_HELD abbreviations in DBA_DML_LOCKS denote table lock modes: Row-S (SS) is subshare (row share) and Row-X (SX) is subexclusive (row exclusive), so the INSERT holds a row exclusive table lock protected by an enqueue. Note that these abbreviations are unrelated to the SS and SX enqueue types listed in V$LOCK_TYPE, which name entirely different resources:

kIsH@STAX<>select TYPE,NAME,DESCRIPTION from v$lock_type where type='SS';

TYPE                 NAME                           DESCRIPTION
-------------------- ------------------------------ --------------------------------------------------------------------------------
SS                   Sort Segment                   Ensures that sort segments created during parallel DML operations aren't prematu
                                                    rely cleaned up


kIsH@STAX<>select TYPE,NAME,DESCRIPTION from v$lock_type where type='SX';

TYPE                 NAME                           DESCRIPTION
-------------------- ------------------------------ --------------------------------------------------------------------------------
SX                   Spare Enqueue 1                Spare enqueue 1 for one-off backports

Use the oradebug tool to trace the hang progress.

kIsH@STAX<>oradebug setospid 3944
Oracle pid: 34, Unix process pid: 3944, image: oracle@staxhost (TNS V1-V3)
kIsH@STAX<>oradebug unlimit
Statement processed.
kIsH@STAX<>oradebug hanganalyze 3
Hang Analysis in /u01/app/oracle/diag/rdbms/staxdb/staxdb/trace/staxdb_ora_4290.trc
kIsH@STAX<>--wait for some time
kIsH@STAX<>oradebug hanganalyze 3
Hang Analysis in /u01/app/oracle/diag/rdbms/staxdb/staxdb/trace/staxdb_ora_4290.trc
kIsH@STAX<>--wait for some time
kIsH@STAX<>oradebug hanganalyze 3
Hang Analysis in /u01/app/oracle/diag/rdbms/staxdb/staxdb/trace/staxdb_ora_4290.trc
kIsH@STAX<>exit
Disconnected from Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
Version 19.16.0.0.0

Wait chains show that the index rebuild is waiting on "enq: TX - row lock contention", even though the rebuild was executed with the ONLINE clause. Until the INSERT statement is committed, the index rebuild waits to acquire the lock

===============================================================================

Chains most likely to have caused the hang:
 [a] Chain 1 Signature: 'CPU or Wait CPU'<='enq: TX - row lock contention'
     Chain 1 Signature Hash: 0xcc111f32

===============================================================================
Non-intersecting chains:

-------------------------------------------------------------------------------
Chain 1:
-------------------------------------------------------------------------------
    Oracle session identified by:
    {
                instance: 1 (staxdb.staxdb)
                   os id: 3944
              process id: 34, oracle@staxhost (TNS V1-V3)
              session id: 1
        session serial #: 39412
             module name: 1 (sqlplus@staxhost (TNS V1-V3))
                  pdb id: 1 (CDB$ROOT)
    }
    is waiting for 'enq: TX - row lock contention' with wait info:
    {
                      p1: 'name|mode'=0x54580004
                      p2: 'usn<<16 | slot'=0x20007
                      p3: 'sequence'=0x7e3
            time in wait: 227 min 17 sec
           timeout after: never
                 wait id: 4460
                blocking: 0 sessions
          current sql_id: 3880962692
             current sql: alter index idxobj rebuild online
             short stack: ksedsts()+426<-ksdxfstk()+58<-ksdxcb()+872<-sspuser()+223<-__sighandler()<-semtimedop()+14<-skgpwwait()+187<-ksliwat()+2192<-kslwaitctx()+200<-ksqcmi()+27366<-ksqgtlctx()+3424<-ksqgelctx()+838<-ktuGetTxForXid()+324<-ktaddlvisprocres()+828<-ktaddlvis()+1222<-kdicrws()+29979<-kdicdrv()+681<-opiexe()+25075<-opiosq0()+4494<-kpooprx()+287<-kpoal8()+824<-opiodr()+1202<-ttcpip()+1218<-opitsk()+1900<-opiino()+936<-opiodr()+1202<-opidrv()+1094<-sou2o()+165<-opimai_real()+422<-ssthrdmain()+417<-main()+256<-__libc_start_main
            wait history:
              * time between current wait and wait #1: 0.003004 sec
              1.       event: 'index (re)build lock or pin object'
                 time waited: 0.000006 sec
                     wait id: 4459             p1: 'namespace'=0x1
                                               p2: 'lock_mode'=0x3
                                               p3: 'pin_mode'=0x3
              * time between wait #1 and #2: 0.000215 sec
              2.       event: 'reliable message'
                 time waited: 0.000638 sec
                     wait id: 4458             p1: 'channel context'=0x82ab5518
                                               p2: 'channel handle'=0x82a9f038
                                               p3: 'broadcast message'=0x82c54638
              * time between wait #2 and #3: 0.001611 sec
              3.       event: 'index (re)build lock or pin object'
                 time waited: 0.000021 sec
                     wait id: 4457             p1: 'namespace'=0x4
                                               p2: 'lock_mode'=0x3
                                               p3: 'pin_mode'=0x3
    }
    and is blocked by
 => Oracle session identified by:
    {
                instance: 1 (staxdb.staxdb)
                   os id: 4290
              process id: 70, oracle@staxhost (TNS V1-V3)
              session id: 94
        session serial #: 13032
             module name: 1 (sqlplus@staxhost (TNS V1-V3))
                  pdb id: 1 (CDB$ROOT)
    }
    which is on CPU or Wait CPU:
    {
               last wait: 0.000197 sec ago
                blocking: 1 session
          current sql_id: 0
             current sql: <none>
             short stack: ksedsts()+426<-ksdxfstk()+58<-ksdxdocmdmultex()+5961<-ksdhng_request_msg_process_phase_diagn()+1534<-ksdhng_request_msg_process()+933<-ksdhng_graph_build_local()+361<-ksdhng()+961<-ksdxfhng()+676<-ksdxen_int()+3381<-opiodr()+1202<-ttcpip()+1218<-opitsk()+1900<-opiino()+936<-opiodr()+1202<-opidrv()+1094<-sou2o()+165<-opimai_real()+422<-ssthrdmain()+417<-main()+256<-__libc_start_main()+243<-0x5541F689495641D7
            wait history:
              1.       event: 'Disk file operations I/O'
                 time waited: 0.000031 sec
                     wait id: 1759             p1: 'FileOperation'=0x8
                                               p2: 'fileno'=0x0
                                               p3: 'filetype'=0x8
              * time between wait #1 and #2: 0.000036 sec
              2.       event: 'SQL*Net message from client'
                 time waited: 6 min 54 sec
                     wait id: 1758             p1: 'driver id'=0x62657100
                                               p2: '#bytes'=0x1
              * time between wait #2 and #3: 0.000006 sec
              3.       event: 'SQL*Net message to client'
                 time waited: 0.000005 sec
                     wait id: 1757             p1: 'driver id'=0x62657100
                                               p2: '#bytes'=0x1
    }

If the index is rebuilt offline, then the ORA-00054 error is thrown directly, because the table needs to be locked in SHARE MODE and the internally used NOWAIT clause gives up immediately.

kIsH@STAX<>alter index idxobj rebuild;
alter index idxobj rebuild
            *
ERROR at line 1:
ORA-00054: resource busy and acquire with NOWAIT specified or timeout expired

To summarize,

  • Both offline and online index rebuilds need to lock the table, no matter which method is used.

[oracle@staxhost ~]$ egrep 'LOCK TABLE' /u01/app/oracle/diag/rdbms/staxdb/staxdb/trace/staxdb_ora_30460_IR.trc
LOCK TABLE  FOR INDEX "IDXOBJ" IN SHARE MODE  NOWAIT
[oracle@staxhost ~]$ egrep 'LOCK TABLE' /u01/app/oracle/diag/rdbms/staxdb/staxdb/trace/staxdb_ora_30481_IRO.trc
LOCK TABLE  FOR INDEX "IDXOBJ" IN ROW SHARE MODE
[oracle@staxhost ~]$
  • Offline index rebuild uses an index fast full scan, while online index rebuild uses a full table scan of the base table

--Index Rebuild Offline
Rows (1st) Rows (avg) Rows (max)  Row Source Operation
---------- ---------- ----------  ---------------------------------------------------
         1          1          1  INDEX BUILD NON UNIQUE IDXOBJ (cr=293873 pr=651 pw=651 time=250883 us starts=1)(object id 0)
    293833     293833     293833   SORT CREATE INDEX (cr=293845 pr=651 pw=0 time=140768 us starts=1)
    293833     293833     293833    INDEX FAST FULL SCAN IDXOBJ (cr=293845 pr=651 pw=0 time=105728 us starts=1)(object id 76683)

--Index Rebuild Online
Rows (1st) Rows (avg) Rows (max)  Row Source Operation
---------- ---------- ----------  ---------------------------------------------------
         1          1          1  INDEX BUILD NON UNIQUE IDXOBJ (cr=5740 pr=0 pw=651 time=180561 us starts=1)(object id 0)
    293849     293849     293849   SORT CREATE INDEX (cr=5693 pr=0 pw=0 time=76729 us starts=1)
    293849     293849     293849    TABLE ACCESS FULL DBOBJ (cr=5693 pr=0 pw=0 time=14613 us starts=1 cost=1544 size=1101930 card=220386)

  • Online rebuild is slow
OVERALL TOTALS FOR ALL NON-RECURSIVE STATEMENTS

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        2      0.00       0.00          0          2          0           0
Execute      2      0.18       0.25        652     293867       1040           0
Fetch        0      0.00       0.00          0          0          0           0
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        4      0.18       0.25        652     293869       1040           0

Misses in library cache during parse: 1

Elapsed times include waiting on following events:
  Event waited on                             Times   Max. Wait  Total Waited
  ----------------------------------------   Waited  ----------  ------------
  Disk file operations I/O                        3        0.00          0.00
  SQL*Net message to client                       2        0.00          0.00
  SQL*Net message from client                     2        5.07          9.29
  PGA memory operation                          130        0.00          0.00
  index (re)build lock or pin object              4        0.00          0.00
  asynch descriptor resize                        1        0.00          0.00
  db file scattered read                         25        0.00          0.00
  direct path write                             166        0.00          0.06
  reliable message                                3        0.00          0.00
  enq: RO - fast object reuse                     1        0.00          0.00
  db file sequential read                         1        0.00          0.00
  enq: CR - block range reuse ckpt                1        0.00          0.00
  log file sync                                   1        0.00          0.00

Offline rebuild is fast

OVERALL TOTALS FOR ALL NON-RECURSIVE STATEMENTS

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        2      0.00       0.00          0          1          0           0
Execute      2      0.11       0.19          2       5739       1130           0
Fetch        0      0.00       0.00          0          0          0           0
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        4      0.11       0.19          2       5740       1130           0

Misses in library cache during parse: 1

Elapsed times include waiting on following events:
  Event waited on                             Times   Max. Wait  Total Waited
  ----------------------------------------   Waited  ----------  ------------
  Disk file operations I/O                        2        0.00          0.00
  SQL*Net message to client                       2        0.00          0.00
  SQL*Net message from client                     2        4.73          7.34
  PGA memory operation                          136        0.00          0.00
  index (re)build lock or pin object             10        0.00          0.00
  reliable message                                5        0.00          0.00
  direct path write                             166        0.00          0.08
  enq: RO - fast object reuse                     2        0.00          0.00
  db file sequential read                         2        0.00          0.00
  enq: CR - block range reuse ckpt                2        0.00          0.00
  log file sync                                   1        0.00          0.00
  • Do not schedule online rebuild activity during heavy DML hours of the database, since the rebuild operation may have to wait a long time for the DML to commit.
  • If good machine power (CPU and memory) is available, then go for parallel online index rebuild, and also make sure that other critical DML operations are not impacted by resource exhaustion. Set parallel_degree_limit to a lower value, say CPU / 3 if CPU_COUNT > 3, to limit the parallelism, or use a resource manager plan. By default, parallel_degree_limit is set to CPU capacity, which is not always good; a sketch follows the parameter listing below.
kIsH@STAX<>show parameter parallel_degree_limit

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
parallel_degree_limit                string      CPU
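
A minimal sketch of the parallel approach (a degree of 2 is illustrative; resetting to NOPARALLEL afterwards avoids leaving a parallel default on the index):

ALTER INDEX idxobj REBUILD ONLINE PARALLEL 2;
ALTER INDEX idxobj NOPARALLEL;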

Since offline rebuild requires downtime, the activity itself is easy and hassle-free; the difficult part is getting business approval for the downtime.

Binary file ‘file_name’ matches

This message occurs as a result of some binary bytes inside the file

[user@staxhost ~]$ grep "create" file.txt
Binary file file.txt matches

Note that you may want the -a option if your input file is text but there are random binary bytes in the middle, for example due to data corruption or because the “binary content” heuristic misfires. The reason this must be specified explicitly is to prevent mistakenly grepping a real binary file and writing raw binary material to the output.

[user@staxhost ~]$ grep -a "create" file.txt
create table 
HOW TO USE TYPECASTING IN PYTHON

Type casting is a method of converting a datatype into a different form to provide meaningful results as per the programmer's needs.

Basically, Python (the PVM) either converts datatypes implicitly, or a human must explicitly convert the datatypes

Example: person X understands the English language, and X has a computer which understands only the binary language of 0 and 1. For the computer to respond to X's voice, there must be some conversion of letters to binary numbers. Typecasting also works similarly to this.

There is an inbuilt function in Python called type() which prints the datatype of an object.

A real world example is converting seconds from a decimal to an integer

time_in_sec = 1.6
print(type(time_in_sec))
print(int(time_in_sec),"second")
O/P:
<class 'float'>
1 second

Any datatype can be converted using the appropriate Python function and arguments.

There are different ways to convert one datatype to another in Python using this feature called typecasting. Using Python's built-in functions, this can be achieved in a simple way.

To convert a string to an integer, use the int() function. Here a number inside a string is converted to the exact integer by casting.

In this example,

A variable 'string' is declared and assigned the string object '1', which Python automatically recognizes as a string; this is called implicit conversion.

On another line, the int() function is applied to the variable, which converts it to an integer explicitly; this is called explicit conversion

#Typecasting of string to integer 

string = '1'

print(type(int(string))) #Explicit conversion
print(type(string)) #implicit conversion
O/P:
<class 'int'>
<class 'str'>

An integer can also be converted to a string using the str() function. Even though 1 is an integer, it is transformed to the string '1'

#Typecasting of integer to string

integer = 1

print(str(integer))
print(type(str(integer)))
print(type(integer))
O/P:
1
<class 'str'>
<class 'int'>

Since objects with quotes are considered strings, the string below is converted to a string again

#Typecasting of integer to string

integer = '1'

print(type(str(integer)))
print(type(integer))
O/P:
<class 'str'>
<class 'str'>

A string with letters cannot be converted to an integer, which is expected. There should be numbers inside the quotes for the conversion to succeed.

string = 'string'

print(int(string))

O/P:
---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
~\AppData\Local\Temp/ipykernel_7236/3795652633.py in <module>
      1 string = 'string'
      2 
----> 3 print(int(string))

ValueError: invalid literal for int() with base 10: 'string'

Similar to integers, decimals can also be converted to integer or string

#Typecasting of decimal to string

decimal = 1.0

print(type(str(decimal)))
print(type(decimal))
O/P:
<class 'str'>
<class 'float'>
#Typecasting of a decimal string to integer

decimal = '1.0'

print(type(int(float(decimal)))) # int('1.0') raises ValueError, so cast via float first
print(type(decimal))
O/P:
<class 'int'>
<class 'str'>

#Typecasting of decimal to integer

decimal = 1.0
print(decimal)
print(type(int(decimal)))
print(type(decimal))
O/P:
1.0
<class 'int'>
<class 'float'>
#Typecasting of a boolean value to String or Integer
boolean = True

print(str(boolean))
print(type(boolean))
print(int(boolean))
print(type(boolean))
O/P:
True
<class 'bool'>
1
<class 'bool'>

#Convert string to boolean
str_bool = 'True'
print(str_bool)
print(type(bool(str_bool)))
print(type(str_bool))
O/P:
True
<class 'bool'>
<class 'str'>

Integers can also be converted to binary, octal and hexadecimal values using bin(), oct() and hex() functions

#Convert integer to Binary, Octal and hexadecimal
integer = 1
print(bin(integer))
print(oct(integer))
print(hex(integer))
O/P:
0b1
0o1
0x1

A typical example which shows that a string and an integer cannot be concatenated. This is where typecasting plays a major role.

name = 'kishan'
age = 28
print(name + " you are "+ age + " years old")
O/P:
---------------------------------------------------------------------------
TypeError                                 Traceback (most recent call last)
~\AppData\Local\Temp/ipykernel_24324/783362701.py in <module>
      1 name = 'kishan'
      2 age = 28
----> 3 print(name + "you are"+ age + "years old")

TypeError: can only concatenate str (not "int") to str

After converting ‘age’ variable to string using str() function, the output is printed without errors

name = 'kishan'
age = 28
print(name + " you are "+ str(age) + " years old")
O/P:
kishan you are 28 years old
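
An alternative that avoids manual casting altogether is an f-string, which formats values of any type:

name = 'kishan'
age = 28
print(f"{name} you are {age} years old")
O/P:
kishan you are 28 years old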

A simple function which converts the datatype of its argument based on the target type provided. Note that hex(), bin() and oct() accept integers, not strings, so prefixed string literals are parsed with int() and an explicit base.

#Function to convert the datatype of an object based on the requested target
def typecast(var, dt):
    #Check the current datatype
    dtype = type(var)
    #Convert string (or keep integer) to integer
    if (dtype == str or dtype == int) and dt == 'integer':
        try:
            return int(var)
        except ValueError:
            print("Provide an integer inside quotes")
    #Convert integer (or keep string) to string
    elif (dtype == int or dtype == str) and dt == 'string':
        return str(var)
    #Convert string to decimal
    elif dtype == str and dt == 'decimal':
        return float(var)
    #Convert decimal to integer
    elif dtype == float and dt == 'integer':
        return int(var)
    #Convert integer to decimal
    elif dtype == int and dt == 'decimal':
        return float(var)
    #Parse prefixed string literals with int() and an explicit base
    elif dtype == str and var.lower().startswith('0x'):
        return int(var, 16)
    elif dtype == str and var.lower().startswith('0b'):
        return int(var, 2)
    elif dtype == str and var.lower().startswith('0o'):
        return int(var, 8)
    else:
        return "Invalid input"

print(typecast('0xA180', None))
O/P:
41344

Lists, tuples and sets can also be converted to their counterparts using the list(), tuple() and set() functions.

l = [1,2,3,4]

print(tuple(l))
print(set(l))

print(type(tuple(l)))
print(type(set(l)))

O/P:
(1, 2, 3, 4)
{1, 2, 3, 4}
<class 'tuple'>
<class 'set'>

t = (1,2,3,4)
print(list(t))
print(set(t))
print(type(list(t)))
print(type(set(t)))

O/P:
[1, 2, 3, 4]
{1, 2, 3, 4}
<class 'list'>
<class 'set'>

s = {1,2,3,4}
print(list(s))
print(tuple(s))
print(type(list(s)))
print(type(tuple(s)))

O/P:
[1, 2, 3, 4]
(1, 2, 3, 4)
<class 'list'>
<class 'tuple'>
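
One practical use of these conversions is removing duplicates from a list by passing it through set() and back (note that set ordering is not guaranteed):

l = [1, 2, 2, 3, 3, 4]
print(list(set(l)))
O/P:
[1, 2, 3, 4]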

Summary:

  • Typecasting is always useful to convert one datatype to another
  • In real-world scenarios, there are a lot of conversions from number to text, text to number, etc., which depend on casting of datatypes
  • Typecasting is useful in string concatenation, where a number needs to be concatenated with a string

HOW TO VIEW THE BYTE CODE IN PYTHON

A translator is a program which translates source code into object code.

A compiler is a translator which converts a high-level language into byte code ahead of execution; the byte code is then converted into a machine-understandable format.

An interpreter is a translator which converts the source program line by line and executes it at the same time, without the need for explicit byte code generation.

In Python, the byte code is generated internally and executed by the PVM (Python Virtual Machine) in memory, so it is normally not visible to the programmer.
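
A minimal sketch using the built-in compile() function shows that Python materialises byte code as a code object in memory:

#Compile a statement into a code object, then execute it
code_obj = compile("print('Hello')", '<string>', 'exec')
print(type(code_obj))
exec(code_obj)
O/P:
<class 'code'>
Hello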

There are different Python implementations with different compilation strategies:

  • CPython (the reference implementation, written in C)
  • PyPy (uses a JIT compiler)

A JIT compiler is generally faster than plain byte code interpretation for long-running code, because hot code paths are compiled to machine code at run time.

The example below on Linux shows the procedure to peep into the byte code files generated by the python3 compiler.

When a script is executed directly, python3 compiles and runs it, displaying the results on the screen rather than leaving byte code files behind.

Create a test code called test.py

[hydra@hydrupgrd hydrapy]$ cat test.py
#!/usr/bin/python3


print("Hello")

Execute the python file using python3 to display the output

[hydra@hydrupgrd hydrapy]$ python3 test.py
Hello

There is a module called py_compile which writes the byte code into a .pyc file; the file is not readable in a text editor. The man page also provides some info about the -m option used to run the module.

[hydra@hydrupgrd hydrapy]$ man python3
       -m module-name
              Searches sys.path for the named module and runs the corresponding .py file as a script.

[hydra@hydrupgrd hydrapy]$ locate py_compile.py
/usr/lib64/python3.6/py_compile.py

Generate the byte code with the following command; a new directory called __pycache__ will be created.

[hydra@hydrupgrd hydrapy]$ python3 -m py_compile test.py
[hydra@hydrupgrd hydrapy]$ ls -lrt|tail -1
drwxr-xr-x 2 hydra hydra   33 Nov 24 17:57 __pycache__
[hydra@hydrupgrd hydrapy]$ cd __pycache__/
[hydra@hydrupgrd __pycache__]$ ls
test.cpython-36.pyc

Check the type of the file, which shows that it is byte-compiled.

[hydra@hydrupgrd __pycache__]$ file test.cpython-36.pyc
test.cpython-36.pyc: python 3.6 byte-compiled <<<

To display the printable strings inside the file, use the strings command.

[hydra@hydrupgrd __pycache__]$ strings test.cpython-36.pyc
HelloN)
print
test.py
<module>
[hydra@hydrupgrd __pycache__]$ cd ..

To disassemble the byte code and display the instructions executed by the interpreter, use the dis module.

[hydra@hydrupgrd hydrapy]$ python3 -m dis test.py
  4           0 LOAD_NAME                0 (print)
              2 LOAD_CONST               0 ('Hello')
              4 CALL_FUNCTION            1
              6 POP_TOP
              8 LOAD_CONST               1 (None)
             10 RETURN_VALUE
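
The dis module can also be used from within Python itself to disassemble a specific function; a minimal sketch:

#Disassemble a function to inspect its byte code instructions
import dis

def greet():
    print("Hello")

dis.dis(greet)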
INSTALLATION OF CASSANDRA IN UBUNTU

Cassandra is a NoSQL database management system that is free and open-source. It is made to handle massive volumes of data over multiple commodity servers while also offering high availability and eliminating single points of failure.

Considering its capability to handle large volumes of data at low cost, the market is slowly moving towards Cassandra.

Install the apt-transport-https package to access the Cassandra repository over a secure protocol.

root@exhydra:/home/exhydra# apt install apt-transport-https
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following NEW packages will be installed:
  apt-transport-https
0 upgraded, 1 newly installed, 0 to remove and 254 not upgraded.
Need to get 4,348 B of archives.
After this operation, 154 kB of additional disk space will be used.
Get:1 http://in.archive.ubuntu.com/ubuntu bionic-updates/universe amd64 apt-transport-https all 1.6.14 [4,348 B]
Fetched 4,348 B in 1s (3,316 B/s)
Selecting previously unselected package apt-transport-https.
(Reading database ... 148579 files and directories currently installed.)
Preparing to unpack .../apt-transport-https_1.6.14_all.deb ...
Unpacking apt-transport-https (1.6.14) ...
Setting up apt-transport-https (1.6.14) ...

Install curl, then use it to fetch the Cassandra repository keys and add them to apt.

root@exhydra:/home/exhydra# snap install curl
curl 7.86.0 from Wouter van Bommel (woutervb) installed
root@exhydra-VirtualBox:/home/exhydra# curl https://downloads.apache.org/cassandra/KEYS | sudo apt-key add -
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100  267k  100  267k    0     0   205k      0  0:00:01  0:00:01 --:--:--  205k

Add the Cassandra repository (the 40x series, as seen in the fetch output below) to a file under /etc/apt/sources.list.d, then update the apt index in Ubuntu.

root@exhydra-VirtualBox:/home/exhydra# apt update

Install Cassandra with apt; dependent packages such as the Java runtime are installed automatically.

root@exhydra:/home/exhydra# apt install cassandra
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following additional packages will be installed:
  ca-certificates-java java-common libopts25 ntp openjdk-8-jre-headless sntp
Suggested packages:
  cassandra-tools default-jre ntp-doc fonts-dejavu-extra fonts-ipafont-gothic fonts-ipafont-mincho fonts-wqy-microhei fonts-wqy-zenhei
The following NEW packages will be installed:
  ca-certificates-java cassandra java-common libopts25 ntp openjdk-8-jre-headless sntp
0 upgraded, 7 newly installed, 0 to remove and 254 not upgraded.
Need to get 76.6 MB of archives.
After this operation, 165 MB of additional disk space will be used.
Do you want to continue? [Y/n] y
Get:1 http://in.archive.ubuntu.com/ubuntu bionic/universe amd64 libopts25 amd64 1:5.18.12-4 [58.2 kB]
Get:2 http://in.archive.ubuntu.com/ubuntu bionic-updates/universe amd64 ntp amd64 1:4.2.8p10+dfsg-5ubuntu7.3 [640 kB]
Get:3 http://in.archive.ubuntu.com/ubuntu bionic-updates/main amd64 java-common all 0.68ubuntu1~18.04.1 [14.5 kB]
Get:4 http://in.archive.ubuntu.com/ubuntu bionic-updates/universe amd64 openjdk-8-jre-headless amd64 8u352-ga-1~18.04 [28.3 MB]
Get:6 http://in.archive.ubuntu.com/ubuntu bionic-updates/main amd64 ca-certificates-java all 20180516ubuntu1~18.04.1 [12.2 kB]
Get:7 http://in.archive.ubuntu.com/ubuntu bionic-updates/universe amd64 sntp amd64 1:4.2.8p10+dfsg-5ubuntu7.3 [86.5 kB]
Get:5 https://downloads.apache.org/cassandra/debian 40x/main amd64 cassandra all 4.0.5 [47.5 MB]
Fetched 76.6 MB in 12s (6,527 kB/s)

Start the Cassandra service after installation and enable it, using the systemctl utility, so that the service starts automatically after a reboot.

root@exhydra:/home/exhydra# systemctl start cassandra
root@exhydra:/home/exhydra# systemctl enable cassandra
cassandra.service is not a native service, redirecting to systemd-sysv-install.
Executing: /lib/systemd/systemd-sysv-install enable cassandra
root@exhydra:/home/exhydra# systemctl status cassandra
● cassandra.service - LSB: distributed storage system for structured data
   Loaded: loaded (/etc/init.d/cassandra; generated)
   Active: active (running) since Wed 2022-11-16 22:36:01 IST; 2min 12s ago
     Docs: man:systemd-sysv-generator(8)
    Tasks: 41 (limit: 4915)
   CGroup: /system.slice/cassandra.service
           └─2537 /usr/bin/java -ea -da:net.openhft... -XX:+UseThreadPriorities -XX:+HeapDumpOnOutOfMemoryError -Xss256k -XX:+AlwaysPreTouch -XX:-

Nov 16 22:36:01 exhydra-VirtualBox systemd[1]: Starting LSB: distributed storage system for structured data...
Nov 16 22:36:01 exhydra-VirtualBox systemd[1]: Started LSB: distributed storage system for structured data.

Check the node status with nodetool to verify that the service is up.

root@exhydra:/home/exhydra# nodetool status
Datacenter: datacenter1
=======================
Status=Up/Down
|/ State=Normal/Leaving/Joining/Moving
--  Address    Load       Tokens  Owns (effective)  Host ID                               Rack
UN  127.0.0.1  69.08 KiB  16      100.0%            147efa87-4637-470a-a33a-1a237bb7fc35  rack1

Connect to the command-line shell using the cqlsh utility.

root@exhydra:/home/exhydra# cqlsh
Connected to Test Cluster at 127.0.0.1:9042
[cqlsh 6.0.0 | Cassandra 4.0.5 | CQL spec 3.4.5 | Native protocol v5]
Use HELP for help.
cqlsh> help

Documented shell commands:
===========================
CAPTURE  CLS          COPY  DESCRIBE  EXPAND  LOGIN   SERIAL  SOURCE   UNICODE
CLEAR    CONSISTENCY  DESC  EXIT      HELP    PAGING  SHOW    TRACING

CQL help topics:
================
AGGREGATES               CREATE_KEYSPACE           DROP_TRIGGER      TEXT
ALTER_KEYSPACE           CREATE_MATERIALIZED_VIEW  DROP_TYPE         TIME
ALTER_MATERIALIZED_VIEW  CREATE_ROLE               DROP_USER         TIMESTAMP
ALTER_TABLE              CREATE_TABLE              FUNCTIONS         TRUNCATE
ALTER_TYPE               CREATE_TRIGGER            GRANT             TYPES
ALTER_USER               CREATE_TYPE               INSERT            UPDATE
APPLY                    CREATE_USER               INSERT_JSON       USE
ASCII                    DATE                      INT               UUID
BATCH                    DELETE                    JSON
BEGIN                    DROP_AGGREGATE            KEYWORDS
BLOB                     DROP_COLUMNFAMILY         LIST_PERMISSIONS
BOOLEAN                  DROP_FUNCTION             LIST_ROLES
COUNTER                  DROP_INDEX                LIST_USERS
CREATE_AGGREGATE         DROP_KEYSPACE             PERMISSIONS
CREATE_COLUMNFAMILY      DROP_MATERIALIZED_VIEW    REVOKE
CREATE_FUNCTION          DROP_ROLE                 SELECT
CREATE_INDEX             DROP_TABLE                SELECT_JSON

Check the version and host from inside the command-line shell.

cqlsh> show version;
[cqlsh 6.0.0 | Cassandra 4.0.5 | CQL spec 3.4.5 | Native protocol v5]
cqlsh> show host;
Connected to Test Cluster at 127.0.0.1:9042
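
Applications can connect in the same way; a minimal sketch using the DataStax Python driver (assuming the cassandra-driver package has been installed, e.g. with pip install cassandra-driver):

#Connect to the local node and query its release version
from cassandra.cluster import Cluster

cluster = Cluster(['127.0.0.1'])   #contact point on the default port 9042
session = cluster.connect()
row = session.execute("SELECT release_version FROM system.local").one()
print(row.release_version)         #e.g. 4.0.5
cluster.shutdown()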