Archives December 2022

PROGRAM TO PRINT ONLY THE 1ST MULTIPLE OF A NUMBER IN PYTHON


def f1st_multiple(L, n):
    # Return the first element of L that is a multiple of n, or -1 if none is found
    for x in L:
        if x % n == 0:
            return x
    return -1
FIND SUM OF ALL EVEN AND ODD NUMBERS AND SUBTRACT THEM IN PYTHON

Code:

l = [12,34,56,78,90,23,34,47,79]

def diff_even_odd_sum(l):
    se = 0
    so = 0
    # YOUR CODE GOES HERE
    for x in l:
        if x % 2 == 0:
            se = se + x
        else:
            so = so + x
    diff = se - so
    return diff

Output:

diff_even_odd_sum(l)

Out[4]:

155
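For reference, the same difference can be computed in a single pass with a generator expression; a minimal equivalent sketch:

```python
# One-liner variant: add even values and subtract odd values in one pass
def diff_even_odd_sum(l):
    return sum(x if x % 2 == 0 else -x for x in l)

print(diff_even_odd_sum([12, 34, 56, 78, 90, 23, 34, 47, 79]))  # 155
```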
HOW TO FIND THE SECOND LARGEST ELEMENT IN A LIST IN PYTHON

l = [ 11,15,19 ]

#Initialize tracker variables (this assumes the list contains positive numbers)
mx = 0
smx = 0

#Iterate over the list with x variable
for x in l:

    #If x is greater than mx, shift mx down to smx and store x in mx; otherwise track the second largest seen so far
    if x > mx:
        smx = mx
        mx = x
    elif x < mx and x > smx:
        smx = x
print(mx,smx)

Alternatively, a function can be defined:

def sln(l):
    # Return the second largest element, or -1 if the list has fewer than
    # two elements (assumes the list contains positive numbers)
    if len(l) <= 1:
        return -1
    mx = 0
    smx = -1
    for x in l:
        if x > mx:
            smx = mx
            mx = x
        elif x < mx and x > smx:
            smx = x
    return smx
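The standard library offers a compact alternative; a sketch using heapq (note that, unlike the loop above, it treats duplicate values as distinct entries, and it returns -1 for lists shorter than two):

```python
import heapq

def second_largest(l):
    # Fewer than two elements means there is no second largest
    if len(l) < 2:
        return -1
    # nlargest returns the two biggest values in descending order
    return heapq.nlargest(2, l)[1]

print(second_largest([11, 15, 19]))  # 15
```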
WAYS TO IMPROVE CTAS PERFORMANCE

CTAS (Create Table As Select) is one of the most efficient ways to bulk-load data from a big table into a fresh table. But if the CTAS itself is slow, a few tweaks can be used to improve its performance.

What exactly happens during a CTAS, at a high level?

  • create a new table
  • select the rows from the source table
  • insert the rows into the new table using direct path load (parallel by default)

If any of the above steps becomes slow for some reason, the benefits of CTAS are outweighed and another approach may be preferable.

The main areas of slowness may be:

  • DDL statements which restructure the source table cause library cache contention
  • Lack of CPU to copy a larger table
  • Parallel waits
  • Segment level checkpoint issues
  • Redo log generation during the bulk load
  • PGA not sufficient for direct path operations
  • Lack of temp space to sort
  • Statistics gathering adding extra time
  • DBWR not able to write dirty blocks fast enough to keep up with checkpoints
  • IO waits due to slow response times from block devices
  • Undo shortage
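As a starting point for narrowing these down, the top wait events sampled for CTAS statements can be pulled from ASH; a sketch (the sql_opname filter assumes a release that populates that column):

```sql
-- Top wait events sampled for CREATE TABLE AS SELECT statements
SELECT event, COUNT(*) AS samples
FROM   v$active_session_history
WHERE  sql_opname = 'CREATE TABLE AS SELECT'
AND    event IS NOT NULL
GROUP  BY event
ORDER  BY samples DESC;
```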

Create a LOB table for testing purposes

kIsH@STAX<>conn kish/password
Connected.
kIsH@STAX<>create table lob1 (
                             n number(10),
                            c clob
                            )
          lob (c) store as securefile ( enable storage in row nocompress) tablespace USERS;

Table created.

Populate some random data into the table

kIsH@STAX<>insert /*+parallel(8)*/ into lob1 nologging
                 select /*+parallel(8)*/ rownum, rpad('lob',10000,'lob')
                        from dual
          connect by level <= 100000;

100000 rows created.

kIsH@STAX<>commit;

Commit complete.

Approach 1:

A plain CTAS itself takes 11 minutes to create the new table. Note the “OPTIMIZER STATISTICS GATHERING” step, which gathers statistics during the load.

kIsH@STAX<>set timing on
kIsH@STAX<>create table lob2 as select * from lob1;

Table created.

Elapsed: 00:11:27.06

kIsH@STAX<>explain plan for create table lob2 as select * from lob1;

Explained.

Elapsed: 00:00:00.01
kIsH@STAX<>set lines 200 pages 1000
kIsH@STAX<>select * from table(dbms_xplan.display());

PLAN_TABLE_OUTPUT
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Plan hash value: 3273868302

-----------------------------------------------------------------------------------------
| Id  | Operation                        | Name | Rows  | Bytes | Cost (%CPU)| Time     |
-----------------------------------------------------------------------------------------
|   0 | CREATE TABLE STATEMENT           |      |   117K|   226M|  7053   (1)| 00:00:01 |
|   1 |  LOAD AS SELECT                  | LOB2 |       |       |            |          |
|   2 |   OPTIMIZER STATISTICS GATHERING |      |   117K|   226M|   205   (1)| 00:00:01 |
|   3 |    TABLE ACCESS FULL             | LOB1 |   117K|   226M|   205   (1)| 00:00:01 |
-----------------------------------------------------------------------------------------

Note
-----
   - dynamic statistics used: dynamic sampling (level=2)

14 rows selected.

Observe something suspicious here? The optimizer gathers statistics during the load for the insert statement due to the default configuration. The stats gathering takes approximately 0.36 seconds extra, which is not necessary during a bulk load. Statistics can be gathered later, once the data is loaded.

create table lob2 as select * from lob1


call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.00          0         37          2           0
Execute      1     20.98    1409.55     100016       9451     117171      100000
Fetch        0      0.00       0.00          0          0          0           0
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        2     20.98    1409.55     100016       9488     117173      100000

Misses in library cache during parse: 1
Optimizer mode: ALL_ROWS
Parsing user id: 111
Number of plan statistics captured: 1

Rows (1st) Rows (avg) Rows (max)  Row Source Operation
---------- ---------- ----------  ---------------------------------------------------
         0          0          0  LOAD AS SELECT  LOB2 (cr=10339 pr=100018 pw=100679 time=1409747397 us starts=1)
    100000     100000     100000   OPTIMIZER STATISTICS GATHERING  (cr=695 pr=0 pw=0 time=363494 us starts=1 cost=205 size=198904680 card=98712)
    100000     100000     100000    TABLE ACCESS FULL LOB1 (cr=695 pr=0 pw=0 time=251449 us starts=1 cost=205 size=198904680 card=98712)
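Once the load completes, statistics can be gathered explicitly with DBMS_STATS; a sketch (the schema name and degree here are illustrative):

```sql
-- Gather statistics on the freshly loaded table after the CTAS completes
BEGIN
  DBMS_STATS.GATHER_TABLE_STATS(
    ownname => 'KISH',
    tabname => 'LOB2',
    degree  => 8);
END;
/
```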

Approach 2:

Check the execution plan without statistics gathering. The additional “OPTIMIZER STATISTICS GATHERING” step is removed after disabling the parameter.

kIsH@STAX<>drop table kish.lob2;

Table dropped.

kIsH@STAX<>alter session set "_optimizer_gather_stats_on_load"=false;

Session altered.

kIsH@STAX<>explain plan for create table lob2 as select * from lob1;

Explained.

kIsH@STAX<>set lines 200 pages 1000
kIsH@STAX<>select * from table(dbms_xplan.display());

PLAN_TABLE_OUTPUT
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Plan hash value: 3273868302

-------------------------------------------------------------------------------
| Id  | Operation              | Name | Rows  | Bytes | Cost (%CPU)| Time     |
-------------------------------------------------------------------------------
|   0 | CREATE TABLE STATEMENT |      |   112K|   216M|  6714   (1)| 00:00:01 |
|   1 |  LOAD AS SELECT        | LOB2 |       |       |            |          |
|   2 |   TABLE ACCESS FULL    | LOB1 |   112K|   216M|   205   (1)| 00:00:01 |
-------------------------------------------------------------------------------

Note
-----
   - dynamic statistics used: dynamic sampling (level=2)

13 rows selected.

kIsH@STAX<>alter session set "_optimizer_gather_stats_on_load"=true;

Session altered.

kIsH@STAX<>explain plan for create table lob2 as select /*+NO_GATHER_OPTIMIZER_STATISTICS*/ * from lob1;

Explained.

kIsH@STAX<>select * from table(dbms_xplan.display());

PLAN_TABLE_OUTPUT
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Plan hash value: 3273868302

-------------------------------------------------------------------------------
| Id  | Operation              | Name | Rows  | Bytes | Cost (%CPU)| Time     |
-------------------------------------------------------------------------------
|   0 | CREATE TABLE STATEMENT |      |   112K|   216M|  6714   (1)| 00:00:01 |
|   1 |  LOAD AS SELECT        | LOB2 |       |       |            |          |
|   2 |   TABLE ACCESS FULL    | LOB1 |   112K|   216M|   205   (1)| 00:00:01 |
-------------------------------------------------------------------------------

Note
-----
   - dynamic statistics used: dynamic sampling (level=2)

13 rows selected.

Disabling on-load statistics gathering improved the performance drastically, by about 90%: from 11 minutes down to 37 seconds.

kIsH@STAX<>set timing on
kIsH@STAX<>alter session set "_optimizer_gather_stats_on_load"=false;

Session altered.

Elapsed: 00:00:00.00
kIsH@STAX<>create table lob2 as select * from lob1;

Table created.

Elapsed: 00:00:37.45

Approach 3:

No statistics + nologging improves the performance further, down to 33 seconds. ‘nologging’ (or ‘unrecoverable’ in older releases) means the redo vectors are not written to the redo logs. A DBA should be cautious with nologging because of its non-recoverable trap: roll forward of the operation is not possible. If the operation is logged and the database is in archivelog mode, the data can be recovered up to a point in time. Since nologging skips redo generation, archive logs are not generated for the CTAS either. This option suits cloning tables to test or dev environments, where losing the copy causes no business impact.
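Before relying on nologging, it is worth verifying that the database does not override it; a quick check (FORCE LOGGING at the database level makes the nologging attribute a no-op):

```sql
-- If FORCE_LOGGING is YES, the nologging attribute on the CTAS is ignored
SELECT force_logging FROM v$database;

-- Confirm the logging attribute on the newly created table
SELECT logging FROM user_tables WHERE table_name = 'LOB2';
```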

kIsH@STAX<>drop table kish.lob2;

Table dropped.
kIsH@STAX<>alter session set "_optimizer_gather_stats_on_load"=false;

Session altered.
kIsH@STAX<>create table lob2 nologging as select /*+parallel(8)*/ * from lob1;

Table created.

Elapsed: 00:00:33.30

Check the explain plan for the new approach

kIsH@STAX<>explain plan for create table lob2 nologging as select /*+parallel(8)*/ * from lob1;

Explained.

kIsH@STAX<>set lines 200 pages 1000
kIsH@STAX<>select * from table(dbms_xplan.display());

PLAN_TABLE_OUTPUT
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Plan hash value: 3155667128

----------------------------------------------------------------------------------------------------------------
| Id  | Operation              | Name     | Rows  | Bytes | Cost (%CPU)| Time     |    TQ  |IN-OUT| PQ Distrib |
----------------------------------------------------------------------------------------------------------------
|   0 | CREATE TABLE STATEMENT |          |   112K|   216M|   842   (0)| 00:00:01 |        |      |            |
|   1 |  LOAD AS SELECT        | LOB2     |       |       |            |          |        |      |            |
|   2 |   PX COORDINATOR       |          |       |       |            |          |        |      |            |
|   3 |    PX SEND QC (RANDOM) | :TQ10000 |   112K|   216M|    28   (0)| 00:00:01 |  Q1,00 | P->S | QC (RAND)  |
|   4 |     PX BLOCK ITERATOR  |          |   112K|   216M|    28   (0)| 00:00:01 |  Q1,00 | PCWC |            |
|   5 |      TABLE ACCESS FULL | LOB1     |   112K|   216M|    28   (0)| 00:00:01 |  Q1,00 | PCWP |            |
----------------------------------------------------------------------------------------------------------------

Note
-----
   - dynamic statistics used: dynamic sampling (level=2)
   - Degree of Parallelism is 8 because of hint

17 rows selected.

Approach 4:

This approach needs some extra memory for the buffer cache. The KEEP pool is an underrated pool which rarely comes to mind during a performance issue. Use the KEEP pool if the object to be cached is small relative to the memory size. If the segment is bigger than memory, this approach may not be efficient; for example, a 500GB LOB is not a good candidate for the KEEP pool. This approach reduced the elapsed time by a further 3 seconds.

kIsH@STAX<>select segment_name,bytes/1048576 MB from dba_segments where segment_name='LOB1';

SEGMENT_NAME                                                                                                                             MB
-------------------------------------------------------------------------------------------------------------------------------- ----------
LOB1                                                                                                                                      6

kIsH@STAX<>select BUFFER_POOL from dba_tables where table_name='LOB1';

BUFFER_
-------
DEFAULT

kIsH@STAX<>alter system set db_keep_cache_size=500M;

System altered.

kIsH@STAX<>alter table LOB1 storage (buffer_pool keep);

Table altered.

Elapsed: 00:00:00.04
kIsH@STAX<>select BUFFER_POOL from dba_tables where table_name='LOB1';

BUFFER_
-------
KEEP

kIsH@STAX<>drop table lob2;

Table dropped.

kIsH@STAX<>create table lob2 nologging as select /*+parallel(8)*/ * from lob1;

Table created.

Elapsed: 00:00:30.62
kIsH@STAX<>alter table LOB1 storage (buffer_pool default);

Table altered.

Approach 5:

If more parallel resources can be utilized, apply a parallel degree to the CREATE as well as the SELECT, burning CPU on both the load (write) and the select (read). If management is ready to spend some cash on extra CPU, increase the parallelism gradually until the expected response time is reached.

kIsH@STAX<>create table lob2 parallel(degree 8) nologging as select /*+parallel(8)*/ * from lob1;

Table created.

Elapsed: 00:00:27.90

“HYBRID TSM/HWMB” is printed in the execution plan when parallelism is used on both sides:

kIsH@STAX<>explain plan for create table lob2 parallel(degree 8) nologging as select /*+parallel(8)*/ * from lob1;

Explained.

Elapsed: 00:00:00.01
kIsH@STAX<>select * from table(dbms_xplan.display());

PLAN_TABLE_OUTPUT
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Plan hash value: 1111949198

----------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                          | Name     | Rows  | Bytes | Cost (%CPU)| Time     |    TQ  |IN-OUT| PQ Distrib |
----------------------------------------------------------------------------------------------------------------------------
|   0 | CREATE TABLE STATEMENT             |          |   112K|   216M|   842   (0)| 00:00:01 |        |      |            |
|   1 |  PX COORDINATOR                    |          |       |       |            |          |        |      |            |
|   2 |   PX SEND QC (RANDOM)              | :TQ10000 |   112K|   216M|    28   (0)| 00:00:01 |  Q1,00 | P->S | QC (RAND)  |
|   3 |    LOAD AS SELECT (HYBRID TSM/HWMB)| LOB2     |       |       |            |          |  Q1,00 | PCWP |            |
|   4 |     PX BLOCK ITERATOR              |          |   112K|   216M|    28   (0)| 00:00:01 |  Q1,00 | PCWC |            |
|   5 |      TABLE ACCESS FULL             | LOB1     |   112K|   216M|    28   (0)| 00:00:01 |  Q1,00 | PCWP |            |
----------------------------------------------------------------------------------------------------------------------------

Note
-----
   - dynamic statistics used: dynamic sampling (level=2)
   - Degree of Parallelism is 8 because of hint

17 rows selected.

Alternatively, an ORDER BY clause can be used to sort by the primary key column. This approach may or may not speed things up, depending on the situation.

kIsH@STAX<>create table lob2 parallel(degree 8) nologging as select /*+parallel(8)*/ * from lob1 order by n;

Table created.

Elapsed: 00:00:33.11

kIsH@STAX<>explain plan for create table lob2 parallel(degree 8) nologging as select /*+parallel(8)*/ * from lob1 order by n;

Explained.

kIsH@STAX<>select * from table(dbms_xplan.display());

PLAN_TABLE_OUTPUT
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Plan hash value: 1487570938

------------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                          | Name     | Rows  | Bytes |TempSpc| Cost (%CPU)| Time     |    TQ  |IN-OUT| PQ Distrib |
------------------------------------------------------------------------------------------------------------------------------------
|   0 | CREATE TABLE STATEMENT             |          |   100K|    13M|       |    72   (2)| 00:00:01 |        |      |            |
|   1 |  PX COORDINATOR                    |          |       |       |       |            |          |        |      |            |
|   2 |   PX SEND QC (RANDOM)              | :TQ10001 |   100K|    13M|       |    30   (7)| 00:00:01 |  Q1,01 | P->S | QC (RAND)  |
|   3 |    LOAD AS SELECT (HYBRID TSM/HWMB)| LOB2     |       |       |       |            |          |  Q1,01 | PCWP |            |
|   4 |     OPTIMIZER STATISTICS GATHERING |          |   100K|    13M|       |    30   (7)| 00:00:01 |  Q1,01 | PCWP |            |
|   5 |      SORT ORDER BY                 |          |   100K|    13M|    14M|    30   (7)| 00:00:01 |  Q1,01 | PCWP |            |
|   6 |       PX RECEIVE                   |          |   100K|    13M|       |    28   (0)| 00:00:01 |  Q1,01 | PCWP |            |
|   7 |        PX SEND RANGE               | :TQ10000 |   100K|    13M|       |    28   (0)| 00:00:01 |  Q1,00 | P->P | RANGE      |
|   8 |         PX BLOCK ITERATOR          |          |   100K|    13M|       |    28   (0)| 00:00:01 |  Q1,00 | PCWC |            |
|   9 |          TABLE ACCESS FULL         | LOB1     |   100K|    13M|       |    28   (0)| 00:00:01 |  Q1,00 | PCWP |            |
------------------------------------------------------------------------------------------------------------------------------------

Approach 6:

Setting parallel_execution_message_size to a higher value speeds up the transfer of chunks between PX buffers, provided memory is plentiful on the system. Warning: if other critical jobs execute at the same time, this parameter can exhaust memory aggressively.

kIsH@STAX<>alter system set parallel_execution_message_size=32768 scope=spfile;

System altered.

kIsH@STAX<>show parameter parallel_execution

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
parallel_execution_message_size      integer     32768
kIsH@STAX<>create table lob2 parallel(degree 8) nologging as select /*+parallel(8)*/ * from lob1;

Table created.

Elapsed: 00:00:27.19

Approach 7:

Last but not least, gather proper statistics on the source table.

Summary:

  • CTAS is one of the most efficient ways to replicate a table, but if any part of the CTAS becomes slow, the whole CTAS will be slow
  • Always tune the SELECT and INSERT parts of the query, which make up most of the CTAS time
  • Try to distribute parallel chunks so that parallel servers are not overutilized in one place
  • CTAS is more efficient on larger tables compared to other methods
  • Tune the PGA to avoid temp spills during direct IO
  • If possible, “_smm_isort_cap” can be increased to avoid frequent temp sorts
  • Statistics can be gathered after the load rather than during it
  • Monitor the wait events specific to the CTAS using ASH or a 10046 trace, and tune those events
  • Last but not least, check for a better execution plan
AttributeError: partially initialized module ‘smtplib’ has no attribute ‘SMTP_SSL’ (most likely due to a circular import)

Cause:

The issue is with the script's file name, which causes a circular import of the smtplib module

AttributeError: partially initialized module 'smtplib' has no attribute 'SMTP_SSL' (most likely due to a circular import)

Solution:

The file was saved with the name email.py, which shadows the standard library email module used by smtplib. Rename the file from email.py to a different name.
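A minimal sketch of the working setup, assuming the script is saved under a non-stdlib name such as send_mail.py (host and credentials are placeholders):

```python
import smtplib
import ssl

# When this file is named email.py, "import smtplib" triggers a circular
# import: smtplib itself imports the stdlib email package and finds this
# file instead. Under any other name the import resolves correctly.

def send_message(host, user, password, message, port=465):
    # SMTP_SSL opens an implicit-TLS connection (commonly port 465)
    context = ssl.create_default_context()
    with smtplib.SMTP_SSL(host, port, context=context) as server:
        server.login(user, password)
        server.sendmail(user, user, message)
```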

SHELL SCRIPT TO CAPTURE THE SQL INFORMATION FROM CURSOR CACHE AND POPULATE INTO STS

If the database needs to be performance-proof, certain steps need to be taken proactively to prevent future performance issues, or to analyse flaws from historic information. This shell script polls the cursor cache every 30 seconds, captures the SQL, populates the STS, packs the STS information into a backup table and exports the STS table to a dump file.

Warning: This script may impose heavy load on the database. It is recommended to execute the script during quiet hours of the database.

#!/bin/bash

# Set the environment variables for multiple databases.
_set_env(){

cat /etc/oratab|grep -v '#'|grep -v '^$' > /home/oracle/oratab_new
while read x
   do
     IFS=':' read -r -a array <<< "$x"
                ORACLE_SID="${array[0]}"
                ORACLE_HOME="${array[1]}"
                echo $ORACLE_SID
                echo $ORACLE_HOME
                export PATH=$PATH:$ORACLE_HOME/bin
   done < /home/oracle/oratab_new


}


#Capture the high load sql to STS and pack them into staging table
_capture_sql_load(){
t=30
loop="true"
c=0
ela_s=$(date +%s)
while [ $loop == "true" ]
        do
        sleep $t

        $ORACLE_HOME/bin/sqlplus -S '/ as sysdba' <<EOF
        EXEC dbms_workload_repository.create_snapshot();
        conn c##hydra/hydra
        EXEC DBMS_SQLTUNE.CAPTURE_CURSOR_CACHE_SQLSET( -
                                                sqlset_name     => 'B4UPGLOAD', -
                                                time_limit      =>  30, -
                                        repeat_interval =>  5);
        conn / as sysdba
        BEGIN
            DBMS_SQLTUNE.create_stgtab_sqlset(table_name => 'STS',
            schema_name => 'KISH',
            tablespace_name => 'HYDRA');
        END;
        /
        BEGIN
            DBMS_SQLTUNE.pack_stgtab_sqlset(sqlset_name => 'B4UPGLOAD',
            sqlset_owner => 'C##HYDRA',
            staging_table_name => 'STS',
            staging_schema_owner => 'KISH');
            END;
         /
exit;
EOF
ela_e=$(date +%s)
c=$((c+1))
ela_t=$(expr $ela_e - $ela_s)
if [[ $c -gt 30 ]]
    then
    loop=False
elif [[ $c -eq 30 ]]
    then
    _exp_sqlset
    break
fi
done

}


#Export the STS to a dump file out of the database
_exp_sqlset(){
exp USERID=kish/password file=expsts.dmp log=stsbkp.log buffer=500 tables=kish.sts
}

_set_env
_capture_sql_load
DOES ADDING A NEW PARTITION TO A COMPRESSED PARTITIONED TABLE ENABLE COMPRESSION AUTOMATICALLY ON ITS LOCAL INDEX?

The main advantage of Advanced Compression is that DML can be performed without decompressing the compressed data.

There is currently NO article or documented feature for automatically compressing a newly added local index partition when a partition is added to a compressed partitioned table. So the DBA should manually rebuild the new index partition with compression. The newly added table partition, however, is compressed automatically.

There are three types of index for a partitioned table:

  • Local prefixed (OLTP, fewer partitions, less downtime for maintenance)
  • Local nonprefixed (DSS, many partitions, more downtime for maintenance due to the large number of large partitions, faster full scans for warehouse workloads due to parallelism)
  • Global prefixed (OLTP, fewer partitions, less downtime for maintenance)

Let us test the behavior

Create a tablespace, a range partitioned table with advanced compression and populate data into the table

kIsH@STAX<>create tablespace TBS1 datafile '/apps01/oradata/tbs1.dbf' size 5M autoextend on;

Tablespace created.

kIsH@STAX<>CREATE TABLE "KISH"."RP"
(    "R_ID" NUMBER(30,0),
        "R_NAME" VARCHAR2(30),
        "R_COST" NUMBER(30,0),
        "R_DATE" DATE
) PCTFREE 10 PCTUSED 40 INITRANS 1 MAXTRANS 255
STORAGE(
BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT)
TABLESPACE "TBS1"
PARTITION BY RANGE ("R_DATE")
(PARTITION "R_JAN2022"  VALUES LESS THAN (TO_DATE(' 2022-02-01 00:00:00', '
SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN'))
PCTFREE 10 PCTUSED 40 INITRANS 1 MAXTRANS 255
 ROW STORE COMPRESS ADVANCED LOGGING
STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1
BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT)
TABLESPACE "TBS1" ,
PARTITION "R_FEB2022"  VALUES LESS THAN (TO_DATE(' 2022-03-01 00:00:00' , '
SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN'))
PCTFREE 10 PCTUSED 40 INITRANS 1 MAXTRANS 255
ROW STORE COMPRESS ADVANCED LOGGING
STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1
BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT)
TABLESPACE "TBS1",
PARTITION "R_MAR2022"  VALUES LESS THAN (TO_DATE(' 2022-04-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN'))
PCTFREE 10 PCTUSED 40 INITRANS 1 MAXTRANS 255
ROW STORE COMPRESS ADVANCED LOGGING
STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1
BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT)
TABLESPACE "TBS1",
PARTITION "R_APR2022"  VALUES LESS THAN (TO_DATE(' 2022-05-01 00:00:00', '
SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN'))
PCTFREE 10 PCTUSED 40 INITRANS 1 MAXTRANS 255
LOGGING
STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1
BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT)
TABLESPACE "TBS1" ) ROW STORE COMPRESS ADVANCED;

Table created.

kIsH@STAX<>create sequence xid increment by 1 start with 1;

Sequence created.

kIsH@STAX<>INSERT INTO "KISH"."RP"
SELECT xid.nextval,
       dbms_random.string('a','10'),
           floor(dbms_random.value(1,1000000)),
           to_date(trunc(dbms_random.value(to_char(date '2022-01-01','J'),
                                           to_char(date '2022-04-30','J'))),'J')
from dual connect by level <= 1000;

1000 rows created.

kIsH@STAX<>commit;

Commit complete.

Check the segment size if needed for the compressed partitioned table

kIsH@STAX<>col SEGMENT_NAME for a20
kIsH@STAX<>SELECT segment_name,bytes/1048576 MB FROM dba_segments WHERE segment_name='RP';

SEGMENT_NAME                 MB
-------------------- ----------
RP                        .0625
RP                        .0625
RP                        .0625
RP                        .0625

Check the status of the newly created partition. Compression is enabled for all partitions.

kIsH@STAX<>col TABLE_NAME for a20
kIsH@STAX<>SELECT compression,compress_for,table_name from dba_tab_partitions WHERE table_name = 'RP';

COMPRESS COMPRESS_FOR                   TABLE_NAME
-------- ------------------------------ --------------------
ENABLED  ADVANCED                       RP
ENABLED  ADVANCED                       RP
ENABLED  ADVANCED                       RP
ENABLED  ADVANCED                       RP


kIsH@STAX<>set lines 200 pages 1000
kIsH@STAX<>col PARTITION_NAME for a20
kIsH@STAX<>SELECT compression,compress_for,table_name,partition_name from dba_tab_partitions WHERE table_name = 'RP';

COMPRESS COMPRESS_FOR                   TABLE_NAME           PARTITION_NAME
-------- ------------------------------ -------------------- --------------------
ENABLED  ADVANCED                       RP                   R_APR2022
ENABLED  ADVANCED                       RP                   R_MAR2022
ENABLED  ADVANCED                       RP                   R_JAN2022
ENABLED  ADVANCED                       RP                   R_FEB2022

Create a local index with basic compression on the partitioned table. A global index is not equipartitioned with the table, since it spans all the table's partitions centrally.

kIsH@STAX<>CREATE INDEX idx_rp on RP(r_id) LOCAL COMPRESS;

Index created.

All the inherited index partitions are compressed automatically with basic compression.

kIsH@STAX<>SELECT partition_name,index_name,compression FROM dba_ind_partitions WHERE index_name like '%IDX_RP%';

PARTITION_NAME       INDEX_NAME           COMPRESSION
-------------------- -------------------- -------------
R_APR2022            IDX_RP               ENABLED
R_FEB2022            IDX_RP               ENABLED
R_JAN2022            IDX_RP               ENABLED
R_MAR2022            IDX_RP               ENABLED

Add a new partition to the partitioned compressed table

kIsH@STAX<>ALTER TABLE rp ADD PARTITION "R_MAY2022" VALUES LESS THAN (TO_DATE(' 2022-06-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN')) TABLESPACE "TBS1";

Table altered.

Then check the index: the index partition for the new table partition (R_MAY2022) was added with basic compression automatically

kIsH@STAX<>SELECT dip.partition_name,dip.index_name,dip.compression,di.partitioned
  FROM dba_ind_partitions dip
  INNER JOIN dba_indexes di on dip.index_name = di.index_name
  WHERE dip.index_name = 'IDX_RP'
  ORDER BY index_name;

PARTITION_NAME       INDEX_NAME           COMPRESSION   PAR
-------------------- -------------------- ------------- ---
R_APR2022            IDX_RP               ENABLED       YES
R_FEB2022            IDX_RP               ENABLED       YES
R_JAN2022            IDX_RP               ENABLED       YES
R_MAR2022            IDX_RP               ENABLED       YES
R_MAY2022            IDX_RP               ENABLED       YES

So far, basic compression happens automatically for the newly added index partition when a new partition is added to the table. Now drop the basic-compressed index and create an index with advanced compression. If a new partition is added to a partitioned table whose local index uses advanced low compression, the corresponding index partition is NOT compressed automatically; it is DISABLED by default, although the table partition itself is compressed automatically, as mentioned earlier.

kIsH@STAX<>DROP INDEX idx_rp;

Index dropped.
kIsH@STAX<>ALTER TABLE rp ROW STORE COMPRESS ADVANCED;

Table altered.

kIsH@STAX<>ALTER TABLE rp MODIFY DEFAULT ATTRIBUTES ROW STORE COMPRESS ADVANCED;

Table altered.

kIsH@STAX<>SELECT def_compression,def_compress_for FROM all_part_tables WHERE table_name='RP';

DEF_COMP DEF_COMPRESS_FOR
-------- ------------------------------
ENABLED  ADVANCED

kIsH@STAX<>CREATE INDEX idx_rp on RP(r_id) LOCAL
(PARTITION R_JAN2022 COMPRESS ADVANCED LOW,
 PARTITION R_FEB2022 COMPRESS ADVANCED LOW,
 PARTITION R_MAR2022 COMPRESS ADVANCED LOW,
 PARTITION R_APR2022 COMPRESS ADVANCED LOW,
 PARTITION R_MAY2022 COMPRESS ADVANCED LOW
);

Index created.

kIsH@STAX<>ALTER TABLE rp ADD PARTITION "R_JUN2022" VALUES LESS THAN (TO_DATE(' 2022-07-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN')) TABLESPACE "TBS1";

Table altered.

kIsH@STAX<>set lines 200 pages 1000
kIsH@STAX<>col TABLE_NAME for a20
kIsH@STAX<>col PARTITION_NAME for a20
kIsH@STAX<>SELECT compression,compress_for,table_name,partition_name from dba_tab_partitions WHERE table_name = 'RP';

COMPRESS COMPRESS_FOR                   TABLE_NAME           PARTITION_NAME
-------- ------------------------------ -------------------- --------------------
ENABLED  ADVANCED                       RP                   R_MAY2022
ENABLED  ADVANCED                       RP                   R_JUN2022
ENABLED  ADVANCED                       RP                   R_APR2022
ENABLED  ADVANCED                       RP                   R_FEB2022
ENABLED  ADVANCED                       RP                   R_JAN2022
ENABLED  ADVANCED                       RP                   R_MAR2022

6 rows selected.

kIsH@STAX<>SELECT dip.partition_name,dip.index_name,dip.compression,di.partitioned
  FROM dba_ind_partitions dip
  INNER JOIN dba_indexes di on dip.index_name = di.index_name
  WHERE dip.index_name = 'IDX_RP'
  ORDER BY index_name;

PARTITION_NAME       INDEX_NAME           COMPRESSION   PAR
-------------------- -------------------- ------------- ---
R_JUN2022            IDX_RP               DISABLED      YES
R_MAY2022            IDX_RP               ADVANCED LOW  YES
R_APR2022            IDX_RP               ADVANCED LOW  YES
R_JAN2022            IDX_RP               ADVANCED LOW  YES
R_FEB2022            IDX_RP               ADVANCED LOW  YES
R_MAR2022            IDX_RP               ADVANCED LOW  YES

6 rows selected.

kIsH@STAX<>ALTER INDEX idx_rp REBUILD PARTITION R_JUN2022 COMPRESS ADVANCED LOW;

Index altered.

kIsH@STAX<>col INDEX_NAME for a20
kIsH@STAX<>SELECT dip.partition_name,dip.index_name,dip.compression,di.partitioned
  FROM dba_ind_partitions dip
  INNER JOIN dba_indexes di on dip.index_name = di.index_name
  WHERE dip.index_name = 'IDX_RP'
  ORDER BY index_name;

PARTITION_NAME       INDEX_NAME           COMPRESSION   PAR
-------------------- -------------------- ------------- ---
R_MAY2022            IDX_RP               ADVANCED LOW  YES
R_JUN2022            IDX_RP               ADVANCED LOW  YES
R_APR2022            IDX_RP               ADVANCED LOW  YES
R_JAN2022            IDX_RP               ADVANCED LOW  YES
R_FEB2022            IDX_RP               ADVANCED LOW  YES
R_MAR2022            IDX_RP               ADVANCED LOW  YES

6 rows selected.

If the index is GLOBAL, then there are no partitions created.

kIsH@STAX<>CREATE INDEX idx_rp on RP(r_id) GLOBAL COMPRESS;

Index created.

kIsH@STAX<>SELECT partition_name,index_name,compression FROM dba_ind_partitions WHERE index_name like '%IDX_RP%';

no rows selected

kIsH@STAX<>select index_name,partitioned from dba_indexes where index_name='IDX_RP';

INDEX_NAME           PAR
-------------------- ---
IDX_RP               NO

Summary:

  • Local index partitions with basic compression are compressed automatically when a new table partition is added
  • Local index partitions with advanced compression are NOT compressed automatically when a new table partition is added; the DBA must compress them manually with an index partition rebuild
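The takeaway can be condensed into a post-step for partition maintenance. The sketch below (partition name R_JUL2022 and its date boundary are hypothetical continuations of this demo) shows the manual rebuild that advanced index compression requires after each partition addition:

```sql
-- Adding a table partition leaves the new LOCAL index partition
-- uncompressed when the index uses COMPRESS ADVANCED LOW,
-- so rebuild that index partition explicitly afterwards.
ALTER TABLE rp ADD PARTITION r_jul2022
  VALUES LESS THAN (TO_DATE('2022-08-01', 'YYYY-MM-DD')) TABLESPACE tbs1;

ALTER INDEX idx_rp REBUILD PARTITION r_jul2022 COMPRESS ADVANCED LOW;

-- Verify that every index partition now reports ADVANCED LOW
SELECT partition_name, compression
  FROM dba_ind_partitions
 WHERE index_name = 'IDX_RP';
```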

WHEN DO WE USE GV$SESSION_LONGOPS?

DBAs are often confused about when to use the longops view, assuming it covers every long running operation. This view cannot be used to monitor all long running operations, only large, resource-intensive ones.

There are two main criteria for a record to appear in this view:

  • the table scan exceeds 10,000 formatted blocks
  • the operation runs for more than 6 seconds

Scenarios in which Oracle considers an operation long running:

  • RMAN backup and restore
  • Parallel query (large chunks)
  • Recovery (crash and media)
  • Large full table scans (not all full table scans)
  • Sorting
  • Stats jobs

x$ksulop is the internal table that samples and populates the long running operation information exposed through GV$SESSION_LONGOPS.

SQL> select VIEW_DEFINITION from v$fixed_view_definition where VIEW_NAME='GV$SESSION_LONGOPS';

VIEW_DEFINITION
--------------------------------------------------------------------------------
select inst_id, ksulosno, ksulosrn, ksulopna,        ksulotna, ksulotde, ksulosf
r, ksulotot, ksulouni,        to_date(ksulostm,'MM/DD/RR HH24:MI:SS','NLS_CALEND
AR=Gregorian'),        to_date(ksulolut,'MM/DD/RR HH24:MI:SS','NLS_CALENDAR=Greg
orian'),        to_date(ksuloinft, 'MM/DD/RR HH24:MI:SS','NLS_CALENDAR=Gregorian
'),        decode(ksulopna, 'Advisor', ksuloif2,               decode(sign(ksulo
tot-ksulosfr),-1,to_number(NULL),                     decode(ksulosfr, 0, to_num
ber(NULL),                      round(ksuloetm*((ksulotot-ksulosfr)/ksulosfr))))
),       ksuloetm, ksuloctx, ksulomsg, ksulounm,        ksulosql, ksulosqh, ksul
osqi, ksulosqph, ksulosqesta,        decode(ksulosqeid, 0, to_number(NULL), ksul
osqeid),        decode(ksulosqplid, 0, to_number(NULL), ksulosqplid),        ksu
losqplop, ksulosqplnm, ksuloqid, con_id      from x$ksulop

Currently, there are 0 long running sessions in the database

kIsH@STAX<>select count(*) from v$session_longops;

  COUNT(*)
----------
         0

Note the TABLE ACCESS FULL step of the execution plan below, which shows a full scan of a large table. When should a table be considered large? It depends on the available resources and the cost estimated by the optimizer, along with the conditions above (more than 10,000 formatted blocks scanned and elapsed time greater than 6 seconds).

Example: if a table is 100TB in size but the server has 1000 CPUs with 1000 cores, the available resource power is as aggressive as the table size, and what qualifies as a long running session may be calculated differently.

kIsH@STAX<>set autot traceonly explain
kIsH@STAX<>select * from dbobj;

Execution Plan
----------------------------------------------------------
Plan hash value: 1315669886

---------------------------------------------------------------------------
| Id  | Operation         | Name  | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |       |  6464K|   813M| 34311   (1)| 00:00:02 |
|   1 |  TABLE ACCESS FULL| DBOBJ |  6464K|   813M| 34311   (1)| 00:00:02 |
---------------------------------------------------------------------------

After 6 seconds, the output is printed in v$session_longops

kIsH@STAX<>set lines 200 pages 1000
col opname for a20
col target for a20
col message for a20
col sql_id for a20
SELECT sid,
           serial#,
           opname,
           target,
           sofar,
           100 * (sofar / totalwork) TotalWork,
           message,
           sql_id,
           elapsed_seconds,
           time_remaining
  from v$session_longops
  order by sid;

no rows selected

kIsH@STAX<>/

no rows selected

kIsH@STAX<>/

no rows selected

kIsH@STAX<>/

no rows selected

kIsH@STAX<>/

no rows selected

kIsH@STAX<>/

no rows selected

kIsH@STAX<>/

no rows selected

kIsH@STAX<>/

       SID    SERIAL# OPNAME               TARGET                    SOFAR  TOTALWORK MESSAGE              SQL_ID               ELAPSED_SECONDS TIME_REMAINING
---------- ---------- -------------------- -------------------- ---------- ---------- -------------------- -------------------- --------------- --------------
        34      40942 Table Scan           SYS.DBOBJ                    32   .0253614 Table Scan:  SYS.DBO afw2anq220ubg       7 <== ela > 6         27594
                                                                                      BJ: 32 out of 126176 <====== formatted blocks > 10000

                                                                                       Blocks done

Summary:

  • Operations like index scans may not be displayed in v$session_longops
  • Small full table scans that touch fewer than 10,000 blocks may also not be included

HOW TO GENERATE SQL TUNING ADVISOR REPORT IN ORACLE DATABASE

Choose the sql to be tuned

kIsH@Xhydra<>select sql_text from v$sql where sql_id='5qmm8vw5dvy8b';

SQL_TEXT
--------------------------------------------------------------------------------
select xdate from kish.xtbl where xid between 4001021 and 10000000

Create the tuning task using the below procedure

kIsH@Xhydra<>DECLARE
          v_tune_task_id varchar2(50);
          BEGIN
                  v_tune_task_id := DBMS_SQLTUNE.CREATE_TUNING_TASK (
                  sql_id => '5qmm8vw5dvy8b',
                  time_limit => 86400,
                  task_name => '5qmm8vw5dvy8b_tt',
                  description => 'Tuning advisory for 5qmm8vw5dvy8b');
                  DBMS_OUTPUT.PUT_LINE('v_tune_task_id: '|| v_tune_task_id);
          END;
/

PL/SQL procedure successfully completed.

Check the advisor parameters that can be used to configure the tuning task, from dba_advisor_parameters

kIsH@Xhydra<>select PARAMETER_NAME,PARAMETER_VALUE from dba_advisor_parameters where TASK_NAME='5qmm8vw5dvy8b_tt';

PARAMETER_NAME       PARAMETER_VALUE
-------------------- --------------------
DAYS_TO_EXPIRE       30
END_SNAPSHOT         UNUSED
END_TIME             UNUSED
INSTANCE             UNUSED
JOURNALING           INFORMATION
MODE                 COMPREHENSIVE
START_SNAPSHOT       UNUSED
START_TIME           UNUSED
TARGET_OBJECTS       1
TIME_LIMIT           86400
DEFAULT_EXECUTION_TY TUNE SQL
PE

CON_DBID_MAPPING     UNUSED
ORA_EM_PARAM1        UNUSED
ORA_EM_PARAM2        UNUSED
ORA_EM_PARAM3        UNUSED
ORA_EM_PARAM4        UNUSED
ORA_EM_PARAM5        UNUSED
ORA_EM_PARAM6        UNUSED
ORA_EM_PARAM7        UNUSED
ORA_EM_PARAM8        UNUSED
ORA_EM_PARAM9        UNUSED
ORA_EM_PARAM10       UNUSED
EXECUTION_DAYS_TO_EX UNLIMITED
PIRE

REMOTE_SOURCE        UNUSED
USERNAME             UNUSED
BASIC_FILTER         UNUSED
PLAN_FILTER          UNUSED
RANK_MEASURE1        UNUSED
RANK_MEASURE2        UNUSED
RANK_MEASURE3        UNUSED
SQL_PERCENTAGE       1
SQL_LIMIT            -1
RESUME_FILTER        UNUSED
LOCAL_TIME_LIMIT     UNUSED
TEST_EXECUTE         AUTO
APPLY_CAPTURED_COMPI UNUSED
LENV

DATABASE_LINK_TO     UNUSED

37 rows selected.

Execute the tuning task created earlier

kIsH@Xhydra<>BEGIN
  2          DBMS_SQLTUNE.EXECUTE_TUNING_TASK( task_name => '5qmm8vw5dvy8b_tt');
  3          END;
  4          /

PL/SQL procedure successfully completed.

Typical views related to tuning tasks

kIsH@Xhydra<>desc dba_advisor_tasks
 Name                                      Null?    Type
 ----------------------------------------- -------- ----------------------------
 OWNER                                              VARCHAR2(128)
 TASK_ID                                   NOT NULL NUMBER
 TASK_NAME                                          VARCHAR2(128)
 DESCRIPTION                                        VARCHAR2(256)
 ADVISOR_NAME                                       VARCHAR2(128)
 CREATED                                   NOT NULL DATE
 LAST_MODIFIED                             NOT NULL DATE
 PARENT_TASK_ID                                     NUMBER
 PARENT_RXEC_ID                                     NUMBER
 LAST_EXECUTION                                     VARCHAR2(128)
 EXECUTION_TYPE                                     VARCHAR2(128)
 EXECUTION_TYPE#                                    NUMBER
 EXECUTION_DESCRIPTION                              VARCHAR2(256)
 EXECUTION_START                                    DATE
 EXECUTION_END                                      DATE
 STATUS                                             VARCHAR2(11)
 STATUS_MESSAGE                                     VARCHAR2(4000)
 PCT_COMPLETION_TIME                                NUMBER
 PROGRESS_METRIC                                    NUMBER
 METRIC_UNITS                                       VARCHAR2(64)
 ACTIVITY_COUNTER                                   NUMBER
 RECOMMENDATION_COUNT                               NUMBER
 ERROR_MESSAGE                                      VARCHAR2(4000)
 SOURCE                                             VARCHAR2(128)
 HOW_CREATED                                        VARCHAR2(30)
 READ_ONLY                                          VARCHAR2(5)
 SYSTEM_TASK                                        VARCHAR2(5)
 ADVISOR_ID                                NOT NULL NUMBER
 STATUS#                                            NUMBER

kIsH@Xhydra<>desc v$advisor_progress
 Name                                      Null?    Type
 ----------------------------------------- -------- ----------------------------
 SID                                                NUMBER
 SERIAL#                                            NUMBER
 USERNAME                                           VARCHAR2(128)
 OPNAME                                             VARCHAR2(64)
 ADVISOR_NAME                                       VARCHAR2(64)
 TASK_ID                                            NUMBER
 TARGET_DESC                                        VARCHAR2(32)
 SOFAR                                              NUMBER
 TOTALWORK                                          NUMBER
 UNITS                                              VARCHAR2(32)
 BENEFIT_SOFAR                                      NUMBER
 BENEFIT_MAX                                        NUMBER
 FINDINGS                                           NUMBER
 RECOMMENDATIONS                                    NUMBER
 TIME_REMAINING                                     NUMBER
 START_TIME                                         DATE
 LAST_UPDATE_TIME                                   DATE
 ELAPSED_SECONDS                                    NUMBER
 ADVISOR_METRIC1                                    NUMBER
 METRIC1_DESC                                       VARCHAR2(64)
 EXECUTION_TYPE                                     VARCHAR2(64)
 CON_ID                                             NUMBER

Check the advisor log to confirm that the tuning task has completed

kIsH@Xhydra<>col STATUS_MESSAGE for a20
kIsH@Xhydra<>select OWNER,TASK_ID,TASK_NAME,STATUS,STATUS_MESSAGE from dba_advisor_log where TASK_NAME='5qmm8vw5dvy8b_tt';

OWNER                      TASK_ID TASK_NAME            STATUS               STATUS_MESSAGE
-------------------- ------------- -------------------- -------------------- --------------------
SYS                           1633 5qmm8vw5dvy8b_tt     COMPLETED

Generate the tuning advisor report with recommendations

kIsH@Xhydra<>set long 100000000
kIsH@Xhydra<>select DBMS_SQLTUNE.REPORT_TUNING_TASK('5qmm8vw5dvy8b_tt') from dual;

DBMS_SQLTUNE.REPORT_TUNING_TASK('5QMM8VW5DVY8B_TT')
----------------------------------------------------------------------------------------------------
GENERAL INFORMATION SECTION
-------------------------------------------------------------------------------
Tuning Task Name   : 5qmm8vw5dvy8b_tt
Tuning Task Owner  : SYS
Workload Type      : Single SQL Statement
Scope              : COMPREHENSIVE
Time Limit(seconds): 86400
Completion Status  : COMPLETED
Started at         : 10/26/2022 22:49:53
Completed at       : 10/26/2022 22:49:53

-------------------------------------------------------------------------------
Schema Name   : SYS
Container Name: CDB$ROOT
SQL ID        : 5qmm8vw5dvy8b
SQL Text      : select xdate from kish.xtbl where xid between 4001021 and
                10000000

-------------------------------------------------------------------------------
FINDINGS SECTION (1 finding)
-------------------------------------------------------------------------------

1- Index Finding (see explain plans section below)
--------------------------------------------------
  The execution plan of this statement can be improved by creating one or more
  indices.

  Recommendation (estimated benefit: 70.43%)
  ------------------------------------------
  - Consider running the Access Advisor to improve the physical schema design
    or creating the recommended index.
    create index KISH.IDX$$_06610001 on KISH.XTBL("XID","XDATE");

  Rationale
  ---------
    Creating the recommended indices significantly improves the execution plan
    of this statement. However, it might be preferable to run "Access Advisor"
    using a representative SQL workload as opposed to a single statement. This
    will allow to get comprehensive index recommendations which takes into
    account index maintenance overhead and additional space consumption.

-------------------------------------------------------------------------------
EXPLAIN PLANS SECTION
-------------------------------------------------------------------------------

1- Original
-----------
Plan hash value: 1420452506

--------------------------------------------------------------------------
| Id  | Operation         | Name | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |      |   425K|  5818K|  5485   (1)| 00:00:01 |
|*  1 |  TABLE ACCESS FULL| XTBL |   425K|  5818K|  5485   (1)| 00:00:01 |
--------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - filter("XID"<=10000000 AND "XID">=4001021)

2- Using New Indices
--------------------
Plan hash value: 1675568796

-----------------------------------------------------------------------------------
| Id  | Operation        | Name           | Rows  | Bytes | Cost (%CPU)| Time     |
-----------------------------------------------------------------------------------
|   0 | SELECT STATEMENT |                |   425K|  5818K|  1622   (1)| 00:00:01 |
|*  1 |  INDEX RANGE SCAN| IDX$$_06610001 |   425K|  5818K|  1622   (1)| 00:00:01 |
-----------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - access("XID">=4001021 AND "XID"<=10000000)

-------------------------------------------------------------------------------
VARIABLES IN PYTHON

Variables are holders, or names, that refer to the objects assigned to them during declaration.

Every programming language has the concept of a variable, which is essential for storing references to different types of objects in heap memory.

In the example below,

variable x is declared first and y is assigned from x. Python allocates an object at address 2367230667120 (as reported by id()) for x and reuses the same object for y. When x is reassigned to 6, x points to a new object at address 2367230667216, while y still refers to the original object.

x = 3
print(id(x))
print('*'*40)
y = x
x = 6

print(x,y,\
      id(x),\
      id(y))

O/P:
2367230667120
****************************************
6 3 2367230667216 2367230667120
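To make the distinction concrete, here is a minimal sketch (the exact id() values will differ on every run) contrasting rebinding an immutable int with mutating a shared list:

```python
# Rebinding: assignment makes y refer to the same object as x;
# reassigning x binds x to a new object and leaves y untouched.
x = 3
y = x
assert id(x) == id(y)      # both names refer to the same int object
x = 6
assert x == 6 and y == 3   # y still refers to the original object

# Mutation: two names bound to the SAME list see each other's changes.
a = [1, 2]
b = a
b.append(3)
assert a == [1, 2, 3]      # a changed too: a and b are one object
assert id(a) == id(b)
```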
Variable Declaration In Python
var = 'string'
var1 = 1
var2 = 1.0
var3 = True
var4 = None
var5 = [1,1.0,'list',True,None,[1,1.0,'list'],(),{}]
var6 = ('tuple',)
var7 = {'set'}
var8 = {'k':'v'}
tmpvar = var

print(var + '\n' \
      + str(var1) \
      + '\n' + str(var2) \
      + '\n' + str(var3) \
      + '\n' + str(var4)\
      + '\n' + str(var5)\
      + '\n' + str(var6)\
      + '\n' + str(var7)\
      + '\n' + str(var8))

print(type(var),\
      type(tmpvar),\
      type(var1), \
      type(var2), \
      type(var3), \
      type(var4), \
      type(var5), \
      type(var6), \
      type(var7), \
      type(var8))

print(id(var),\
      id(tmpvar),\
      id(var1), \
      id(var2), \
      id(var3), \
      id(var4), \
      id(var5), \
      id(var6), \
      id(var7), \
      id(var8))

O/P:

string
1
1.0
True
None
[1, 1.0, 'list', True, None, [1, 1.0, 'list'], (), {}]
('tuple',)
{'set'}
{'k': 'v'}

<class 'str'> <class 'str'> <class 'int'> <class 'float'> <class 'bool'> <class 'NoneType'> <class 'list'> <class 'tuple'> <class 'set'> <class 'dict'>

2036667053296 2036667053296 2036660922672 2036742039664 140732748421224 140732748471512 2036742276672 2036742155424 2036742978016 2036742258176

Variable declaration guidelines

  • A variable name can begin with an underscore (_) or a letter.
  • A variable name cannot begin with a number.
  • Variable names can only contain alphanumeric characters and underscores (a-z, A-Z, 0-9, and _).
  • Variable names are case-sensitive (var, Var and VAR are three different variables).
  • Reserved words (keywords) cannot be used as variable names.

_var = 1
_var_ = 2
var1 = 3
# 1var = 4   # invalid declaration: SyntaxError (begins with a number)
# @var = 5   # invalid declaration: SyntaxError (@ is not allowed)
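The rules above can be checked programmatically: str.isidentifier() and the standard keyword module cover all of them. A small helper like the following (the function name is hypothetical, shown only for illustration) validates candidate names:

```python
import keyword

def is_valid_variable_name(name: str) -> bool:
    """Return True if name is a legal Python variable name."""
    return name.isidentifier() and not keyword.iskeyword(name)

print(is_valid_variable_name("_var"))   # True
print(is_valid_variable_name("var1"))   # True
print(is_valid_variable_name("1var"))   # False: begins with a number
print(is_valid_variable_name("@var"))   # False: @ is not allowed
print(is_valid_variable_name("class"))  # False: reserved keyword
```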

There are two common naming conventions, snake case and camel case, used to write variable names.

CamelCase:

The term comes from a camel’s humps: each new word inside the name rises with an uppercase letter, like a slope.

Example: AbcDef, WxYz

Snake case:

These names resemble a snake’s winding body, with an underscore between the words

Example: abc_efg, xyz_abc

CamelCase = "This is a camel case variable"

snake_case = "This is snake case variable"
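As a small sketch, the two conventions can be converted into each other with the re module (the function names here are hypothetical):

```python
import re

def camel_to_snake(name: str) -> str:
    """Insert an underscore before each uppercase letter (except the first), then lowercase."""
    return re.sub(r'(?<!^)(?=[A-Z])', '_', name).lower()

def snake_to_camel(name: str) -> str:
    """Capitalize each underscore-separated word and join them."""
    return ''.join(word.capitalize() for word in name.split('_'))

print(camel_to_snake("AbcDef"))   # abc_def
print(snake_to_camel("abc_efg"))  # AbcEfg
```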