Archives April 2023

TEXT FILE MANIPULATION IN PYTHON

In general, there are four ways to manipulate a file:

  • read
  • write
  • append
  • delete

In Python, text files can be manipulated in a similar way.

The open() function opens a file in a mode given by its second argument; depending on the mode, the returned file object can be used to read the contents of the file or to write data into it.

r - read
w - write
a - append

There is no mode that deletes a string from a text file directly. Instead, read the contents into memory, remove the unwanted string, and write the result back in write mode.
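A minimal sketch of that read-modify-rewrite approach, using the Colour.txt file from the examples below:

```python
# First create a small file to work on (same data as the write example below)
with open("Colour.txt", "w") as f:
    f.write("Red\nGreen\nBlue\n")

# Read everything into memory
with open("Colour.txt") as f:
    lines = f.readlines()

# Rewrite the file, skipping the line we want to "delete"
with open("Colour.txt", "w") as f:
    for line in lines:
        if line.strip() != "Green":
            f.write(line)

print(open("Colour.txt").read().splitlines())  # → ['Red', 'Blue']
```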


#Write the file
f = open("Colour.txt","w")
f.write("Red\n")
f.write("Green\n")
f.write("Blue\n")
f.close()


#Read the file
f = open("Colour.txt","r")
print(f.read())
f.close()

#Add new entries to existing file

f = open("Colour.txt","a")
f.write("Yellow\n")
f.close()

#Read the file
print('*'*40)
f = open("Colour.txt","r")
print(f.read())
f.close()

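The same pattern can also be written with a with block, which closes each file automatically even if an exception occurs:

```python
# Recreate the sample file, then read it back; the with block
# closes each file for us, so no explicit f.close() is needed.
with open("Colour.txt", "w") as f:
    f.write("Red\nGreen\nBlue\nYellow\n")

with open("Colour.txt") as f:
    print(f.read())
```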
GRAPHICS USING PYTHON TURTLE MODULE

If you need to draw geometric diagrams in Python, turtle is a good module: it creates different figures by moving a pointer forward and turning it through the angles you specify.

A small turtle pointer acts as a pen which draws the figures as per our definition.

The turtle module has many methods which can be used to control the shape of the figures.

import turtle

turtle.shape("turtle")

turtle.exitonclick()

import turtle

turtle.shape("turtle")

for X in range(0,10):
    turtle.forward(80)
    turtle.right(45)

for Y in range(0,10):
    turtle.forward(80)
    turtle.left(90)

for X in range(0,10):
    turtle.forward(80)
    turtle.right(45)

turtle.exitonclick()

import turtle

for X in range(0, 10):
    turtle.right(45)
    for Y in range(0, 5):   # use a different loop variable to avoid shadowing X
        turtle.forward(100)
        turtle.right(90)
turtle.exitonclick()

PYTHON PROGRAM TO GUESS COLOURS


#import random module
import random

#Use choice method
color = random.choice(['red','blue','pink','green','yellow','brown','grey','white','violet'])
#Loop until the colour is guessed correctly
loop = True
while loop == True:
    # print(color)  # uncomment to reveal the answer while debugging
    input_colour = input("Guess a colour: ")
    input_colour = input_colour.casefold()
    if input_colour == color:
        print("You have guessed the colour correctly")
        loop = False
    else:
        continue
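For a testable variant, the comparison and the loop can be factored into functions. This is a sketch; the play helper and its pre-supplied guess list are illustrative, not part of the original program:

```python
import random

COLOURS = ['red', 'blue', 'pink', 'green', 'yellow',
           'brown', 'grey', 'white', 'violet']

def check_guess(secret, guess):
    """Case-insensitive comparison of a guess against the secret colour."""
    return guess.casefold() == secret.casefold()

def play(guesses, secret=None):
    """Run the game against a list of pre-supplied guesses; return the
    number of attempts used, or None if every guess was wrong."""
    secret = secret or random.choice(COLOURS)
    for attempt, guess in enumerate(guesses, start=1):
        if check_guess(secret, guess):
            return attempt
    return None

print(play(["blue", "RED"], secret="red"))  # → 2
```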



“STATISTICS” THE VITALS OF ORACLE DATABASE

Statistics are to a database what sensory receptors are to a human body. The eyes, ears, nose and tongue collect information from stimuli such as smell, light and heat, and send signals to the brain in the form of electrical impulses.

The nervous system controls and coordinates the activities of the body using the information gathered from these organs. A mind that lives in the present enjoys the current moment; one that does not worries about the past. We need not go deeper into human biology, but the analogy is useful.

Similar to how the human body and mind work, Oracle collects present information about the database in the form of statistics for each and every change, such as DML (insert, update and delete), that happens in the database. If appropriate statistics are not fed to Oracle, the optimizer treats the past as the present, using old information that leads to sub-optimal plans. A single SQL statement or a batch may then run slowly, impacting end users.

A DBA should know what happens in the database he works on daily, so it is important to understand the internal jobs and how they collect statistics.

  • When is the stats job scheduled?
  • What are the settings or default configurations of the stats job?
  • Does the auto job collect stats properly, or is manual intervention required?
  • Why did the previous day's stats job fail?
  • Why are stats stale even after stats collection?
  • Is the scheduled window sufficient for the job to complete?
  • What is the right time to collect stats according to the incoming workload?
  • Which objects become stale frequently?
  • How often are there changes (DML) on the objects?
  • Do stats become a cause of any performance slowness? ….

Ask the questions above when determining the statistics requirements for a database.

Since statistics are vital for a database to live in the present, they must be kept up to date to get good performance results.

Basically, there are three ways to gather stats:

  • Manual – This is not always possible
  • Automatic statistics internal job – This is good but may fail sometimes due to timeout
  • Automated statistics gathering using a shell script or PL/SQL in a cronjob

Types of statistics

  • System statistics
  • Table
  • Column
  • Index

The data dictionary stores the optimizer statistics in the shared pool (SGA) and the system tablespace.

Statistics can be

  • Copied from a different database
  • Exported and imported to different database
  • Gathered
  • Locked to avoid further statistics change

In this article, we will see how to identify the auto optimizer stats job and its behaviour.

To check the auto optimizer stats job:

SQL> set lines 200 pages 500
SQL> col CLIENT_NAME for a20
SQL> col TASK_NAME for a15
SQL> col OPERATION_NAME for a20
SQL> col ATTRIBUTES for a20
SQL> col TASK_PRIORITY for 999
SQL> select client_name,
            task_name,
            operation_name,
            status,
            attributes,
            task_priority
     from dba_autotask_task
     where task_name like '%stats%';

CLIENT_NAME                     TASK_NAME         OPERATION_NAME           STATUS   ATTRIBUTES             TASK_PRIORITY
------------------------------- ----------------- ------------------------ -------- ---------------------- -------------
auto optimizer stats collection gather_stats_prog auto optimizer stats job ENABLED  VOLATILE, SAFE TO KILL             2

The parameter values can be set either globally or locally, specific to an object, using the DBMS_STATS.SET_*_PREFS procedures.

To get the existing default values of the stats parameters, the GET_PREFS function is very handy. Use the script below to list the values of all the stats parameters.

SQL> select 'AUTOSTATS_TARGET =====> '||DBMS_STATS.GET_PREFS('AUTOSTATS_TARGET')||'' as "STATS_PARAM" from dual
 union all
select 'CASCADE =====> '||DBMS_STATS.GET_PREFS('CASCADE')||'' from dual
 union all
select 'CONCURRENT =====> '||DBMS_STATS.GET_PREFS('CONCURRENT')||'' as "STATS_PARAM" from dual
 union all
select 'DEGREE =====> '||DBMS_STATS.GET_PREFS('DEGREE')||'' from dual
 union all
select 'ESTIMATE_PERCENT =====> '||DBMS_STATS.GET_PREFS('ESTIMATE_PERCENT')||'' from dual
 union all
select 'GRANULARITY =====> '||DBMS_STATS.GET_PREFS('GRANULARITY')||'' from dual
 union all
select 'INCREMENTAL =====> '||DBMS_STATS.GET_PREFS('INCREMENTAL')||'' from dual
 union all
select 'INCREMENTAL_LEVEL =====> '||DBMS_STATS.GET_PREFS('INCREMENTAL_LEVEL')||'' from dual
 union all
select 'INCREMENTAL_STALENESS =====> '||DBMS_STATS.GET_PREFS('INCREMENTAL_STALENESS')||'' from dual
 union all
select 'METHOD_OPT =====> '||DBMS_STATS.GET_PREFS('METHOD_OPT')||'' from dual
 union all
select 'OPTIONS =====> '||DBMS_STATS.GET_PREFS('OPTIONS')||'' from dual
 union all
select 'STALE_PERCENT =====> '||DBMS_STATS.GET_PREFS('STALE_PERCENT')||'' from dual;

STATS_PARAM
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
AUTOSTATS_TARGET =====> AUTO
CASCADE =====> DBMS_STATS.AUTO_CASCADE
CONCURRENT =====> OFF
DEGREE =====> NULL
ESTIMATE_PERCENT =====> DBMS_STATS.AUTO_SAMPLE_SIZE
GRANULARITY =====> AUTO
INCREMENTAL =====> FALSE
INCREMENTAL_LEVEL =====> PARTITION
INCREMENTAL_STALENESS =====>
METHOD_OPT =====> FOR ALL COLUMNS SIZE AUTO
OPTIONS =====> GATHER
STALE_PERCENT =====> 10

12 rows selected.

For individual objects or entities, we can set the preferences manually using the procedures below:

SET_TABLE_PREFS
SET_SCHEMA_PREFS
SET_DATABASE_PREFS
SET_GLOBAL_PREFS

ESTIMATE_PERCENT AND METHOD_OPT

These two parameters play a crucial role: ESTIMATE_PERCENT determines the sample size used when calculating statistics, and METHOD_OPT controls histogram creation for columns.

Tables whose STALE_STATS column shows the value 'YES' have stale statistics. These tables need their statistics updated in order to get optimal execution plans for SQL statements.

SQL> set lines 200 pages 1000
SQL> col PARTITION_NAME for a10
SQL> col TABLE_NAME for a15
SQL> col INDEX_NAME for a10
SQL> col CLUSTERING_FACTOR for 999999
SQL> select dts.TABLE_NAME,
            dts.PARTITION_NAME,
            dts.SAMPLE_SIZE as SS,
            dts.AVG_ROW_LEN as ARL,
            dts.BLOCKS,
            di.NUM_ROWS,
            di.CLUSTERING_FACTOR as CF,
            di.INDEX_NAME,
            dts.CHAIN_CNT as CC
     from dba_tab_statistics dts
     inner join dba_indexes di on ( dts.table_name = di.table_name )
     where dts.STALE_STATS='YES'
     order by 7 desc;

TABLE_NAME           PARTITION_NAME                     SS        ARL     BLOCKS   NUM_ROWS         CF INDEX_NAME                       CC
-------------------- -------------------------- ---------- ---------- ---------- ---------- ---------- ------------------------ ----------
SCHEDULER$_EVENT_LOG                                  1212        264         58                       SYS_IL0000007903C00016$$          0
WRH$_SYSSTAT         WRH$_SYSSTA_4107286976_277       7068         30         35      49476      40884 WRH$_SYSSTAT_PK                   0
WRH$_LATCH           WRH$_LATCH_4107286976_277        4620         54         43      32340      26568 WRH$_LATCH_PK                     0

In a 19c database, there is one default gather stats auto program which gathers stats during a scheduler window in off-business hours.

SQL> select task_name,count(*) from dba_autotask_task where task_name like '%stat%' group by task_name;

TASK_NAME                                                          COUNT(*)
---------------------------------------------------------------- ----------
gather_stats_prog                                                         1

The detail below shows the auto stats job executions during previous days:

SQL> set lines 200 pages 1000
SQL> col OPERATION for a25
SQL> col TARGET for a10
SQL> col STATUS for a10
SQL> col JOB_NAME for a20
SQL> col CPU for 999999
SQL> col DURATION for 999999
SQL> col START_TIME for a15
SQL> col END_TIME for a15
SQL> select doo.OPERATION,
            to_char(doo.START_TIME,'dd-mm-yy hh24:mi') "START",
            to_char(doo.END_TIME,'dd-mm-yy hh24:mi') "END",
            doo.STATUS,
            doo.JOB_NAME,
            extract(SECOND from dsjrd.RUN_DURATION) "DURATION",
            extract(SECOND from dsjrd.CPU_USED) "CPU"
     from dba_optstat_operations doo
     inner join dba_scheduler_job_run_details dsjrd on ( doo.JOB_NAME = dsjrd.JOB_NAME)
     where doo.TARGET = 'AUTO'
     order by doo.ID asc;

OPERATION                    START          END            STATUS     JOB_NAME             DURATION     CPU
---------------------------- -------------- -------------- ---------- -------------------- -------- -------
gather_database_stats (auto) 03-01-22 22:00 03-01-22 22:00 COMPLETED  ORA$AT_OS_OPT_SY_301       31      24
gather_database_stats (auto) 13-01-22 22:00 13-01-22 22:00 COMPLETED  ORA$AT_OS_OPT_SY_321       13      44
gather_database_stats (auto) 31-01-22 22:00 31-01-22 22:01 COMPLETED  ORA$AT_OS_OPT_SY_341       21      36
gather_database_stats (auto) 01-02-22 22:00 01-02-22 22:00 COMPLETED  ORA$AT_OS_OPT_SY_361        3      36

To check the auto stats job history

SQL> set lines 200 pages 1000
SQL> col CLIENT_NAME for a20
SQL> col JOB_NAME for a20
SQL> col JOB_STATUS for a10
SQL> col WDURATION for 999999
SQL> col JDURATION for 999999
SQL> select CLIENT_NAME,
            to_char(WINDOW_START_TIME,'dd-mm-yy hh24:mi') "WSTART",
            extract(SECOND from WINDOW_DURATION) "WDURATION",
            JOB_NAME,
            JOB_STATUS,
            to_char(JOB_START_TIME,'dd-mm-yy hh24:mi') "JSTART",
            extract(SECOND from JOB_DURATION) "JDURATION"
     from dba_autotask_job_history
     where CLIENT_NAME like '%stat%'
     order by 4;

CLIENT_NAME                     WSTART         WDURATION JOB_NAME             JOB_STATUS JSTART         JDURATION
------------------------------- -------------- --------- -------------------- ---------- -------------- ---------
auto optimizer stats collection 03-01-22 22:00        49 ORA$AT_OS_OPT_SY_301 SUCCEEDED  03-01-22 22:00        31
auto optimizer stats collection 13-01-22 22:00        29 ORA$AT_OS_OPT_SY_321 SUCCEEDED  13-01-22 22:00        13
auto optimizer stats collection 31-01-22 22:00        55 ORA$AT_OS_OPT_SY_341 SUCCEEDED  31-01-22 22:00        21
auto optimizer stats collection 01-02-22 22:00         0 ORA$AT_OS_OPT_SY_361 SUCCEEDED  01-02-22 22:00         3

ORA-01109: database not open

Open the pdb before connecting through the alias.

kIsH<X>conn c##test/password@xypdb1
ERROR:
ORA-01109: database not open

kIsH<X>alter pluggable database xypdb1 open;

Pluggable database altered.

kIsH<X>conn c##test/password@xypdb1
Connected.

TESTING WITH HIGH NUMBER OF PDBS IN ONE CDB

Have you ever tested with the maximum (or near-maximum) number of pdb's and observed their behavior?

This test uses a VM with 10 GB of memory and 1 virtual CPU.

Since this is a 19c database, up to 4096 pdb's can be configured, compared to only 252 in 12c.

There is not sufficient space or resource here to configure 4000+ pdb's, though dbca allows the maximum count of 4094 excluding pdb$seed.

As a reminder, in 12c a maximum of 1 user pdb, and in 19c a maximum of 3 user-defined pdb's, can be used without an extra license.

Calculating the FRA space needed per PDB shows how quickly the disk space requirement grows:

3000 GB / 4000 PDB's = 0.75 GB per pdb

4000 * 750 MB = 3,000,000 MB (about 3 TB) for the total number of pdb's

In this setup, the FRA required 1129494 MB (about 1 TB) in total for all the PDB's.

Per PDB, that works out to at least 443 MB.
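As a quick sanity check on the arithmetic above (using the same assumed totals):

```python
fra_total_gb = 3000          # planned FRA size from this post
num_pdbs = 4000              # target number of PDBs
per_pdb_gb = fra_total_gb / num_pdbs
print(per_pdb_gb)            # → 0.75

# Going the other way: 4000 PDBs at 750 MB each
total_mb = num_pdbs * 750
print(total_mb)              # → 3000000  (3,000,000 MB, about 3 TB)
```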

So the fast recovery area should be at least three times the database size.

250 pdb's require roughly 6.5 GB of shared pool.

Ensure that the db_files parameter is set to a higher value in line with the number of pdb's.

kIsH<X>show pdbs

    CON_ID CON_NAME                       OPEN MODE  RESTRICTED
---------- ------------------------------ ---------- ----------
         2 PDB$SEED                       READ ONLY  NO
         3 XYPDB1                         READ WRITE NO
         4 XYPDB2                         READ WRITE NO
         5 XYPDB3                         READ WRITE NO
         6 XYPDB4                         READ WRITE NO
         7 XYPDB5                         READ WRITE NO
         8 XYPDB6                         READ WRITE NO
....
    CON_ID CON_NAME                       OPEN MODE  RESTRICTED
---------- ------------------------------ ---------- ----------
       200 XYPDB198                       READ WRITE NO
       201 XYPDB199                       READ WRITE NO
       202 XYPDB200                       READ WRITE NO

Oracle automatically calculates the max_pdbs parameter based on the resource and space availability on the server. Any attempt to increase the parameter beyond this throws ORA-65334, so the maximum number of pdb's which can be created on this server is 254. This parameter helps limit users from unknowingly creating extra pdb's without considering the licensing implications.

kIsH<X>show parameter max_pdb

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
max_pdbs                             integer     254


kIsH<X>SELECT count(*) from v$containers;

  COUNT(*)
----------
       202


kIsH<X>alter system set max_pdbs=4096;
alter system set max_pdbs=4096
*
ERROR at line 1:
ORA-02097: parameter cannot be modified because specified value is invalid
ORA-65334: invalid number of PDBs specified

Reduce max_pdbs to a lower value and create pdb's. Oracle allows a maximum of 202 user-defined pdb's here, excluding pdb$seed and the CDB root. Attempting to create more than 202 user pdb's throws "ORA-65010: maximum number of pluggable databases created".

kIsH<X>alter system set max_pdbs=202;

System altered.

kIsH<X>CREATE PLUGGABLE DATABASE xypdb201 admin user pdbadmin identified by password;

Pluggable database created.

kIsH<X>show parameter max_pdb

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
max_pdbs                             integer     202
kIsH<X>CREATE PLUGGABLE DATABASE xypdb202 admin user pdbadmin identified by password;

Pluggable database created.


kIsH<X>CREATE PLUGGABLE DATABASE xypdb203 admin user pdbadmin identified by password;
CREATE PLUGGABLE DATABASE xypdb203 admin user pdbadmin identified by password
*
ERROR at line 1:
ORA-65010: maximum number of pluggable databases created


kIsH<X>SELECT count(*) from v$containers;

  COUNT(*)
----------
       204

PYTHON PROGRAM TO LOOP INFINITE TIMES

Press Ctrl + C to break the loop, or click the stop button.

while not None:
    print("loop")
#or
while not False:
    print("loop")
#or
while True:
    print("loop")

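Pressing Ctrl + C raises KeyboardInterrupt inside the loop, so the break can also be caught and handled. A small sketch that simulates the keypress after a few iterations:

```python
import itertools

count = 0
try:
    for count in itertools.count():   # an infinite counter
        if count >= 3:
            raise KeyboardInterrupt   # simulate pressing Ctrl + C
except KeyboardInterrupt:
    print("loop interrupted after", count, "iterations")
```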

PYTHON PROGRAM FOR BANK WITHDRAWAL

#import the library
import csv

def makelist():
    """function to make a temporary list from the csv data"""
    accno = list(csv.reader(open("cust_details.csv")))
    tmp = []
    for x in accno:
        tmp.append(x)
    return tmp

def addDetails():
    """Function to add details of the customer"""
    cname = input("Enter the cname: ")
    cname = cname.capitalize()   # capitalize() returns a new string, so assign it
    acnt_no = int(input("Enter the account number: "))
    acnt_type = input("Enter the type of the account: ")
    acnt_type = acnt_type.upper()   # upper() returns a new string, so assign it
    acnt_amount = int(input("Enter the balance amount: "))
    pin = input("Enter a pin: ")
    row = cname + "," + str(acnt_no) + "," + acnt_type + "," + str(acnt_amount) + "," + str(pin) + "\n"
    file = open('cust_details.csv','a')
    file.write(str(row))
    file.close()
    print('*'*40)
    print("Account has been added successfully!")
    return acnt_amount

def viewCustomer():
    """function to view the details of the customer"""
    file = open("cust_details.csv","r")
    for r in file:
        print(r)
    file.close()

def viewBalance(tmp):
    """function to view the balance of a specific customer"""
    ci = input("Enter your acct no: ")
    r = 0
    for y in tmp:
        if ci in tmp[r][1]:
            print(y)
        r = r + 1
    return tmp


def withDraw(tmp):
    """function to withdraw money from the account"""
    ci = input("Enter your acct no: ")
    cash = int(input("Please enter the amount to be withdrawn: "))
    r = 0
    for x in tmp:
        if ci in tmp[r][1]:
            currbal = int(tmp[r][3])
            global balance
            balance = currbal - cash
            tmp[r][3] = balance
            ind = r
            bal = input("Do you want to print the balance receipt? y/n")
            print('*' * 40)
            if bal.casefold() == 'y':
                print("Balance for the customer {0} is {1}".format(x[0], balance))
                print('*' * 40)
                print("Thanks {0}, for making a transaction! Your updated details below : ".format(x[0]))
                print('*' * 40)
                in_file = open("cust_details.csv","r")
                reader = csv.reader(in_file)
                l = list(reader)
                in_file.close()
                l[r][3] = balance
                nl = open("cust_details.csv",'w',newline='')
                csv_writer = csv.writer(nl)
                csv_writer.writerows(l)
                nl.close()
                for v in tmp[ind]:
                    print(v, end = '|')
                print()
            else:
                pass
        r += 1
    return None

def deposit(tmp):
    """function to deposit money on the account"""
    ci = input("Enter your acct no: ")
    dep = int(input("Enter the amount that should be deposit: "))
    newbalance = None   # stays None when the account is not found
    r = 0
    for x in tmp:
        if ci in tmp[r][1]:
            newbalance = int(tmp[r][3]) + dep
            tmp[r][3] = newbalance
            ind = r
            print("Updated balance for {0} is {1}".format(x[0], newbalance))
            in_file = open("cust_details.csv","r")
            reader = csv.reader(in_file)
            l = list(reader)
            in_file.close()
            l[r][3] = newbalance
            nl = open("cust_details.csv",'w',newline='')
            csv_writer = csv.writer(nl)
            csv_writer.writerows(l)
            nl.close()
            for v in tmp[ind]:
                print(v, end ='|')
            print()
        r += 1
    return newbalance

def ChangePin(tmp):
    ci = input("Enter the acc id: ")
    cp = input("Enter the new password: ")
    confirm = input("Enter the password again to confirm: ")
    assert cp == confirm,"Passwords do not match between 1st and 2nd entry. Exiting!"
    r = 0
    for x in tmp:
        if ci in tmp[r][1]:
            tmp[r][4] = cp
            ind = r
            in_file = open("cust_details.csv","r")
            reader = csv.reader(in_file)
            l = list(reader)
            in_file.close()
            l[r][4] = cp
            nl = open("cust_details.csv",'w',newline='')
            csv_writer = csv.writer(nl)
            csv_writer.writerows(l)
            nl.close()
            for v in tmp[ind]:
                print(v, end ='|')
            print()
        r += 1
    return cp

def clear_csv():
    """function to clean the csv"""
    file = open("cust_details.csv","w+")
    file.close()
    print("Details are erased successfully!")

print("Options available :" + '\n' \
      + "1) Get the new customer details to be stored" + '\n' \
      + "2) Withdraw Amount From specified account" + '\n' \
      + "3) Deposit Amount For specified account" + '\n' \
      + "4) View Balance" + '\n' \
      + "5) Change the Pin" + '\n'
      + "6) View All Customer Details" + '\n'
      + "7) Close" + '\n'
      + "8) Clear the csv")

tmp = makelist()
loop = True

user = str(input("Enter your name as per ID proof: "))
pin = str(input("please enter the pin: "))
while loop == True:
    r = 0
    for x in tmp:
        # print(x)
        if user in tmp[r][0] and pin in tmp[r][4]:
            print("logs ---> {} has logged in currently".format(user))
            # print(tmp[r][0],tmp[r][4])
            option = input("Enter any number option from above: ")
            if option == "1":
                addDetails()
            elif option == "2":
                draw = withDraw(tmp)
            elif option == "3":
                dep = deposit(tmp)
            elif option == "4":
                viewBalance(tmp)
            elif option == "5":
                ChangePin(tmp)
            elif option == "6":
                viewCustomer()
            elif option == "7":
                again = input("Do you want to perform any transaction again? - y/n ")
                if again.casefold() == "y":
                    continue
                else:
                    loop = False
                    print("*** Closing the session ***")
                    break
            elif option == "8":
                clear_csv()
        r += 1
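The program expects a cust_details.csv file with one row per account, in the column order name, account number, account type, balance, pin. A minimal sketch to seed a sample file (the account data here is made up):

```python
import csv

# Hypothetical sample rows: name, account number, account type, balance, pin
rows = [
    ["Alice", "1001", "SAVINGS", "5000", "1234"],
    ["Bob",   "1002", "CURRENT", "9000", "4321"],
]

# newline='' is the documented way to open csv files for writing
with open("cust_details.csv", "w", newline="") as f:
    csv.writer(f).writerows(rows)

# Read the rows back to confirm the layout the program expects
with open("cust_details.csv", newline="") as f:
    for row in csv.reader(f):
        print(row)
```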

SP2-0618: Cannot find the Session Identifier. Check PLUSTRACE role is enabled
SP2-0611: Error enabling STATISTICS report

In multitenant systems, there is often confusion between the container database (CDB) and the pluggable databases (PDBs).

Create a common user from the CDB and grant basic privileges to that user.

kIsH<X>create user c##test identified by password container=all;

User created.

kIsH<X>grant create session, unlimited tablespace to c##test container=all;

Grant succeeded.

Connect to the pdb with the user and set autotrace. Autotrace fails due to missing privileges on the statistics views.

kIsH<X>conn c##test/password@xypdb1
Connected.
kIsH<X>set autot traceonly
SP2-0618: Cannot find the Session Identifier.  Check PLUSTRACE role is enabled
SP2-0611: Error enabling STATISTICS report

The plustrce.sql script should be executed inside the pdb. Many DBA's, used to the non-CDB architecture, lack this understanding and execute the script outside the pdb, from the CDB root.

kIsH<X>alter session set container=XYPDB1;

Session altered.

kIsH<X>@?/sqlplus/admin/plustrce.sql
kIsH<X>
kIsH<X>drop role plustrace;

Role dropped.

kIsH<X>create role plustrace;

Role created.

kIsH<X>
kIsH<X>grant select on v_$sesstat to plustrace;

Grant succeeded.

kIsH<X>grant select on v_$statname to plustrace;

Grant succeeded.

kIsH<X>grant select on v_$mystat to plustrace;

Grant succeeded.

kIsH<X>grant plustrace to dba with admin option;

Grant succeeded.

kIsH<X>
kIsH<X>set echo off

Now grant the plustrace role to the common user.

kIsH<X>grant plustrace to c##test;

Grant succeeded.

Autotrace now works fine.

kIsH<X>conn c##test/password@xypdb1
Connected.
kIsH<X>set autot traceonly

CAN A FUNCTION BASED INDEX GET COVERED?

The answer is yes. A function-based index can be created as a covering index by combining the functional expression with normal columns.

kIsH@xHydra<>CREATE INDEX cov_xd_xc_xs ON xtbl(trunc(xdate),country,salary);
Index created.

kIsH@xHydra<>set autot traceonly explain
kIsH@xHydra<>SELECT name,country FROM xtbl WHERE trunc(xdate) < sysdate and country <> 'NULL' and salary < 10000;

Execution Plan
----------------------------------------------------------
Plan hash value: 4249760034

----------------------------------------------------------------------------------------------------
| Id  | Operation                           | Name         | Rows  | Bytes | Cost (%CPU)| Time     |
----------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                    |              |   455 | 14560 |   123   (0)| 00:00:01 |
|   1 |  TABLE ACCESS BY INDEX ROWID BATCHED| XTBL         |   455 | 14560 |   123   (0)| 00:00:01 |
|*  2 |   INDEX RANGE SCAN                  | COV_XD_XC_XS |    82 |       |    41   (0)| 00:00:01 |
----------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - access(TRUNC(INTERNAL_FUNCTION("XDATE"))<SYSDATE@! AND "SALARY"<10000)
       filter("SALARY"<10000 AND "COUNTRY"<>'NULL')