Oracle interview questions

When you walk into an interview panel, the question almost every interviewer opens with is to introduce yourself. I have given a sample introduction below. There is no single right way to answer it; there are multiple ways of answering depending on your own perspective.

1) Tell me about yourself and your day-to-day activities as a DBA.

I am a database administrator with X years of experience.

I have handled over 100 Oracle servers with 200 databases and 300 instances across my previous projects.

I have good experience in installation, configuration, upgrades, migration, user management, tablespace management, patching, cloning, backup, high availability solutions like RAC and Data Guard, performance tuning and so on …

2) Can you explain the architecture of Oracle Database 12c?

An instance is the combination of memory structures and background processes; the memory structures are the SGA and PGA (and the UGA when shared servers are used).

The SGA consists of the shared pool, database buffer cache, redo log buffer, large pool, Java pool and streams pool.

The database consists of physical files such as datafiles, online redo log files, control files, archived log files, etc.

Mandatory background processes: SMON, PMON, DBWR, LGWR and CKPT
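
As a quick illustration (not part of the canned answer), you can see these memory areas on a live instance with:

SQL> show sga

SQL> select name, round(bytes/1048576) MB from v$sgainfo order by bytes desc;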

3)What is SMON?

SMON performs instance/crash recovery and cleans up temporary segments.

4) What are the states of buffers in the database buffer cache?

Free (unused), pinned (currently held) and dirty (modified).

5) How do you calculate the size of a database? Which views would you use?

dba_segments, dba_extents, v$datafile, dba_data_files, dba_temp_files, v$tempfile, dba_free_space
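
For example, a rough estimate of the used versus allocated size in GB using two of those views:

SQL> select round(sum(bytes)/1073741824,2) "USED_GB" from dba_segments;

SQL> select round(sum(bytes)/1073741824,2) "ALLOCATED_GB" from dba_data_files;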

6) How do you calculate the memory used by a database? Which views would you use?

select sum(value)/1024/1024/1024 "SGA_SIZE (GB)" from v$sga;

select sum(pga_max_mem)/1024/1024/1024 "PGA_SIZE (GB)" from v$process;

7) How do you check the resource consumption (CPU, memory) of a process on a Linux server?

top, free -m, mpstat, sar, vmstat
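
For a single process, assuming 12345 is the PID you are interested in (a made-up example), you could run:

~] top -p 12345

~] ps -o pid,pcpu,pmem,cmd -p 12345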

8) What is the state of a redo log when a new member is added?

Unused

9) If one of the control files is corrupted, what happens to the database?

We get the error "ORA-00205: error in identifying control file, check alert log". The control file needs to be restored from a recent backup (or copied from a surviving multiplexed copy).

10) How many recovery catalogs can there be for a database?

We can have multiple catalogs for a database.

And multiple databases can register with a single catalog.

11) Explain SQL statement processing in Oracle.

When a SQL statement is executed, it enters the library cache in the shared pool, where it is parsed.

The parse includes a syntax check and a semantic check.

The optimizer then generates an execution plan for the statement, identified by its hash value.

The statement is then executed and the result is retrieved from the object.

12) What will you do if a user complains about performance issues with a particular SQL query?

Fetch the SQL_ID using v$sql and v$session, then run the SQL Tuning Advisor to get recommendations; for example, you might add an index on a predicate column as advised. A rough outline is sketched below.
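
A rough sketch of that flow (the username TEST, the SQL_ID and the task name are placeholders, and the SQL Tuning Advisor requires the Tuning Pack):

SQL> select s.sid, s.username, s.sql_id, q.sql_text
     from v$session s, v$sql q
     where s.sql_id = q.sql_id and s.username = 'TEST';

SQL> declare
       l_task varchar2(64);
     begin
       l_task := dbms_sqltune.create_tuning_task(sql_id => '1a2b3c4d5e6f7', task_name => 'tune_slow_sql');
       dbms_sqltune.execute_tuning_task(task_name => 'tune_slow_sql');
     end;
     /

SQL> set long 100000
SQL> select dbms_sqltune.report_tuning_task('tune_slow_sql') from dual;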

13) How do you list all the nodes in a RAC cluster?

olsnodes -nip

14) How do you check if Data Guard is configured and running for a database?

v$dataguard_status, show parameter log_archive_dest_2

15) How do you check if the MRP is running on a physical standby?

Query v$managed_standby, or run ps -ef | grep mrp on the standby host.
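
For example, on the standby side:

SQL> select process, status, thread#, sequence# from v$managed_standby where process like 'MRP%';

~] ps -ef | grep -i mrp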

16) What do you check in an AWR report for performance issues?

DB time versus elapsed time, the top five foreground events by CPU and elapsed time, and wait events such as db file sequential read and db file scattered read. Please refer to online resources for more details on these terms.

Mount and Unmount NFS in Linux
Be unique

A NAS (network attached storage) mount point is remote storage attached to a server.

The system administrator can access and share files over the network using the NFS (network file system) protocol.

1)Check the available mount points on your server

~] df -h

2)Check if nfs service is running

~] service nfs status   (to start the nfs service: service nfs start)

3)Check the nfs mount points

~] showmount -e a01nfs01

4)Create the directory structure to mount the nfs

~] mkdir /home/oracle/nfs

5)Give permissions to the directory

~] chown -R oracle:oinstall /home/oracle/nfs

6)Mount the nfs

~] mount -t nfs a01nfs01:/st01/orcl1/client/nfs /home/oracle/nfs
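
To confirm that the mount succeeded, for example:

~] df -h /home/oracle/nfs

~] mount | grep /home/oracle/nfs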

7) To mount the NFS share permanently, add the NFS entry to /etc/fstab

~] vi /etc/fstab

8)Add the following entry

a01nfs01:/st01/orcl1/client/nfs /home/oracle/nfs nfs rw,bg,hard,nointr,rsize=100000,wsize=100000,tcp,vers=3,timeo=600,actimeo=0,noatime 0 0

9) After adding the above entry, the share is mounted automatically at every boot; you can also run mount -a to pick up the fstab entry immediately, or reboot the server to verify it.

Warning!! Don't blindly run the command below. A reboot requires downtime, so plan it in advance if the server is in production.

~] shutdown -r now

10)To unmount the nfs

~] umount /home/oracle/nfs

11) To remove the NFS mount permanently, comment out the entry in /etc/fstab and save

#a01nfs01:/st01/orcl1/client/nfs /home/oracle/nfs nfs rw,bg,hard,nointr,rsize=100000,wsize=100000,tcp,vers=3,timeo=600,actimeo=0,noatime 0 0

Thank you !!!

Shell scripting basics
“Learn something new”

We all sometimes wish there were a machine that could do our routine work so that we could concentrate on innovation. Shell scripting is one of the most powerful techniques for automating your daily tasks. In Linux, bash is one of the most common shells, and it can be invoked from the command line.

We are going to see how to create, edit and execute a shell script in Linux.

We can use the vi editor to write a shell script.

In a Linux terminal, type the following:

~]$ vi file_name

Every bash script starts with the line:

#!/bin/bash

where #! is called sharp-bang, commonly known as the "shebang"; it tells the kernel to invoke the bash executable at /bin/bash to interpret the script.

For example: I want a shell script that cleans up logfiles older than two days in a specific log directory. I write a shell script like:

#!/bin/bash

find /path/to/files/ -type f -name '*.log' -mtime +2 -exec rm {} \;

(in vi, press Esc and type :wq to save and exit the editor)
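
A slightly fuller version of the same cleanup script, with the directory and retention pulled into variables (LOG_DIR and RETENTION_DAYS are names I made up for illustration), could look like this:

#!/bin/bash
# remove *.log files older than RETENTION_DAYS from LOG_DIR
LOG_DIR=/path/to/files
RETENTION_DAYS=2
find "$LOG_DIR" -type f -name '*.log' -mtime +"$RETENTION_DAYS" -exec rm {} \;
echo "log cleanup of $LOG_DIR completed on $(date)"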

Give the script execute permission:

chmod 755 filename

To execute a shell script:

use "./filename"

example: if the script is in your current working directory, prefix it with "./"

./filename

if the script is in some other directory, either call it with its full path or append that directory to the PATH variable:

export PATH=$PATH:/path/to/script/directory

eg: export PATH=$PATH:/home/oracle/scripts

To check the current value of PATH:

echo $PATH

We will see more in further posts. Thank you!!!

ORA-01652: unable to extend temp segment by 128 in tablespace users
“Be alpha”

You often see ORA-01652 when huge sorts (GROUP BY, ORDER BY) or heavy hash joins generate large sort segments that exhaust the PGA and spill over into the temporary tablespace to hold intermediate result sets. If the PGA is not big enough to complete the sort in memory, the intermediate result sets go through one-pass or multi-pass operations, which sort on disk and are much slower than in-memory PGA sorts.
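
You can check how often sorts are spilling out of memory with the workarea statistics (an illustrative check; the counters are instance-wide since startup):

SQL> select name, value from v$sysstat where name like 'workarea executions%';

SQL> select name, round(value/1048576) MB from v$pgastat where name in ('aggregate PGA target parameter','total PGA allocated');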

Application administrators often have the misconception that adding more tempfiles to the temp tablespace, or having a huge temp tablespace, will make SQL queries run at jet speed. This is not always true: if the SQL query does no sorts, GROUP BY or hash aggregation work, there is no point in adding more tempfiles to the temporary tablespace. It would be like pouring more oil on a fire instead of water. You should know what consumes what!

For example: if you want your vehicle to run fast, then instead of filling the fuel tank with water, fill it with more fuel. Adding water will stop your vehicle from running. Likewise, if a SQL query (the vehicle) with heavy sorts needs to run fast, add more PGA and TEMP (the fuel) to your database. 😉

Too often an application administrator asks for more temp space without any idea of the sorting operations in the query, hoping that alone will give maximum database speed.

Check the size of the temporary tablespace:

kish@exdbx<>col TABLESPACE_NAME format a10
col TABLESPACE_SIZE format 999999
col ALLOCATED_SPACE format 999999
col FREE_SPACE format 999999
select TABLESPACE_NAME,TABLESPACE_SIZE/1048576 MB_TOT,ALLOCATED_SPACE/1048576 MB_USED,FREE_SPACE/1048576 MB_FREE from dba_temp_free_space;

TABLESPACE  MB_TOT MB_USED MB_FREE
---------- ------- ------- -------
TEMP           355     355     332

I run a SQL query with large sorts, which uses the temporary tablespace for its sort operations:

SQL> select a.country,a.item_type,a.sales_channel,b.order_date,sum(c.total_revenue),b.total_cost,b.total_profit
  2  from test.SALES_TAB_COPY a
  3  inner join test.tabx b on a.ORDER_ID=b.ORDER_ID
  4  inner join test.sales_tab c on b.order_id=c.order_id
  5  group by a.country,a.item_type,a.sales_channel,b.order_date,b.total_cost,b.total_profit
  6  order by 1,2,3,4 desc;

Monitor the sort utilization

kish@exdbx<>select USERNAME,SQL_ID,TABLESPACE,BLOCKS*8192/(1048576) SORTMB,SEGTYPE from v$tempseg_usage;

USERNAME        SQL_ID        TABLESPACE      SORTMB SEGTYPE
--------------- ------------- --------------- ------ ---------
TEST            NULL          TEMP                12 HASH

Solution:

1)SMON background process cleans up the temporary segments and extents automatically

2)Add a new tempfile to the temp tablespace

SQL> alter tablespace temp add tempfile '/home/oracle/oradata/temp02.dbf' size 500M;

3) Set autoextend on for the temp tablespace tempfiles to avoid space exhaustion errors

kish@exdbx<>alter tablespace temp add tempfile '+DATA' size 5M autoextend on NEXT 5M MAXSIZE 500M;

Tablespace altered.

4) Increase PGA_AGGREGATE_TARGET (an instance-level parameter, changed with ALTER SYSTEM) to avoid one-pass and multi-pass I/Os

5) As a proactive measure against temp errors, create a dedicated, generously sized temporary tablespace for the users who regularly run heavy sort queries (see the sketch below)
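
A rough sketch for points 4) and 5); the 4G target, the TEMP_BATCH name and the sizes are assumptions to adapt to your own system:

SQL> alter system set pga_aggregate_target=4G scope=both;

SQL> create temporary tablespace temp_batch tempfile '+DATA' size 2G autoextend on next 1G maxsize 20G;

SQL> alter user test temporary tablespace temp_batch;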

A testcase

I was running a query with large sorts due to multiple join conditions and ended up with the error ORA-01652:

SQL> /
select a.country,b.item_type,c.sales_channel,b.total_cost
*
ERROR at line 1:
ORA-01652: unable to extend temp segment by 128 in tablespace TEMP

While monitoring the session for temp usage, I could see 'direct path write temp' waits due to insufficient PGA.

USERNAME   SQL_ID        TABLESPACE SPID   SORT_CNT SID PROGRAM         SERIAL# EVENT                  USEDMB
---------- ------------- ---------- ------ -------- --- --------------- ------- ---------------------- ------
TEST       gqs10z3mw86h4 TEMP       13327         2  73 sqlplus.exe          23 direct path write temp   2767
TEST       gqs10z3mw86h4 TEMP       13327         2  73 sqlplus.exe          23 KSV master wait          2797

kish@exdbx<>select FILE_ID,TABLESPACE_NAME,BYTES/1048576 AvailMB,USER_BYTES/1048576 utilizeMB,MAXBYTES/1048576 MaxMB from dba_temp_files;

 FILE_ID   TABLESPACE_NAME  AVAILMB UTILIZEMB    MAXMB
---------- --------------- -------- --------- --------
         1 TEMP                2775      2774    32768
         2 TEMP                  15        14      500
         3 TEMP                  15        14      500

kish@exdbx<>select USERNAME,TABLESPACE_NAME,BYTES/1073741824 AvailGB, MAX_BYTES/1073741824 MaxGB from dba_ts_quotas;

USERNAME    TABLESPACE_NAME         AVAILGB                            MAXGB
----------- --------------- --------------- --------------------------------
OLAPSYS     SYSAUX          .00860595703125 -.000000000931322574615478515625
SYSMAN      SYSAUX          .08258056640625 -.000000000931322574615478515625
SQLTXADMIN  EXAMPLE                       0 -.000000000931322574615478515625
APPQOSSYS   SYSAUX                        0 -.000000000931322574615478515625
FLOWS_FILES SYSAUX                        0 -.000000000931322574615478515625
SQLTXPLAIN  EXAMPLE          .0050048828125 -.000000000931322574615478515625
PERFSTAT    SYSAUX            .000732421875 -.000000000931322574615478515625

7 rows selected.

ASMCMD> lsdg DATA
State    Type    Rebal  Sector  Block       AU  Total_MB  Free_MB  Req_mir_free_MB  Usable_file_MB  Offline_disks  Voting_files  Name
MOUNTED  EXTERN  N         512   4096  4194304     12000       44                0              44              0             N  DATA/
ORA-01950 – GIVE USER QUOTA TO TABLESPACE
Learn by brain, understand by heart and apply by hands

We created a new tablespace but did not give the user a quota on it. Would that user be able to create extents in that tablespace? Let us see!!

SQL> create tablespace newtbs datafile '/u01/oracle/oradata/new01.dbf' size 5G;

SQL>create user new identified by password;

SQL> grant connect,resource to new;

SQL> conn new/password

SQL>create table newtab (emp_id number,emp_name varchar2(10)) tablespace newtbs;

SQL>insert into newtab values(1,'xyz');

We get this error.

ORA-01950: no privileges on tablespace 'NEWTBS'

Check whether the user 'new' has a quota on the tablespace:

select tablespace_name,
max_bytes/1024/1024/1024 "quota_in_gb"
from dba_ts_quotas
where username = 'NEW';

Solution:

SQL>alter user new quota 500m on newtbs; (QUOTA UNLIMITED can also be specified)

SQL>grant unlimited tablespace to new;
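
With the quota (or UNLIMITED TABLESPACE) in place, the failing insert should now go through, and the quota becomes visible in dba_ts_quotas:

SQL>insert into newtab values(1,'xyz');

SQL>select tablespace_name, round(max_bytes/1048576) quota_mb from dba_ts_quotas where username = 'NEW';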

Thanks!!!

DATA COPY USING SQL*PLUS COPY COMMAND
“Don’t procrastinate, do it now”

Most Oracle DBAs probably do not use the SQL*Plus COPY command, yet it is a very efficient way to copy small amounts of table data and columns from one database to another.

Copy usage:

Copies data from a query to a table in the same or another database. COPY supports CHAR, DATE, LONG, NUMBER and VARCHAR2.

COPY {FROM database | TO database | FROM database TO database} {APPEND|CREATE|INSERT|REPLACE} destination_table [(column, column, column, …)] USING query

We can set the following SQL*Plus settings:

set copycommit 0 (number of batches after which a commit is issued; 0 commits only at the end of the copy)

set arraysize 15 (number of rows fetched per batch)

1)COPY TABLE USING SQLPLUS:

copy from system/password@db1 -

replace test1.tbl0101 -

using select * from test.tbl0101

2)COPY COLUMNS FROM ONE TABLE TO ANOTHER TABLE:

copy from test1@db1 -

to test2@db2 -

create tab2 (emp_id,emp_name,salary) -

using select * from tab1

3)COPY TABLE TO REMOTE DATABASE (add the tns entries of databases in tnsnames.ora)

copy from test1@db1 -

create tab2 -

using select * from tab1

Note: if you copy a big table, increase your undo tablespace size and undo retention accordingly, and monitor archivelog and tracefile generation. If there is little space left on the mount point, you might end up filling it fast.
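
For a larger copy, one way to ease the undo pressure is to commit every few batches instead of only at the end; the values below are illustrative:

set arraysize 1000
set copycommit 10

copy from system/password@db1 -
replace test1.tbl0101 -
using select * from test.tbl0101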

CONNECTION CLOSED BY IP ADDRESS
“Think dimensional”

Troubleshooting the ssh equivalency issues in Linux:

I tried to establish SSH user equivalency between two servers for the oracle user in Linux and got the error below.

“CONNECTION CLOSED BY 192.168.56.108”

Solution:

vi /etc/security/access.conf

# all other users should be denied access from all sources; allow root and oracle from all sources:

+ : root oracle : ALL

After adding the 'oracle' user to the above line in the access.conf file, the connection was successful.

[oracle@orcl~]$ ssh orcl1 && ssh orcl

Issue is resolved!!

Tips to resolve the issue:

Check the /var/log/secure logfile for any errors. In my case, I got the errors below.

Access denied for user 'oracle' from 'host'

Access denied for user 'oracle' by PAM account configuration

Check the /etc/hosts.deny file for any restricted IP addresses.

Check if SELinux is blocking connections; set it to permissive with setenforce 0 (or disable it permanently in /etc/selinux/config).

[oracle@orcl~]$getenforce
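
For completeness, a minimal sketch of setting up the key-based equivalency itself (the host names follow the earlier example):

[oracle@orcl ~]$ ssh-keygen -t rsa
[oracle@orcl ~]$ ssh-copy-id oracle@orcl1
[oracle@orcl ~]$ ssh orcl1 date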

IMPDP PERFORMANCE SPEED UP IN DATAPUMP ORACLE
“BE PERSISTENT AND PERSEVERANT IN INFINITE LEARNING”

We all get pressure from the application team to migrate big tables and schemas, and it is a hard time for us during production business hours. There should be a faster approach to save us in this situation 🙂 You can test the methods below and be a superhero Oracle DBA for your team. Let us see how!!

Always test anything before you mess up. Testing makes things clear!

Case study: there are 4 big tables to be exported from prod and imported into dev.

Database name – source – testdb

destination – test1db

version – 12.1.0

Total size of the four tables = 436GB

Tables: big1,big2,big3,big4

Prod schema: test

Dev schema: test1

1) Estimated time taken for a traditional import = 13 hours

2) Importing the tables first, followed by constraints and indexes, took 7 hours

3) Firing the import of the four tables (and their objects) in parallel took 6 hours

I usually estimate parallelism with the formula DOP = parallel threads per CPU * number of CPU cores, weighed against the total available CPU cycles (for example, 2 threads per core * 8 cores gives a DOP of 16).

Take the export backup of the full tables with the following parameters and a sensible number of parallel servers:

~]# vi test.par 
directory = export_dir 
dumpfile = exptest_%U.dmp 
logfile = exptest.log 
tables = test.big1,test.big2,test.big3,test.big4
filesize=10GB  # this parameter splits the export into multiple dumpfiles to reduce single-file I/O on reads and writes
parallel = 4   # rough guide: parallel = total size of objects / CPU capacity
~]# expdp user/password parfile=test.par 

In this case, Data Pump took about 1 hour to export the data.

Job "SYSTEM"."SYS_EXPORT_TABLE_01" successfully completed at Thu Dec 21 10:25:26 2018 elapsed 001:01:03

Importing the data with normal method:

~]vi test.par

PARFILE:

directory=export_dir 
dumpfile=exptest_%U.dmp 
logfile=imp_speed.log 
tables=test.big1,test.big2,test.big3,test.big4 
remap_schema=test:test1 
remap_tablespace=prod1:dev1,prod2:dev2 
parallel=4 
logtime=all 

Time taken for normal method:

nohup impdp username/password parfile=test.par

22-DEC-18 15:09:59.953: Job "SYSTEM"."SYS_IMPORT_TABLE_02" successfully completed at Thu Dec 22 15:09:59 2018 elapsed 0 14:11:03

1)Importing with faster approach:

PARFILE:test.par

~] vi test.par
directory=export_dir 
dumpfile=expfast_%U.dmp 
logfile=impfast.log
tables=test.big1,test.big2,test.big3,test.big4 
remap_schema=test:test1 
table_exists_action=replace
parallel=4 
logtime=all
exclude=index,constraint,statistics,grant  <=====
access_method=direct_path <=======
transform=disable_archive_logging:y <=======
cluster=y <====== if you have a RAC setup, this parameter uses cluster resources to speed up the import; it pays off for larger datasets, but weigh it against the inter-node overhead it adds
nohup impdp username/password parfile=test.par

The parameters marked with arrows above have the major impact on the performance improvement.

The direct path access method inserts rows above the high water mark. Note that Oracle cannot always use it because of certain table property limitations; by default Oracle chooses the access method automatically for each table.

The disable_archive_logging transform reduces redo generation by importing objects in NOLOGGING mode. If the database is in FORCE LOGGING mode, the transform has no effect, and if archiving is already disabled there is not much benefit from it either.

Command for dumping DDL to script file:

impdp username/password directory=export_dir tables=test.big1,test.big2,test.big3,test.big4 sqlfile=script_ddl.sql include=constraint,index,grant

A script file named "script_ddl.sql" is written to the location of the directory object.

Once the DDL is dumped, modify it to use a higher DOP for index creation and ENABLE NOVALIDATE for the constraints. Edit the file in the vi editor, split the index, constraint and grant statements into separate files and run them.

Substitute "noparallel" or "parallel 1" with "parallel 16" in all the DDL.

The degree of parallelism is again estimated as total size of objects / CPU capacity.

Calculating the number of dumpfiles (roughly total size / filesize): note that Oracle internally distributes the data across the dumpfiles, so if you use a substitution variable like %U the exact number of files may vary.

kish@exdbx<>select 450/50 from dual;

    450/50
----------
         9

Separately, the effective parallel degree depends on the number of tables, which is 4 here. In Data Pump, no matter how many parallel servers you request, roughly one worker per table can be kept busy for a table-mode job like this; with 4 tables, requesting a parallel degree of 16 would leave the remaining 12 worker processes idle.

Note 1: the degree of parallelism depends on the number of CPU cores on your server. This server has 48 cores; if your server has fewer cores, use fewer parallel resources.

Substitute "enable" with "enable novalidate" in the constraint DDL.

Note 2: ENABLE NOVALIDATE can be used in emergencies, such as imports during business hours, for a faster import. After the constraints are created without validation, you can ENABLE VALIDATE them later at leisure so that application users do not complain.

Eg: ~] vi script_ddl.sql

set timing on;
create index "test"."big1_ind" on "test"."big1"("row1","row2","row3" DESC) PCTFREE 10 INITRANS 2 MAXTRANS 255 STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645 PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT) TABLESPACE "dev1" parallel 16;
alter index "test"."big1_ind" parallel 16;
alter table "test"."big1" add constraint "big1_pk" primary key("row11","row12","row13") using index "test"."big1_ind" enable novalidate;   <====== (by default Oracle validates constraints, which takes a huge amount of time)

Note: using the NOVALIDATE clause means Oracle skips validating the existing rows during the import; new data is still checked against the constraint, and the existing data can be validated later.

Use sqlplus to run the index and constraint DDL:

nohup sqlplus username/password @script_ddl.sql &

Time taken to import without constraints and indexes:

22-DEC-18 22:10:33.991: Job "SYSTEM"."SYS_IMPORT_TABLE_02" successfully completed at Thu Dec 22 22:10:33 2018 elapsed 0 00:48:04

Time until indexes and constraints are created and validated:

Table data without metadata such as constraints and indexes imports faster.

The major time is spent validating constraints and creating indexes; we reduce it by not validating the constraints and by creating the indexes in parallel.

index altered

elapsed time: 06:30:00 hours

For a faster import, we create the constraints with ENABLE NOVALIDATE first and validate them afterwards.

After the import, validate the constraints by changing the clause to "enable validate" in the script_ddl.sql file and rerunning it at a quieter time, so the validation does not cause performance issues.

This approach saved 9 hours from the previous method.

2) Repeat the import, but fire it for the four tables in four PuTTY sessions at the same time to parallelize the operation manually. Follow the same process as in 1), just split the tables:

session 1

impdp tables=big1

session 2

impdp tables=big2

session 3

impdp tables=big3

session 4

impdp tables=big4

This approach shaved another 30 minutes off the previous method.

elapsed time: 06:01:00 hours

The methods shown were tested in a different environment. Performance varies with system configuration and there is no guarantee of speed; please test in a POC before refreshing production or development.

If you have big schemas, follow the same approach with more parallel servers, and make sure the export is split into multiple dumpfile chunks to make parallelism more effective.

e.g. if a schema SCOTT has 50 tables, you can follow the same method used for the 4 tables, but the parallel degree should be equal to or less than 50 (or 48, or 32 …) depending on the CPU power you have.

If only five of the 50 tables are big and the remaining 45 are comparatively small, then start the import of the 5 big tables separately in 5 PuTTY sessions, excluding indexes, constraints and statistics, and in parallel run a schema import of the 45 small tables (excluding the 5 big ones) with a 48 parallel degree and without excluding indexes, constraints and statistics. That way you save time and don't fall into the serialization trap!

Init parameters to review before a fresh migration (a sketch of setting a few of them follows the list):

The processes and sessions parameters should be set appropriately to avoid ORA-00020 (maximum number of processes exceeded).

Flashback database and force logging can be switched off for the duration of the load if they are enabled.

db_block_checksum and db_block_checking are data block validation parameters that detect block corruption; they can be set to OFF to bypass block validation during the load.

disk_asynch_io should be enabled to allow asynchronous I/O at the OS level.

parallel_min_servers and parallel_max_servers set the soft and hard limits for parallelism; set them wisely.

parallel_degree_policy can be set to AUTO if plenty of CPU is available but Oracle still does not use parallelism.

The 12c parameter _optimizer_gather_stats_on_load should be disabled before the load to avoid automatic statistics gathering, and re-enabled after the load.

aq_tm_processes should be set to a higher value to service the job message queues; expdp/impdp depends heavily on AQ processing.

_memory_broker_stat_interval can be increased (for example to 999) to reduce the frequency of automatic SGA resize operations.
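
As a rough sketch, a few of these could be relaxed before the load and restored afterwards; the values are illustrative, and parameters like db_block_checking trade safety for speed, so revert them once the import finishes:

SQL> alter system set db_block_checksum=OFF scope=both;
SQL> alter system set db_block_checking=OFF scope=both;
SQL> alter system set parallel_max_servers=64 scope=both;
SQL> alter database no force logging;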

SQL> select name,value from v$parameter where name in ('db_block_checking','db_block_checksum','sessions','processes','disk_asynch_io','parallel_max_servers','parallel_min_servers','parallel_degree_policy','undo_retention');
NAME                   VALUE
---------------------- --------------------
processes              500
sessions               772
disk_asynch_io         TRUE
db_block_checksum      TYPICAL
undo_retention         900
db_block_checking      FALSE
parallel_min_servers   4
parallel_max_servers   40
parallel_degree_policy MANUAL

9 rows selected.

THANK YOU !! Hope you enjoyed the post