Interview Questions-DBA
What is the difference between Rule based Optimizer and Cost based Optimizer in oracle?
In brief, the rule-based method means that when executing a query the database follows a fixed set of predefined rules, no matter what data is stored in the affected tables. The cost-based method means the database decides which execution plan to choose using a best-guess approach that takes into account what data is actually stored in the database.
The Oracle cost-based optimizer is designed to determine the most efficient way to carry out a SQL statement, but it can’t do this without good, up-to-date statistical information on the data being accessed. The optimizer can use a rules-based approach to work without statistical information, but this approach is less intelligent than the cost-based approach. With the rules-based approach, the optimizer chooses an execution plan based on a set of rules about what types of operations usually execute faster than other types. With the cost-based approach, the optimizer factors in statistical information about the contents of the particular schema objects (tables, clusters, or indexes) being accessed.
The Rule Based Optimizer (RBO) is now obsolete in Oracle 10g. The functionality is still present but no new functionality has been included in it and it is no longer supported by Oracle. It is only present to provide backwards compatibility during the migration to the query optimizer (Cost Based Optimizer).
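For illustration, the CBO relies on current statistics; a minimal sketch of gathering them (the SCOTT.EMP table is only an example):
SQL> EXEC DBMS_STATS.GATHER_TABLE_STATS(ownname => 'SCOTT', tabname => 'EMP', cascade => TRUE);
SQL> EXEC DBMS_STATS.GATHER_SCHEMA_STATS('SCOTT');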
============================================================================================================================
*****What is a Dictionary Managed Tablespace and a Locally Managed Tablespace?
- DICTIONARY:
Specifies that the tablespace is managed using dictionary tables (tables in the SYS schema). This is the default in Oracle8i.
In a DMT, Oracle uses data dictionary tables to keep track of the free or used status of blocks. When an extent is allocated or freed for reuse, the free space is recorded in the SYS.FET$ table and the used space in the SYS.UET$ table.
- LOCAL:
Specifies that the tablespace is locally managed. This is the default in Oracle9i, with the exception of the SYSTEM tablespace.
A tablespace that manages its own extents maintains a bitmap in each datafile to keep track of the free or used status of blocks in that datafile.
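A minimal sketch of creating a locally managed tablespace (the datafile path and sizes are illustrative):
SQL> CREATE TABLESPACE app_data
     DATAFILE '/u01/oradata/ORCL/app_data01.dbf' SIZE 500M
     EXTENT MANAGEMENT LOCAL AUTOALLOCATE
     SEGMENT SPACE MANAGEMENT AUTO;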
============================================================================================================================
What are a Global Index and Local Index?
When you create a partitioned table, you should create an index on the table. The index may be partitioned according to the same range values that were used to partition the table.
Local keyword in the index partition tells oracle to create a separate index for each partition of the table.
The GLOBAL clause in the CREATE INDEX command allows you to create a non-partitioned index or to specify ranges for the index values that are different from the ranges of the table partitions. Local indexes may be easier to manage than global indexes; however, global indexes may perform uniqueness checks faster than local (partitioned) indexes perform them. Both variants are sketched below.
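An illustrative sketch, assuming a hypothetical SALES table range-partitioned on SALE_DATE:
SQL> CREATE INDEX sales_cust_idx ON sales(customer_id) LOCAL;
SQL> CREATE INDEX sales_date_gidx ON sales(sale_date)
     GLOBAL PARTITION BY RANGE (sale_date)
     (PARTITION p2023 VALUES LESS THAN (TO_DATE('01-JAN-2024','DD-MON-YYYY')),
      PARTITION pmax  VALUES LESS THAN (MAXVALUE));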
============================================================================================================================
When will you 'resync catalog'?
When your target database has just undergone a number of physical changes,
or when the target database performs a very large number of log switches in between backups.
Oracle recommends that you resynchronize the recovery catalog after making any changes to the physical structure of a target database. You issue the RESYNC CATALOG command as follows, after connecting to the target
database.
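A minimal sketch of the command sequence (the catalog connect string is illustrative):
$ rman target / catalog rman/rman@catdb
RMAN> RESYNC CATALOG;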
============================================================================================================================
In which column will you find the DBID at the database level?
SQL> select dbid from v$database;
============================================================================================================================
*****What is the HWM value and where can one find the high water mark for a table?
EXAMPLE: Suppose there are 100 rows in a table; the high water mark is then at the 100th row. If you later delete 50 rows, the table contains 50 rows but the high water mark is still at the 100th row, and if you add more rows to the table the high water mark increases.
SOLUTION: To reduce the high water mark of a table, export it with the Data Pump export utility and then import it back with the Data Pump import utility.
2nd option: after the export, truncate the table (and re-import the data).
EFFECT: The high water mark decreases.
Similarly, if you want to remove all the data, TRUNCATE is better than DELETE because it releases the space and resets the high water mark to its original value. Later fetches on that table will therefore be faster.
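A rough sketch of reading the HWM from the data dictionary, assuming statistics are current (the table name is illustrative); BLOCKS is the number of blocks below the HWM, EMPTY_BLOCKS are allocated but never used:
SQL> ANALYZE TABLE scott.emp COMPUTE STATISTICS;
SQL> SELECT blocks, empty_blocks, num_rows
     FROM dba_tables
     WHERE owner = 'SCOTT' AND table_name = 'EMP';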
-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Latches,Locks,Enqueues:
Latches protect Oracle memory structures from being modified by more than one process at the same time. They are more restrictive than locks. Locking has a similar concept: a row being modified by a transaction is locked and cannot be modified by another transaction until the first commits; however, locks allow more user interaction. Enqueues are locks associated with a transaction or a session that serialize access to database resources; while an enqueue is held by one session, another session requesting it in an incompatible mode must wait. Semaphores are mainly used for communication between processes; they control processes by allowing them to stop, wait and resume their processing.
-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
*****What is the use of Views and Materialized Views?
A view is a logical representation of a table (a virtual table); it does not store data of its own. It takes the output of a query and makes it appear like a table.
A materialized view has a physical structure: it stores the query result in its own segment, and it can be refreshed automatically or manually.
Materialized views are schema objects that store the results of a query in a separate segment.
Materialized views are mainly used in data warehousing; their main purpose is to precompute calculations and present data from multiple tables using joins.
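A minimal sketch of both objects (the EMP/DEPT tables and names are illustrative):
SQL> CREATE VIEW dept_emp_v AS
     SELECT d.dname, e.ename, e.sal FROM emp e JOIN dept d ON e.deptno = d.deptno;
SQL> CREATE MATERIALIZED VIEW dept_sal_mv
     BUILD IMMEDIATE
     REFRESH COMPLETE ON DEMAND
     AS SELECT deptno, SUM(sal) total_sal FROM emp GROUP BY deptno;
SQL> EXEC DBMS_MVIEW.REFRESH('DEPT_SAL_MV');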
-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
*****If voting disk/OCR file got corrupted and don’t have backups, how to get them?
Ans:
We have to reinstall Clusterware.
-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
*****How to find out the nodes in cluster (or) how to find out the master node?
# olsnodes  -- whichever node is displayed first is the master node of the cluster.
# select MASTER_NODE from v$ges_resource;
# To find out which is the master node, you can also check the ocssd.log file and search for "master node number".
-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
*****How to know the public IPs, private IPs, VIPs in RAC?
# olsnodes -n -p -i
node1-pub 1 node1-prv node1-vip
node2-pub 2 node2-prv node2-vip
*****How will you identify block corruption?
SQL> select * from v$database_block_corruption;
FILE# BLOCK# BLOCKS CORRUPTION_CHANGE# CORRUPTIO
---------- ---------- ---------- ------------------ ---------
17 337317 1 0 FRACTURED
Block recovery can only be done using RMAN.
The entire database can be open while performing block recovery.
RMAN>
run {
allocate channel c1 type 'SBT_TAPE'
parms 'ENV=(TDPO_OPTFILE=/opt/tivoli/tsm/client/oracle/bin64/tdpo.opt)';
allocate channel c2 type 'SBT_TAPE'
parms 'ENV=(TDPO_OPTFILE=/opt/tivoli/tsm/client/oracle/bin64/tdpo.opt)';
blockrecover datafile 17 block 337317;
release channel c1;
release channel c2;
}
-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
********Processes parameter exceeded max value:
error:ORA-12519: TNS:no appropriate service handler found.
SQL> SELECT upper(resource_name) AS resource_name, current_utilization, max_utilization, initial_allocation
FROM v$resource_limit WHERE resource_name IN ('processes', 'sessions');
RESOURCE_NAME CURRENT_UTILIZATION MAX_UTILIZATION INITIAL_AL
------------------------------ ------------------- --------------- ----------
PROCESSES 147 150 150
SESSIONS 150 155 170
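A typical remedy (the value is illustrative) is to raise the static PROCESSES parameter and restart the instance:
SQL> ALTER SYSTEM SET processes=300 SCOPE=SPFILE;
SQL> SHUTDOWN IMMEDIATE
SQL> STARTUP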
-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
*****Redo log sessions are blocking the DB user sessions.
1. Configure FAST_START_MTTR_TARGET to enable incremental checkpointing for this instance.
2. Resize the online redo logs according to V$INSTANCE_RECOVERY.OPTIMAL_LOGFILE_SIZE (see the sketch below).
3. We should have at most 2-3 log switches per hour; that is why we resize the online redo logs accordingly, so that this is achieved during peak hours.
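A minimal sketch of checking the suggested size (the MTTR value is illustrative; OPTIMAL_LOGFILE_SIZE is reported in MB once FAST_START_MTTR_TARGET is set):
SQL> ALTER SYSTEM SET fast_start_mttr_target = 300;
SQL> SELECT optimal_logfile_size FROM v$instance_recovery;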
-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
*****How do you calculate undo retention period?
                             Actual Undo Size
UNDO_RETENTION = -----------------------------------------
                 DB Block Size * Undo Blocks per Second
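A rough sketch of gathering the inputs (the undo tablespace name UNDOTBS1 is illustrative):
-- actual undo size in bytes
SQL> SELECT SUM(bytes) FROM dba_data_files WHERE tablespace_name = 'UNDOTBS1';
-- peak undo blocks generated per second
SQL> SELECT MAX(undoblks/((end_time - begin_time)*86400)) FROM v$undostat;
-- database block size
SQL> SELECT value FROM v$parameter WHERE name = 'db_block_size';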
-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
*****How to take the archive backup using sequence numbers?
backup archivelog from sequence 34324 until sequence 35000 delete input;
or
backup archivelog from sequence 11030 until sequence 11326 delete input;
****Can we change the block size of the database?
No. The database block size (DB_BLOCK_SIZE) is fixed at database creation and cannot be changed afterwards.
********If an application user wants a different block size for his application, what will you do?
We can create a new tablespace with a non-standard block size (after configuring the corresponding buffer cache), as sketched below.
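A minimal sketch for a 16K tablespace in an 8K database (path and sizes are illustrative):
SQL> ALTER SYSTEM SET db_16k_cache_size = 64M;
SQL> CREATE TABLESPACE app_16k
     DATAFILE '/u01/oradata/ORCL/app_16k01.dbf' SIZE 500M
     BLOCKSIZE 16K;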
********Will AMM manage all the instance memory parameters?
No. Some components are static and are not auto-tuned by Automatic Memory Management, such as the log buffer, the KEEP/RECYCLE buffer caches and the non-default block size caches.
-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Some of the other Examples
Restore and recover the whole database
RMAN> STARTUP FORCE MOUNT;
RMAN> RESTORE DATABASE;
RMAN> RECOVER DATABASE;
RMAN> ALTER DATABASE OPEN;
Restore and recover a tablespace
RMAN> SQL 'ALTER TABLESPACE users OFFLINE';
RMAN> RESTORE TABLESPACE users;
RMAN> RECOVER TABLESPACE users;
RMAN> SQL 'ALTER TABLESPACE users ONLINE';
Restore and recover a datafile
RMAN> SQL 'ALTER DATABASE DATAFILE 64 OFFLINE';
RMAN> RESTORE DATAFILE 64;
RMAN> RECOVER DATAFILE 64;
RMAN> SQL 'ALTER DATABASE DATAFILE 64 ONLINE';
Steps for media recovery:
*******# specify the new location for each datafile
RUN {
SET NEWNAME FOR DATAFILE 3 to 'new_location';
RESTORE DATAFILE 3;
SWITCH DATAFILE 3;
RECOVER DATAFILE 3;
}
-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
*****Use the following BLOCKRECOVER command to recover the corrupted blocks
BLOCKRECOVER DATAFILE 7 BLOCK 233, 235 DATAFILE 4 BLOCK 101;
BLOCKRECOVER CORRUPTION LIST;
-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
*****You lose one datafile and the DB is running in ARCHIVELOG mode. You have a full database backup that is 1 week/day old and don’t have a backup of this (newly created) datafile. How do you restore/recover the file?
create the datafile and recover that datafile.
SQL> alter database create datafile ‘…path..’ size n;
RMAN> recover datafile file_id;
-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
*****What are the parameters we have to set in primary/standby for Data Guard?
*****What is the use of FAL_SERVER & FAL_CLIENT? Is it mandatory to set these?
-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
*****How will you find out fragmentation of an index?
The AUTO_SPACE_ADVISOR_JOB (AUTO_SPACE_ADVISOR in 11g) runs in the daily maintenance window and reports fragmented indexes/tables.
col owner FOR a10
col comments FOR a60
SET lines 130
SELECT OWNER,JOB_NAME,ENABLED,COMMENTS FROM DBA_SCHEDULER_JOBS;
SQL> EXEC DBMS_AUTO_TASK_ADMIN.ENABLE (client_name=>'auto space advisor', operation=>NULL, window_name=>NULL);
PL/SQL PROCEDURE successfully completed.
EXECUTE DBMS_SCHEDULER.DISABLE('AUTO_SPACE_ADVISOR_JOB');
SQL> ANALYZE INDEX <index_name> VALIDATE STRUCTURE;
This populates the table ‘index_stats’. It should be noted that this table contains only one row and therefore only one index can be analysed at a time.
An index should be considered for rebuilding under any of the following conditions:
* The percentage of deleted rows exceeds 30% of the total, i.e. if del_lf_rows / lf_rows > 0.3.
* If the ‘HEIGHT’ is greater than 4.
* If the number of rows in the index (‘LF_ROWS’) is significantly smaller than ‘LF_BLKS’ this can indicate a large number of deletes, indicating that the index should be rebuilt.
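A minimal sketch of checking these thresholds after the ANALYZE (the index name is illustrative):
SQL> ANALYZE INDEX scott.emp_name_idx VALIDATE STRUCTURE;
SQL> SELECT name, height, lf_rows, lf_blks, del_lf_rows,
            ROUND(del_lf_rows/lf_rows*100, 2) AS pct_deleted
     FROM index_stats;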
-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
*****Why are more archive logs generated when the database is in begin backup mode?
During begin backup mode the datafile headers are frozen. Normally only the deltas (change vectors) are logged to the redo logs; while a tablespace is in backup mode, Oracle writes the complete image of a changed block to the redo log the first time the block is modified. This produces more redo, therefore more log switches and in turn more archive logs.
This is done mainly to overcome fractured blocks: the Oracle block size is usually equal to or a multiple of the operating system block size, and an OS-level copy can capture a block while it is only partially written.
-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
*****How to check backup of OCR files?
Ans:
#ocrconfig –showbackup
*****How to take backup of OCR file?
Ans:
#ocrconfig -manualbackup
#ocrconfig -export file_name.dmp
#ocrdump -backupfile my_file
$cp -p -R /u01/app/crs/cdata /u02/crs_backup/ocrbackup/RAC1
*****How to recover OCR file?
Ans:
#ocrconfig -restore backup_file.ocr
#ocrconfig -import file_name.dmp
*****What is local OCR?
Ans:
/etc/oracle/local.ocr
/var/opt/oracle/local.ocr
*****How to take backup of voting file?
Ans:
dd if=/u02/ocfs2/vote/VDFile_0 of=$ORACLE_BASE/bkp/vd/VDFile_0
crsctl backup css votedisk -- from 11g R2
Backing Up Voting Disks
--------------------------------
“dd” command is used to backup voting disk while the Cluster Ready Services (CRS) process is active; you do not need to stop the crsd.bin process before taking a backup of the voting disk.
dd if=voting_disk_name of=backup_file_name
If your voting disk is stored on a raw device, use the device name in place of voting_disk_name, for example:
dd if=/dev/sdd1 of=/tmp/voting.dmp
Recovering Voting Disks
----------------------------
Run the following command to recover a voting disk where backup_file_name is the name of the voting disk backup file and voting_disk_name is the name of the active voting disk:
dd if=backup_file_name of=voting_disk_name
Adding and Removing Voting Disks
-------------------------------------
You can dynamically add and remove voting disks
Run the following command as the root user to add a voting disk:
crsctl add css votedisk path
Run the following command as the root user to remove a voting disk:
crsctl delete css votedisk path
To check the current CRS version
------------------------------------
crsctl query crs activeversion
*****How do I identify the voting disk location?
Ans:
# crsctl query css votedisk
*****How do I identify the OCR file location?
Ans:
# ocrcheck
or check /var/opt/oracle/ocr.loc or /etc/ocr.loc
*****Who will manage OCR files?
Ans:
crsd manages the OCR (cssd manages the voting disks).
*****Who will take backup of OCR files?
Ans:
crsd will take backup.
-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
*****What is the use of SCAN IP (SCAN name) and will it provide load balancing?
Ans:
Single Client Access Name (SCAN) is an Oracle Real Application Clusters (RAC) 11g Release 2 feature that provides a single name for clients to access an Oracle Database running in a cluster. The benefit is that client connect strings using SCAN do not need to change if you add or remove nodes in the cluster.
*****How many SCAN listeners will be running?
Ans:
Three SCAN listeners by default, distributed across the cluster nodes. The SCAN listeners route each incoming connection to the least-loaded instance, so SCAN also provides connection-time load balancing.
=========================================================================================================================
How can one optimize %XYZ% queries?
It is possible to improve %XYZ% (wildcard search) queries by forcing the optimizer to scan all the entries from the index instead of the table. This can be done by specifying hints.
If the index is physically smaller than the table (which is usually the case) it will take less time to scan the entire index than to scan the entire table.
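An illustrative sketch, assuming a hypothetical EMP table with an index EMP_ENAME_IDX on ENAME; the INDEX_FFS hint asks the optimizer to fast full scan the index instead of the table (only indexed columns are selected, so the table need not be visited):
SQL> SELECT /*+ INDEX_FFS(e emp_ename_idx) */ ename
     FROM emp e
     WHERE ename LIKE '%XYZ%';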
============================================================================================================================
How does one tune Oracle wait event XYZ?
Here are some of the wait events from the V$SESSION_WAIT and V$SYSTEM_EVENT views:
db file sequential read: Tune SQL to do less I/O. Make sure all objects are analyzed. Redistribute I/O across disks.
buffer busy waits: Increase DB_CACHE_SIZE (DB_BLOCK_BUFFERS prior to 9i)/ Analyze contention from SYS.V$BH ("buffer busy waits" was replaced with "read by other session" Oracle 10g).
log buffer space: Increase LOG_BUFFER parameter or move log files to faster disks
log file sync: If this event is in the top 5, you are committing too often (talk to your developers)
log file parallel write: deals with flushing out the redo log buffer to disk. Your disks may be too slow or you have an I/O bottleneck.
=============================================================================================================================
Does Oracle use my index or not?
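Index usage monitoring must be switched on first; a minimal sketch, assuming the T1_IDX index used in the example below:
SQL> ALTER INDEX t1_idx MONITORING USAGE;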
SELECT table_name, index_name, monitoring, used FROM v$object_usage;
TABLE_NAME INDEX_NAME MON USE
------------------------------ ------------------------------ --- ---
T1 T1_IDX YES NO
SQL> SELECT * FROM t1 WHERE c1 = 1;
no rows selected
SQL> SELECT table_name, index_name, monitoring, used FROM v$object_usage;
TABLE_NAME INDEX_NAME MON USE
------------------------------ ------------------------------ --- ---
T1 T1_IDX YES YES
To reset the values in the v$object_usage view, disable index monitoring and re-enable it:
ALTER INDEX indexname NOMONITORING USAGE;
ALTER INDEX indexname MONITORING USAGE;
=============================================================================================================================
My query was fine last week and now it is slow. Why?
The likely cause is that the execution plan has changed. Generate a current explain plan of the offending query and compare it to a previous one that was taken when the query was performing well. Usually the previous plan is not available.
Some factors that can cause a plan to change are:
Which tables are currently analyzed? Were they previously analyzed? (ie. Was the query using RBO and now CBO?)
Has OPTIMIZER_MODE been changed in INIT<SID>.ORA?
Has the DEGREE of parallelism been defined/changed on any table?
Have the tables been re-analyzed? Were the tables analyzed using estimate or compute? If estimate, what percentage was used?
Have the statistics changed?
Has the SPFILE/ INIT<SID>.ORA parameter DB_FILE_MULTIBLOCK_READ_COUNT been changed?
Has the INIT<SID>.ORA parameter SORT_AREA_SIZE been changed?
Have any other INIT<SID>.ORA parameters been changed?
What do you think the plan should be? Run the query with hints to see if this produces the required performance.
It can also happen because of a very high high water mark. Typically when a table was big, but now only contains a couple of records. Oracle still needs to scan through all the blocks to see if they contain data.
============================================================================================================================
******** Explain the use of setting GLOBAL_NAMES equal to TRUE.
Setting GLOBAL_NAMES dictates how you might connect to a database. This variable is either TRUE or FALSE and if it is set to TRUE it enforces database links to have the same name as the remote database to which they are linking.
=============================================================================================================================
What is COMPRESS and CONSISTENT setting in EXPORT utility?
If COMPRESS=Y, the INITIAL storage parameter is set to the total size of all extents allocated for the object. The change takes effect only when the object is imported.
Setting CONSISTENT=Y exports all tables and references in a consistent state. This slows the export, as rollback space is used. If CONSISTENT=N and a record is modified during the export, the data will become inconsistent.
=============================================================================================================================
What is difference between Shared Server/Multithreaded and Dedicated Server?
Oracle Database creates server processes to handle the requests of user processes connected to an instance.
A dedicated server process, which services only one user process
A shared server process, which can service multiple user processes
Your database is always enabled to allow dedicated server processes, but you must specifically configure and enable shared server by setting one or more initialization parameters.
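A minimal sketch of enabling shared server (the values are illustrative; both parameters are dynamic):
SQL> ALTER SYSTEM SET shared_servers = 5;
SQL> ALTER SYSTEM SET dispatchers = '(PROTOCOL=TCP)(DISPATCHERS=2)';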
=============================================================================================================================
********If a query is processing slowly, what steps should be followed?
Check for:
. Blocking sessions, index health, locks on the resource, etc.
. A changed execution plan; gather the table statistics if they are stale.
. Rebuild the indexes if required.
After that, compare both execution plans.
=============================================================================================================================
********Oracle 10g database: one table was dropped an hour ago by mistake, we don't have a backup and the flashback database feature is not enabled. How will you be able to recover it?
1.select object_name, original_name, type, can_undrop as "UND", can_purge as "PUR", droptime from recyclebin;
flashback table tst to before drop;
2. Suppose the table is not available in the recycle bin and flashback is not enabled, but you have a recent RMAN backup containing the table:
you can recover the table by creating a duplicate database from the production database using UNTIL SEQUENCE.
RMAN> RUN
2> {
3> SET UNTIL SEQUENCE 11 THREAD 1;
4> DUPLICATE TARGET DATABASE TO "clone" NOFILENAMECHECK;
5> }
then
export the users table from clone(duplicate) database.
import the users table to the production database.
3. Restore from the recycle bin (as in option 1), or if you have any DEV, UAT or standby database containing the table, export it from there and import it into the production database.
=============================================================================================================================
****************If one datafile is missing and we don't have a backup, how do we recover that datafile?
Ex:Here I lost the datafile 'D:\ORACLE\APP\ORADATA\CHE\TEST.DBF';
SQL> select file#,status from v$datafile;
Take the datafile that needs to be recovered offline
SQL> alter database datafile 9 offline;
SQL> select * from v$recover_file;
SQL> select file#,status from v$datafile;
Create the datafile
SQL> alter database create datafile 'D:\ORACLE\APP\ORADATA\CHE\TEST.DBF';
RMAN> connect target /
RMAN> recover datafile 9;
SQL> alter database datafile 9 online;
Note: We can't do this for the SYSTEM & SYSAUX tablespaces. If the TEMP/UNDO tablespace is corrupted we can drop & recreate the temporary/undo tablespace.
=============================================================================================================================
********What is hard & soft parsing? How do we avoid hard parsing of a query, i.e. what should be used in the query to avoid it?
Hard parsing: when Oracle cannot find a shareable cursor for the statement in the library cache (shared pool), it must fully parse the statement: check syntax and semantics, run the optimizer and generate a new execution plan.
Soft parsing: when a matching cursor already exists in the library cache, Oracle reuses the existing execution plan.
Hard parsing is CPU- and latch-intensive and can hamper performance. It is reduced by writing SQL with bind variables instead of literals (or, as a workaround, setting CURSOR_SHARING=FORCE) and by sizing the shared pool adequately.
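A minimal sketch of the difference (table and values are illustrative); each distinct literal produces a new SQL text and a hard parse, while the bind-variable form shares one cursor and is soft parsed on re-execution:
-- literals: a hard parse per distinct value
SQL> SELECT * FROM emp WHERE empno = 7369;
SQL> SELECT * FROM emp WHERE empno = 7499;
-- bind variable: one shared cursor
SQL> VARIABLE id NUMBER
SQL> EXEC :id := 7369;
SQL> SELECT * FROM emp WHERE empno = :id;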
=============================================================================================================================
********If the Temporary tablespace is full, what will happen to the database?
Operations that need temporary space (large sorts, hash joins, index builds) will fail with ORA-01652: unable to extend temp segment.
Solution: add or resize a tempfile of the temporary tablespace (ALTER TABLESPACE temp ADD TEMPFILE / RESIZE)
(or)
create temporary tablespace temp_new
tempfile ‘C:\OracleBase\OracleData\DBNAME\TEMP02.DBF’ size 1G autoextend OFF extent management local uniform size 512K;
alter database default temporary tablespace temp_new;
(or)Try in 11g with ALTER TABLESPACE temp SHRINK SPACE;
=============================================================================================================================
********If the undo tablespace is full, what will happen to the database?
Transactions that cannot allocate undo space fail with ORA-30036 (unable to extend segment in undo tablespace), and long-running queries may hit ORA-01555 (snapshot too old).
It also depends on settings such as UNDO_RETENTION and RETENTION GUARANTEE as well as the content of the tablespace.
To avoid this, add a datafile to the undo tablespace (or enable autoextend).
=============================================================================================================================
If the archive log destination is full the database hangs; how do we take a backup of the archive logs when the database is in a hung state?
Connect to RMAN
RMAN>delete noprompt ARCHIVELOG UNTIL TIME 'SYSDATE-5'
then run the backup
============================================================================================================================
*********If one archive log was deleted before being applied on the standby and you do not have a backup, how do you resolve this issue? The database size is in TB, so recreating the standby is not a good idea.
Get the current SCN from the standby database, take an incremental backup of the primary database from that SCN, copy the backup files to the standby host and recover the standby database using the NOREDO option in RMAN. Then create a standby controlfile on the primary, restore it on the standby, mount the standby database with this controlfile and start the MRP process.
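A rough sketch of the roll-forward (SCN, paths and file names are illustrative):
-- on the standby: note the SCN to roll forward from
SQL> SELECT current_scn FROM v$database;
-- on the primary: take an incremental backup from that SCN
RMAN> BACKUP INCREMENTAL FROM SCN 4734956 DATABASE FORMAT '/tmp/fwd_%U';
-- on the standby, after copying the backup pieces: catalog them and recover without redo
RMAN> CATALOG START WITH '/tmp/fwd';
RMAN> RECOVER DATABASE NOREDO;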
============================================================================================================================
******What is the backup strategy for Oracle? What do we say if an interviewer asks that question?
1) Hot backup (online) 2) Cold backup (offline) 3) Logical backup (table, schema, tablespace and full DB levels); explain each kind of backup in brief.
Generally:
we take a daily incremental backup,
a full backup at the weekend,
and a cold backup monthly.
But it mostly depends on the application downtime windows and requirements.
For some development databases we take only logical backups.
============================================================================================================================
What does the oratab file contain? If it is lost, what will happen to the database?
oratab contains the database SID, the ORACLE_HOME path and a Y/N flag:
SID_NAME:ORACLE_HOME_PATH:Y/N
If the value is Y, the database is automatically restarted when the server is restarted.
If someone removes the oratab file manually, or it becomes corrupted, running root.sh again recreates oratab automatically; a running database itself is not affected.
============================================================================================================================
************I am taking a full database backup daily using RMAN with the BACKUP DATABASE command, so I have full backups from Monday to Saturday. Now I want to restore Wednesday's full backup; how can I do it?
By default RMAN selects the most recent backup, so a plain RESTORE would use Saturday's backup.
To restore Wednesday's backup, either restore Wednesday's controlfile and then restore the database, or run the restore with SET UNTIL TIME so RMAN picks the backups taken before that point (see the sketch below).
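A minimal sketch using SET UNTIL TIME (the timestamp is illustrative):
RMAN> startup force mount;
RMAN> run {
set until time "to_date('2013-06-12 23:00:00','YYYY-MM-DD HH24:MI:SS')";
restore database;
recover database;
}
RMAN> alter database open resetlogs;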
============================================================================================================================
*********HOW TO RESTORE AN OLD BACKUP USING RMAN
You can use the TAG (the name of the backup) when you know exactly which backup you need, for example (tag name illustrative):
RMAN> RESTORE DATABASE FROM TAG 'WEEKLY_FULL';
Alternatively, use SET UNTIL TIME/SCN so that RMAN selects the older backup.
============================================================================================================================
*******When DB performance is slow, what things do we need to check, and in which sequence?
1) First check for blocking sessions, long-running sessions and inactive sessions,
using the views V$SESSION, V$LOCK and V$SESSION_LONGOPS.
2) Check whether indexes are valid or unusable; also check the BLEVEL of the indexes, and if it is more than 3 the index should be rebuilt.
3) Check when statistics were last gathered; if they are stale, gather statistics.
4) Generate AWR & ADDM reports for analysis.
============================================================================================================================
**********What is the difference between Bitmap and B-tree indexes, and in which situations do we choose each?
B-tree: made up of branch nodes and leaf nodes,
where branch nodes hold the prefix key value and the link to the leaf node, whereas a leaf node contains the indexed value and the ROWID.
Bitmap: consists of a bitmap of bits for every distinct key value.
We choose bitmap indexes in data warehousing environments, where there are few DML operations, and B-tree indexes in OLTP environments.
============================================================================================================================
********How does block change tracking (and its background process) help RMAN incremental backups?
The block change tracking file maintains the list of all blocks that were changed since the last backup, so when RMAN takes an incremental backup it finds the changed blocks from the change tracking file and backs up only those blocks.
A background process called CTWR is enabled when change tracking is on; whenever data is modified, CTWR records the modified blocks in a separate file at OS level. When you take a backup, RMAN backs up the modified blocks without reading the entire datafile, getting the information from CTWR.
============================================================================================================================
********Difference between DMTS and LMTS
In a DMT the extent allocation information is stored in the data dictionary, while in an LMT the extent allocation information is stored as a bitmap in the datafile header itself.
=============================================================================================================================
How can we see how many databases are registered in the catalog database from the RMAN prompt?
Using the RC_DATABASE view, or:
RMAN>list incarnation;
============================================================================================================================
*****which background process start when we enable block change tracking in RMAN?
CTWR(change tracking writer) process
If you enable block change tracking, the CTWR background process starts. It tracks the blocks changed since the previous backup and records them in the CTF (change tracking file). CTWR was introduced in Oracle 10g to improve the performance of RMAN incremental backups.
============================================================================================================================
How to transfer a tablespace from one database to another?
To transport a tablespace from one database to another:
1. Put the tablespace (for example the TTS tablespace) in read-only mode:
sql> alter tablespace tts read only;
2. Export the tablespace metadata using exp:
$ exp file=tts.dmp log=tts.log tablespaces=tts transport_tablespace=y tts_full_check=y
3. Transfer the dump file and the datafile(s) to the target database location.
At the target side:
4. Create all the necessary users.
5. Import the dump file:
$ imp file=tts.dmp log=tts.log transport_tablespace=y datafiles='$ORACLE_BASE/oradata/$ORACLE_SID/tts.dbf' ignore=y
6. Put the tablespace back in read-write mode on both databases.
============================================================================================================================
******If I put one of my tablespaces in backup mode
> alter tablespace <tb_name> begin backup;
and from another session another user drops the tablespace that I put in backup mode, what will happen?
It will not be dropped and an error is shown.
Error: CANNOT PREVENT WRITES
============================================================================================================================
Difference between conventional path and direct path in Data Pump:
direct path loading bypasses the SQL buffer cache in the SGA;
conventional path loading goes through the SQL buffer cache, so it is slower compared to the direct path loading method.
============================================================================================================================
**********My DB was performing OK yesterday; there is no change in code and no backup or batch job is running, yet performance is very slow. How to investigate and solve the problem?
Go through the Statspack report and the alert.log.
1) Check whether any batch, background or crontab processes are running.
2) Check I/O, CPU usage & load using vmstat, top, sar, free, swap.
3) Generate AWR and ADDM reports for analysis.
============================================================================================================================
**********What are the different ways to recreate the controlfile when the DB cannot be opened (because the controlfile is lost)?
If we have a trace backup of the controlfile, we can use it to recreate the controlfile and open the database.
The trace backup is taken beforehand, while the database is mounted or open, with "ALTER DATABASE BACKUP CONTROLFILE TO TRACE;"
You will find the resulting CREATE CONTROLFILE script in a .trc file; look at the latest trace files in the udump location. Start the instance in NOMOUNT and run that script to recreate the controlfile.
============================================================================================================================
*********What are patches in Oracle?
Oracle patches:
1. Interim patch, also called a one-off patch: used to fix individual bugs on the database.
-> An interim patch is applied through the opatch utility.
Location: $ORACLE_HOME/OPatch/opatch
2. CPU patch (Critical Patch Update): applied on a quarterly basis, downloaded from My Oracle Support (MetaLink).
Applied through the opatch utility.
3. Patchset: a group of patches is called a patchset.
To upgrade the database version, for example from 10.2.0.1.0 to 10.2.0.4.0,
we apply a patchset.
Applied through: OUI (runInstaller)
=============================================================================================================================
********What is row migration & row chaining?
Row migration: when an updated row no longer fits in its original block (because not enough free space is left, as governed by PCTFREE/PCTUSED), Oracle moves the entire row to another block and leaves a pointer behind in the original block.
Row chaining: when a row is too big to fit in a single block, it is spread across multiple blocks.
============================================================================================================================
********What is fragmentation?
When a lot of DML operations are applied on a table, the table will become fragmented because DML does not release free space from the table below the HWM.
select table_name, avg_row_len, num_rows, round(avg_row_len * num_rows / 8100) used_space, blocks from user_tables order by table_name ;
TABLE_NAME AVG_ROW_LEN NUM_ROWS USED_SPACE BLOCKS
-------------------- ----------- ---------- ---------- ----------
T1 85 5000 52 63
T2 85 5000 52 18
T3 107 5000 66 32
SQL> exec dbms_stats.gather_table_stats('SCOTT','BIG1');
PL/SQL procedure successfully completed.
============================================================================================================================
What do you mean by the High Water Mark (HWM)?
The HWM is the highest point in a segment up to which blocks have ever contained data. If a block is filled up to 80% and the user later deletes rows within that 80%, a gap occurs, but a full scan still reads the entire set of blocks up to the HWM, empty space included, which causes a performance issue; this can be resolved by defragmenting (reorganizing) the table.
============================================================================================================================
How to know the number of users connected to the DB?
At the OS level you can see roughly how many users are logged into the database by using:
ps -ef | grep oracle | wc -l
At the OS level:
--> who | wc -l
At the database level:
--> select username, count(*) from v$session where username is not null group by username;
=============================================================================================================================