Working with Logical Backups
Working with EXPORT and IMPORT:
Exp/imp can take a backup of the database only while it is up and running; that is the reason these are called logical backups. It is not possible to take this kind of backup while the database is down. Exp and imp are OS-level commands, and they produce platform-independent dump files. Export pulls data out of the database; Import loads data back into the database. The dump file simply serves as an alternative to cold backups.
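For example, a minimal export-and-import round trip looks like the sketch below; the system/manager login and the file names are placeholders, not fixed values:
$ exp system/manager FILE=full.dmp LOG=exp.log FULL=y // dump the whole database to full.dmp
$ imp system/manager FILE=full.dmp LOG=imp.log FULL=y // load that dump back into a database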
Advantages of EXP/IMP:
1) Physical backups are done by the DBA, but exp/imp can be run even by regular database users.
2) It can be done at different levels: table, user, or full database (by the DBA); see the sketch after this list.
3) It supports backward compatibility: if we export from 11g, the data can be imported into 10g through the VERSION keyword. Forward compatibility (an 11g dump file imported into 12c) works by default.
4) We can copy an object from one schema to another schema, and even into another database.
5) It supports database upgrades, platform migrations, and reorganization.
6) It supports incremental backups in various flavors: INCREMENTAL, COMPLETE, and CUMULATIVE.
7) It supports network links to another database to import the data.
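To illustrate point 2, the same exp utility works at all three levels; the logins, schema, and file names below are placeholders:
$ exp scott/tiger TABLES=emp,dept FILE=tabs.dmp LOG=tabs.log // table level
$ exp system/manager OWNER=scott FILE=scott.dmp LOG=scott.log // user/schema level
$ exp system/manager FULL=y FILE=full.dmp LOG=full.log // full database (DBA)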
Disadvantages of EXP/IMP:
1) We can't pause and resume database-level operations.
2) We can't assign job names to these operations.
3) Traditional exp/imp backups are client-side backups.
4) Incremental backups are supported by traditional exp/imp but not by Data Pump.
POINTS to REMEMBER:
1) Always specify LOG=exp.log, so you can see what happened while taking the backup.
2) Always specify a BUFFER= value, to make the export faster.
3) Specify a FILE name; otherwise the default dump file (expdat.dmp) will be overwritten.
4) Always use NOHUP or AT. If there is any network issue, your command prompt may be disconnected, and if the command takes a long time to process, the session may time out. To avoid these issues, fire your command with NOHUP so it runs in the background (see the sketch after this list).
5) When importing your entire database, go with INDEXES=N and create the indexes manually after the IMP is done.
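Putting these points together, a long-running full export could be fired roughly like this (the login, buffer size, and file names are placeholders):
$ nohup exp system/manager FULL=y FILE=full.dmp LOG=exp.log BUFFER=1048576 & // runs in the background, survives terminal disconnects
$ imp system/manager FULL=y FILE=full.dmp LOG=imp.log INDEXES=n // later: import without indexes, build them manually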
EXPORT parameters and how to work with them:
- FILE=full.dmp // specifies the file name to export to.
- EXP HELP=y // shows the details and parameters of EXP.
- BUFFER= // give your own size (default 256 KB); a larger buffer helps improve EXP performance.
- DIRECT=y // EXP reads the data blocks directly, bypassing the SQL evaluating buffer (see the sketch after this parameter list). Per the Oracle docs: conventional path Export uses a SQL SELECT statement to extract data from tables; data is read from disk into the buffer cache, rows are transferred to the evaluating buffer, and after expression evaluation the data is passed to the Export client, which writes it into the export file. Direct path Export is much faster: data is read from disk into the buffer cache and rows are transferred directly to the Export client, bypassing the evaluating buffer. The data is already in the format that Export expects, avoiding unnecessary data conversion.
- COMPRESS=Y // information from multiple extents is consolidated into a single extent; helpful in REORGANIZATION.
- GRANTS=Y, INDEXES=Y, ROWS=Y, CONSTRAINTS=Y // all of these default to yes.
- LOG=log.log // writes the output of EXP to a log file; helpful to read whenever we have any doubts.
- FULL=Y // default is N; when set to Y, it takes a backup of the entire database.
- OWNER=user // user- or schema-level backup.
- TABLES=emp,dept // table-level backup.
- INCTYPE=COMPLETE|INCREMENTAL|CUMULATIVE // helpful for incremental backups.
- PARFILE=test.ext // you can create your own parameter file; the export or import reads its parameters from it. Some restrictions apply.
- FEEDBACK=n // default 0; shows progress feedback (a dot) for every n rows exported.
- TRANSPORT_TABLESPACE=y // collects the transportable tablespace (TTS) metadata.
- TABLESPACES=RAJA,RAVI // tablespace names to back up.
- VOLSIZE= // number of bytes to write to each tape volume.
- FILESIZE= // maximum size of each dump file. If FILESIZE is 1 MB, then after 1 MB of data has filled a file, export asks for the next file name:
$ exp filesize=1MB full=y file=raja1.dmp
Export file: expdat.dmp > raja2.dmp
Export file: expdat.dmp > raja3.dmp
Export file: expdat.dmp >
- QUERY= // you can supply your own predicate. Prepare a parfile, e.g. xyx.par:
$ vi xyx.par
tables=emp
query='where deptno=10'
$ exp parfile=xyx.par // we get the filtered result
- TRIGGERS=y // if yes, we get the trigger definitions in the export.
- RESUMABLE= // suspend the job when a space-related issue is encountered (see the sketch after this parameter list).
- RESUMABLE_NAME= // a name used to identify the resumable statement.
- RESUMABLE_TIMEOUT= // wait time for RESUMABLE.
- FLASHBACK_SCN= // SCN used to set the session snapshot back in time.
- FLASHBACK_TIME= // time used to get the SCN closest to the specified time.
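As a rough sketch of the DIRECT and RESUMABLE parameters mentioned above (the login, names, and sizes are placeholders):
$ exp system/manager FULL=y DIRECT=y FILE=full.dmp LOG=full.log // direct path: bypasses the evaluating buffer
$ exp system/manager OWNER=scott RESUMABLE=y RESUMABLE_NAME=scott_exp RESUMABLE_TIMEOUT=7200 FILE=scott.dmp // suspends up to 7200 seconds on space errors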
IMPORT parameters:
IMP shares several common parameters with EXP: BUFFER, FILE, GRANTS, INDEXES, ROWS, LOG, FULL, TABLES, INCTYPE, PARFILE, TRANSPORT_TABLESPACE, and TABLESPACES.
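A typical full import using these common parameters might look like this (the login and file names are placeholders):
$ imp system/manager FULL=y FILE=full.dmp LOG=imp.log BUFFER=1048576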
- SHOW=Y // shows the contents of the dump file without importing anything; helpful to find out whether the dump is valid (see the sketch after this list).
- IGNORE=Y // ignores the errors raised while creating tables that already exist in the database; default is N.
- DESTROY= // default N. Used to overwrite datafiles or tablespaces that already exist in the database. If a tablespace already exists it will be reused, which can be undesirable. If an error occurs, just specify IGNORE=Y.
- INDEXFILE=xy.sql // creates a SQL script to rebuild the indexes (and tables), which helps import performance. All the index DDL is written into xy.sql; after the import finishes we can run this script manually (see the sketch after this list).
- FROMUSER=shyam TOUSER=raja // used when transferring objects from one schema to another: $ imp system/manager file=shyam.dmp tables=emp,dept fromuser=shyam touser=raja
- COMMIT=Y // commits automatically after every batch of inserts; the batch size is completely controlled by the BUFFER value we give. For example, with BUFFER=64K, IMP commits the inserted rows each time the buffer fills to 64 KB; if the remaining contents are only about 44 KB, the import finishes with a final commit so that data is also written to disk. The BUFFER value therefore controls how often COMMIT happens.
- POINT_IN_TIME_RECOVER= // default N. Indicates whether Import recovers one or more tablespaces in an Oracle database to a prior point in time, without affecting the rest of the database.
- SKIP_UNUSABLE_INDEXES= // whether Import skips building indexes that were set to the Index Unusable state.
- DATAFILES= // datafiles to be transported into the database.
- TTS_OWNERS= // users who own data in the transportable tablespace set.
- RESUMABLE= // suspend the job when a space-related issue is encountered.
- RESUMABLE_NAME= // a name used to identify the resumable statement.
- RESUMABLE_TIMEOUT= // wait time for RESUMABLE.
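As referenced in the SHOW and INDEXFILE entries above, a cautious import can first validate the dump and then split data and index creation; the login and file names are placeholders:
$ imp system/manager FILE=full.dmp FULL=y SHOW=y LOG=show.log // lists the dump contents; nothing is imported
$ imp system/manager FILE=full.dmp FULL=y INDEXFILE=xy.sql // writes index DDL to xy.sql; nothing is imported
$ imp system/manager FILE=full.dmp FULL=y INDEXES=n // import the data without building indexes
$ sqlplus system/manager @xy.sql // run the generated script to build the indexes afterwards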