Working with Data Pump
EXPDP/IMPDP
Data Pump is a server-side facility that is much faster than exp/imp because it can run jobs in parallel. It moves data at high speed from one database to another. Note that a dump file created by the old EXP utility cannot be imported with IMPDP. Data Pump was introduced in Oracle 10g. Internally, Oracle uses two packages, DBMS_METADATA and DBMS_DATAPUMP, and to run them the STREAMS_POOL_SIZE parameter should be set (a sample setting is shown below). When a Data Pump operation is invoked, it internally starts two background processes to finish the task:
1) Master background process
2) Worker background process (slave)
The master background process is responsible for creating the master table, which is named after your JOB_NAME. The actual work is done by the worker (slave) process, which writes the data from the database to the dump file. Once the job is completed, the master table is automatically dropped by Oracle.
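As a rough sketch, STREAMS_POOL_SIZE can be set as follows (the 64M value is only an illustrative choice and SCOPE=BOTH assumes an spfile; size it to your own environment):
SQL> alter system set streams_pool_size=64M scope=both;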
Advantages of Data Pump over EXP/IMP:
- We can stop and resume jobs.
- It is a server-side utility and performs faster through parallelism.
- We can estimate the size of an export or import before running it (see the ESTIMATE_ONLY example after this list).
- We can attach to and detach from long-running jobs.
- Support for filtering data and metadata.
- Data archival.
- Upgrading to a new release.
- Backing up an Oracle database.
- Transferring data between databases.
- We can do full, schema, table, tablespace and transportable tablespace (TTS) level exports.
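For instance, a space estimate can be produced without writing a dump file. A minimal sketch, assuming the dpump_dir directory object created in the next section and a sample SCOTT schema:
$ expdp directory=dpump_dir schemas=scott estimate_only=y estimate=blocks logfile=estimate.log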
Process to work with Data Pump:
- Make sure the DBMS_METADATA and DBMS_DATAPUMP packages can be executed.
- Create a directory at the OS level and a directory object at the database level.
- Grant read and write privileges on the directory to the users who will perform the export or import.
SQL> create directory dpump_dir as '/disk1/oradata/pruthvi';
SQL> grant read, write on directory dpump_dir to user1, user2;
$ mkdir -p /disk1/oradata/pruthvi/dpump_dir
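To confirm that the directory object is visible to the database, it can be checked in the data dictionary (a simple verification step):
SQL> select directory_name, directory_path from dba_directories;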
Points to remember:
- If we do not mention a directory and a dump file name, the dump file is written to the default location shown below, so it is better to specify both. Omitting the dump file name is fine the first time (the default expdat.dmp is used), but subsequent runs will fail with an error because that file already exists.
- For example:
$ expdp full=y
With no directory or dump file specified, the dump file goes to the default location, e.g. '/oraeng/app/oracle/product/11.2.0/rdbms/log/expdat.dmp'.
- Always mention a job name; it makes it easy to see which job was stopped and to attach to an existing job.
$ expdp full=y directory=datapump_dir dumpfile=dump1.dmp job_name=raja_job logfile=logic1.log
- We can stop a job and resume it later:
- To stop the job: press Ctrl+C to reach the interactive prompt, then issue STOP_JOB at the Export> prompt. The current job will be stopped.
- To resume the job: expdp attach=job_name, then START_JOB.
- ps -ef | grep expdp (to check the Data Pump client process)
- tail -f <logfile> (to monitor the job's progress)
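Put together, a stop-and-resume session might look like the following sketch (reusing the raja_job name from the example above; expdp prompts for credentials on attach):
$ expdp attach=raja_job
Export> START_JOB
Export> CONTINUE_CLIENT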
- What is the use of NETWORK_LINK?
It names a database link pointing to the source database, so the import can pull the data directly over the network from that database instead of reading a dump file.
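A minimal sketch, assuming a database link called source_db (the SOURCEDB TNS alias and the scott/tiger account are purely placeholders) already points to the source database:
SQL> create database link source_db connect to scott identified by tiger using 'SOURCEDB';
$ impdp directory=dpump_dir network_link=source_db schemas=scott logfile=net_imp.log
No dump file is needed on the importing side; the rows travel over the link.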
- What is TABLE_EXISTS_ACTION?
It is the action to take if an imported table already exists. Valid keywords are: APPEND, REPLACE, [SKIP] and TRUNCATE.
- SKIP: the existing table is left untouched and that table is skipped (the default).
- TRUNCATE: the existing rows are deleted and the rows from the dump file are loaded.
- APPEND: the new rows are added to the existing table.
- REPLACE: the existing table is dropped and recreated from the dump file, replacing all of its rows.
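For example, to reload a single table into an existing schema while keeping its current definition (the table, schema and file names here are placeholders):
$ impdp directory=dpump_dir dumpfile=dump1.dmp tables=scott.emp table_exists_action=truncate logfile=imp_emp.log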
Parameters of IMPDP:
- ATTACH: Attach to an existing job. For example, ATTACH=job_name.
- CONTENT: Specifies data to load. Valid keywords are: [ALL], DATA_ONLY and METADATA_ONLY.
- DATA_OPTIONS: Data layer option flags. Valid keywords are: SKIP_CONSTRAINT_ERRORS.
- DIRECTORY: Directory object to be used for dump, log and sql files.
- DUMPFILE: List of dump files to import from [expdat.dmp].
- ENCRYPTION_PASSWORD: Password key for accessing encrypted data within a dump file. Not valid for network import jobs.
- ESTIMATE: Calculate job estimates. Valid keywords are: [BLOCKS] and STATISTICS.
- EXCLUDE: Exclude specific object types. For example, EXCLUDE=SCHEMA:"='HR'".
- FLASHBACK_SCN: SCN used to reset session snapshot.
- FLASHBACK_TIME: Time used to find the closest corresponding SCN value.
- FULL: Import everything from source [Y].
- HELP: Display help messages [N].
- INCLUDE: Include specific object types. For example, INCLUDE=TABLE_DATA.
- JOB_NAME: Name of import job to create.
- LOGFILE: Log file name [import.log].
- NETWORK_LINK: Name of remote database link to the source system.
- NOLOGFILE : Do not write log file [N].
- PARALLEL: Change the number of active workers for current job.
- PARFILE: Specify parameter file.
- PARTITION_OPTIONS: Specify how partitions should be transformed. Valid keywords are: DEPARTITION, MERGE and [NONE].
- QUERY: Predicate clause used to import a subset of a table. For example, QUERY=employees:"WHERE department_id > 10".
- REMAP_DATA: Specify a data conversion function. For example, REMAP_DATA=EMP.EMPNO:REMAPPKG.EMPNO.
- REMAP_DATAFILE: Redefine datafile references in all DDL statements.
- REMAP_SCHEMA: Objects from one schema are loaded into another schema.
- REMAP_TABLE: Table names are remapped to another table. For example, REMAP_TABLE=HR.EMPLOYEES:EMPS.
- REMAP_TABLESPACE: Tablespace objects are remapped to another tablespace.
- REUSE_DATAFILES: Tablespace will be initialized if it already exists [N].
- SCHEMAS: List of schemas to import.
- SKIP_UNUSABLE_INDEXES: Skip indexes that were set to the Index Unusable state.
- SOURCE_EDITION: Edition to be used for extracting metadata.
- SQLFILE: Write all the SQL DDL to a specified file.
- STATUS: Frequency (secs) job status is to be monitored where the default [0] will show new status when available.
- STREAMS_CONFIGURATION: Enable the loading of Streams metadata.
- TABLE_EXISTS_ACTION: Action to take if imported object already exists. Valid keywords are: APPEND, REPLACE, [SKIP] and TRUNCATE.
- TABLES: Identifies a list of tables to import. For example, TABLES= HR.EMPLOYEES,SH.SALES:SALES_1995.
- TABLESPACES: Identifies a list of tablespaces to import.
- TARGET_EDITION: Edition to be used for loading metadata.
- TRANSFORM: Metadata transform to apply to applicable objects. Valid keywords are: OID, PCTSPACE, SEGMENT_ATTRIBUTES and STORAGE.
- TRANSPORTABLE : Options for choosing transportable data movement. Valid keywords are: ALWAYS and [NEVER]. Only valid in NETWORK_LINK mode import operations.
- TRANSPORT_DATAFILES: List of datafiles to be imported by transportable mode.
- TRANSPORT_FULL_CHECK: Verify storage segments of all tables [N].
- TRANSPORT_TABLESPACES: List of tablespaces from which metadata will be loaded. Only valid in NETWORK_LINK mode import operations.
- VERSION: Version of objects to import. Valid keywords are: [COMPATIBLE], LATEST or any valid database version. Only valid for NETWORK_LINK and SQLFILE.
The following commands are valid while in interactive mode (abbreviations are allowed):
- CONTINUE_CLIENT: Return to logging mode. Job will be restarted if idle.
- EXIT_CLIENT: Quit client session and leave job running.
- HELP: Summarize interactive commands.
- KILL_JOB: Detach and delete job.
- PARALLEL: Change the number of active workers for current job.
- START_JOB: Start or resume current job. Valid keywords are: SKIP_CURRENT.
- STATUS: Frequency (secs) job status is to be monitored where the default [0] will show new status when available.
- STOP_JOB: Orderly shutdown of job execution and exits the client. Valid keywords are: IMMEDIATE.
Parameters of EXPDP:
- ATTACH : Attach to an existing job. For example, ATTACH=job_name.
- DIRECTORY : Directory object to be used for dump and log files.
- DUMPFILE : Specify list of destination dump file names [expdat.dmp].
- ESTIMATE : Calculate job estimates. Valid keyword values are: [BLOCKS] and STATISTICS.
- ESTIMATE_ONLY : Calculate job estimates without performing the export.
- EXCLUDE : Exclude specific object types. For example, EXCLUDE=SCHEMA:"='HR'".
- FILESIZE: Specify the size of each dump file in units of bytes.
- FULL: Export entire database [N].
- HELP: Display Help messages [N].
- INCLUDE: Include specific object types. For example, INCLUDE=TABLE_DATA.
- JOB_NAME: Name of export job to create.
- LOGFILE: Specify log file name [export.log].
- NETWORK_LINK: Name of remote database link to the source system.
- NOLOGFILE: Do not write log file [N].
- PARALLEL: Change the number of active workers for current job.
- PARFILE: Specify parameter file name.
- QUERY: Predicate clause used to export a subset of a table. For example, QUERY=employees:"WHERE department_id > 10".
- COMPRESSION: Reduce the size of a dump file. Valid keyword values are: ALL, DATA_ONLY, [METADATA_ONLY] and NONE.
- CONTENT: Specifies data to unload. Valid keyword values are: [ALL], DATA_ONLY and METADATA_ONLY.
- DATA_OPTIONS: Data layer option flags. Valid keyword values are: XML_CLOBS.
- ENCRYPTION: Encrypt part or all of a dump file. Valid keyword values are: ALL, DATA_ONLY, ENCRYPTED_COLUMNS_ONLY, METADATA_ONLY and NONE.
- ENCRYPTION_ALGORITHM: Specify how encryption should be done. Valid keyword values are: [AES128], AES192 and AES256.
- ENCRYPTION_MODE: Method of generating encryption key. Valid keyword values are: DUAL, PASSWORD and [TRANSPARENT].
- ENCRYPTION_PASSWORD: Password key for creating encrypted data within a dump file.
- FLASHBACK_SCN: SCN used to reset session snapshot.
- FLASHBACK_TIME: Time used to find the closest corresponding SCN value.
- REMAP_DATA: Specify a data conversion function. For example, REMAP_DATA=EMP.EMPNO:REMAPPKG.EMPNO.
- REUSE_DUMPFILES: Overwrite destination dump file if it exists [N].
- SAMPLE: Percentage of data to be exported.
- SCHEMAS: List of schemas to export [login schema].
- SOURCE_EDITION: Edition to be used for extracting metadata.
- STATUS : Frequency (secs) job status is to be monitored where the default [0] will show new status when available.
- TABLES: Identifies a list of tables to export. For example, TABLES=HR.EMPLOYEES, SH.SALES: SALES_1995.
- TABLESPACES: Identifies a list of tablespaces to export.
- TRANSPORTABLE: Specify whether transportable method can be used. Valid keyword values are: ALWAYS and [NEVER].
- TRANSPORT_FULL_CHECK: Verify storage segments of all tables [N].
- TRANSPORT_TABLESPACES: List of tablespaces from which metadata will be unloaded.
- VERSION: Version of objects to export. Valid keyword values are: [COMPATIBLE], LATEST or any valid database version.
The following commands are valid while in interactive mode (abbreviations are allowed):
- ADD_FILE: Add dumpfile to dumpfile set.
- CONTINUE_CLIENT: Return to logging mode. Job will be restarted if idle.
- EXIT_CLIENT: Quit client session and leave job running.
- FILESIZE: Default filesize (bytes) for subsequent ADD_FILE commands.
- HELP: Summarize interactive commands.
- KILL_JOB: Detach and delete job.
- PARALLEL: Change the number of active workers for current job.
- REUSE_DUMPFILES: Overwrite destination dump file if it exists [N].
- START_JOB: Start or resume current job. Valid keyword values are: SKIP_CURRENT.
- STATUS: Frequency (secs) job status is to be monitored where the default [0] will show new status when available.
- STOP_JOB: Orderly shutdown of job execution and exits the client. Valid keyword values are: IMMEDIATE.
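As an illustration of combining several of these parameters, a parameter file can be used instead of a long command line. A rough sketch in which all file names, the schema and the sizes are placeholders (COMPRESSION=ALL also assumes the required compression option is licensed):
$ cat exp_scott.par
directory=dpump_dir
dumpfile=scott_%U.dmp
logfile=scott_exp.log
schemas=scott
parallel=2
filesize=2G
compression=all
$ expdp parfile=exp_scott.par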
Project 1: Schema refresh
- Take a backup of the schema:
$ expdp directory=datapump_dir dumpfile=dump1.dmp logfile=log1.log schemas=shyam
- Drop the schema:
SQL> drop user shyam cascade;
- Import the schema:
$ impdp directory=datapump_dir dumpfile=dump1.dmp logfile=log2.log
Project 2: EXCLUDE tables
$ impdp dumpfile=dump.dmp exclude=table:"in('EMP')" exclude=table:"in('DEPT')"
$ impdp directory=datapump_dir dumpfile=dump1.dmp logfile=log2.log exclude=schema:"in('RAJA')"
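The INCLUDE parameter filters in the opposite direction; a small sketch that exports only two tables of a hypothetical SCOTT schema:
$ expdp directory=datapump_dir dumpfile=two_tables.dmp logfile=two_tables.log schemas=scott include=table:"in('EMP','DEPT')"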
Project 3: Transfer a schema from one user to another (legacy exp/imp)
$ exp file=project.dmp owner=U3
$ imp file=project.dmp fromuser=U3 touser=U2
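The same transfer can also be done with Data Pump using the REMAP_SCHEMA parameter listed above; a sketch reusing the U3 and U2 users and the datapump_dir directory:
$ expdp directory=datapump_dir dumpfile=u3.dmp logfile=u3_exp.log schemas=U3
$ impdp directory=datapump_dir dumpfile=u3.dmp logfile=u3_imp.log remap_schema=U3:U2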
Project 4: Transport a tablespace
1) Check the platform:
SQL> select * from v$transportable_platform;
2) Choose the tablespace to transfer and make it read only:
SQL> alter tablespace users read only;
3) Export the tablespace metadata:
$ expdp dumpfile=TS.dmp logfile=users.log directory=datapump_dir transport_tablespaces=users
4) Move the dump file and the tablespace's datafile(s) to the target environment.
5) Import the tablespace:
$ impdp directory=datapump_dir dumpfile=TS.dmp transport_datafiles='/disk1/oradata/pruthvi/user.dbf'
6) Make the tablespace read write again:
SQL> alter tablespace users read write;
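Before step 3 it is also common to verify that the tablespace set is self-contained; a sketch using the DBMS_TTS package (USERS matches the tablespace in the example above):
SQL> exec dbms_tts.transport_set_check('USERS', true);
SQL> select * from transport_set_violations;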