Interview questions gathered from different sites.
1. How can I make export faster?
Use the DIRECT=Y and BUFFER parameters. Setting DIRECT=Y extracts the data by reading it directly, bypassing the SGA and the SQL command-processing layer (the evaluating buffer), so it should be faster. The default value is N.
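For example, a direct-path schema export might look like the sketch below (the schema, file names and sizes are placeholders to adapt; note that BUFFER applies only to conventional-path exports, while RECORDLENGTH controls the I/O buffer when DIRECT=Y):
exp system/*** owner=scott direct=y recordlength=65535 file=scott.dmp log=scott_exp.log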
2. Which process is responsible for writing data into the dumpfile?
3. What is the use of the CONSISTENT=Y parameter in export?
The export takes read-consistent data, so the exported tables reflect a single point in time.
4. What will the parameter COMPRESS do during export?
During import, it will put the entire table data into a single extent. COMPRESS specifies how export manages the initial extent for the table data, and it is helpful during database re-organization: export the objects (especially tables and indexes) with COMPRESS=Y. If a table was spanning 20 extents of 1M each (which is not desirable from a performance point of view), exporting it with COMPRESS=Y generates DDL with an INITIAL extent of 20M, so the extents are coalesced when the table is imported. Sometimes it is desirable to export with COMPRESS=N instead, for example when you do not have contiguous free space in the target tablespace and do not want the import to fail.
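For illustration, the choice is made at export time (table and file names are placeholders):
exp scott/tiger tables=emp compress=y file=emp.dmp log=emp_exp.log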
5. How can we confirm that an export dumpfile is valid?
By using the SHOW=Y option during import.
6. If you got a dumpfile to import and don't know the FROMUSER, how will you get that information?
Check the export log file. If it is not available, run the import with SHOW=Y, which generates a log file; the owning schema (FROMUSER) will be visible in that log file.
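A sketch of such a run (file names are placeholders); SHOW=Y only lists the dumpfile contents in the log, nothing is actually imported:
imp system/*** file=prod_schema.dmp full=y show=y log=dump_contents.log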
7. What precautions will you take to perform a schema exp/imp between a prod and dev database?
a. Check whether the user already exists in the dev database.
b. If so, drop the user (after capturing its DDL and permissions) or drop all of its objects.
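A quick way to make that check in the dev database (SCOTT is a placeholder schema name):
-- does the schema already exist, and what does it own?
SELECT username, default_tablespace FROM dba_users WHERE username = 'SCOTT';
SELECT object_type, COUNT(*) FROM dba_objects WHERE owner = 'SCOTT' GROUP BY object_type;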
8. What are the advantages of Data Pump over exp/imp? (See the differences listed under question 19.)
9. Can we import an 11g dumpfile into a 10g database using Data Pump? If so, is it also possible between 10g and 9i?
Yes, we can import from 11g into 10g using the VERSION option (take the 11g export with VERSION set to the target release). This is not possible between 10g and 9i because Data Pump does not exist in 9i.
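A sketch of the 11g-side export (directory, schema and file names are placeholders); the resulting dumpfile can then be imported with the 10g impdp client:
expdp system/*** directory=dump_dir schemas=scott version=10.2 dumpfile=scott_v102.dmp logfile=scott_v102.log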
10. We exported a table and imported it into a dev database. After checking, we found the table is residing in the SYSTEM tablespace. What could be the reason?
The user has the RESOURCE role assigned. Assigning the RESOURCE role gives the user some quota on the SYSTEM tablespace, which is dangerous.
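To confirm this on the target database (SCOTT is a placeholder), check the granted roles and the SYSTEM tablespace quota:
SELECT granted_role FROM dba_role_privs WHERE grantee = 'SCOTT';
SELECT tablespace_name, max_bytes FROM dba_ts_quotas WHERE username = 'SCOTT';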
11. What will you do when export is running slow?
Skip exporting indexes (INDEXES=N) and use the BUFFER and DIRECT parameters.
12. The import process is running slow; what can you do to make it faster?
Skip importing the indexes, as index creation is the most time-consuming part of an import.
13. You are observing an undo tablespace error during import; how will you avoid it?
Use the COMMIT=Y option, so that import commits after each buffer-sized array insert instead of once per table.
14. What is the use of the CONSISTENT option in exp?
Cross-table consistency. It implements SET TRANSACTION READ ONLY. The default value is N.
15. How to improve exp performance?
1. Set the BUFFER parameter to a high value. Default is 256KB.
2. Stop unnecessary applications to free resources.
3. If you are running multiple export sessions, make sure they write to different disks.
4. Do not export to NFS (Network File System); exporting to local disk is faster.
5. Set the RECORDLENGTH parameter to a high value.
6. Use DIRECT=Y (direct path export).
16. How to improve imp performance?
1. Place the file to be imported on a separate disk from the datafiles.
2. Increase the DB_CACHE_SIZE.
3. Set LOG_BUFFER to a big size.
4. Stop redolog archiving, if possible.
5. Use COMMIT=N, if possible.
6. Set the BUFFER parameter to a high value. Default is 256KB.
7. It is advisable to drop indexes before importing to speed up the import process, or to set INDEXES=N and build the indexes after the import. Indexes can easily be recreated after the data has been successfully imported.
8. Use STATISTICS=NONE.
9. Disable INSERT triggers, as they fire during import.
10. Set the parameter COMMIT_WRITE=NOWAIT (in Oracle 10g) or COMMIT_WAIT=NOWAIT (in Oracle 11g) during import. A sample imp command combining several of these options is shown after this list.
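The command below is a sketch only; user names, file names and the buffer size are placeholders to adapt:
imp system/*** file=scott.dmp log=scott_imp.log fromuser=scott touser=scott indexes=n commit=y buffer=30000000 statistics=none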
17. What is the use of the INDEXFILE option in imp?
It writes the DDL of the objects in the dumpfile into the specified file instead of importing them.
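For example (file names are placeholders), the following run writes the DDL to scott_indexes.sql without loading any data:
imp system/*** file=scott.dmp full=y indexfile=scott_indexes.sql log=indexfile_run.log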
18. What is the use of the IGNORE option in imp?
It ignores object-creation errors (such as "object already exists") during import and continues the import.
19. What are the differences between expdp and exp (Data Pump vs. normal exp/imp)?
Data Pump is server centric (the dump files are created on the server).
Data Pump has APIs; we can run Data Pump jobs from PL/SQL procedures (see the sketch after this list).
In Data Pump, we can stop and restart the jobs.
Data Pump can do parallel execution.
Tapes and pipes are not supported in Data Pump.
Data Pump consumes more undo tablespace.
Data Pump import will create the user if the user doesn't exist.
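A minimal schema-level export job submitted through the DBMS_DATAPUMP API might look like this sketch (the directory object, schema, job and file names are placeholders):
DECLARE
  l_handle NUMBER;
  l_state  VARCHAR2(30);
BEGIN
  -- define an export job in SCHEMA mode
  l_handle := DBMS_DATAPUMP.OPEN(operation => 'EXPORT', job_mode => 'SCHEMA', job_name => 'SCOTT_EXP_API');
  -- dump file and log file go to an existing directory object
  DBMS_DATAPUMP.ADD_FILE(handle => l_handle, filename => 'scott_api.dmp', directory => 'DUMP_DIR');
  DBMS_DATAPUMP.ADD_FILE(handle => l_handle, filename => 'scott_api.log', directory => 'DUMP_DIR',
                         filetype => DBMS_DATAPUMP.KU$_FILE_TYPE_LOG_FILE);
  -- restrict the job to a single schema
  DBMS_DATAPUMP.METADATA_FILTER(handle => l_handle, name => 'SCHEMA_EXPR', value => q'[= 'SCOTT']');
  DBMS_DATAPUMP.START_JOB(l_handle);
  DBMS_DATAPUMP.WAIT_FOR_JOB(handle => l_handle, job_state => l_state);
  DBMS_OUTPUT.PUT_LINE('Job completed with state: ' || l_state);
END;
/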
20. Why is expdp faster than exp (or why is Data Pump faster than conventional export/import)?
Data Pump works in block mode, while exp works in byte mode.
Data Pump can do parallel execution.
Data Pump uses the direct path API.
21. How to improve expdp performance?
Use the PARALLEL option, which increases the number of worker threads. It should be set based on the number of CPUs.
22. How to improve impdp performance?
Use the PARALLEL option, which increases the number of worker threads. It should be set based on the number of CPUs.
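A sketch for both directions (directory and file names are placeholders); with PARALLEL, the dumpfile specification should use the %U substitution variable so each worker writes to its own file:
expdp system/*** directory=dump_dir full=y parallel=4 dumpfile=full_%U.dmp logfile=full_exp.log
impdp system/*** directory=dump_dir parallel=4 dumpfile=full_%U.dmp logfile=full_imp.log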
23. In Data Pump, where is the job info stored (or, if you restart a job in Data Pump, how does it know where to resume from)?
Whenever a Data Pump export or import is running, Oracle creates a master table named after the JOB_NAME, and it is deleted once the job is done. From this table, Oracle finds out how much of the job has completed and where to continue from.
The default export job name is SYS_EXPORT_XXXX_01, where XXXX can be FULL, SCHEMA or TABLE.
The default import job name is SYS_IMPORT_XXXX_01, where XXXX can be FULL, SCHEMA or TABLE.
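Running jobs can be listed from the data dictionary, and a stopped job can be re-attached by name (the job name below is a placeholder):
SELECT owner_name, job_name, operation, job_mode, state FROM dba_datapump_jobs;
expdp system/*** attach=SYS_EXPORT_SCHEMA_01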
24. What is the order of importing objects in impdp?
Tablespaces
Users
Roles
Database links
Sequences
Directories
Synonyms
Types
Tables/Partitions
Views
Comments
Packages/Procedures/Functions
Materialized views
25. How to import only metadata?
CONTENT=METADATA_ONLY
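For example (directory and file names are placeholders):
impdp system/*** directory=dump_dir dumpfile=expdp_full.dmp content=metadata_only logfile=meta_only_imp.log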
26. How to import into a different user/tablespace/datafile/table?
REMAP_SCHEMA
REMAP_TABLESPACE
REMAP_DATAFILE
REMAP_TABLE
REMAP_DATA
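For example, to move SCOTT's objects into a different schema and tablespace during import (all names are placeholders):
impdp system/*** directory=dump_dir dumpfile=scott.dmp remap_schema=scott:scott_dev remap_tablespace=users:users_dev logfile=scott_remap.log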
27. How to export/import without using an external directory?
28. Using Data Pump, how to export from a higher version (11g) and import into a lower version (10g)? Can we import into 9i?
29. Using normal exp/imp, how to export from a higher version (11g) and import into a lower version (10g/9i)?
31. In which cases is imp/exp used?
-Eliminate database fragmentation
-Schema refresh (move the schema from one database to another)
-Detect database corruption. Ensure that all the data can be read (if the data can be read that means there is no block corruption)
-Transporting tablespaces between databases
-Backup database objects
32. What are the common IMP/EXP problems?
ORA-00001: Unique constraint ... violated - Perhaps you are importing duplicate rows. Use IGNORE=N to skip tables that already exist (imp will give an error if the object is re-created), or the table can be dropped/truncated and re-imported if a table refresh is needed.
IMP-00015: Statement failed ... object already exists ... - Use the IGNORE=Y import parameter to ignore these errors, but be careful as you might end up with duplicate rows.
ORA-01555: Snapshot too old - Ask your users to stop working while you are exporting, or use the parameter CONSISTENT=N (however, this option could create referential problems because the tables are not exported from one snapshot in time).
ORA-01562: Failed to extend rollback segment - Create bigger rollback segments, or set the parameter COMMIT=Y (with an appropriate BUFFER parameter) while importing.
33. How can we check whether a DATAPUMP dumpfile is corrupted or not?
Sometimes we may be in a situation where we need to check whether a dumpfile exported long back is valid, or the application team says that the dumpfile we provided is corrupted.
Use the SQLFILE parameter with the import command to detect corruption. With this parameter, impdp reads the entire Data Pump export dumpfile and reports if corruption is detected.
impdp system/*** directory=dump_dir dumpfile=expdp.dmp logfile=corruption_check.log sqlfile=corruption_check.sql
This writes all the DDL statements (which would be executed if an import were performed) into the file specified in the command.
34. How can we find the elapsed time for a particular object during Data Pump export or import?
There is an undocumented parameter METRICS in Data Pump to check how much time it took to export the different object types.
expdp system/passwd directory=dump_dir dumpfile=expdp_full.dmp logfile=expdp_full.log full=y metrics=y
35. How to move a table from one tablespace to another tablespace?
You can use either of the methods below:
1. Export the table, drop it, create the table definition in the new tablespace, and then import the data (imp ... ignore=y).
2. Create a new table in the new tablespace, then drop the original table and rename the temporary table to the original table name:
CREATE TABLE temp_name TABLESPACE new_tablespace AS SELECT * FROM real_table;
DROP TABLE real_table;
RENAME temp_name TO real_table;
36. What is the difference between the SQL*Loader and Import utilities?
Both utilities are used for loading data into the database. The difference is that the Import utility relies on data produced by another Oracle utility, Export, while SQL*Loader is a high-speed data loading mechanism that can load data produced by other utilities from different data sources. SQL*Loader loads data from standard OS files or flat files into Oracle database tables, whereas Export/Import moves existing data in Oracle format into and out of an Oracle database.
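For contrast, a minimal SQL*Loader sketch (the table, columns and file names are hypothetical). A control file emp.ctl such as:
LOAD DATA
INFILE 'emp.csv'
APPEND INTO TABLE emp
FIELDS TERMINATED BY ','
(empno, ename, sal)
is then run with:
sqlldr scott/tiger control=emp.ctl log=emp.log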
37. How to re-organize a schema?
We can use the DBMS_REDEFINITION package for online re-organization of schema objects. Otherwise, you can recreate or re-organize the schema using the export/import or Data Pump utilities.
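A minimal online re-organization sketch using DBMS_REDEFINITION (SCOTT.EMP and the interim table EMP_INT are placeholders; the interim table must be created beforehand with the desired new storage, for example in the new tablespace):
DECLARE
  l_errors PLS_INTEGER;
BEGIN
  -- check the table can be redefined using its primary key
  DBMS_REDEFINITION.CAN_REDEF_TABLE('SCOTT', 'EMP', DBMS_REDEFINITION.CONS_USE_PK);
  -- start copying the data into the interim table
  DBMS_REDEFINITION.START_REDEF_TABLE('SCOTT', 'EMP', 'EMP_INT');
  -- recreate indexes, constraints, triggers and grants on the interim table
  DBMS_REDEFINITION.COPY_TABLE_DEPENDENTS('SCOTT', 'EMP', 'EMP_INT', num_errors => l_errors);
  -- swap the two tables; EMP now uses the interim table's organization
  DBMS_REDEFINITION.FINISH_REDEF_TABLE('SCOTT', 'EMP', 'EMP_INT');
END;
/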
38. How can we improve EXP performance?
1. Set the BUFFER parameter to a high value (e.g. 2M).
2. If you run multiple export sessions, ensure they write to different physical disks.
39. How can we improve IMP performance?
1. Import the table using the INDEXFILE parameter (the import is not done, but a file containing the index creation statements is generated), import the data, and recreate the indexes afterwards.
2. Store the dump file to be imported on a separate physical disk from the Oracle datafiles.
3. If there are any constraints on the target table, disable them during the import and re-enable them afterwards.
4. Set the BUFFER parameter to a high value (e.g. BUFFER=30000000, about 30MB) together with COMMIT=Y, or set COMMIT=N (the default behavior: import commits after each table is loaded; however, this uses a lot of rollback segment or undo space for huge tables).
5. Use direct path on the export side (exp DIRECT=Y); the conventional imp utility itself has no direct-path option.
6. If possible, increase DB_CACHE_SIZE (DB_BLOCK_BUFFERS prior to 9i) considerably in the init<SID>.ora file.
7. If possible, set LOG_BUFFER to a big value and restart Oracle.
40. How does Data Pump work internally? What resources are utilized at the network level and in memory?
How to export a dumpfile to different mount points (multiple dumpfiles) at the same time, in both exp and expdp, when the destination has less space available?
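A sketch of both approaches (paths, directory objects and sizes are placeholders): classic exp can split the dump across several files with the FILE and FILESIZE parameters, while expdp can write to several directory objects by prefixing each dumpfile with its directory:
exp system/*** full=y file=/u01/exp/full_1.dmp,/u02/exp/full_2.dmp filesize=2GB log=/u01/exp/full_exp.log
expdp system/*** full=y dumpfile=DIR1:full_%U.dmp,DIR2:full_%U.dmp filesize=5GB logfile=DIR1:full_exp.log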
DESC dba_exp_version or dba_exp_file