Oracle Expdp/Impdp – Datapump Interview Questions/FAQs

Q1. How to export only the DDL/metadata of a table?
Ans: You can use the CONTENT=METADATA_ONLY parameter during export.
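For example, a minimal metadata-only export (the directory, dump file, and table names here are illustrative; adjust them to your environment):

expdp system/password directory=DATA_PUMP_DIR dumpfile=emp_ddl.dmp logfile=emp_ddl.log tables=SCOTT.EMP content=METADATA_ONLY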

Q2: Which memory area is used by the Data Pump process?

Ans: The Streams pool, sized by the STREAMS_POOL_SIZE parameter. If STREAMS_POOL_SIZE is 0, you will probably get a memory-related error. Check this parameter and set it to a minimum of 96M.

show parameter STREAMS_POOL_SIZE

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
streams_pool_size                    big integer 96M
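If it is too small, it can be increased online, assuming an spfile is in use (96M is just the minimum suggested above; size it appropriately for your SGA):

alter system set streams_pool_size=96M scope=both;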

Q3: How to improve datapump performance so that export/import happens faster?
Ans:  
  • Allocate STREAMS_POOL_SIZE memory. The query below gives a recommended setting for this parameter.
select 'ALTER SYSTEM SET STREAMS_POOL_SIZE='||(max(to_number(trim(c.ksppstvl)))+67108864)||' SCOPE=SPFILE;'
from sys.x$ksppi a, sys.x$ksppcv b, sys.x$ksppsv c
where a.indx = b.indx and a.indx = c.indx
and lower(a.ksppinm) in ('__streams_pool_size','streams_pool_size');
The output is a ready-to-run statement of the form ALTER SYSTEM SET STREAMS_POOL_SIZE=<value in bytes> SCOPE=SPFILE;
  • Use CLUSTER=N : In a RAC environment it can improve the speed of Data Pump API based operations.
  • In a RAC environment, consider setting PARALLEL_FORCE_LOCAL to TRUE; be aware that this parameter has a wider scope of effect than just Data Pump API based operations.
  • EXCLUDE=STATISTICS: excluding the generation and export of statistics at export time shortens the time needed for any export operation. The DBMS_STATS.GATHER_DATABASE_STATS procedure can then be run on the target database once the import operation has completed.
  • Use PARALLEL: If there is more than one CPU available, the environment is not already CPU, disk I/O, or memory bound, and multiple dump files are going to be used (ideally on different spindles) in the DUMPFILE parameter, then parallelism has the greatest potential to improve performance (see the sample command after this list).
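A sample export command that combines several of these recommendations (directory, dump file, and schema names are illustrative; the %U wildcard generates one dump file per parallel worker):

expdp system/password directory=DATA_PUMP_DIR dumpfile=orahow_full_%U.dmp logfile=orahow_full.log full=y parallel=4 exclude=statistics cluster=n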

Q4: How to monitor status of export/import – datapump operations/jobs?

Ans: You can easily monitor the status from DBA_DATAPUMP_JOBS using the queries below.
set linesize 200
set pagesize 200
col owner_name format a12
col job_name format a20
col operation format a12
col job_mode format a20
SELECT owner_name, job_name, operation, job_mode, state
FROM dba_datapump_jobs
WHERE state = 'EXECUTING';
SELECT   w.sid, w.event, w.seconds_in_wait
FROM   V$SESSION s, DBA_DATAPUMP_SESSIONS d, V$SESSION_WAIT w
WHERE   s.saddr = d.saddr AND s.sid = w.sid;
SELECT opname, sid, serial#, context, sofar, totalwork,
       ROUND(sofar/totalwork*100,2) "%_COMPLETE"
FROM v$session_longops
WHERE opname IN
      (select d.job_name
       from v$session s, v$process p, dba_datapump_sessions d
       where p.addr = s.paddr and s.saddr = d.saddr)
AND opname NOT LIKE '%aggregate%'
AND totalwork != 0
AND sofar <> totalwork;
Q5: How to stop/start/kill datapump jobs?
Ans: Attach to the running job and issue commands at the interactive prompt:
expdp \"/ as sysdba\" attach=job_name
Export> status
Export> stop_job
Export> start_job
Export> kill_job
You can also kill a job with the ALTER SYSTEM KILL SESSION command; the SID and SERIAL# can be obtained from the monitoring queries above.
alter system kill session 'SID,SERIAL#' immediate;
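For example, assuming the job name found in DBA_DATAPUMP_JOBS is SYS_EXPORT_SCHEMA_01 (illustrative):

select owner_name, job_name, state from dba_datapump_jobs;

expdp system/password attach=SYS_EXPORT_SCHEMA_01
Export> stop_job=immediate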
Q6: How will you take a consistent export backup? What is the use of FLASHBACK_SCN?
Ans:  To take a consistent export backup you can use the below method:
SQL:
select to_char(current_scn) from v$database;
Expdp parfile content:
---------------------------
directory=OH_EXP_DIR 
dumpfile=exporahow_4apr_<yyyymmdd>.dmp 
logfile=exporahow_4apr_<yyyymmdd>.log 
schemas=ORAHOW 
flashback_scn=<<current_scn>>
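The export is then started with the parameter file, for example (the parfile name is illustrative):

expdp system/password parfile=exp_orahow.par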
Q7: How to drop constraints before import?
Ans: 
set feedback off;
set heading off;
set pagesize 0;
spool /oradba/orahow/drop_constraints.sql;
select 'alter table SCHEMA_NAME.' || table_name || ' drop constraint ' || constraint_name || ' cascade;'
from dba_constraints
where owner = 'SCHEMA_NAME'
and not (constraint_type = 'C')
order by table_name, constraint_name;
spool off;
exit;
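The spooled script can then be reviewed and run before the import, e.g.:

SQL> @/oradba/orahow/drop_constraints.sql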
Q8: I exported a metadata/DDL-only dump file from production, but during import on the test machine it is consuming a huge amount of space, and we probably don't have that much disk space available. What could be the reason that DDL alone consumes so much space?
Ans: Below is a snippet of the DDL of one table extracted from production. As you can see, during table creation Oracle allocates the initial bytes specified in the storage clause.
PCTFREE 10 PCTUSED 40 INITRANS 1 MAXTRANS 255
NOCOMPRESS LOGGING
  STORAGE(INITIAL 1342177280 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
  PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1
  BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT)
As you can see above, Oracle allocates 1280 MB (INITIAL 1342177280 bytes) for this one table up front, even if the row count is zero.
To avoid this, make sure the DEFERRED_SEGMENT_CREATION parameter is set to TRUE so that the segment (and its initial extent) is not created until the first row is inserted. From 11.2 onward its default value is TRUE, but verify that it has not been set to FALSE.
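A quick check and, if required, a change (the parameter is dynamic and affects only tables created after the change):

show parameter deferred_segment_creation
alter system set deferred_segment_creation=true scope=both;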
Q9: If you don't have sufficient disk space on the database server, how will you take the export? Or: how to export without a dump file?
Ans: You can use a network link (database link) for the export. You can use NETWORK_LINK by following these simple steps (see the sketch after the list):

Create a TNS entry for the remote database in your tnsnames.ora file
Test with tnsping sid
Create a database link to the remote database
Specify the database link as network_link in your expdp or impdp syntax
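A sketch of the no-dump-file approach using impdp over a database link (the link name, TNS alias, and schema are illustrative):

-- on the target database, pointing at the source:
create database link srcdb_link connect to system identified by password using 'SRCDB';

-- pull the schema directly across the link; no dump file is written anywhere:
impdp system/password network_link=srcdb_link schemas=ORAHOW directory=DATA_PUMP_DIR logfile=orahow_netimp.log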

Q10: Tell me some of the parameters you have used during export?
Ans:

CONTENT               Specifies data to unload: (ALL), DATA_ONLY, or METADATA_ONLY.
DIRECTORY             Directory object to be used for dump files and log files.
DUMPFILE              List of destination dump files (expdat.dmp),
                      e.g. DUMPFILE=scott1.dmp, scott2.dmp, dmpdir:scott3.dmp.
ESTIMATE_ONLY         Calculate job estimates without performing the export.
EXCLUDE               Exclude specific object types, e.g. EXCLUDE=TABLE:EMP.
FILESIZE              Specify the size of each dump file in units of bytes.
FLASHBACK_SCN         SCN used to set session snapshot back to.
FULL                  Export entire database (N).
HELP                  Display Help messages (N).
INCLUDE               Include specific object types, e.g. INCLUDE=TABLE_DATA.
JOB_NAME              Name of export job to create.
LOGFILE               Log file name (export.log).
NETWORK_LINK          Name of remote database link to the source system.
NOLOGFILE             Do not write log file (N).
PARALLEL              Change the number of active workers for current job.
PARFILE               Specify parameter file.
QUERY                 Predicate clause used to export a subset of a table.
SCHEMAS               List of schemas to export (login schema).
TABLES                Identifies a list of tables to export - one schema only.
TRANSPORT_TABLESPACES List of tablespaces from which metadata will be unloaded.
VERSION               Version of objects to export: (COMPATIBLE), LATEST, or any valid database version.
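A typical schema export using several of these parameters (all names and sizes are illustrative):

expdp system/password directory=DATA_PUMP_DIR dumpfile=orahow_%U.dmp logfile=orahow_exp.log schemas=ORAHOW parallel=2 filesize=5G exclude=STATISTICS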

Q11: You are getting an undo tablespace error during import; how will you avoid it?
Ans: We can use the COMMIT=Y option, which commits after each array insert instead of at the end of each table, so less undo is held at any one time. (Note that COMMIT=Y is a parameter of the original imp utility, not of impdp.)
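A minimal sketch with the legacy imp utility (file names and buffer size are illustrative):

imp system/password file=orahow.dmp log=orahow_imp.log fromuser=ORAHOW touser=ORAHOW commit=y buffer=1048576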

Q12: Can we import an 11g dump file into a 10g database using Data Pump?
Ans: Yes. Take the export from the 11g database with the VERSION parameter set to the target release (for example VERSION=10.2); the resulting dump file can then be imported into the 10g database.
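For example (directory, file, and schema names are illustrative):

On the 11g source:
expdp system/password directory=DATA_PUMP_DIR dumpfile=orahow_v102.dmp logfile=orahow_v102.log schemas=ORAHOW version=10.2

On the 10g target:
impdp system/password directory=DATA_PUMP_DIR dumpfile=orahow_v102.dmp logfile=orahow_v102_imp.log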