Data Pump Export Best Practices
Best practices and recommended parameter settings for Oracle Data Pump export operations.
General Recommendations
- Run the Data Pump export in schema mode (the recommended mode).
- Use a parameter file (PARFILE=xxx.par).
- Do not run the export as SYS AS SYSDBA.
Data Pump Patching
Apply the latest Data Pump Bundle Patch.
- Data Pump Recommended Proactive Patches For 19.10 and Above (Doc ID 2819284.1)
- The patch can be applied online, but no active Data Pump jobs should be running when executing datapatch.
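To confirm that no Data Pump jobs are active before running datapatch, the jobs can be checked via the DBA_DATAPUMP_JOBS view; a minimal sketch (the column selection is illustrative):

```sql
-- List Data Pump jobs that are still executing;
-- the result should be empty before running datapatch.
select owner_name, job_name, operation, state
from   dba_datapump_jobs
where  state = 'EXECUTING';
```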
Consistency
By default, consistency is maintained only per table: each table is exported as of a single point in time, but not relative to other tables.
Make sure the undo tablespace is large enough, because a consistent export requires undo blocks.
To export the database with a consistent timestamp:
FLASHBACK_TIME=systimestamp
To export the database with a consistent SCN:
FLASHBACK_SCN=nnn
Legacy Mode:
CONSISTENT=y
If the source database has Data Guard enabled, the standby can be converted to a snapshot standby and used for the consistent export.
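With the Data Guard broker, the conversion can be sketched as follows (the standby database name `stby1` is an illustrative assumption):

```
DGMGRL> CONVERT DATABASE 'stby1' TO SNAPSHOT STANDBY;

DGMGRL> CONVERT DATABASE 'stby1' TO PHYSICAL STANDBY;
```

Run the consistent export against the snapshot standby between the two commands; converting back to a physical standby discards the changes and resumes redo apply.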
Diagnostics
Always specify the LOGTIME=ALL and METRICS=YES parameters to log additional timing and worker information and to show the access method per table in the logfile.
Here are some example logfiles with different diagnostics settings:
- Logfile Example (Default Settings):
Processing object type SCHEMA_EXPORT/TABLESPACE_QUOTA
Processing object type SCHEMA_EXPORT/PRE_SCHEMA/PROCACT_SCHEMA
Processing object type SCHEMA_EXPORT/TABLE/TABLE
. . exported "SCHEMA1"."TABLE1" 5.117 KB 3 rows
- Logfile Example (LOGTIME=ALL):
12-APR-23 12:46:40.660: Processing object type SCHEMA_EXPORT/TABLESPACE_QUOTA
12-APR-23 12:46:40.828: Processing object type SCHEMA_EXPORT/PRE_SCHEMA/PROCACT_SCHEMA
12-APR-23 12:46:42.352: Processing object type SCHEMA_EXPORT/TABLE/TABLE
12-APR-23 12:46:48.775: . . exported "SCHEMA1"."TABLE1" 5.117 KB 3 rows
- Logfile Example (METRICS=YES):
W-1 Startup took 0 seconds
W-2 Startup took 0 seconds
W-1 Processing object type SCHEMA_EXPORT/TABLESPACE_QUOTA
W-1 Completed 1 TABLESPACE_QUOTA objects in 0 seconds
W-1 Processing object type SCHEMA_EXPORT/PRE_SCHEMA/PROCACT_SCHEMA
W-1 Completed 1 PROCACT_SCHEMA objects in 0 seconds
W-1 Processing object type SCHEMA_EXPORT/TABLE/TABLE
W-1 Completed 1 TABLE objects in 0 seconds
W-1 . . exported "SCHEMA1"."TABLE1" 5.117 KB 3 rows in 0 seconds using direct_path
- Logfile Example (LOGTIME=ALL + METRICS=YES):
12-APR-23 11:09:49.519: W-1 Startup took 0 seconds
12-APR-23 11:09:50.883: W-2 Startup took 0 seconds
12-APR-23 11:09:52.593: W-2 Processing object type SCHEMA_EXPORT/TABLESPACE_QUOTA
12-APR-23 11:09:52.594: W-2 Completed 1 TABLESPACE_QUOTA objects in 0 seconds
12-APR-23 11:09:52.734: W-2 Processing object type SCHEMA_EXPORT/PRE_SCHEMA/PROCACT_SCHEMA
12-APR-23 11:09:52.740: W-2 Completed 1 PROCACT_SCHEMA objects in 0 seconds
12-APR-23 11:09:55.705: W-1 Processing object type SCHEMA_EXPORT/TABLE/TABLE
12-APR-23 11:09:55.766: W-1 Completed 1 TABLE objects in 1 seconds
12-APR-23 11:10:01.275: W-1 . . exported "SCHEMA1"."TABLE1" 5.117 KB 3 rows in 0 seconds using direct_path
Source DB Settings
STREAMS_POOL_SIZE
Data Pump uses Advanced Queuing, so STREAMS_POOL_SIZE must be set to a minimum of 256 MB.
- Check the current size:
  select current_size/1024/1024 as current_size_mb
  from v$sga_dynamic_components
  where component = 'streams pool';
- Set STREAMS_POOL_SIZE to a minimum of 256 MB:
  alter system set streams_pool_size=256m scope=both;
Dictionary Statistics
Make sure the dictionary statistics (SYS, SYSTEM) are current before running the export:
exec dbms_stats.gather_schema_stats ('SYS');
exec dbms_stats.gather_schema_stats ('SYSTEM');
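Alternatively, DBMS_STATS provides a dedicated procedure that covers all dictionary schemas in one call; a minimal sketch:

```sql
-- Gather statistics for the dictionary schemas (SYS, SYSTEM, ...)
exec dbms_stats.gather_dictionary_stats;
```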
Recommended Parameters
Overview of the most important and relevant Data Pump export parameters.
SCHEMAS=schema_list
Schema export is the recommended method to run a Data Pump export; the schemas to export are specified with this parameter.
Example:
expdp ... schemas=schema1,schema2,schema3
LOGTIME=ALL
Always specify this diagnostics parameter to log additional timing information.
METRICS=YES
Always specify this diagnostics parameter to show additional worker information and the access method per table in the logfile.
PARALLEL=n
This parameter is only valid for Enterprise Edition.
Use parallel processes to speed up the export operation. The number of processes depends on the actual number of CPUs on the source system.
Degree of parallelism:

| Platform | Degree |
|---|---|
| Cloud OCPU | 1 x OCPU count |
| Cloud ECPU | 0.25 x ECPU count |
| On-prem x86 | 2 x CPU core count |
| On-prem (other) | Depends |
DUMPFILE=name%L.dmp
For optimal performance, use multiple dump files for the Data Pump export. The %L substitution variable appends a number, based on the parallel setting, to the file name.
- Legacy/old format: dumpfile=name%U.dmp
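A sketch combining PARALLEL with the %L dump file template (schema, connect string, and directory are illustrative assumptions); with parallel=4, this typically produces dump01.dmp through dump04.dmp:

```
expdp system@pdb1 schemas=schema1 directory=DATA_PUMP_DIR \
      dumpfile=dump%L.dmp parallel=4 logfile=expdp.log
```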
EXCLUDE=STATISTICS
Exclude optimizer statistics from the export. After the import, gather the statistics on the target database or transport them from the source database with DBMS_STATS.
- Regular export: exclude=statistics
- Transportable tablespaces: exclude=table_statistics,index_statistics
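Transporting the statistics with DBMS_STATS can be sketched as follows (the schema name SCHEMA1 and staging table name STATS_STAGE are illustrative assumptions):

```sql
-- On the source: stage the schema statistics in a regular table
exec dbms_stats.create_stat_table(ownname => 'SCHEMA1', stattab => 'STATS_STAGE');
exec dbms_stats.export_schema_stats(ownname => 'SCHEMA1', stattab => 'STATS_STAGE');

-- Move the STATS_STAGE table to the target (e.g. with Data Pump), then:
exec dbms_stats.import_schema_stats(ownname => 'SCHEMA1', stattab => 'STATS_STAGE');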
Optional Parameters
This is a list of optional Data Pump parameters that can be set during export:
FILESIZE=nG
Limits the size of the individual export dump files.
Example:
expdp ... dumpfile=mydump%L.dmp filesize=5G
Checksum
This feature, available starting with Oracle Database 21c, calculates a checksum during the export operation.
The checksum is calculated at the end of the export job; this can take some time depending on the size of the export files.
Two parameters can be set: CHECKSUM and CHECKSUM_ALGORITHM. CHECKSUM defaults to NO but is changed to YES automatically if CHECKSUM_ALGORITHM is specified. If CHECKSUM_ALGORITHM is not specified, it defaults to SHA256.
Examples:
expdp ... checksum=yes checksum_algorithm=sha256
expdp ... checksum_algorithm=sha384
expdp ... checksum=yes
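On the import side, the checksums can be validated; a hedged sketch using the 21c impdp parameters VERIFY_CHECKSUM and VERIFY_ONLY:

```
impdp ... verify_checksum=yes

impdp ... verify_only=yes
```

VERIFY_CHECKSUM validates the dump files during a normal import, while VERIFY_ONLY checks the dump files without importing anything.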
Compression
Use compression if you have the Advanced Compression option licensed.
The recommended compression algorithm is MEDIUM.
Example:
expdp ... compression=all compression_algorithm=medium
By default, the COMPRESSION parameter is set to METADATA_ONLY, which does not require an Advanced Compression license because only metadata information is compressed.
Setting the parameter to ALL or DATA_ONLY requires the extra license because the actual table data is compressed.
Valid Options:
COMPRESSION=[ALL | DATA_ONLY | METADATA_ONLY | NONE]
Multitenant
To avoid noisy neighbours when resources are insufficient, set the following parameters:
- Restrict the number of concurrent Data Pump jobs for a PDB:
  alter system set max_datapump_jobs_per_pdb=n;
- Restrict the parallel degree setting for a Data Pump job:
  alter system set max_datapump_parallel_per_job=n container=all;
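The current values of these parameters can be checked via V$PARAMETER; a minimal sketch:

```sql
select name, value
from   v$parameter
where  name like 'max_datapump%';
```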
Examples
Command:
expdp userid=system@pdb1 \
schemas=SCHEMA1 \
directory=DATA_PUMP_DIR \
dumpfile=dump%L.dmp \
logfile=expdp.log \
filesize=1G \
flashback_time=systimestamp \
parallel=2 \
exclude=statistics \
logtime=all \
metrics=yes
Parameterfile:
userid=system@pdb1
schemas=SCHEMA1
directory=DATA_PUMP_DIR
dumpfile=dump%L.dmp
logfile=expdp.log
filesize=1G
flashback_time=systimestamp
parallel=2
exclude=statistics
logtime=all
metrics=yes
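The parameter file variant is then run by pointing expdp at the file; a sketch assuming the parameters above were saved as export.par (the file name is illustrative):

```
expdp parfile=export.par
```

Since the userid is inside the parameter file, expdp will prompt for the password interactively.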