Which memory area is used by the Data Pump process?
The Streams pool, sized by STREAMS_POOL_SIZE.
If STREAMS_POOL_SIZE is zero, the Data Pump job can fail with a memory error. Set it to a minimum of 96M. Check the current value with: show parameter STREAMS_POOL_SIZE
How to export only the DDL/metadata of a table?
Use the CONTENT=METADATA_ONLY parameter during export.
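For example (directory name, credentials, and table name are illustrative):
expdp system/password DIRECTORY=dpump_dir DUMPFILE=emp_ddl.dmp LOGFILE=emp_ddl.log TABLES=scott.emp CONTENT=METADATA_ONLY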
How to improve datapump performance so that export/import happens faster?
Allocate memory to STREAMS_POOL_SIZE. The query below can help in recommending a value:
select 'ALTER SYSTEM SET STREAMS_POOL_SIZE='||(max(to_number(trim(c.ksppstvl)))+67108864)||' SCOPE=SPFILE;'
from sys.x$ksppi a, sys.x$ksppcv b, sys.x$ksppsv c
where a.indx = b.indx and a.indx = c.indx and lower(a.ksppinm) in ('__streams_pool_size','streams_pool_size');
ALTER SYSTEM SET STREAMS_POOL_SIZE=xxxxM SCOPE=SPFILE;
Use CLUSTER=N
Set PARALLEL_FORCE_LOCAL to TRUE
EXCLUDE=STATISTICS
Excluding the generation and export of statistics at export time shortens the export. Later, on the target, run DBMS_STATS.GATHER_DATABASE_STATS.
Use PARALLEL: if spare CPU is available and the system is not already CPU bound or disk I/O bound, additional worker processes speed up the job.
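Putting several of these options together, a sample full export could look like the following (directory, file names, and degree of parallelism are illustrative):
expdp system/password FULL=y DIRECTORY=dpump_dir DUMPFILE=full_%U.dmp LOGFILE=full_exp.log PARALLEL=4 EXCLUDE=STATISTICS CLUSTER=N
The %U wildcard lets each parallel worker write to its own dump file.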
How to monitor status of export/import – datapump operations/jobs?
The DBA_DATAPUMP_JOBS view helps in checking the status.
set linesize 200
set pagesize 200
col owner_name format a12
col job_name format a20
col operation format a12
col job_mode format a20
SELECT owner_name, job_name, operation, job_mode, state FROM dba_datapump_jobs where state='EXECUTING';
SELECT w.sid, w.event, w.seconds_in_wait from v$session s, DBA_DATAPUMP_SESSIONS d, V$SESSION_WAIT w WHERE s.saddr = d.saddr AND s.sid = w.sid;
SELECT OPNAME, SID, SERIAL#, CONTEXT, SOFAR, TOTALWORK, ROUND(SOFAR/TOTALWORK*100,2) "%_COMPLETE" FROM V$SESSION_LONGOPS WHERE OPNAME in (select d.job_name from v$session s, v$process p, dba_datapump_sessions d where p.addr=s.paddr and s.saddr=d.saddr) AND OPNAME NOT LIKE '%aggregate%' AND TOTALWORK != 0 AND SOFAR <> TOTALWORK;
How to stop/start/kill datapump jobs?
expdp \"/ as sysdba\" attach=job_name
export>status
export>stop_job
export>start_job
export>kill_job
We can also kill the job's sessions with the ALTER SYSTEM KILL SESSION command.
How will you take a consistent export backup? What is the use of FLASHBACK_SCN?
FLASHBACK_SCN makes the export consistent as of the specified SCN. Get the current SCN with:
select to_char(current_scn) from v$database;
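The returned SCN can then be passed to expdp so the whole export is consistent as of that point in time (directory, file names, and the SCN value are illustrative; substitute the SCN returned by the query):
expdp system/password DIRECTORY=dpump_dir DUMPFILE=scott_consistent.dmp LOGFILE=scott_consistent.log SCHEMAS=scott FLASHBACK_SCN=1234567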
How to drop constraints before import?
set feedback off;
spool /oradba/orahow/drop_constraints.sql;
select 'alter table SCHEMA_NAME.' || table_name || ' drop constraint ' || constraint_name || ' cascade;'
from dba_constraints where owner = 'SCHEMA_NAME' and not (constraint_type = 'C') order by table_name, constraint_name;
spool off;
exit;
If you get an undo tablespace error during import, how will you avoid it?
Use the COMMIT=Y option (original imp utility), so rows are committed in batches instead of once per table, keeping undo usage small.
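A sketch of such an import with the original imp utility (file and schema names are illustrative):
imp system/password FILE=scott.dmp FROMUSER=scott TOUSER=scott COMMIT=y BUFFER=20971520 LOG=scott_imp.log
COMMIT=Y commits after each buffer-full of rows, sized by the BUFFER parameter.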
Can we import an 11g dump file into a 10g database using Data Pump?
Yes. Take the export from the 11g database with the VERSION parameter set to the target release (for example VERSION=10.2); the resulting dump file can then be imported into the 10g database.
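A sketch of such an export taken on the 11g source (names are illustrative):
expdp system/password DIRECTORY=dpump_dir DUMPFILE=scott_10g.dmp LOGFILE=scott_10g.log SCHEMAS=scott VERSION=10.2
The resulting dump file is readable by the 10.2 impdp utility.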
If you don't have sufficient disk space on the database server, how will you take the export? Or, how do you export without a dump file?
You can use a network link (database link) for the export. Use NETWORK_LINK by following these simple steps:
Create a TNS entry for the remote database in your tnsnames.ora file
Test it with tnsping <tns_alias>
Create a database link to the remote database
Specify the database link as network_link in your expdp or impdp syntax
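A minimal sketch of these steps, assuming an illustrative TNS alias REMOTE_TNS and illustrative credentials:
CREATE DATABASE LINK remote_db CONNECT TO scott IDENTIFIED BY tiger USING 'REMOTE_TNS';
impdp system/password DIRECTORY=dpump_dir NETWORK_LINK=remote_db SCHEMAS=scott LOGFILE=scott_netimp.log
With impdp and NETWORK_LINK, the data travels over the database link and is loaded directly into the target, so no dump file is written at all.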
I exported a metadata/DDL-only dump file from production, but during import on the test machine it consumes a huge amount of space, and we may not have that much disk available. Why does a DDL-only import consume so much space?
Below is a snippet of the DDL of one table extracted from production. As you can see, during table creation Oracle allocates the INITIAL extent specified in the storage clause.
PCTFREE 10 PCTUSED 40 INITRANS 1 MAXTRANS 255
NOCOMPRESS LOGGING
STORAGE(INITIAL 1342177280 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1
BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT)
As you can see above, Oracle allocates 1342177280 bytes (1280 MB) for this one table up front, even though its row count is zero.
To avoid this, make sure the DEFERRED_SEGMENT_CREATION parameter is set to TRUE, so segments are not allocated until rows are actually inserted (it defaults to TRUE from 11.2 onward, but verify it has not been set to FALSE).
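To check and change it (a sketch; requires ALTER SYSTEM privilege, and it only affects tables created after the change):
show parameter deferred_segment_creation
alter system set deferred_segment_creation=true scope=both;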
What are some of the parameters you have used during export?
CONTENT: Specifies data to unload where the valid keywords are:
(ALL), DATA_ONLY, and METADATA_ONLY.
DIRECTORY Directory object to be used for dumpfiles and logfiles.
DUMPFILE List of destination dump files (expdat.dmp),
e.g. DUMPFILE=scott1.dmp, scott2.dmp, dmpdir:scott3.dmp.
ESTIMATE_ONLY Calculate job estimates without performing the export.
EXCLUDE Exclude specific object types, e.g. EXCLUDE=TABLE:EMP.
FILESIZE Specify the size of each dumpfile in units of bytes.
FLASHBACK_SCN SCN used to set session snapshot back to.
FULL Export entire database (N).
HELP Display Help messages (N).
INCLUDE Include specific object types, e.g. INCLUDE=TABLE_DATA.
JOB_NAME Name of export job to create.
LOGFILE Log file name (export.log).
NETWORK_LINK Name of remote database link to the source system.
NOLOGFILE Do not write logfile (N).
PARALLEL Change the number of active workers for current job.
PARFILE Specify parameter file.
QUERY Predicate clause used to export a subset of a table.
SCHEMAS List of schemas to export (login schema).
TABLES Identifies a list of tables to export – one schema only.
TRANSPORT_TABLESPACES List of tablespaces from which metadata will be unloaded.
VERSION Version of objects to export where valid keywords are:
(COMPATIBLE), LATEST, or any valid database version.
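A sample parameter file (call it scott_exp.par; all names and values are illustrative) combining several of these options:
DIRECTORY=dpump_dir
DUMPFILE=scott_%U.dmp
LOGFILE=scott_exp.log
SCHEMAS=scott
PARALLEL=2
EXCLUDE=STATISTICS
Run it with: expdp system/password PARFILE=scott_exp.par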
=======================================================================
Oracle DBA Interview Question and Answer - Export/Import
What is use of CONSISTENT option in exp?
When you export a table, you are guaranteed that the contents of that table will be consistent with the time that the export of that table was started. This means that if you start exporting the table at 12:00 and someone makes changes to the data in the table at 12:05 and your export of this table finishes at 12:10, then the export will not contain any of the changes made between 12:00 and 12:10. You cannot change this behavior with Oracle's export utility.
The CONSISTENT parameter controls whether or not the entire export is consistent, even between tables. If CONSISTENT=N (the default), then the export of a table will be consistent, but changes can occur between tables. If CONSISTENT=Y, then the entire dump file is consistent with the point in time that you started the export.
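For example, a full export consistent to the export start time (file names are illustrative):
exp system/password FULL=y CONSISTENT=y FILE=full.dmp LOG=full_exp.log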
What is use of DIRECT=Y option in exp?
Normally, export follows the path of a SELECT statement: data blocks are read from disk into the buffer cache, passed through the SQL evaluation layer, and then written to the dump file. When we use the direct path by specifying DIRECT=Y in the export command, Oracle bypasses the SQL evaluation layer and passes the rows straight to the export session, from where they are written to the dump file, which makes the export faster.
What is use of COMPRESS option in exp?
If we specify COMPRESS=y during export then at the time of table creation while importing, the INITIAL extent of the table would be as large as the sum of all the extents allocated to the table in the original database.
If we specify COMPRESS=n during export then while creating table in the import, it will use the same values of INITIAL extent as in the original database.
Now let's say I have a table of 100 MB. There have been some deletes and updates, and only 50 MB of actual data remains. I export the table with COMPRESS=Y and recreate it in another database. Oracle sums all the extents and assigns that total as the INITIAL extent while creating the table. There is only 50 MB of data in the table, but 100 MB is allocated up front. If you have limited space, this is not a very good option.
If I export with COMPRESS=N and then import the table, its INITIAL extent will be the same as the INITIAL extent in the original database, and new extents will be allocated as required. So my table in the new database would be approximately 50 MB in size.
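For example (table and file names are illustrative):
exp scott/tiger TABLES=emp FILE=emp.dmp COMPRESS=n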
Which are the common IMP/EXP problems?
ORA-00001: Unique constraint ... violated - Perhaps you are importing duplicate rows. Use IGNORE=N so that imp does not insert into tables that already exist (the CREATE fails and the table is skipped), or drop/truncate the table and re-import it if a table refresh is needed.
IMP-00015: Statement failed ... object already exists... - Use the IGNORE=Y import parameter to ignore these errors, but be careful as you might end up with duplicate rows.
ORA-01555: Snapshot too old - Ask your users to STOP working while you are exporting or use parameter CONSISTENT=NO (However this option could create possible referential problems, because the tables are not exported from one snapshot in time).
ORA-01562: Failed to extend rollback segment - Create bigger rollback segments or set parameter COMMIT=Y (with an appropriate BUFFER parameter) while importing.
What are the benefits of the Data Pump technology?
The older export/import technology was client-based. The Data Pump technology is purely server based. All dump, log, and other files are created on the server by default. Data Pump technology offers several benefits over the traditional export and import data utilities.
The following are the main benefits of the Data Pump technology:
Improved performance: The performance benefits are significant if you are transferring huge
amounts of data.
Ability to restart jobs: You can easily restart jobs that have stalled due to lack of space or have
failed for other reasons. You may also manually stop and restart jobs.
Parallel execution capabilities: By specifying a value for the PARALLEL parameter, you can choose the number of active execution threads for a Data Pump Export or Data Pump Import job.
Ability to attach to running jobs: You can attach to a running Data Pump job and interact with
it from a different screen or location. This enables you to monitor jobs, as well as to modify
certain parameters interactively. Data Pump is an integral part of the Oracle database server,
and as such, it doesn’t need a client to run once it starts a job.
Ability to estimate space requirements: You can easily estimate the space requirements for
your export jobs by using the default BLOCKS method or the STATISTICS method, before running
an actual export job (see the “Data Pump Export Parameters” section later in this chapter for
details).
Network mode of operation: Once you create database links between two databases, you can
perform exports from a remote database straight to a dump file set. You can also perform
direct imports via the network using database links, without using any dump files. The network
mode is a means of transferring data from one database directly into another database with
the help of database links and without the need to stage it on disk.
Fine-grained data import capability: Oracle9i offered only the QUERY parameter, which enabled
you to specify that the export utility extract a specified portion of a table’s rows. With Data Pump,
you have access to a vastly improved fine-grained options arsenal, thanks to new parameters
like INCLUDE and EXCLUDE.
Remapping capabilities: During a Data Pump import, you can remap schemas and tablespaces,
as well as filenames, by using the new REMAP_ * parameters. Remapping capabilities enable
you to modify objects during the process of importing data by changing old attributes to new
values. For example, the REMAP_SCHEMA parameter enables you to map all of user HR’s schema
to a new user, OE. The REMAP_SCHEMA parameter is similar to the TOUSER parameter in the old
import utility.
How to improve exp performance?
1. Set the BUFFER parameter to a high value. Default is 256KB.
2. Stop unnecessary applications to free the resources.
3. If you are running multiple sessions, make sure they write to different disks.
4. Do not export to an NFS (Network File System) mount. Exporting to local disk is faster.
5. Set the RECORDLENGTH parameter to a high value.
6. Use DIRECT=y (direct path export); see the sample command after this list.
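A minimal sketch combining a few of these tips (schema and file names are illustrative):
exp system/password OWNER=scott FILE=scott.dmp LOG=scott_exp.log DIRECT=y RECORDLENGTH=65535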
How to improve imp performance?
1. Place the file to be imported in separate disk from datafiles.
2. Increase the DB_CACHE_SIZE.
3. Set LOG_BUFFER to big size.
4. Stop redolog archiving, if possible.
5. Use COMMIT=n, if possible.
6. Set the BUFFER parameter to a high value. Default is 256KB.
7. It's advisable to drop indexes before importing (or set INDEXES=N) and build them after the import; indexes can easily be recreated once the data has been successfully imported.
8. Use STATISTICS=NONE
9. Disable the INSERT triggers, as they fire during import.
10. Set the parameter COMMIT_WRITE=NOWAIT (Oracle 10g) or COMMIT_WAIT=NOWAIT (Oracle 11g) during the import; a sample imp command follows this list.
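A sketch combining several of these tips (file and schema names are illustrative):
imp system/password FILE=scott.dmp LOG=scott_imp.log FROMUSER=scott TOUSER=scott COMMIT=n BUFFER=20971520 INDEXES=n STATISTICS=NONE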
What are the Data Pump export modes?
You can perform Data Pump Export jobs in several modes (sample commands for each mode follow the descriptions):
Full export mode: You use the FULL parameter when you want to export the entire database in
one export session. You need the EXP_FULL_DATABASE role to use this mode.
Schema mode: If you want to export a single user’s data and/or objects only, you must use the
SCHEMAS parameter.
Tablespace mode: By using the TABLESPACES parameter, you can export all the tables in one or
more tablespaces. If you use the TRANSPORT_TABLESPACES parameter, you can export just the
metadata of the objects contained in one or more tablespaces. You may recall that you can
export tablespaces between databases by first exporting the metadata, copying the files of the
tablespace to the target server, and then importing the metadata into the target database.
Table mode: By using the TABLES parameter, you can export one or more tables. The TABLES
parameter is identical to the TABLES parameter in the old export utility.
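Sample commands for each mode (directory, file, schema, and table names are illustrative):
expdp system/password FULL=y DIRECTORY=dpump_dir DUMPFILE=full.dmp
expdp system/password SCHEMAS=hr DIRECTORY=dpump_dir DUMPFILE=hr.dmp
expdp system/password TABLESPACES=users DIRECTORY=dpump_dir DUMPFILE=users_ts.dmp
expdp system/password TABLES=hr.employees,hr.departments DIRECTORY=dpump_dir DUMPFILE=hr_tabs.dmp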
What is COMPRESSION parameter in expdp?
The COMPRESSION parameter enables the user to specify which data to compress before writing the export data to a dump file. By default, all metadata is compressed before it’s written out to an export dump file. You can disable compression by specifying a value of NONE for the COMPRESSION parameter, as shown here:
$ expdp hr/hr DIRECTORY=dpump_dir1 DUMPFILE=hr_comp.dmp COMPRESSION=NONE
The COMPRESSION parameter can take any of the following four values:
ALL: Enables compression for the entire operation.
DATA_ONLY: Specifies that all data should be written to the dump file in a compressed format.
METADATA_ONLY: Specifies all metadata be written to the dump file in a compressed format.
This is the default value.
NONE: Disables compression of all types.
What are Export Filtering Parameters in expdp?
Data Pump contains several parameters related to export filtering. Some of them are substitutes for old export parameters; others offer new functionality.
CONTENT
By using the CONTENT parameter, you can filter what goes into the export dump file. The CONTENT
parameter can take three values:
• ALL exports both table data and table and other object definitions (metadata).
• DATA_ONLY exports only table rows.
• METADATA_ONLY exports only metadata.
EXCLUDE and INCLUDE
The EXCLUDE and INCLUDE parameters are two mutually exclusive parameters that you can use to perform what is known as metadata filtering. Metadata filtering enables you to selectively leave out or include certain types of objects during a Data Pump Export or Import job. In the old export utility, you used the CONSTRAINTS, GRANTS, and INDEXES parameters to specify whether you wanted to export those objects. Using the EXCLUDE and INCLUDE parameters, you now can include or exclude many other kinds of objects besides the four objects you could filter previously. For example, if you don’t wish to export any packages during the export, you can specify this with the help of the EXCLUDE parameter.
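For example, to skip all packages during a schema export (names are illustrative):
expdp system/password DIRECTORY=dpump_dir DUMPFILE=hr_nopkg.dmp SCHEMAS=hr EXCLUDE=PACKAGE
Conversely, an INCLUDE filter such as INCLUDE=TABLE:"IN ('EMPLOYEES','DEPARTMENTS')" exports only the named tables; it is easiest to put such filters in a parameter file to avoid operating-system quoting issues.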
QUERY
The QUERY parameter serves the same function as it does in the traditional export utility: it lets you selectively export table row data with the help of a SQL statement. The QUERY parameter permits you to qualify the SQL statement with a table name, so that it applies only to a particular table. Here’s an example:
QUERY=OE.ORDERS: "WHERE order_id > 100000"
In this example, only those rows in the orders table (owned by user OE) where the order_id is
greater than 100,000 are exported.
What is the NETWORK_LINK parameter and how does it work?
The Data Pump Export utility provides a way to initiate a network export. Using the NETWORK_LINK parameter, you can initiate an export job from your server and have Data Pump export data from a remote database to dump files located on the instance from which you initiate the Data Pump Export job.
Here’s an example that shows you how to perform a network export:
$ expdp hr/hr DIRECTORY=dpump_dir1 NETWORK_LINK=finance
DUMPFILE=network_export.dmp LOGFILE=network_export.log
In the example, the NETWORK_LINK parameter must have a valid database link as its value. This
means that you must have created the database link ahead of time. This example is exporting data from the finance database on the prod1 server.
Let’s say you have two databases, called local and remote. In order to use the NETWORK_LINK parameter and pass data directly over the network, follow these steps:
1. Create a database link to the remote database, which is named remote in this example:
SQL> CREATE DATABASE LINK remote
CONNECT TO scott IDENTIFIED BY tiger
USING 'remote.world';
2. If there isn’t one already, create a Data Pump directory object:
SQL> CREATE DIRECTORY remote_dir1 AS '/u01/app/oracle/dp_dir';
3. Set the new directory as your default directory, by exporting the directory value:
$ export DATA_PUMP_DIR=remote_dir1
4. Perform the network export from the database named remote:
$ expdp system/sammyy1 SCHEMAS=scott DUMPFILE=network.dmp NETWORK_LINK=remote
You’ll see that the Data Pump Export job will create the dump file network.dmp (in the directory location specified by remote_dir1) on the server hosting the database named local. However, the data within the dump file is extracted from the user scott’s schema in the remote database (named remote in our example). You can see that the NETWORK_LINK parameter carries the dump files over the network from a remote location to the local server. All you need is a database link from a database on the local server to the source database on the remote server.
What is use of INDEXFILE option in imp?
It writes the DDL of the objects in the dump file to the specified file instead of importing them.
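For example (file names are illustrative):
imp system/password FILE=scott.dmp FULL=y INDEXFILE=scott_indexes.sql
No rows are imported; scott_indexes.sql contains the index creation statements (with the table DDL commented out).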
What is use of IGNORE option in imp?
It ignores object-creation errors (such as "object already exists") during import and continues the import.
What are the differences between expdp and exp (Data Pump or normal exp/imp)?
Data Pump is server centric (files will be at server).
Data Pump has APIs, from procedures we can run Data Pump jobs.
In Data Pump, we can stop and restart the jobs.
Data Pump will do parallel execution.
Tapes & pipes are not supported in Data Pump.
Data Pump consumes more undo tablespace.
Data Pump import will create the user, if user doesn’t exist.
Why expdp is faster than exp (or) why Data Pump is faster than conventional export/import?
Data Pump is block mode, exp is byte mode.
Data Pump will do parallel execution.
Data Pump uses direct path API.
How to improve expdp performance?
Use the PARALLEL option, which increases the number of worker processes. Set it based on the number of CPUs.
How to improve impdp performance?
Use the PARALLEL option, which increases the number of worker processes. Set it based on the number of CPUs.
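A sketch for both directions (directory, file, and schema names are illustrative); the %U wildcard gives each worker its own dump file:
expdp system/password DIRECTORY=dpump_dir DUMPFILE=scott_%U.dmp SCHEMAS=scott PARALLEL=4
impdp system/password DIRECTORY=dpump_dir DUMPFILE=scott_%U.dmp SCHEMAS=scott PARALLEL=4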
In Data Pump, where the jobs info will be stored (or) if you restart a job in Data Pump, how it will know from where to resume?
Whenever a Data Pump export or import job is running, Oracle maintains a master table named after the JOB_NAME in the schema of the user running the job; it is dropped once the job completes. From this table, Oracle determines how much of the job has completed and where to resume.
Default export job name will be SYS_EXPORT_XXXX_01, where XXXX can be FULL or SCHEMA or TABLE.
Default import job name will be SYS_IMPORT_XXXX_01, where XXXX can be FULL or SCHEMA or TABLE.
What is the order of importing objects in impdp?
Tablespaces
Users
Roles
Database links
Sequences
Directories
Synonyms
Types
Tables/Partitions
Views
Comments
Packages/Procedures/Functions
Materialized views
How to import only metadata?
CONTENT= METADATA_ONLY
How to import into different user/tablespace/datafile/table?
REMAP_SCHEMA
REMAP_TABLESPACE
REMAP_DATAFILE
REMAP_TABLE
REMAP_DATA
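For example, to import the HR schema into a different schema and tablespace (names are illustrative):
impdp system/password DIRECTORY=dpump_dir DUMPFILE=hr.dmp REMAP_SCHEMA=hr:hr_test REMAP_TABLESPACE=users:users_test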
STEPS TO USE EXPDP TO EXPORT DATA FROM PHYSICAL STANDBY DATABASE:
https://www.sultandba.com/2020/06/use-expdp-export-data-physical-standby-database.html