
Thursday, October 22, 2020

OAM, OID and OIM:

 OAM: Oracle Access Manager:

OAM is the access manager, providing capabilities such as Single Sign-On (SSO), authentication, authorization, real-time session management, auditing, and policy administration.

There are three main consoles in Oracle Access Manager: the WebLogic Console (/console), the Enterprise Manager Console (/em), and the OAM Console (/oamconsole).

a) OAM Console : To register and manage system configurations, security elements, and policies.

b) WebLogic Console : Used by administrators to manage and configure the WebLogic Server (J2EE server) on which OAM is deployed.

c) FMW EM Console : Used by administrators to manage and configure the OAM application.


OID: Oracle Internet Directory:

OID is an LDAP implementation, comparable to Active Directory (AD). This is the directory where the users for OAM are stored.

Typical tasks are installing, configuring, and starting/stopping OID, creating users and groups in OID, installing Oracle WebLogic Server, installing and configuring IDM for the OID server, and verifying the WebLogic console and the ODSM console.

OIM: Oracle Identity Manager:

OIM handles life-cycle management of a user or any similar entity. We may use OIM to provision users, so it basically serves user management and reconciliation activities. With OIM, we can provision accounts to EBS or reconcile users from EBS into OIM.

When an employee joins your organization, you simply create the user in OIM and then push the necessary user/account definitions to EBS .. the FND_USER account, PERSON definitions, etc.

When does OIM come into the picture? Is it required for EBS?

Why does EBS require OID with OAM?

Reconciliation is for updating the Oracle Identity Manager. Provisioning is to put data from Oracle Identity Manager to another system.

Oracle Internet Directory (OID) : An LDAP directory server that stores its data in an Oracle database.

Oracle Identity Manager (OIM) : User provisioning product acquired from Thor; it also includes reconciliation and administration tools. Reconciliation is the process by which operations, such as user creation, modification, or deletion, started on the target system are communicated to Oracle Identity Manager. The reconciliation process compares the entries in the Oracle Identity Manager repository and the target system repository, determines the difference between the two, and applies the latest changes to the Oracle Identity Manager repository. In terms of data flow, provisioning provides the outward flow from the provisioning system (Oracle Identity Manager) using a push model, in which the provisioning system indicates the changes to be made to the target system.

https://www.learn-it-with-examples.com/middleware/oracle-iam/oiam-architecture/oracle-identity-management-architecture-overview.html



Friday, October 16, 2020

TNS-01201: Listener cannot find executable :

The following error was returned when starting the listener:

lsnrctl start LISTENER

TNS-01201: Listener cannot find executable

The fix was to comment out the static SID_LIST_LISTENER entry in the listener.ora file. The diff between the backed-up and the modified file is shown below:

diff listener.ora_10152020 listener.ora

43,52c34,43

< SID_LIST_LISTENER =

<    (SID_LIST =

<       (SID_DESC =

<         (GLOBAL_DBNAME = SID)

<           (ORACLE_HOME = /mnt/app/oracle/product/12.2.0/dbhome_1)

<            (SID_NAME = SID)

<       )

<      )

---

> ##SID_LIST_LISTENER =

> ###   (SID_LIST =

> ###      (SID_DESC =

> ###        (GLOBAL_DBNAME = SID)

> ##          (ORACLE_HOME = /mnt/app/oracle/product/12.2.0/dbhome_1)

> ###           (SID_NAME = SID)

> ###      )

> ###     )
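
After editing listener.ora, the listener can be started again and verified with the standard lsnrctl commands (same listener name as above):

lsnrctl start LISTENER

lsnrctl status LISTENER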

WebLogic password decryption:

source $DOMAIN_HOME/bin/setDomainEnv.sh

grep password $DOMAIN_HOME/servers/AdminServer/security/boot.properties | sed -e "s/^password=\(.*\)/\1/"

cd $DOMAIN_HOME/security

java weblogic.WLST decrypt_password.py

=====================================

cat decrypt_password.py

from weblogic.security.internal import *

from weblogic.security.internal.encryption import *

encryptionService = SerializedSystemIni.getEncryptionService(".")

clearOrEncryptService = ClearOrEncryptedService(encryptionService)

# Prompt the user for the encrypted password

pwd = raw_input("Paste encrypted password ({AES}fk9EK...): ")

# Delete unnecessary escape characters

preppwd = pwd.replace("\\", "")

# Display password

print "Decrypted string is: " + clearOrEncryptService.decrypt(preppwd)

====================

cd /u01/APPS/fs1/FMW_Home/user_projects/domains/EBS_domain/bin

$ . setDomainEnv.sh

grep password $DOMAIN_HOME/servers/AdminServer/security/boot.properties | sed -e "s/^password=\(.*\)/\1/"

{AES}Dmp67vNZSWnrVVIiM9mQTNT10RvVb+E25VettYNJfuzk=

$

cd $DOMAIN_HOME/security

$ java weblogic.WLST decrypt_password.py

Initializing WebLogic Scripting Tool (WLST) ...

Welcome to WebLogic Server Administration Scripting Shell

Type help() for help on available commands

Paste encrypted password ({AES}fk9EK...): {AES}Dmp67vNZSWnrVVIiM9mQTNT10RvVb+E25VettYNJfuzk=

Decrypted string is: ********** >>(Password)

$


==============

cd /u01/app/oracle/middleware/oracle_common/common/bin

./wlst.sh

wls:/offline> domain = "/u01/data/domains/DOMAIN_domain/"

wls:/offline> service = weblogic.security.internal.SerializedSystemIni.getEncryptionService(domain)

wls:/offline> encryption = weblogic.security.internal.encryption.ClearOrEncryptedService(service)

wls:/offline> print encryption.decrypt("{AES}shlOmkwqDdyL+0LW9KapN3gf/4R+Tj44yfNRt+pqWNJAYoJpXvBddeFqSFhPOxY1")

Thursday, October 15, 2020

Expdp/Impdp – Datapump Questions/Answers:

Which memory area is used by the Data Pump process?

The streams pool (STREAMS_POOL_SIZE).

If STREAMS_POOL_SIZE is zero, Data Pump jobs will fail with a memory error. Set it to a minimum of 96M. Check the current value with: show parameter STREAMS_POOL_SIZE

How to export only the DDL/metadata of a table?

Use the CONTENT=METADATA_ONLY parameter during export, as in the sketch below.
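
A minimal example (directory object, dump file, and table names are placeholders):

expdp system/password DIRECTORY=dp_dir DUMPFILE=emp_ddl.dmp TABLES=scott.emp CONTENT=METADATA_ONLY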

How to improve datapump performance so that export/import happens faster?

Allocate a sufficient value for the STREAMS_POOL_SIZE memory. The query below can help in recommending a value:

select 'ALTER SYSTEM SET STREAMS_POOL_SIZE='||(max(to_number(trim(c.ksppstvl)))+67108864)||' SCOPE=SPFILE;'

from sys.x$ksppi a, sys.x$ksppcv b, sys.x$ksppsv c

where a.indx = b.indx and a.indx = c.indx and lower(a.ksppinm) in ('__streams_pool_size','streams_pool_size');

ALTER SYSTEM SET STREAMS_POOL_SIZE=XXXXM SCOPE=SPFILE;

Use CLUSTER=N 

Set PARALLEL_FORCE_LOCAL to TRUE 

EXCLUDE=STATISTICS

Excluding the generation and export of statistics at export time will shorten the time needed to perform the export. Later, on the target, run DBMS_STATS.GATHER_DATABASE_STATS.

Use PARALLEL:

Only if spare CPU is available and the job is not CPU bound or disk I/O bound.

How to monitor status of export/import – datapump operations/jobs?

The DBA_DATAPUMP_JOBS view helps in knowing the status.

set linesize 200

set pagesize 200

col owner_name format a12

col job_name format a20

col operation format a12

col job_mode format a20

SELECT owner_name, job_name, operation, job_mode, state FROM dba_datapump_jobs where state='EXECUTING';

SELECT w.sid, w.event, w.seconds_in_wait from  v$session s, DBA_DATAPUMP_SESSIONS d, V$SESSION_WAIT w WHERE   s.saddr = d.saddr AND s.sid = w.sid;

SELECT OPNAME, SID, SERIAL#, CONTEXT, SOFAR, TOTALWORK, ROUND(SOFAR/TOTALWORK*100,2) "%_COMPLETE" FROM V$SESSION_LONGOPS WHERE OPNAME in (select d.job_name from v$session s, v$process p, dba_datapump_sessions d where p.addr=s.paddr and s.saddr=d.saddr) AND OPNAME NOT LIKE '%aggregate%' AND TOTALWORK != 0 AND SOFAR <> TOTALWORK;

How to stop/start/kill datapump jobs?

expdp / as sysdba attach=job_name

export>status

export>stop_job

export>start_job

export>kill_job

Jobs can also be killed with the ALTER SYSTEM KILL SESSION command.

How will you take a consistent export backup? What is the use of FLASHBACK_SCN?

Pass FLASHBACK_SCN (or FLASHBACK_TIME) to expdp so that every table is exported as of the same SCN. Get the current SCN with:

select to_char(current_scn) from v$database;
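
A minimal sketch of a consistent schema export (directory, dump file, schema, and SCN value are placeholders):

expdp system/password DIRECTORY=dp_dir DUMPFILE=scott_consistent.dmp SCHEMAS=scott FLASHBACK_SCN=1234567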

How to drop constraints before import?

set feedback off;

spool /oradba/orahow/drop_constraints.sql;

select 'alter table SCHEMA_NAME.' || table_name || ' drop constraint ' || constraint_name || ' cascade;'

from dba_constraints where owner = 'SCHEMA_NAME' and not (constraint_type = 'C') order by table_name, constraint_name;

spool off;

exit;

If you are getting an undo tablespace error during import, how will you avoid it?

Use the COMMIT=Y option, which makes imp commit after each array insert instead of once per table, keeping undo/rollback usage small.

Can we import an 11g dump file into a 10g database using Data Pump?

Yes, we can move data from 11g to 10g by taking the export with the VERSION parameter set to the target release, as in the sketch below.
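
A minimal example (schema, directory, and dump file names are placeholders):

expdp system/password DIRECTORY=dp_dir DUMPFILE=scott_v10.dmp SCHEMAS=scott VERSION=10.2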

If you don’t have sufficient disk space on the database server, how will you take the export? Or, how do you export without a dump file?

You can use network link/ DB Link for export.  You can use network_link by following these simple steps:

Create a TNS entry for the remote database in your tnsnames.ora file

Test with tnsping sid

Create a database link to the remote database

Specify the database link as NETWORK_LINK in your expdp or impdp syntax (see the sketch after this list)
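
Putting the steps together, a minimal sketch (link name, TNS alias, and credentials are placeholders); impdp with NETWORK_LINK pulls the data over the link without writing a dump file at all:

SQL> CREATE DATABASE LINK source_db CONNECT TO scott IDENTIFIED BY tiger USING 'SOURCEDB';

$ impdp system/password DIRECTORY=dp_dir NETWORK_LINK=source_db SCHEMAS=scott LOGFILE=net_imp.log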

I exported a metadata/DDL-only dump file from production, but during import on the test machine it consumes a huge amount of space, and we probably don’t have that much disk space available. Why would DDL alone consume so much space?

Below is a snippet of the DDL of one table extracted from production. As you can see, during table creation Oracle always allocates the INITIAL extent recorded in the DDL, as shown below.

PCTFREE 10 PCTUSED 40 INITRANS 1 MAXTRANS 255

NOCOMPRESS LOGGING

STORAGE(INITIAL 1342177280 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645

PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1

BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT)

As you can see above, Oracle allocates the full INITIAL extent shown in the DDL (1342177280 bytes, i.e. 1280 MB) for this table even if the row count is zero.

To avoid this, set the DEFERRED_SEGMENT_CREATION parameter to TRUE so that no segment is allocated until the first row is inserted; see the sketch below.
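
A minimal sketch of both options; the TRANSFORM clause is an impdp alternative not covered above that drops the inherited segment/storage attributes (names and files are placeholders):

SQL> ALTER SYSTEM SET deferred_segment_creation=TRUE SCOPE=BOTH;

$ impdp system/password DIRECTORY=dp_dir DUMPFILE=prod_ddl.dmp FULL=Y TRANSFORM=SEGMENT_ATTRIBUTES:N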

What are some of the parameters you have used during export?

CONTENT:         Specifies data to unload where the valid keywords are:

                             (ALL), DATA_ONLY, and METADATA_ONLY.

DIRECTORY       Directory object to be used for dumpfiles and logfiles.

DUMPFILE         List of destination dump files (expdat.dmp),

                             e.g. DUMPFILE=scott1.dmp, scott2.dmp, dmpdir:scott3.dmp.

ESTIMATE_ONLY         Calculate job estimates without performing the export.

EXCLUDE               Exclude specific object types, e.g. EXCLUDE=TABLE:EMP.

FILESIZE              Specify the size of each dumpfile in units of bytes.

FLASHBACK_SCN         SCN used to set session snapshot back to.

FULL                  Export entire database (N).

HELP                  Display Help messages (N).

INCLUDE               Include specific object types, e.g. INCLUDE=TABLE_DATA.

JOB_NAME              Name of export job to create.

LOGFILE               Log file name (export.log).

NETWORK_LINK          Name of remote database link to the source system.

NOLOGFILE             Do not write logfile (N).

PARALLEL              Change the number of active workers for current job.

PARFILE               Specify parameter file.

QUERY                 Predicate clause used to export a subset of a table.

SCHEMAS               List of schemas to export (login schema).

TABLES                Identifies a list of tables to export – one schema only.

TRANSPORT_TABLESPACES List of tablespaces from which metadata will be unloaded.

VERSION               Version of objects to export where valid keywords are:

                      (COMPATIBLE), LATEST, or any valid database version.

=======================================================================

Oracle DBA Interview Question and Answer - Export/Import

What is use of CONSISTENT option in exp?


When you export a table, you are guaranteed that the contents of that table will be consistent with the time that the export of that table was started. This means that if you start exporting the table at 12:00 and someone makes changes to the data in the table at 12:05 and your export of this table finishes at 12:10, then the export will not contain any of the changes made between 12:00 and 12:10. You cannot change this behavior with Oracle's export utility.


The CONSISTENT parameter controls whether or not the entire export is consistent, even between tables. If CONSISTENT=N (the default), then the export of a table will be consistent, but changes can occur between tables. If CONSISTENT=Y, then the entire dump file is consistent with the point in time that you started the export.
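
For example, a legacy export taken with full consistency might look like this (file and owner names are placeholders):

exp scott/tiger FILE=scott_full.dmp OWNER=scott CONSISTENT=Y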


What is use of DIRECT=Y option in exp?


Normally, export follows the path of a SELECT statement: data from disk is read into the buffer cache and then written to the dump file. When we use direct path by specifying DIRECT=Y in the export command, Oracle copies data directly from disk into the PGA, bypassing the buffer cache, and writes it from there to the dump file.


What is use of COMPRESS option in exp?


If we specify COMPRESS=y during export then at the time of table creation while importing, the INITIAL extent of the table would be as large as the sum of all the extents allocated to the table in the original database.


If we specify COMPRESS=n during export then while creating table in the import, it will use the same values of INITIAL extent as in the original database.


Now let's say I have a table of 100 MB. There have been some deletes and updates and only 50 MB of actual data remains. I export the table with COMPRESS=y and recreate it in some other database. It will sum all the extents and assign that as the INITIAL extent while creating the table. There is only 50 MB of data in the table, but it has already allocated 100 MB. If you have limited space, this is not a very good option.


If I do with COMPRESS=N and then import the table, its INITIAL extent will be as large as INITIAL extent in the original database and then as required, new extents will be allocated. So now my table in the new database would be approximately 50 MB in size.


Which are the common IMP/EXP problems?


ORA-00001: Unique constraint ... violated - Perhaps you are importing duplicate rows. Use IGNORE=N to skip tables that already exist (imp will give an error if the object is re-created), or drop/truncate the table and re-import it if you need to do a table refresh.

IMP-00015: Statement failed ... object already exists... - Use the IGNORE=Y import parameter to ignore these errors, but be careful as you might end up with duplicate rows.

ORA-01555: Snapshot too old - Ask your users to STOP working while you are exporting or use parameter CONSISTENT=NO (However this option could create possible referential problems, because the tables are not exported from one snapshot in time).

ORA-01562: Failed to extend rollback segment - Create bigger rollback segments or set parameter COMMIT=Y (with an appropriate BUFFER parameter) while importing.


What is the Benefits of the Data Pump Technology?


The older export/import technology was client-based. The Data Pump technology is purely server-based. All dump, log, and other files are created on the server by default. Data Pump technology offers several benefits over the traditional export and import data utilities.


The following are the main benefits of the Data Pump technology:


Improved performance: The performance benefits are significant if you are transferring huge amounts of data.


Ability to restart jobs: You can easily restart jobs that have stalled due to lack of space or have failed for other reasons. You may also manually stop and restart jobs.


Parallel execution capabilities: By specifying a value for the PARALLEL parameter, you can choose the number of active execution threads for a Data Pump Export or Data Pump Import job.


Ability to attach to running jobs: You can attach to a running Data Pump job and interact with it from a different screen or location. This enables you to monitor jobs, as well as to modify certain parameters interactively. Data Pump is an integral part of the Oracle database server, and as such, it doesn’t need a client to run once it starts a job.


Ability to estimate space requirements: You can easily estimate the space requirements for your export jobs by using the default BLOCKS method or the ESTIMATES method, before running an actual export job (see the “Data Pump Export Parameters” section later in this chapter for details).


Network mode of operation: Once you create database links between two databases, you can perform exports from a remote database straight to a dump file set. You can also perform direct imports via the network using database links, without using any dump files. The network mode is a means of transferring data from one database directly into another database with the help of database links and without the need to stage it on disk.


Fine-grained data import capability: Oracle9i offered only the QUERY parameter, which enabled you to specify that the export utility extract a specified portion of a table’s rows. With Data Pump, you have access to a vastly improved fine-grained options arsenal, thanks to new parameters like INCLUDE and EXCLUDE.


Remapping capabilities: During a Data Pump import, you can remap schemas and tablespaces, as well as filenames, by using the new REMAP_* parameters. Remapping capabilities enable you to modify objects during the process of importing data by changing old attributes to new values. For example, the REMAP_SCHEMA parameter enables you to map all of user HR’s schema to a new user, OE. The REMAP_SCHEMA parameter is similar to the TOUSER parameter in the old import utility.


How to improve exp performance?


1. Set the BUFFER parameter to a high value. Default is 256KB.

2. Stop unnecessary applications to free the resources.

3. If you are running multiple sessions, make sure they write to different disks.

4. Do not export to NFS (Network File System). Exporting to local disk is faster.

5. Set the RECORDLENGTH parameter to a high value.

6. Use DIRECT=yes (direct mode export).


How to improve imp performance?


1. Place the file to be imported in separate disk from datafiles.

2. Increase the DB_CACHE_SIZE.

3. Set LOG_BUFFER to big size.

4. Stop redolog archiving, if possible.

5. Use COMMIT=n, if possible.

6. Set the BUFFER parameter to a high value. Default is 256KB.

7. It's advisable to drop indexes before importing to speed up the import process or set INDEXES=N and building indexes later on after the import. Indexes can easily be recreated after the data was successfully imported.

8. Use STATISTICS=NONE

9. Disable the INSERT triggers, as they fire during import.

10. Set Parameter COMMIT_WRITE=NOWAIT(in Oracle 10g) or COMMIT_WAIT=NOWAIT (in Oracle 11g) during import.


What are the datapump export modes?


you can perform Data Pump Export jobs in several modes:


Full export mode: You use the FULL parameter when you want to export the entire database in one export session. You need the EXPORT_FULL_DATABASE role to use this mode.


Schema mode: If you want to export a single user’s data and/or objects only, you must use the SCHEMAS parameter.


Tablespace mode: By using the TABLESPACES parameter, you can export all the tables in one or more tablespaces. If you use the TRANSPORT_TABLESPACES parameter, you can export just the metadata of the objects contained in one or more tablespaces. You may recall that you can export tablespaces between databases by first exporting the metadata, copying the files of the tablespace to the target server, and then importing the metadata into the target database.


Table mode: By using the TABLES parameter, you can export one or more tables. The TABLES parameter is identical to the TABLES parameter in the old export utility.


What is COMPRESSION parameter in expdp?


The COMPRESSION parameter enables the user to specify which data to compress before writing the export data to a dump file. By default, all metadata is compressed before it’s written out to an export dump file. You can disable compression by specifying a value of NONE for the COMPRESSION parameter, as shown here:


$ expdp hr/hr DIRECTORY=dpump_dir1 DUMPFILE=hr_comp.dmp COMPRESSION=NONE


The COMPRESSION parameter can take any of the following four values:


ALL: Enables compression for the entire operation.


DATA_ONLY: Specifies that all data should be written to the dump file in a compressed format.


METADATA_ONLY: Specifies all metadata be written to the dump file in a compressed format. This is the default value.


NONE: Disables compression of all types.


What are Export Filtering Parameters in expdp?


Data Pump contains several parameters related to export filtering. Some of them are substitutes for old export parameters; others offer new functionality.


CONTENT


By using the CONTENT parameter, you can filter what goes into the export dump file. The CONTENT parameter can take three values:


• ALL exports both table data and table and other object definitions (metadata).

• DATA_ONLY exports only table rows.

• METADATA_ONLY exports only metadata.


EXCLUDE and INCLUDE


The EXCLUDE and INCLUDE parameters are two mutually exclusive parameters that you can use to perform what is known as metadata filtering. Metadata filtering enables you to selectively leave out or include certain types of objects during a Data Pump Export or Import job. In the old export utility, you used the CONSTRAINTS, GRANTS, and INDEXES parameters to specify whether you wanted to export those objects. Using the EXCLUDE and INCLUDE parameters, you now can include or exclude many other kinds of objects besides the four objects you could filter previously. For example, if you don’t wish to export any packages during the export, you can specify this with the help of the EXCLUDE parameter.
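
For instance, a schema export that skips all packages might look like the sketch below (directory and dump file names follow the earlier examples and are only illustrative):

$ expdp hr/hr DIRECTORY=dpump_dir1 DUMPFILE=hr_nopkg.dmp SCHEMAS=hr EXCLUDE=PACKAGE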


QUERY


The QUERY parameter serves the same function as it does in the traditional export utility: it lets you selectively export table row data with the help of a SQL statement. The QUERY parameter permits you to qualify the SQL statement with a table name, so that it applies only to a particular table. Here’s an example:


QUERY=OE.ORDERS: "WHERE order_id > 100000"


In this example, only those rows in the orders table (owned by user OE) where the order_id is greater than 100,000 are exported.


What is Network Link Parameter and how it works?


The Data Pump Export utility provides a way to initiate a network export. Using the NETWORK_LINK parameter, you can initiate an export job from your server and have Data Pump export data from a remote database to dump files located on the instance from which you initiate the Data Pump Export job.


Here’s an example that shows you how to perform a network export:


$ expdp hr/hr DIRECTORY=dpump_dir1 NETWORK_LINK=finance

DUMPFILE=network_export.dmp LOGFILE=network_export.log


In the example, the NETWORK_LINK parameter must have a valid database link as its value. This means that you must have created the database link ahead of time. This example is exporting data from the finance database on the prod1 server.


Let’s say you have two databases, called local and remote. In order to use the NETWORK_LINK parameter and pass data directly over the network, follow these steps:

1. Create a database link to the remote database, which is named remote in this example:

SQL> CREATE DATABASE LINK remote

 CONNECT TO scott IDENTIFIED BY tiger

 USING 'remote.world';


2. If there isn’t one already, create a Data Pump directory object:


SQL> CREATE DIRECTORY remote_dir1 AS '/u01/app/oracle/dp_dir';


3. Set the new directory as your default directory, by exporting the directory value:


$ export DATA_PUMP_DIR=remote_dir1


4. Perform the network export, pulling the data from the database named remote over the link created in step 1:

$ expdp system/sammyy1 SCHEMAS=SCOTT DUMPFILE=network.dmp NETWORK_LINK=remote


You’ll see that the Data Pump Export job will create the dump file network.dmp (in the directory location specified by remote_dir1) on the server hosting the database named local. However, the data within the dump file is extracted from the user scott’s schema in the remote database (named remote in our example). You can see that the NETWORK_LINK parameter carries the data over the network from the remote location to the local server, where the dump file is written. All you need is a database link from a database on the local server to the source database on the remote server.


What is use of INDEXFILE option in imp?


Will write DDLs of the objects in the dumpfile into the specified file.


What is use of IGNORE option in imp?


Will ignore the errors during import and will continue the import.


What are the differences between expdp and exp (Data Pump or normal exp/imp)?


Data Pump is server centric (files will be at server).

Data Pump has APIs, from procedures we can run Data Pump jobs.

In Data Pump, we can stop and restart the jobs.

Data Pump will do parallel execution.

Tapes & pipes are not supported in Data Pump.

Data Pump consumes more undo tablespace.

Data Pump import will create the user, if user doesn’t exist.


Why expdp is faster than exp (or) why Data Pump is faster than conventional export/import?


Data Pump is block mode, exp is byte mode.

Data Pump will do parallel execution.

Data Pump uses direct path API.


How to improve expdp performance?


Using parallel option which increases worker threads. This should be set based on the number of cpus.


How to improve impdp performance?


Using parallel option which increases worker threads. This should be set based on the number of cpus.


In Data Pump, where the jobs info will be stored (or) if you restart a job in Data Pump, how it will know from where to resume?


Whenever a Data Pump export or import is running, Oracle creates a master table named after the JOB_NAME in the schema of the user running the job; it is dropped once the job completes. From this table, Oracle finds out how much of the job has completed and from where to continue.

Default export job name will be SYS_EXPORT_XXXX_01, where XXXX can be FULL or SCHEMA or TABLE.

Default import job name will be SYS_IMPORT_XXXX_01, where XXXX can be FULL or SCHEMA or TABLE.


What is the order of importing objects in impdp?


 Tablespaces

 Users

 Roles

 Database links

 Sequences

 Directories

 Synonyms

 Types

 Tables/Partitions

 Views

 Comments

 Packages/Procedures/Functions

 Materialized views


How to import only metadata?


CONTENT= METADATA_ONLY


How to import into different user/tablespace/datafile/table?

REMAP_SCHEMA

REMAP_TABLESPACE

REMAP_DATAFILE

REMAP_TABLE

REMAP_DATA
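
For example, a sketch that imports one schema into a different user and tablespace (all names are placeholders):

impdp system/password DIRECTORY=dp_dir DUMPFILE=scott.dmp REMAP_SCHEMA=scott:scott_dev REMAP_TABLESPACE=users:users_dev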

STEPS TO USE EXPDP TO EXPORT DATA FROM PHYSICAL STANDBY DATABASE:

https://www.sultandba.com/2020/06/use-expdp-export-data-physical-standby-database.html

Tuesday, October 13, 2020

RMAN feature in 12c:

 1.SYSBACKUP Privilege:

The SYSBACKUP privilege allows the DBA to perform RMAN backup commands without additional privileges. Prior to 12c, users needed SYSDBA privilege to backup the database.

rman target ' "/ as sysbackup" '

rman target ' "/ as sysdba" '

rman target /

rman target ' "bkpadm@PDB1 as sysbackup" '

rman target ' "sysadm@PDB1 as sysdba" '

rman target sysadm@PDB1
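
A minimal sketch of creating such a backup administrator (the user name bkpadm mirrors the connection examples above and is only illustrative):

SQL> CREATE USER bkpadm IDENTIFIED BY password;

SQL> GRANT SYSBACKUP TO bkpadm;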

2.SQL Interface Improvements:

SQL commands can be run directly in RMAN; DDL and DML commands work from the RMAN command prompt.

You can SHUTDOWN/STARTUP the database and also use ALTER commands.

RMAN DESCRIBE provides the same functionality as SQL*Plus DESCRIBE.

RMAN> desc dba_profiles;

RMAN> select sysdate from dual;

RMAN> create table table5(col1 number, col2 varchar2(20));

RMAN> insert into table5 values (1,'Test3');

RMAN> update table5 set col1=2;

RMAN> drop table table5;

RMAN> shutdown immediate

RMAN> startup mount

RMAN> alter database open;

3.Support for multitenant container and pluggable databases:

RMAN provides full support for backup and recovery for CDB and PDB.

With RMAN you can back up an entire container database or individual pluggable databases, and you can also perform point-in-time recovery. It is recommended that you turn on control file autobackup; otherwise point-in-time recovery for pluggable databases may not work efficiently when you need to undo data file additions or deletions.

When you back up a container database, RMAN backs up the root, the pluggable databases in the container, and the archive logs. While restoring, we can choose the whole container, one or more pluggable databases, or the root only.

Backup the CDB, PDB, and root:

You need the SYSBACKUP or SYSDBA privilege to back up any of these databases.

Back up the container database (CDB) the same way as a non-container database, using the command below:

RMAN> BACKUP DATABASE plus ARCHIVELOG;

Back up a pluggable database (PDB) using the command below:

RMAN> BACKUP PLUGGABLE DATABASE PDB1, PDB2;

Or connect to the pluggable database in RMAN:

% rman target sys@PDB1

RMAN> BACKUP DATABASE;

Back up the root using the command below:

RMAN> BACKUP DATABASE ROOT;

Restore the whole CDB using the script below; restoring the CDB will restore all the pluggable databases as well.

RMAN> RUN { 

     STARTUP MOUNT;

     RESTORE DATABASE;

     RECOVER DATABASE;

     ALTER DATABASE OPEN;

}

Restore only the root using the script below:

RMAN> RUN {

     STARTUP MOUNT;

     RESTORE DATABASE ROOT;

     RECOVER DATABASE ROOT;

     ALTER DATABASE OPEN;

}

Pluggable databases can be restored in two ways: either from the root container, or by connecting directly to the PDB.

Use the script below to restore from the root. With this approach you are able to restore and recover multiple PDBs with a single command.

RMAN > RUN {

     RESTORE PLUGGABLE DATABASE PDB1, PDB2;

     RECOVER PLUGGABLE DATABASE PDB1, PDB2;

     ALTER PLUGGABLE DATABASE PDB1, PDB2 OPEN;

}

Use the script below to connect to the PDB and restore and recover it. With this approach you can restore and recover only one PDB.

$ rman target=bkpadm@PDB1

     RMAN> run{

     RESTORE DATABASE;

     RECOVER DATABASE;

}

Performing a point-in-time recovery of a CDB or PDB is the same as for a normal database.

When you perform point-in-time recovery on the CDB, it affects all the PDBs as well.

When you perform point-in-time recovery on a PDB, it affects only that single database.

The command to perform a point-in-time recovery is:

SET UNTIL TIME "TO_DATE('10-Feb-2014 01:00:00','DD-MON-YYYY HH24:MI:SS')";

SET UNTIL SCN 19900005; # alternatively, specify SCN

SET UNTIL SEQUENCE 120; # alternatively, specify log seq

4.Backup of Archived redo logs:

We can back up or delete archive logs when connected to the root as a common user with the SYSDBA or SYSBACKUP privilege, but we cannot back up or delete archive logs when connected to a PDB as a local user with the SYSDBA or SYSBACKUP privilege.

We are only able to switch the archived logs when connected to the root of a CDB; we cannot switch archived redo logs when connected to a PDB.

If there is more than one archive log destination, RMAN backs up only one copy of the archived redo logs, because the multiple destinations hold the same log sequence numbers.
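
A typical archive log backup from the root, as a sketch (DELETE INPUT removes the logs once they are backed up):

RMAN> BACKUP ARCHIVELOG ALL DELETE INPUT;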

5.DUPLICATE enhancements:

When you duplicate a database using RMAN DUPLICATE, the database is created and opened in RESETLOGS mode. With Oracle Database 12c, you can specify with the NOOPEN clause that the database must not be opened.

The NOOPEN clause is useful in the following situations:

If you need to make changes to initialization parameters such as block change tracking or flashback database settings.

If opening the database would conflict with the source database.

If you plan to use the duplicate database for an upgrade and want to open it in upgrade mode.

The command below creates the duplicate database but does not open it:

RMAN> DUPLICATE TARGET DATABASE TO DB1 FROM ACTIVE DATABASE NOOPEN;

6.Multi section Backup Improvements:

This backup functionality was introduced in 11g to handle backups of large data files. RMAN can break up a large file into sections during backup and recovery, which improves the performance of large datafile backups. We select the size using the SECTION SIZE keyword, and each channel creates separate files within the backup set, backing up the database in parallel. In 11g this only supports backup sets.

In 12c, multisection backup also supports incremental backups and image copies, in addition to backup sets (introduced in 11g). This functionality can only be used for data files; we cannot use it to back up control files.

If the SECTION SIZE selected is larger than the actual file, RMAN will not use the multisection backup functionality. If you specify a SECTION SIZE so small that it would produce more than 256 sections, RMAN increases the SECTION SIZE to a value that results in 256 sections.

To Create Multisection backup image copies.

RMAN> BACKUP AS COPY SECTION SIZE 1024M DATABASE;

To Create multisection incremental level1 backup:

RMAN> BACKUP INCREMENTAL LEVEL 1 SECTION SIZE 1024M DATABASE;

To enhance the backup performance, use unused block compression and block change tracking in conjunction with multisection incremental backups.

7.Restoring and Recovering Files over Network:

RMAN can restore and recover a database, tablespace, datafile, controlfile or spfile over the network from a physical standby database. For the restore to happen over the network, the RESTORE ... FROM SERVICE and RECOVER ... FROM SERVICE commands are used.

The FROM SERVICE clause specifies the service name of the physical standby.

Multisection backups, compression, and encryption can be used to improve backup and restore performance.

This is very useful in scenarios involving a standby database:

Restore or recover files lost on one database (standby/primary) over the network, using the copy that is available on the other database.

Recover a standby database by using an incremental backup taken off the primary to roll the standby forward.

Example of restoring a data file over the network from a physical standby to the primary database:

Connect to Primary:

RMAN> CONNECT TARGET /

Encrypt the backup sets using the AES128 encryption algorithm:

RMAN> SET ENCRYPTION ALGORITHM 'AES128';

Restore datafile on the primary using service “standby_db”:

RMAN> RESTORE DATAFILE '/u01/oradata/datafile1.dbf' FROM SERVICE standby_db SECTION SIZE 1024M;

8.Active Database Duplication Improvements:

Active database duplication was introduced in Oracle 11g. This feature creates a clone or standby database by copying the data files and archive logs over the network, using the TARGET (source) database channels, to the clone AUXILIARY database. With this method no backups of the TARGET database are required, but the processing load falls on the TARGET instance because the TARGET database channels are being used.

In Oracle 12c, active database duplication can be performed using backup sets. Sufficient AUXILIARY channels are allocated to connect to the TARGET database and retrieve the backup sets over the network, which reduces the load on the TARGET (source) database. Unused block compression can be used to reduce the size of the backups transported over the network, and encrypted backups and multisection backups can also be used. Network consumption is reduced and performance improves in 12c.

For RMAN to use backup sets, we need to establish the connection to the target database using a net service name, and one of the conditions below should be satisfied:

(i) The number of AUXILIARY channels should be equal to or greater than the number of TARGET channels.

(ii) DUPLICATE ... FROM ACTIVE DATABASE should contain either USING BACKUPSET, USING COMPRESSED BACKUPSET, or SECTION SIZE.
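
A minimal sketch of a 12c duplicate that pulls compressed backup sets over the network (the database name dupdb is a placeholder):

RMAN> DUPLICATE TARGET DATABASE TO dupdb FROM ACTIVE DATABASE USING COMPRESSED BACKUPSET;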

9.Unified auditing and RMAN:

The UNIFIED_AUDIT_TRAIL dictionary view has a set of columns (prefixed with RMAN_) which automatically record RMAN-related events. The AUDIT_ADMIN or AUDIT_VIEWER role is required in order to query the UNIFIED_AUDIT_TRAIL view.
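
A quick query against those columns might look like this sketch (RMAN_ column names as documented for 12c unified auditing):

SQL> SELECT event_timestamp, dbusername, rman_operation, rman_object_type FROM unified_audit_trail WHERE rman_operation IS NOT NULL;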

10.Recovering Tables and Table Partitions from RMAN Backups:

A dropped or truncated table can now be recovered to a point back in time without affecting the rest of the objects in the database/tablespace.

RMAN 12c creates an auxiliary instance for the required files in a separate disk location, recovers the tables or table partitions up to the desired point in time, and then creates a Data Pump export dump containing the recovered objects.
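
A sketch of the command (table name, SCN, auxiliary destination, and remapped table name are placeholders):

RMAN> RECOVER TABLE scott.emp UNTIL SCN 1234567 AUXILIARY DESTINATION '/u01/aux' REMAP TABLE 'SCOTT'.'EMP':'EMP_RECOVERED';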

11.Storage Snapshot Optimization:

With this feature we can take a storage snapshot of the database using third-party technologies without keeping the database in BACKUP mode. When we need to recover, we can recover to the point in time of the snapshot, or roll forward from the snapshot using the database archive logs to recover part or all of the database.

It is recommended to keep the snapshot on different storage from where the database is running.

The following are required to back up a database using storage snapshot optimization:

(i)The snapshot preserves the write order for each file.

(ii)The database is crash consistent during the snapshot.

(iii)The snapshot technology stores the time at which the snapshot is completed.

If the third-party snapshot technology does not meet the above requirements, the database must be placed in BACKUP mode to take the snapshot.

If operational changes such as ONLINE, OFFLINE, DROP, RENAME, ADD, READ ONLY, or SHRINK are made to data files or tablespaces during the snapshot, the snapshot is unusable.

Recovery time cannot be earlier than the snapshot time.

To recover complete database:

RECOVER DATABASE;

To recover the database using a particular snapshot:

RECOVER DATABASE UNTIL TIME '11/10/2019 12:00:00' SNAPSHOT TIME '11/9/2019 11:00:00';

Partial recovery using archived redo logs:

RECOVER DATABASE UNTIL CANCEL SNAPSHOT TIME '11/19/2019 09:00:00';

Rman related question/answer:

Difference between using a recovery catalog and the control file?

Old backup information in the control file is lost when a new incarnation happens, but it is preserved in the recovery catalog.

Scripts can be stored in the recovery catalog. The recovery catalog is central and can hold information for many databases.

What is the use of crosscheck command in RMAN?

Crosscheck is useful to check whether the catalog information matches what is actually present at the OS level.

What are the differences between crosscheck and validate commands?

Validate command is to examine a backup set and report whether it can be restored. RMAN scans all of the backup pieces in the specified backup sets and looks at the checksum to verify that the contents are intact so that backup can be successfully restored if necessary.

Crosscheck command is to verify the status of backups and copies recorded in the RMAN repository against media such as disk or tape. The crosscheck command only processes files created on the same device type as the channel running crosscheck.

What is the difference between backup set and backup piece?

Backup set is logical. Backup piece is physical.

What is obsolete backup and expired backup?

Expired means that the backup piece or backup set is not found in the backup destination.

Obsolete means the backup piece is still available, but it is no longer needed. The backup piece is no longer needed since RMAN has been configured to no longer need this piece after so many days have elapsed, or so many backups have been performed.

Difference between auxiliary channel and maintenance channel?

New features in Oracle 12c RMAN?

New features in Oracle 11g RMAN?

What is the use of nofilenamecheck in RMAN ?

When NOFILENAMECHECK is used with the DUPLICATE command, RMAN does not validate the filenames during restoration. If the primary database and the standby database are on the same host, the DUPLICATE NOFILENAMECHECK option should not be used.

Do we need to shutdown db to change block change tracking file?

No.
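
For example, the tracking file can be disabled and re-enabled with a new location while the database stays open (the file path is a placeholder):

SQL> ALTER DATABASE DISABLE BLOCK CHANGE TRACKING;

SQL> ALTER DATABASE ENABLE BLOCK CHANGE TRACKING USING FILE '/u01/app/oracle/bct_new.chg';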

What are the database files that RMAN cannot back up?

RMAN cannot take a backup of the pfile, online redo logs, network configuration files (tnsnames.ora/listener.ora), password files, external tables, or the contents of the Oracle Home.

Can I take RMAN backup when the database is down ?

No, we can take RMAN backup only when the target database is open or in mount stage.

Wednesday, October 7, 2020

Unplugging/Plugging PDB:

In a multitenant environment, a PDB can be unplugged from one CDB and then plugged into another CDB. Once the PDB is unplugged, it needs to be dropped from its CDB because it becomes unusable in that CDB. The same PDB can also be plugged back into the same CDB.

Unplugging PDB:

==================

Connect to the PDB using SQL*Plus and list the locations of all the datafiles of the PDB that is going to be unplugged.


sql>sqlplus "/ as sysdba"

sql>alter session set container=pdb1;

sql>select file_name from dba_data_files;


Close PDB:

========== 

sql>alter pluggable database pdb1 close immediate;


Unplug PDB:

===========

sql> alter pluggable database pdb1 unplug into '/u01/pdb1_db.xml';


A complete path needs to be specified for the XML file, which is created during this process and contains information about this PDB. The same XML file is used when plugging the PDB into a CDB.


Copy Datafile:

===============

Copy the datafiles listed in the step above to the host of the CDB where we want to plug the PDB in. Also copy the XML file.


Drop unplugged PDB:

===================

sql>drop pluggable database pdb1 keep datafiles;


**We can skip this step if we are plugging the PDB in on a different host.

The KEEP DATAFILES clause is used with the drop command so that the datafiles do not get dropped and can be re-used if plugging back into the same CDB.


Plugging PDB:

================

This is done on the destination. The compatibility of the PDB should be checked against the CDB where it will be plugged in.

The code below can be used; the output should be YES to go ahead further:

SET SERVEROUTPUT ON

DECLARE

compatible CONSTANT VARCHAR2(3) :=

CASE DBMS_PDB.CHECK_PLUG_COMPATIBILITY(

pdb_descr_file => '/u01/pdb1_db.xml',

pdb_name       => 'PDB1')

WHEN TRUE THEN 'YES'

ELSE 'NO'

END;

BEGIN

DBMS_OUTPUT.PUT_LINE(compatible);

END;

/

Plug in PDB in target CDB:

============================

The datafile location on the destination host/CDB can be the same or different. If the location is the same, we don’t need to do anything; otherwise we need to use the FILE_NAME_CONVERT option of the CREATE PLUGGABLE DATABASE command to specify the new location of the datafiles.

sql> create pluggable database pdb1 using '/u01/pdb1_db.xml'  copy file_name_convert=('/u01/app/pdb1','/u01/db/pdb1');

sql> alter pluggable database pdb1 open;

***COPY is the default keyword and copies the files to the new destination. If the destination is the same as the source, specify NOCOPY.

If you have copied the datafiles to a different location on the destination, you will need to edit the .xml file to specify where the datafiles were copied, so that they can be found during the plug-in phase.
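
Once the plugged-in PDB is open, a quick sanity check with the standard dictionary views (a sketch):

sql> select pdb_name, status from cdb_pdbs;

sql> select name, open_mode from v$pdbs;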