
Thursday, October 22, 2020

OAM, OID and OIM:

 OAM: Oracle Access Manager:

OAM is the access manager; it provides capabilities such as Single Sign-On (SSO), authentication, authorization, real-time session management, auditing and policy administration.

There are three main consoles in Oracle Access Manager, i.e. the WebLogic Console (/console), the Enterprise Manager Console (/em) and the OAM Console (/oamconsole):

a) OAM Console : To register and manage system configurations, security elements, and policies.

b) WebLogic Console : Used by Administrators to Manage & Configure WebLogic Server (J2EE Server) on which OAM is deployed

c) FMW EM Console : Used by Administrators to Manage & Configure OAM Application


OID: Oracle Internet Directory:

OID is an LDAP implementation, like Active Directory (AD). This is where the users for OAM are stored.

Typical OID administration tasks include installing, configuring and starting/stopping OID, creating a user/group in OID, installing Oracle WebLogic Server, installing and configuring IDM for the OID server, and verifying the WebLogic console and the ODSM console.

OIM: Oracle Identity Manager:

OIM handles life-cycle management of a user or any similar entity. We may use OIM to provision users, so it basically serves user management and reconciliation activities. If we have OIM, we can provision accounts to EBS or reconcile users from EBS into OIM.

When an employee joins your organization, you simply create the user in OIM and then reflect the necessary user/account definitions to EBS (the FND_USER account, PERSON definitions, etc.).

When will OIM come into picture? Is this required for EBS?

Why does EBS require OID with OAM?

Reconciliation is for updating the Oracle Identity Manager. Provisioning is to put data from Oracle Identity Manager to another system.

Oracle Internet Directory (OID) : An LDAP directory server that stores its data in an Oracle database.

Oracle Identity Manager (OIM) : A user provisioning product acquired from Thor; it also includes reconciliation and administration tools. Reconciliation is the process by which operations such as user creation, modification, or deletion started on the target system are communicated to Oracle Identity Manager. The reconciliation process compares the entries in the Oracle Identity Manager repository and the target system repository, determines the difference between the two repositories, and applies the latest changes to the Oracle Identity Manager repository. In terms of data flow, provisioning provides the outward flow from the provisioning system (Oracle Identity Manager) using a push model, in which the provisioning system indicates the changes to be made to the target system.

https://www.learn-it-with-examples.com/middleware/oracle-iam/oiam-architecture/oracle-identity-management-architecture-overview.html



Friday, October 16, 2020

TNS-01201: Listener cannot find executable :


When starting the listener, the following error was returned:

lsnrctl start LISTENER

TNS-01201: Listener cannot find executable

The fix was to comment out the SID_LIST_LISTENER entry in listener.ora:

diff listener.ora_10152020 listener.ora

43,52c34,43

< SID_LIST_LISTENER =

<    (SID_LIST =

<       (SID_DESC =

<         (GLOBAL_DBNAME = SID)

<           (ORACLE_HOME = /mnt/app/oracle/product/12.2.0/dbhome_1)

<            (SID_NAME = SID)

<       )

<      )

---

> ##SID_LIST_LISTENER =

> ###   (SID_LIST =

> ###      (SID_DESC =

> ###        (GLOBAL_DBNAME = SID)

> ##          (ORACLE_HOME = /mnt/app/oracle/product/12.2.0/dbhome_1)

> ###           (SID_NAME = SID)

> ###      )

> ###     )

WebLogic password decryption:

source $DOMAIN_HOME/bin/setDomainEnv.sh

grep password $DOMAIN_HOME/servers/AdminServer/security/boot.properties | sed -e "s/^password=\(.*\)/\1/"

cd $DOMAIN_HOME/security

java weblogic.WLST decrypt_password.py

=====================================

cat decrypt_password.py

from weblogic.security.internal import *

from weblogic.security.internal.encryption import *

encryptionService = SerializedSystemIni.getEncryptionService(".")

clearOrEncryptService = ClearOrEncryptedService(encryptionService)

# Take encrypt password from user

pwd = raw_input("Paste encrypted password ({AES}fk9EK...): ")

# Delete unnecessary escape characters

preppwd = pwd.replace("\\", "")

# Display password

print "Decrypted string is: " + clearOrEncryptService.decrypt(preppwd)

====================

cd /u01/APPS/fs1/FMW_Home/user_projects/domains/EBS_domain/bin

$ . setDomainEnv.sh

grep password $DOMAIN_HOME/servers/AdminServer/security/boot.properties | sed -e "s/^password=\(.*\)/\1/"

{AES}Dmp67vNZSWnrVVIiM9mQTNT10RvVb+E25VettYNJfuzk=

$

cd $DOMAIN_HOME/security

$ java weblogic.WLST decrypt_password.py

Initializing WebLogic Scripting Tool (WLST) ...

Welcome to WebLogic Server Administration Scripting Shell

Type help() for help on available commands

Paste encrypted password ({AES}fk9EK...): {AES}Dmp67vNZSWnrVVIiM9mQTNT10RvVb+E25VettYNJfuzk=

Decrypted string is: ********** >>(Password)

$


==============

cd /u01/app/oracle/middleware/oracle_common/common/bin

./wlst.sh

wls:/offline> domain = "/u01/data/domains/DOMAIN_domain/"

wls:/offline> service = weblogic.security.internal.SerializedSystemIni.getEncryptionService(domain)

wls:/offline> encryption = weblogic.security.internal.encryption.ClearOrEncryptedService(service)

wls:/offline> print encryption.decrypt("{AES}shlOmkwqDdyL+0LW9KapN3gf/4R+Tj44yfNRt+pqWNJAYoJpXvBddeFqSFhPOxY1")

Thursday, October 15, 2020

Expdp/Impdp – Datapump Questions/Answers:

Which memory area is used by the Data Pump process?

The streams pool, sized by the STREAMS_POOL_SIZE parameter.

If STREAMS_POOL_SIZE is zero you will get a memory error. Set it to a minimum of 96M. Check it with: show parameter STREAMS_POOL_SIZE

How to export only ddl/metadata of a table?

Use CONTENT=METADATA_ONLY parameter during export.
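For example, a hedged sketch (the directory object, dump file and schema names are illustrative):

expdp system DIRECTORY=dpump_dir1 DUMPFILE=hr_ddl.dmp SCHEMAS=HR CONTENT=METADATA_ONLY LOGFILE=hr_ddl.log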

How to improve Data Pump performance so that export/import happens faster?

Allocate memory to STREAMS_POOL_SIZE. The query below can help in recommending a value:

select 'ALTER SYSTEM SET STREAMS_POOL_SIZE='||(max(to_number(trim(c.ksppstvl)))+67108864)||' SCOPE=SPFILE;'

from sys.x$ksppi a, sys.x$ksppcv b, sys.x$ksppsv c

where a.indx = b.indx and a.indx = c.indx and lower(a.ksppinm) in ('__streams_pool_size','streams_pool_size');

ALTER SYSTEM SET STREAMS_POOL_SIZE=<new_value>M SCOPE=SPFILE;

Use CLUSTER=N 

Set PARALLEL_FORCE_LOCAL to TRUE 

EXCLUDE=STATISTICS

Excluding the generation and export of statistics at export time will shorten the time needed to perform the export operation. Later, on the target, run DBMS_STATS.GATHER_DATABASE_STATS.

Use PARALLEL:

Use it if spare CPU is available and the job is not CPU-bound or disk I/O-bound.

How to monitor status of export/import – datapump operations/jobs?

The DBA_DATAPUMP_JOBS view helps in knowing the status.

set linesize 200

set pagesize 200

col owner_name format a12

col job_name format a20

col operation format a12

col job_mode format a20

SELECT owner_name, job_name, operation, job_mode, state FROM dba_datapump_jobs where state='EXECUTING';

SELECT w.sid, w.event, w.seconds_in_wait from  v$session s, DBA_DATAPUMP_SESSIONS d, V$SESSION_WAIT w WHERE   s.saddr = d.saddr AND s.sid = w.sid;

SELECT OPNAME, SID, SERIAL#, CONTEXT, SOFAR, TOTALWORK, ROUND(SOFAR/TOTALWORK*100,2) "%_COMPLETE" FROM V$SESSION_LONGOPS WHERE OPNAME in (select d.job_name from v$session s, v$process p, dba_datapump_sessions d where p.addr=s.paddr and s.saddr=d.saddr) AND OPNAME NOT LIKE '%aggregate%' AND TOTALWORK != 0 AND SOFAR <> TOTALWORK;

How to stop/start/kill datapump jobs?

expdp / as sysdba attach=job_name

export>status

export>stop_job

export>start_job

export>kill_job

We can also kill the job's sessions with the ALTER SYSTEM KILL SESSION command.

How will you take a consistent export backup? What is the use of FLASHBACK_SCN?

FLASHBACK_SCN (or FLASHBACK_TIME) makes the export consistent as of the specified SCN (or timestamp). Get the current SCN with:

select to_char(current_scn) from v$database;
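A hedged sketch of using that SCN in the export (directory, dump file and schema names are illustrative):

expdp system DIRECTORY=dpump_dir1 DUMPFILE=hr_consistent.dmp SCHEMAS=HR FLASHBACK_SCN=1234567 LOGFILE=hr_consistent.log

Alternatively, FLASHBACK_TIME=SYSTIMESTAMP exports the data as of the moment the job starts.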

How to drop constraints before import?

set feedback off;

spool /oradba/orahow/drop_constraints.sql;

select 'alter table SCHEMA_NAME.' || table_name || ' drop constraint ' || constraint_name || ' cascade;'

from dba_constraints where owner = 'SCHEMA_NAME' and not (constraint_type = 'C') order by table_name,constraint_name;

spool off;

exit;

If you get an undo tablespace error during import, how will you avoid it?

Use the COMMIT=Y option (a parameter of the original imp utility), so that rows are committed periodically instead of once per table.

Can we import an 11g dump file into a 10g database using Data Pump?

Yes, we can go from 11g to 10g by using the VERSION option (set VERSION to the target release at export time).
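A hedged sketch (object names are illustrative); the resulting dump is then imported with the 10g impdp client:

expdp system DIRECTORY=dpump_dir1 DUMPFILE=scott_v102.dmp SCHEMAS=SCOTT VERSION=10.2 LOGFILE=scott_v102.log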

If you don't have sufficient disk space on the database server, how will you take the export? Or, how do you export without a dump file?

You can use a network link / DB link for the export. You can use NETWORK_LINK by following these simple steps (a sketch follows the list):

Create a TNS entry for the remote database in your tnsnames.ora file

Test with tnsping sid

Create a database link to the remote database

Specify the database link as network_link in your expdp or impdp syntax
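A hedged sketch of those steps (the link name, TNS alias, credentials and directory object are illustrative):

Create a link to the remote (source) database on the database where you run the Data Pump client:

CREATE DATABASE LINK src_link CONNECT TO system IDENTIFIED BY password USING 'SRCDB';

Pull the schema straight across the network into the local database, with no dump file at all:

impdp system DIRECTORY=dpump_dir1 NETWORK_LINK=src_link SCHEMAS=SCOTT LOGFILE=scott_net_imp.log

Or write the dump file on the local server instead of the space-constrained source server:

expdp system DIRECTORY=dpump_dir1 NETWORK_LINK=src_link SCHEMAS=SCOTT DUMPFILE=scott_net.dmp LOGFILE=scott_net.log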

I exported a metadata/DDL-only dump file from production, but during import on the test machine it consumes a huge amount of space, and we probably don't have that much disk space available. Why would DDL alone consume so much space?

Below is a snippet of the DDL of one table extracted from prod. As you can see, during table creation Oracle always allocates the INITIAL extent, as shown below.

PCTFREE 10 PCTUSED 40 INITRANS 1 MAXTRANS 255

NOCOMPRESS LOGGING

STORAGE(INITIAL 1342177280 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645

PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1

BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT)

As you can see above, Oracle allocates the full INITIAL extent (roughly 1.25 GB in this example) for the table even when the row count is zero.

To avoid this, set the DEFERRED_SEGMENT_CREATION parameter to TRUE so that no segment is allocated until the first row is inserted.
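A hedged sketch of checking and changing the parameter on the target database (11.2 onwards; it only affects tables created after the change):

show parameter deferred_segment_creation

alter system set deferred_segment_creation=true scope=both;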

What are some of the parameters you have used during export?

CONTENT:         Specifies data to unload where the valid keywords are:

                             (ALL), DATA_ONLY, and METADATA_ONLY.

DIRECTORY       Directory object to be used for dumpfiles and logfiles.

DUMPFILE         List of destination dump files (expdat.dmp),

                             e.g. DUMPFILE=scott1.dmp, scott2.dmp, dmpdir:scott3.dmp.

ESTIMATE_ONLY         Calculate job estimates without performing the export.

EXCLUDE               Exclude specific object types, e.g. EXCLUDE=TABLE:EMP.

FILESIZE              Specify the size of each dumpfile in units of bytes.

FLASHBACK_SCN         SCN used to set session snapshot back to.

FULL                  Export entire database (N).

HELP                  Display Help messages (N).

INCLUDE               Include specific object types, e.g. INCLUDE=TABLE_DATA.

JOB_NAME              Name of export job to create.

LOGFILE               Log file name (export.log).

NETWORK_LINK          Name of remote database link to the source system.

NOLOGFILE             Do not write logfile (N).

PARALLEL              Change the number of active workers for current job.

PARFILE               Specify parameter file.

QUERY                 Predicate clause used to export a subset of a table.

SCHEMAS               List of schemas to export (login schema).

TABLES                Identifies a list of tables to export – one schema only.

TRANSPORT_TABLESPACES List of tablespaces from which metadata will be unloaded.

VERSION               Version of objects to export where valid keywords are:

                      (COMPATIBLE), LATEST, or any valid database version.

=======================================================================

Oracle DBA Interview Question and Answer - Export/Import

What is use of CONSISTENT option in exp?


When you export a table, you are guaranteed that the contents of that table will be consistent with the time that the export of that table was started. This means that if you start exporting the table at 12:00 and someone makes changes to the data in the table at 12:05 and your export of this table finishes at 12:10, then the export will not contain any of the changes made between 12:00 and 12:10. You cannot change this behavior with Oracle's export utility.


The CONSISTENT parameter controls whether or not the entire export is consistent, even between tables. If CONSISTENT=N (the default), then the export of a table will be consistent, but changes can occur between tables. If CONSISTENT=Y, then the entire dump file is consistent with the point in time that you started the export.
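A hedged example with the original exp utility (user and file names are illustrative):

exp system FILE=scott_consistent.dmp OWNER=SCOTT CONSISTENT=Y LOG=scott_consistent.log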


What is use of DIRECT=Y option in exp?


Normally export follows the SELECT statement path, i.e. data is read from disk into the buffer cache and then written to the dump file. When we use the direct path by specifying DIRECT=Y in the export command, Oracle copies data directly from disk to the PGA and from there it is written to the dump file.


What is use of COMPRESS option in exp?


If we specify COMPRESS=y during export then at the time of table creation while importing, the INITIAL extent of the table would be as large as the sum of all the extents allocated to the table in the original database.


If we specify COMPRESS=n during export then while creating table in the import, it will use the same values of INITIAL extent as in the original database.


Now let's say I have a table of 100 MB. There have been some deletes and updates and only 50 MB of actual data is there. I export the table with COMPRESS=Y and recreate it in another database. Export sums all the extents and assigns that as the INITIAL extent while creating the table. There is only 50 MB of data in the table but it has already allocated 100 MB. If you have limited space, this is not a very good option.


If I export with COMPRESS=N and then import the table, its INITIAL extent will be as large as the INITIAL extent in the original database, and new extents will be allocated as required. So my table in the new database would be approximately 50 MB in size.


Which are the common IMP/EXP problems?


ORA-00001: Unique constraint ... violated - Perhaps you are importing duplicate rows. Use IGNORE=N to skip tables that already exist (imp will give an error if the object is re-created) or the table could be dropped/ truncated and re-imported if we need to do a table refresh..

IMP-00015: Statement failed ... object already exists... - Use the IGNORE=Y import parameter to ignore these errors, but be careful as you might end up with duplicate rows.

ORA-01555: Snapshot too old - Ask your users to STOP working while you are exporting or use parameter CONSISTENT=NO (However this option could create possible referential problems, because the tables are not exported from one snapshot in time).

ORA-01562: Failed to extend rollback segment - Create bigger rollback segments or set parameter COMMIT=Y (with an appropriate BUFFER parameter) while importing.


What is the Benefits of the Data Pump Technology?


The older export/import technology was client-based. The Data Pump technology is purely server-based; all dump, log, and other files are created on the server by default. Data Pump technology offers several benefits over the traditional export and import data utilities.


The following are the main benefits of the Data Pump technology:


Improved performance: The performance benefits are significant if you are transferring huge amounts of data.

Ability to restart jobs: You can easily restart jobs that have stalled due to lack of space or have failed for other reasons. You may also manually stop and restart jobs.

Parallel execution capabilities: By specifying a value for the PARALLEL parameter, you can choose the number of active execution threads for a Data Pump Export or Data Pump Import job.

Ability to attach to running jobs: You can attach to a running Data Pump job and interact with it from a different screen or location. This enables you to monitor jobs, as well as to modify certain parameters interactively. Data Pump is an integral part of the Oracle database server, and as such, it doesn't need a client to run once it starts a job.

Ability to estimate space requirements: You can easily estimate the space requirements for your export jobs by using the default BLOCKS method or the ESTIMATES method, before running an actual export job (see the "Data Pump Export Parameters" section for details).

Network mode of operation: Once you create database links between two databases, you can perform exports from a remote database straight to a dump file set. You can also perform direct imports via the network using database links, without using any dump files. The network mode is a means of transferring data from one database directly into another database with the help of database links and without the need to stage it on disk.

Fine-grained data import capability: Oracle9i offered only the QUERY parameter, which enabled you to specify that the export utility extract a specified portion of a table's rows. With Data Pump, you have access to a vastly improved fine-grained options arsenal, thanks to new parameters like INCLUDE and EXCLUDE.

Remapping capabilities: During a Data Pump import, you can remap schemas and tablespaces, as well as filenames, by using the new REMAP_* parameters. Remapping capabilities enable you to modify objects during the process of importing data by changing old attributes to new values. For example, the REMAP_SCHEMA parameter enables you to map all of user HR's schema to a new user, OE. The REMAP_SCHEMA parameter is similar to the TOUSER parameter in the old import utility.


How to improve exp performance?


1. Set the BUFFER parameter to a high value. Default is 256KB.

2. Stop unnecessary applications to free the resources.

3. If you are running multiple sessions, make sure they write to different disks.

4. Do not export to NFS (Network File System). Exporting to local disk is faster.

5. Set the RECORDLENGTH parameter to a high value.

6. Use DIRECT=yes (direct mode export).


How to improve imp performance?


1. Place the file to be imported in separate disk from datafiles.

2. Increase the DB_CACHE_SIZE.

3. Set LOG_BUFFER to big size.

4. Stop redolog archiving, if possible.

5. Use COMMIT=n, if possible.

6. Set the BUFFER parameter to a high value. Default is 256KB.

7. It's advisable to drop indexes before importing to speed up the import process or set INDEXES=N and building indexes later on after the import. Indexes can easily be recreated after the data was successfully imported.

8. Use STATISTICS=NONE

9. Disable the INSERT triggers, as they fire during import.

10. Set Parameter COMMIT_WRITE=NOWAIT(in Oracle 10g) or COMMIT_WAIT=NOWAIT (in Oracle 11g) during import.


What are the datapump export modes?


you can perform Data Pump Export jobs in several modes:


Full export mode: You use the FULL parameter when you want to export the entire database in one export session. You need the EXPORT_FULL_DATABASE role to use this mode.

Schema mode: If you want to export a single user's data and/or objects only, you must use the SCHEMAS parameter.

Tablespace mode: By using the TABLESPACES parameter, you can export all the tables in one or more tablespaces. If you use the TRANSPORT_TABLESPACES parameter, you can export just the metadata of the objects contained in one or more tablespaces. You may recall that you can export tablespaces between databases by first exporting the metadata, copying the files of the tablespace to the target server, and then importing the metadata into the target database.

Table mode: By using the TABLES parameter, you can export one or more tables. The TABLES parameter is identical to the TABLES parameter in the old export utility.
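Hedged one-line sketches of each mode (directory, dump file and object names are illustrative):

expdp system DIRECTORY=dpump_dir1 DUMPFILE=full.dmp FULL=Y

expdp system DIRECTORY=dpump_dir1 DUMPFILE=hr.dmp SCHEMAS=HR

expdp system DIRECTORY=dpump_dir1 DUMPFILE=users_ts.dmp TABLESPACES=USERS

expdp system DIRECTORY=dpump_dir1 DUMPFILE=emp_dept.dmp TABLES=HR.EMPLOYEES,HR.DEPARTMENTS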


What is COMPRESSION parameter in expdp?


The COMPRESSION parameter enables the user to specify which data to compress before writing the export data to a dump file. By default, all metadata is compressed before it's written out to an export dump file. You can disable compression by specifying a value of NONE for the COMPRESSION parameter, as shown here:


$ expdp hr/hr DIRECTORY=dpump_dir1 DUMPFILE=hr_comp.dmp COMPRESSION=NONE


The COMPRESSION parameter can take any of the following four values:


ALL: Enables compression for the entire operation.


DATA_ONLY: Specifies that all data should be written to the dump file in a compressed format.


METADATA_ONLY: Specifies that all metadata be written to the dump file in a compressed format. This is the default value.


NONE: Disables compression of all types.


What are Export Filtering Parameters in expdp?


Data Pump contains several parameters related to export filtering. Some of them are substitutes for old export parameters; others offer new functionality.


CONTENT


By using the CONTENT parameter, you can filter what goes into the export dump file. The CONTENT parameter can take three values:


• ALL exports both table data and table and other object definitions (metadata).

• DATA_ONLY exports only table rows.

• METADATA_ONLY exports only metadata.


EXCLUDE and INCLUDE


The EXCLUDE and INCLUDE parameters are two mutually exclusive parameters that you can use to perform what is known as metadata filtering. Metadata filtering enables you to selectively leave out or include certain types of objects during a Data Pump Export or Import job. In the old export utility, you used the CONSTRAINTS, GRANTS, and INDEXES parameters to specify whether you wanted to export those objects. Using the EXCLUDE and INCLUDE parameters, you now can include or exclude many other kinds of objects besides the four objects you could filter previously. For example, if you don’t wish to export any packages during the export, you can specify this with the help of the EXCLUDE parameter.
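For example, a hedged sketch that skips all packages during a schema export, and a second command that exports only tables and indexes (schema and file names are illustrative):

expdp system DIRECTORY=dpump_dir1 DUMPFILE=hr_nopkg.dmp SCHEMAS=HR EXCLUDE=PACKAGE

expdp system DIRECTORY=dpump_dir1 DUMPFILE=hr_tabs.dmp SCHEMAS=HR INCLUDE=TABLE,INDEX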


QUERY


The QUERY parameter serves the same function as it does in the traditional export utility: it lets you selectively export table row data with the help of a SQL statement. The QUERY parameter permits you to qualify the SQL statement with a table name, so that it applies only to a particular table. Here’s an example:


QUERY=OE.ORDERS: "WHERE order_id > 100000"


In this example, only those rows in the orders table (owned by user OE) where the order_id is greater than 100,000 are exported.


What is Network Link Parameter and how it works?


The Data Pump Export utility provides a way to initiate a network export. Using the NETWORK_LINK parameter, you can initiate an export job from your server and have Data Pump export data from a remote database to dump files located on the instance from which you initiate the Data Pump Export job.


Here’s an example that shows you how to perform a network export:


$ expdp hr/hr DIRECTORY=dpump_dir1 NETWORK_LINK=finance

DUMPFILE=network_export.dmp LOGFILE=network_export.log


In the example, the NETWORK_LINK parameter must have a valid database link as its value. This means that you must have created the database link ahead of time. This example is exporting data from the finance database on the prod1 server.


Let’s say you have two databases, called local and remote. In order to use the NETWORK_LINK parameter and pass data directly over the network, follow these steps:

1. Create a database link to the remote database, which is named remote in this example:

SQL> CREATE DATABASE LINK remote

 CONNECT TO scott IDENTIFIED BY tiger

 USING 'remote.world';


2. If there isn’t one already, create a Data Pump directory object:


SQL> CREATE DIRECTORY remote_dir1 AS '/u01/app/oracle/dp_dir';


3. Set the new directory as your default directory, by exporting the directory value:


$ export DATA_PUMP_DIR=remote_dir1


4. Perform the network export from the database named remote:


$ expdp system/sammyy1 SCHEMAS=SCOTT DUMPFILE=network.dmp NETWORK_LINK=remote


You’ll see that the Data Pump Export job will create the dump file network.dmp (in the directory location specified by remote_dir1) on the server hosting the database named local. However, the data within the dump file is extracted from the user scott’s schema in the remote database (named remote in our example). You can see that the NETWORK_LINK parameter carries the dump files over the network from a remote location to the local server. All you need is a database link from a database on the local server to the source database on the remote server.


What is use of INDEXFILE option in imp?


Will write DDLs of the objects in the dumpfile into the specified file.


What is use of IGNORE option in imp?


Will ignore the errors during import and will continue the import.


What are the differences between expdp and exp (Data Pump or normal exp/imp)?


Data Pump is server centric (files will be at server).

Data Pump has APIs, from procedures we can run Data Pump jobs.

In Data Pump, we can stop and restart the jobs.

Data Pump will do parallel execution.

Tapes & pipes are not supported in Data Pump.

Data Pump consumes more undo tablespace.

Data Pump import will create the user, if user doesn’t exist.


Why expdp is faster than exp (or) why Data Pump is faster than conventional export/import?


Data Pump is block mode, exp is byte mode.

Data Pump will do parallel execution.

Data Pump uses direct path API.


How to improve expdp performance?


Use the PARALLEL option, which increases the number of worker processes. It should be set based on the number of CPUs available.


How to improve impdp performance?


Use the PARALLEL option, which increases the number of worker processes. It should be set based on the number of CPUs (see the sketch below).
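A hedged sketch: match PARALLEL to the CPU count and use the %U substitution variable so each worker process gets its own dump file to write to or read from (directory, dump file and schema names are illustrative):

expdp system DIRECTORY=dpump_dir1 DUMPFILE=hr_%U.dmp SCHEMAS=HR PARALLEL=4 LOGFILE=hr_exp.log

impdp system DIRECTORY=dpump_dir1 DUMPFILE=hr_%U.dmp SCHEMAS=HR PARALLEL=4 LOGFILE=hr_imp.log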


In Data Pump, where the jobs info will be stored (or) if you restart a job in Data Pump, how it will know from where to resume?


Whenever a Data Pump export or import is running, Oracle creates a master table named after the JOB_NAME, and it is dropped once the job completes. From this table Oracle finds out how much of the job has completed and from where to continue.

Default export job name will be SYS_EXPORT_XXXX_01, where XXXX can be FULL or SCHEMA or TABLE.

Default import job name will be SYS_IMPORT_XXXX_01, where XXXX can be FULL or SCHEMA or TABLE.


What is the order of importing objects in impdp?


 Tablespaces

 Users

 Roles

 Database links

 Sequences

 Directories

 Synonyms

 Types

 Tables/Partitions

 Views

 Comments

 Packages/Procedures/Functions

 Materialized views


How to import only metadata?


CONTENT= METADATA_ONLY


How to import into different user/tablespace/datafile/table?

REMAP_SCHEMA

REMAP_TABLESPACE

REMAP_DATAFILE

REMAP_TABLE

REMAP_DATA
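A hedged sketch combining two of these during an import (schema and tablespace names are illustrative):

impdp system DIRECTORY=dpump_dir1 DUMPFILE=hr.dmp REMAP_SCHEMA=HR:HR_TEST REMAP_TABLESPACE=USERS:USERS_TEST LOGFILE=hr_remap.log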

STEPS TO USE EXPDP TO EXPORT DATA FROM PHYSICAL STANDBY DATABASE:

https://www.sultandba.com/2020/06/use-expdp-export-data-physical-standby-database.html

Tuesday, October 13, 2020

RMAN feature in 12c:

 1.SYSBACKUP Privilege:

The SYSBACKUP privilege allows the DBA to perform RMAN backup commands without additional privileges. Prior to 12c, users needed SYSDBA privilege to backup the database.

rman target ' "/ as sysbackup" '

rman target ' "/ as sysdba" '

rman target /

rman target ' "bkpadm@PDB1 as sysbackup" '

rman target ' "sysadm@PDB1 as sysdba" '

rman target sysadm@PDB1

2.SQL Interface Improvements:

SQL commands can be run in RMAN; DDL/DML commands can be issued from the RMAN command prompt.

You can SHUTDOWN/STARTUP the database and also use ALTER commands.

RMAN DESCRIBE provides the same functionality as SQL*Plus DESCRIBE.

RMAN> desc dba_profiles;

RMAN> select sysdate from dual;

RMAN> create table table5(col1 number, col2 varchar2(20));

RMAN> insert into table5 values (1,'Test3');

RMAN> update table5 set col1=2;

RMAN> drop table table5;

RMAN> shutdown immediate

RMAN> startup mount

RMAN> alter database open;

3.Support for multitenant container and pluggable databases:

RMAN provides full support for backup and recovery for CDB and PDB.

With RMAN you can back up an entire container database or individual pluggable databases, and you can also perform point-in-time recovery. It is recommended that you turn on control file autobackup; otherwise point-in-time recovery for pluggable databases may not work efficiently when you need to undo data file additions or deletions.

When you back up a container database, RMAN backs up the root, the pluggable databases in the container, and the archive logs. While restoring, we can choose the whole container, one or more pluggable databases, or the root only.

Backup the CDB, PDB, and root:

You should have the SYSBACKUP or SYSDBA privilege to back up any of the databases.

Back up the container database (CDB) the same way as a non-container database, using the command below:

RMAN> BACKUP DATABASE PLUS ARCHIVELOG;

Back up a pluggable database (PDB) using the command below:

RMAN> BACKUP PLUGGABLE DATABASE PDB1, PDB2;

Or connect to the pluggable database in RMAN:

% rman target sys@PDB1

RMAN> BACKUP DATABASE;

Back up the root using the command below:

RMAN> BACKUP DATABASE ROOT;

Restore the whole CDB using the script below. Restoring the CDB restores all of its pluggable databases.

RMAN> RUN { 

     STARTUP MOUNT;

     RESTORE DATABASE;

     RECOVER DATABASE;

     ALTER DATABASE OPEN;

}

Restore only the root using the script below:

RMAN> RUN {

     STARTUP MOUNT;

     RESTORE DATABASE ROOT;

     RECOVER DATABASE ROOT;

     ALTER DATABASE OPEN;

}

Pluggable databases can be restored in two ways: either restore from the root container, or connect directly to the PDB and restore.

Use the script below to restore from the root. With this approach you are able to restore and recover multiple PDBs with a single command.

RMAN > RUN {

     RESTORE PLUGGABLE DATABASE PDB1, PDB2;

     RECOVER PLUGGABLE DATABASE PDB1, PDB2;

     ALTER PLUGGABLE DATABASE PDB1, PDB2 OPEN;

}

Use the script below to connect to the PDB and restore and recover it. With this approach you can restore and recover only one PDB.

$ rman target=bkpadm@PDB1

     RMAN> run{

     RESTORE DATABASE;

     RECOVER DATABASE;

}

Performing a point-in-time recovery of a CDB or PDB is the same as for a normal database.

When you perform point-in-time recovery on the CDB, it affects all the PDBs as well.

When you perform point-in-time recovery on a PDB, it affects only that single database.

The command to perform a point-in-time recovery is:

SET UNTIL TIME "TO_DATE('10-Feb-2014 01:00:00','DD-MON-YYYY HH24:MI:SS')";

SET UNTIL SCN 19900005; # alternatively, specify SCN

SET UNTIL SEQUENCE 120; # alternatively, specify log seq

4.Backup of Archived redo logs:

We can back up or delete archive logs when connected to the root as a common user with the SYSDBA or SYSBACKUP privilege, but we cannot back up or delete archive logs when connected to a PDB as a local user with the SYSDBA or SYSBACKUP privilege.

We are only able to switch the archived logs when connected to the root of a CDB; we cannot switch archived redo logs when connected to a PDB.

If there is more than one archive log destination, RMAN backs up only one copy of the archived redo logs. RMAN does not include more than one copy because the multiple destinations have the same log sequence numbers.

5.DUPLICATE enhancements:

When you duplicate a database using RMAN DUPLICATE, the database is created and opened in RESETLOGS mode. With Oracle Database 12c, you can specify that the database must not be opened, using the NOOPEN clause.

The NOOPEN clause is useful in the following situations:

You need to make changes to initialization parameters such as block change tracking or flashback database settings.

Opening the database would conflict with the source database.

You plan to create the database for an upgrade and want to open it in upgrade mode.

The command below creates the duplicate database but does not open it.

RMAN> DUPLICATE TARGET DATABASE TO DB1 FROM ACTIVE DATABASE NOOPEN;

6.Multi section Backup Improvements:

This backup functionality was introduced in 11g to handle large data file backups. RMAN can break up a large file into sections during backup and recovery, which improves the performance of large datafile backups. We can select the size using the SECTION SIZE keyword; each channel creates separate files within the backup set and backs up the database in parallel. In 11g this supports backup sets only.

In 12c, multisection backup also supports incremental backups and image copies, in addition to backup sets (introduced in 11g). This functionality can only be used for data files; we cannot use it to back up control files.

If the SECTION SIZE selected is larger than the actual file, then RMAN will not use multisection backup functionality. If you specify a small SECTION SIZE that would produce more than 256 sections, RMAN increases the SECTION SIZE to a value that results in 256 sections.

To Create Multisection backup image copies.

RMAN> BACKUP AS COPY SECTION SIZE 1024M DATABASE;

To Create multisection incremental level1 backup:

RMAN> BACKUP INCREMENTAL LEVEL 1 SECTION SIZE 1024M DATABASE;

To enhance the backup performance, use unused block compression and block change tracking in conjunction with multisection incremental backups.

7.Restoring and Recovering Files over Network:

RMAN can restore and recover a database, tablespace, datafile, controlfile or spfile over the network from a physical standby database. For the restore to happen over the network, the RESTORE ... FROM SERVICE and RECOVER ... FROM SERVICE commands are used.

The FROM SERVICE clause specifies the service name of the physical standby.

Multisection, compression and encryption can be used to improve backup and restore performance.

This is very useful in scenarios involving a standby database:

Restore or recover files lost on one database (standby/primary) by connecting over the network to the other database where they are available.

Recover a standby database using an incremental backup taken off the primary, to roll the standby forward.

Example to restoring data file over the network from physical standby to primary database:

Connect to Primary:

RMAN> CONNECT TARGET /

Backup sets encrypted using AES128 encryption algorithm:

RMAN> SET ENCRYPTION ALGORITHM 'AES128';

Restore datafile on the primary using service “standby_db”:

RMAN> RESTORE DATAFILE '/u01/oradata/datafile1.dbf' FROM SERVICE standby_db SECTION SIZE 1024M;

8.Active Database Duplication Improvements:

Active database duplication was introduced in Oracle 11g. This feature creates a clone or standby database by copying the data files and archive logs over the network from the TARGET (source) database channels to the clone AUXILIARY database. In this method no backups of the TARGET database are required, but the processing load falls on the TARGET instance because the TARGET database channels are used.

In Oracle 12c, active database duplication can be performed using backup sets. Sufficient AUXILIARY channels are allocated to connect to the TARGET database and retrieve the backup sets over the network, which reduces the load on the TARGET (source) database. Unused block compression can be used to reduce the size of the backups transported over the network. Encrypted backups and multisection backups can also be used. Network consumption is reduced and performance improves in 12c.

For RMAN to use backup sets, we need to establish a connection to the target database using a net service name, and one of the conditions below must be satisfied:

(i) The number of AUXILIARY channels should be equal to or greater than the number of TARGET channels.

(ii) DUPLICATE ... FROM ACTIVE DATABASE should contain either USING BACKUPSET, USING COMPRESSED BACKUPSET or SECTION SIZE.

9.Unified auditing and RMAN:

The UNIFIED_AUDIT_TRAIL dictionary view has a set of fields (prefixed with RMAN_) which automatically record RMAN-related events. The AUDIT_ADMIN or AUDIT_VIEWER role is required in order to query the UNIFIED_AUDIT_TRAIL view.
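For example, a hedged query against those RMAN-related columns (assuming unified auditing is enabled and the querying user has AUDIT_VIEWER):

select event_timestamp, dbusername, rman_operation, rman_object_type
from unified_audit_trail
where rman_operation is not null
order by event_timestamp;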

10.Recovering Tables and Table Partitions from RMAN Backups:

A dropped or purged (truncated) table can now be brought back in time without affecting the rest of the objects within the database/tablespace.

RMAN 12c creates an auxiliary instance of the required files in a separate disk location, recovers the tables or table partitions up to the desired point in time, and then creates a Data Pump export dump containing the recovered objects.

11.Storage Snapshot Optimization:

With this feature we can take a storage snapshot of the database using third-party technologies without keeping the database in BACKUP mode. When we need to recover, the point in time of the snapshot can be used. We can then roll forward using the database archive logs, and use this snapshot feature to recover part or all of the database.

It is recommended to keep the snapshot on different storage from where the database is running.

The following are required to back up a database using storage snapshot optimization:

(i) The snapshot preserves the write order for each file.

(ii) The database is crash consistent during the snapshot.

(iii) The snapshot technology stores the time at which the snapshot is completed.

If the above compliance is not fulfilled by the third-party snapshot technology vendor, we have to keep the database in BACKUP mode to take the snapshot.

If operational changes such as ONLINE, OFFLINE, DROP, RENAME, ADD, READ ONLY or SHRINK are made to data files or tablespaces during the snapshot, then the snapshot is unusable.

The recovery time cannot be earlier than the snapshot time.

To recover complete database:

RECOVER DATABASE;

To recover database using particular snapshot

RECOVER DATABASE UNTIL TIME '11/10/2019 12:00:00' SNAPSHOT TIME '11/9/2019 11:00:00';

partial recovery using archived redo logs:

RECOVER DATABASE UNTIL CANCEL SNAPSHOT TIME '11/19/2019 09:00:00';

Rman related question/answer:

 Difference between using recovery catalog and control file?

Old backup information in the control file will be lost when a new incarnation happens, but it is preserved in the recovery catalog.

Scripts can be stored in the recovery catalog. The recovery catalog is central and can hold information about many databases.
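A hedged sketch of using a catalog (the catalog schema and TNS alias are illustrative):

rman target / catalog rcat_owner@rcatdb

RMAN> CREATE CATALOG;      # one-time step in the catalog schema
RMAN> REGISTER DATABASE;   # registers the connected target database
RMAN> RESYNC CATALOG;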

What is the use of crosscheck command in RMAN?

Crosscheck is useful to check whether the repository (catalog/control file) information matches what actually exists at the OS level.

What are the differences between crosscheck and validate commands?

Validate command is to examine a backup set and report whether it can be restored. RMAN scans all of the backup pieces in the specified backup sets and looks at the checksum to verify that the contents are intact so that backup can be successfully restored if necessary.

Crosscheck command is to verify the status of backups and copies recorded in the RMAN repository against media such as disk or tape. The crosscheck command only processes files created on the same device type as the channel running crosscheck.

What is the difference between backup set and backup piece?

Backup set is logical. Backup piece is physical.

What is obsolete backup and expired backup?

Expired means that the backup piece or backup set is not found in the backup destination.

Obsolete means the backup piece is still available, but it is no longer needed. The backup piece is no longer needed since RMAN has been configured to no longer need this piece after so many days have elapsed, or so many backups have been performed.
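Hedged examples of the related maintenance commands:

RMAN> CROSSCHECK BACKUP;        # marks pieces that are missing from the media as EXPIRED
RMAN> DELETE EXPIRED BACKUP;    # removes the expired records from the repository
RMAN> REPORT OBSOLETE;          # lists backups no longer needed under the retention policy
RMAN> DELETE OBSOLETE;          # deletes those backups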

Difference between auxiliary channel and maintenance channel?

New features in Oracle 12c RMAN?

New features in Oracle 11g RMAN?

What is the use of nofilenamecheck in RMAN ?

When NOFILENAMECHECK is used with the DUPLICATE command, RMAN does not validate the filenames during restoration. If the primary database and the standby database are on the same host, The DUPLICATE NOFILENAMECHECK option should not be used.

Do we need to shutdown db to change block change tracking file?

No.

What are the database files that RMAN cannot back up?

RMAN cannot take backup of the pfile, redo logs, tns or network configuration files, password files, external tables and the contents of the Oracle home files.

Can I take RMAN backup when the database is down ?

No, we can take RMAN backup only when the target database is open or in mount stage.

Wednesday, October 7, 2020

Unplugging/Plugging PDB:

In a multitenant environment, a PDB can be unplugged from a CDB and then plugged into another CDB. Once a PDB is unplugged, it needs to be dropped from the CDB as it becomes unusable in that CDB. The same PDB can also be plugged back into the same CDB.

Unplugging PDB:

==================

Connect to the PDB using SQL*Plus and list the locations of all the datafiles of the PDB that is going to be unplugged.


sql>sqlplus "/ as sysdba"

sql>alter session set container=pdb1;

sql>select file_name from dba_data_files;


Close PDB:

========== 

sql>alter pluggable database pdb1 close immediate;


Unplug PDB:

===========

sql> alter pluggable database pdb1 unplug into '/u01/pdb1_db.xml';


A complete path needs to be specified; the XML file will be created during this process and will contain information about this PDB. The same XML file is used when plugging into a CDB.


Copy Datafile:

===============

Copy the datafiles listed in the step above to the host of the CDB where we want to plug in. Also copy the XML file.


Drop unplugged PDB:

===================

sql>drop pluggable database pdb1 keep datafiles;


**We can skip this step if plugging the PDB in on a different host.

The KEEP DATAFILES clause is used with the drop command so that the datafiles don't get dropped and can be re-used if plugging back into the same CDB.


Plugging PDB:

================

This is done at the destination. The compatibility of the PDB should be checked against the CDB where it will be plugged in.

The code below can be used; the output should be YES to go ahead further:

SET SERVEROUTPUT ON

DECLARE

compatible CONSTANT VARCHAR2(3) :=

CASE DBMS_PDB.CHECK_PLUG_COMPATIBILITY(

pdb_descr_file => '/u01/pdb1_db.xml',

pdb_name       => 'PDB1')

WHEN TRUE THEN 'YES'

ELSE 'NO'

END;

BEGIN

DBMS_OUTPUT.PUT_LINE(compatible);

END;

Plug in PDB in target CDB:

============================

The datafile location on the destination host/CDB can be the same or different. If the location is the same, we don't need to do anything; otherwise we need to use the FILE_NAME_CONVERT option of the CREATE PLUGGABLE DATABASE command to specify the new location of the datafiles.

sql> create pluggable database pdb1 using '/u01/pdb1_db.xml'  copy file_name_convert=('/u01/app/pdb1','/u01/db/pdb1');

sql> alter pluggable database pdb1 open;

***The COPY keyword is the default and copies the files to the new destination. If the destination is the same as the source, specify NOCOPY.

If you have copied the datafiles to a different location on the destination, you will need to edit the .xml file to specify the location where you copied them, so that during the plug-in phase the datafiles can be found at that location.

Tuesday, September 22, 2020

All about Autoconfig in oracle application:

adautocfg.sh >> Calls adconfig.sh, which in turn calls adconfig.pl.

adtmplreport.sh>>

[applmgr3@ ~]$ cd $AD_TOP/bin

[applmgr3@1 bin]$ ls adtmplreport.sh

[applmgr3@ bin]$ adtmplreport.sh contextfile=$CONTEXT_FILE target=/u03/applmgr3/inst/apps/PROD_xxx/ora/10.1.3/Apache/Apache/conf/httpd.conf.

adclonectx.pl >>To retrieve the applications tier context file ( if it is lost or deleted accidentally) , execute the following command on the applications tier :

perl <COMMON_TOP>/clone/bin/adclonectx.pl retrieve

adbldxml.sh >>Creates the Applications context file, e.g. .xml

adchkcfg.sh >> Script at $AD_TOP/bin. This script runs AutoConfig in test mode and creates a difference file which tells us what is going to change when you actually run AutoConfig.

It is used for analysis purposes. It generates a report that shows the differences between the original configuration files and the AutoConfig-generated configuration files.

Autoconfig will run in 3 phases.

1.INIT     – Instantiate the drivers and templates

2.SETUP   – Fill the templated with values from xml and create files

3.PROFILE  – Update the profile values in database.

Log/Files Location:

$INST_TOP/appl/admin >> context file location.

$INST_TOP/admin/log  >> autoconfig log file.

Restoring an AutoConfig run:

Partially possible. AutoConfig creates a restore script at $INST_TOP/admin/out/. This restore.sh copies the files backed up before the AutoConfig run back to their original locations. But the profile values updated in the database can't be restored back.

Customization:

The old way to do customization was to insert custom code between the Begin Customization and End Customization blocks in the XML file.

It was difficult to document and keep track of the customizations that were made. Now Oracle has come up with a better way to manage and maintain customizations.

Create the custom directory in $PROD_TOP/admin/template

$AD_TOP/bin/adtmplreport.sh contextfile=/appl/admin/.xml

target=$FND_TOP/admin/template/appsweb.cfg

The above command would return "$FND_TOP/admin/template/forms_web_1012_cfg.tmp", meaning the "appsweb.cfg" file was created from the source AutoConfig template $FND_TOP/admin/template/forms_web_1012_cfg.tmp.

Copy "forms_web_1012_cfg.tmp" into the custom directory created earlier and then edit it to your requirements.

Copy the template $FND_TOP/admin/template/forms_web_1012_cfg.tmp  to  $FND_TOP/admin/template/custom

Sunday, April 26, 2020

Where to Find Your Tenancy's OCID:

Get the tenancy OCID from the Oracle Cloud Infrastructure Console on the Tenancy Details page:

1.Open the navigation menu, under Governance and Administration, go to Administration and click Tenancy Details.
2.The tenancy OCID is shown under Tenancy Information. Click Copy to copy it to your clipboard.


The tenancy OCID looks something like this (notice the word "tenancy" in it):

ocid1.tenancy.oc1..<unique_ID>

Sunday, April 19, 2020

Identity and Access Management (IAM) in Oracle Cloud Infrastructure

IAM Service Components :

1.Resource:
A cloud object that your company's employees create and use when interacting with Oracle Cloud Infrastructure. Resources include compute instances, block storage volumes, virtual cloud networks (VCNs), subnets, and route tables.

2.User:
An individual employee or system that needs to manage or use your company's Oracle Cloud Infrastructure resources. Users might need to launch instances, manage remote disks, work with your virtual cloud network, and so on.

3.Group:
A collection of users who all need the same type of access to a particular set of resources
or compartment.

4.Compartment

5.Tenancy

6.Policy:
A document that specifies who can access which resources, and how. Access is granted at the group level and compartment level, which means that you can write a policy that gives a group a specific type of access within a specific compartment, or to the tenancy itself. If you give a group access to the tenancy, the group automatically gets the same type of access to all the compartments inside the tenancy. (Example policy statements are shown after this list.)

7.Home Region:
The region where your IAM resources reside. All IAM resources are global and available across all regions, but the master set of definitions resides in a single region, the home region. You make changes to your IAM resources in your home region, and the changes are automatically propagated to all regions. 
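As an illustration of the policy syntax described in item 6 above (a hedged sketch; the group names, verbs and compartment name are examples only):

Allow group NetworkAdmins to manage virtual-network-family in tenancy

Allow group ProjectA-Developers to use instance-family in compartment ProjectA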

Tuesday, March 17, 2020

Oracle Database Table Fragmentation:

In an Oracle schema you will sometimes find tables with a big difference between the actual size (from USER_SEGMENTS) and the expected size from USER_TABLES (num_rows * avg_row_len, in bytes). This happens due to fragmentation in the table, or because the stats for the table are not updated in USER_TABLES.

If only inserts happen on a table then there will not be any fragmentation. Fragmentation happens when we update/delete data in the table.
The space which gets freed up during non-insert DML operations is not immediately re-used (and sometimes may never get reused at all). This leaves behind holes in the table, which results in table fragmentation.

This comes down to how Oracle manages space for tables.

When rows are not stored contiguously, or if rows are split onto more than one block, performance decreases because these rows require additional block accesses.

When a lot of DML operations are applied on a table, the table becomes fragmented because DML does not release free space from the table below the HWM (High Water Mark).

When rows are inserted into the table, the high water mark (used blocks) of the table moves forward to accommodate the new rows. But during a delete operation, Oracle does not allow the high water mark to shift backward to decrease the table size and release the free space. Ideally, once we delete data from the table, the free space should be released or reused, but instead Oracle acquires new blocks to accommodate new rows for insertion, which leaves holes in the table.

Table fragmentation causes slowness and wastes space.
Fragmentation is a common issue in Oracle databases; it occurs due to excessive DML operations, such as inserts followed by updates and deletes.


Gather table statistics:

To check the exact difference between the table's actual size (DBA_SEGMENTS) and its stats size (DBA_TABLES): the difference between these values reports the actual fragmentation to the DBA. So we need up-to-date stats on the table stored in DBA_TABLES. Check the LAST_ANALYZED value for the table in DBA_TABLES; if this value is recent you can skip this step, otherwise gather the table stats to get updated values.

To find LAST_ANALYZED:
select table_name,last_analyzed,stale_stats from user_tab_statistics where table_name='&TABLE_NAME';

To find the table size:
select segment_name,bytes/(1024*1024*1024) from dba_segments where segment_name='&TABLE_NAME';
select sum(bytes)/1024/1024/1024 from dba_segments where segment_name='&TABLE_NAME';

EXEC dbms_stats.gather_table_stats(ownname => 'TABLE_OWNER', tabname => 'TABLE_NAME', method_opt=> 'for all indexed columns size skewonly', granularity => 'ALL', degree => 8 ,cascade => true,estimate_percent => 15);

To find actual table size, fragmented size and percentage of fragmentation in a table.:

set pages 50000 lines 32767;
select owner,
       table_name,
       round((blocks * 8), 2) || 'kb' "Fragmented size",
       round((num_rows * avg_row_len / 1024), 2) || 'kb' "Actual size",
       round((blocks * 8), 2) - round((num_rows * avg_row_len / 1024), 2) || 'kb',
       ((round((blocks * 8), 2) - round((num_rows * avg_row_len / 1024), 2)) /
       round((blocks * 8), 2)) * 100 - 10 "reclaimable space % "
  from dba_tables
 where table_name = '&table_Name'
   AND OWNER LIKE '&schema_name';
=========== 
 
select table_name,avg_row_len,round(((blocks*16/1024)),2)||'MB' "TOTAL_SIZE",
round((num_rows*avg_row_len/1024/1024),2)||'Mb' "ACTUAL_SIZE",
round(((blocks*16/1024)-(num_rows*avg_row_len/1024/1024)),2) ||'MB' "FRAGMENTED_SPACE",
(round(((blocks*16/1024)-(num_rows*avg_row_len/1024/1024)),2)/round(((blocks*16/1024)),2))*100 "percentage"
from all_tables WHERE table_name='&TABLE_NAME';

***If you find more than 20% fragmentation then you can proceed for de-fragmentation.

To find Top 10 fragmentation tables:
select *
      from (select table_name,
               round((blocks * 8), 2) "size (kb)",
               round((num_rows * avg_row_len / 1024), 2) "actual_data (kb)",
               (round((blocks * 8), 2) -
               round((num_rows * avg_row_len / 1024), 2)) "wasted_space (kb)"
          from dba_tables
         where (round((blocks * 8), 2) >
               round((num_rows * avg_row_len / 1024), 2))
         order by 4 desc)
 WHERE ROWNUM <= 10;



To find indexes on the table:
select index_name from dba_indexes where table_name='&TABLE_NAME';

To reset HWM / remove fragmentation:

There are three way to do this:

1. Alter table move (to another tablespace, or the same tablespace) and rebuild indexes (depends upon the free space available in the tablespace).
2. Export and import the table (difficult to implement in a production environment).
3. Shrink command (from Oracle 10g onwards; shrink is only applicable for tables in tablespaces with automatic segment space management).

Way1:

select index_name,status from dba_indexes  where table_name like '&table_name';

For same tablespace:

alter table <table_name> move;

For  new tablespace:

alter table <table_name> enable row movement;
alter table <table_name> move tablespace <new_tablespace_name>;

rebuild all the indexes:
alter index <INDEX_NAME> rebuild online;

Way2: 
Export and import the table.

Way3:
Shrink command:
Introduced in 10g; applicable to tables in tablespaces with automatic segment space management (ASSM). It is an online operation.

Row movement must be enabled:
alter table <table_name> enable row movement;

alter table <table_name> shrink space compact; >> Rearranges the rows and compacts the segment but does not move the HWM; DML can continue against the table during this step.

alter table <table_name> shrink space; >> Moves the HWM down and releases the unused space; DML is briefly blocked while the HWM is adjusted, and this step is very quick if a COMPACT was done first.

After completing these steps, table statistics must be gathered.

Monday, March 16, 2020

Oracle function-based index:

We will demonstrate with a table named department.

Statement to create an index on the first_name column of the department table:

CREATE INDEX department_i ON department(first_name);

If the first_name column is used as-is in the WHERE clause, the optimizer will definitely use the index:

select * from department WHERE first_name = 'Man';

If a function is applied to the indexed column first_name, as below, then the optimizer will not use the index.

select * from  department  WHERE UPPER(first_name) = 'MAN';

To overcome above concern, Oracle introduced function-based indexes:

A function-based index calculates the result of a function that involves one or more columns and stores that result in the index.

syntax of creating a function-based index:

CREATE INDEX index_name ON table_name (expression)

index expression can be an arithmetic expression or an expression that contains a function such as a SQL function, PL/SQL function, and package function.

*** function-based index can be a btree or bitmap index.
example:

CREATE INDEX department_i ON department(UPPER(first_name));

Here Oracle converts all values of the first_name column to uppercase and stores them in the index department_i.

Now the query below will use the index to fetch the data.

select * from  department  WHERE UPPER(first_name) = 'MAN';

Saturday, March 14, 2020

Oracle database Indexes

Index:
An index stores the values in the indexed column(s). And for each value the locations of the rows that have it. Just like the index at the back of a book.

Types of index:

B-tree vs. Bitmap:
B-tree:
By default indexes are B-tree. These are balanced, meaning that all the leaf nodes are at the same depth in the tree.

Bitmap:
A bitmap index is a two-dimensional array of zero and one (bit) values; it stores the indexing information in bit arrays. Queries are answered by performing logical bitwise operations on these bitmaps, which makes them very fast.
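For example, a hedged sketch of creating a bitmap index on a low-cardinality column (table and column names are illustrative):

CREATE BITMAP INDEX orders_status_bmx ON orders(status);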

Comparison of the two:
An advantage of bitmaps is that it's easy to compress all those ones and zeros, so a bitmap index is typically smaller than the same B-tree index.

Rows where all the indexed values are null are NOT included in a B-tree. But they are in a bitmap! So the optimizer can use a bitmap to answer queries like:
where indexed_column is null;

You can get around this with B-trees by adding a constant to the end of an index. This makes the following composite index:
create index enable_for_is_null_i on tab ( indexed_column, 1 );

Bitmap indexes may lead to concurrency issues however, possibly blocking other DML on the same table. Therefore these should be avoided in an OLTP applications.

Why are B-trees the default instead of bitmaps?

Killing write concurrency.

Bitmap indexes are one of the few situations in Oracle Database where an insert in one session can block an insert in another. This makes them questionable for most OLTP applications.

Whenever you insert, update or delete table rows, the database has to keep the index in sync. In a B-tree this happens by walking down the tree and changing the leaf entry as needed.

But bitmap locks the entire start/end rowid range! So say you add a row with the value RED. Any other inserts which try to add another row with the value RED to the same range are blocked until the first one commits!

This is an even bigger problem with updates. An update from a B-tree is really a delete of the old value and insert of the new one. But with a bitmap Oracle Database has to lock the affected rowid ranges for both the old and new values!

Function-based Indexes:

Unique Indexes:

Descending Indexes:

Concatenate or Composite index: 
It is an index on multiple columns in a table.

Global Partitioned Index:

Local Partitioned Index:

https://www.tutorialsteacher.com/sqlserver/indexes

Friday, March 13, 2020

Database Trace

Alter session set sql_trace=true;
alter system set trace_enabled=true;
execute dbms_support.start_trace_in_session(sid, serial#, binds=>true, waits=>true);
alter session set tracefile_identifier='577';
alter session set events '10046 trace name context forever, level 1';
alter session set events '10046 trace name context off';


select * from dba_enabled_traces;
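
On supported releases, the same extended trace can also be enabled through DBMS_MONITOR. A minimal sketch (the sid and serial# values are placeholders):

-- Enable extended SQL trace (waits + binds) for another session.
EXEC DBMS_MONITOR.SESSION_TRACE_ENABLE(session_id => 123, serial_num => 456, waits => TRUE, binds => TRUE);

-- Disable it when done.
EXEC DBMS_MONITOR.SESSION_TRACE_DISABLE(session_id => 123, serial_num => 456);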

Remove persistence

[applmgr@node101]$ rm -rf $INST_TOP/ora/10.1.3/j2ee/oacore/persistence/*
[applmgr@node101]$ rm -rf $INST_TOP/ora/10.1.3/j2ee/oafm/persistence/*
[applmgr@node101]$ rm -rf $INST_TOP/ora/10.1.3/j2ee/forms/persistence/*

Database Upgrade From 11.2.0.4 to 19.2.0.0 Using Manual Method:

1.Install new software for Oracle 19c. 

Make sure you have all the OS prerequisites in place by running the 19c preinstall package. 
yum install -y oracle-database-preinstall-19c
yum update -y
Install 19c Software
Run the root scripts when prompted.

2.Run preupgrade.jar

Put the latest “preupgrade.jar” into the 19c Oracle home.
Set your environment to the original (source) Oracle home and database, then run the "preupgrade.jar" from the new home:

$ $ORACLE_BASE/product/19.0.0/dbhome_1/jdk/bin/java -jar $ORACLE_BASE/product/19.0.0/dbhome_1/rdbms/admin/preupgrade.jar TERMINAL TEXT

$OLD_ORACLE_HOME/jdk/bin/java -jar $NEW_ORACLE_HOME/rdbms/admin/preupgrade.jar FILE DIR $OS_DIRECTORY


>The output from the “preupgrade.jar” lists a number of pre-upgrade recommendations. Some must be manually applied. Others are incorporated into the “preupgrade_fixups.sql” script. 

>The next step is to follow the guidelines displayed by the output, i.e. changing parameters, gathering stats, and running preupgrade_fixups.sql, then shutting down the source database, copying the SPFILE, and creating a new password file.

3.Upgrade the Database:

Start the database in STARTUP UPGRADE mode and run the upgrade.

# Regular upgrade command.
cd $ORACLE_HOME/rdbms/admin
$ORACLE_HOME/perl/bin/perl catctl.pl catupgrd.sql

$ORACLE_HOME/perl/bin/perl catctl.pl -n 6 -l /home/oracle catupgrd.sql

# Shorthand command.
$ORACLE_HOME/bin/dbupgrade

4.Post-Upgrade:

Run postupgrade_fixups.sql and perform the time zone adjustment.
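
For example, a typical post-upgrade pass (a sketch; the fixup script path comes from your preupgrade output) recompiles invalid objects and checks component status:

-- Recompile invalid objects left over from the upgrade.
@?/rdbms/admin/utlrp.sql

-- All components should report VALID for the new release.
SELECT comp_id, version, status FROM dba_registry;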

Database is now fully upgraded and ready to go.

5.Plug in the upgraded database as a PDB into a CDB

==============================
Assumptions
Prerequisites
Install 19c Software
Run preupgrade.jar
Perform Pre-Upgrade Actions
Upgrade the Database
Perform Post-Upgrade Actions
Create New Container Database (CDB)
Convert Non-CDB to PDB
Final Steps
===================
Upgrade Oracle 12.2.0.1.0 to Oracle 19.3

High Level steps :

  • Install Oracle 19c in the target server
  • Apply latest patch (not covered in this article)
  • Take RMAN and Export backup of 12c
  • EXECUTE DBMS_STATS.GATHER_DICTIONARY_STATS; in 12c database
  • PURGE DBA_RECYCLEBIN; in 12c database
  • Run preupgrade.jar
  • Wait until all materialized views have completed refreshing
  • Create a guaranteed restore point for flashback in case the upgrade fails (see the sketch after this list)
  • Check Timezone version
  • Take care of the Required and Recommended actions before upgrade
  • Stop LISTENER in 12c
  • Stop the 12c Database
  • Copy listener.ora, tnsnames.ora and sqlnet.ora to the 19c network/admin directory, and copy the password file to the 19c dbs directory
  • Copy init file to 19c dbs location
  • Start Listener in 19c location
  • Start the 19c DB in upgrade mode
  • Upgrade the Time Zone
  • Gather dictionary statistics after the upgrade
  • Execute the postupgrade fixups
  • Drop restore point
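
A minimal sketch of the restore point steps referenced above (the name is a placeholder; a guaranteed restore point needs ARCHIVELOG mode and a recovery area configured):

-- Before the upgrade, in the source (12c) home:
CREATE RESTORE POINT before_19c_upgrade GUARANTEE FLASHBACK DATABASE;

-- Only if the upgrade fails and you need to roll back:
-- SHUTDOWN IMMEDIATE;
-- STARTUP MOUNT;
-- FLASHBACK DATABASE TO RESTORE POINT before_19c_upgrade;
-- ALTER DATABASE OPEN RESETLOGS;

-- After the upgrade succeeds:
DROP RESTORE POINT before_19c_upgrade;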

#) Time zone file.
sqlplus / as sysdba <<EOF

-- Check current settings.
SELECT * FROM v$timezone_file;

SHUTDOWN IMMEDIATE;
STARTUP UPGRADE;

-- Begin upgrade to the latest version.
SET SERVEROUTPUT ON
DECLARE
  l_tz_version PLS_INTEGER;
BEGIN
  l_tz_version := DBMS_DST.get_latest_timezone_version;

  DBMS_OUTPUT.put_line('l_tz_version=' || l_tz_version);
  DBMS_DST.begin_upgrade(l_tz_version);
END;
/

SHUTDOWN IMMEDIATE;
STARTUP;

-- Do the upgrade.
SET SERVEROUTPUT ON
DECLARE
  l_failures   PLS_INTEGER;
BEGIN
  DBMS_DST.upgrade_database(l_failures);
  DBMS_OUTPUT.put_line('DBMS_DST.upgrade_database : l_failures=' || l_failures);
  DBMS_DST.end_upgrade(l_failures);
  DBMS_OUTPUT.put_line('DBMS_DST.end_upgrade : l_failures=' || l_failures);
END;
/

-- Check new settings.
SELECT * FROM v$timezone_file;

COLUMN property_name FORMAT A30
COLUMN property_value FORMAT A20

SELECT property_name, property_value
FROM   database_properties
WHERE  property_name LIKE 'DST_%'
ORDER BY property_name;

exit;
EOF


# 14) Gather fixed object stats.
sqlplus / as sysdba <<EOF
EXECUTE DBMS_STATS.GATHER_FIXED_OBJECTS_STATS;
exit;
EOF

# AUTOFIXUP
sqlplus / as sysdba <<EOF
@/u01/app/oracle/cfgtoollogs/db11g/preupgrade/postupgrade_fixups.sql
exit;
EOF

Convert Non-CDB to PDB:

Convert Oracle 19c Non-CDB to PDB

Execute DBMS_PDB.DESCRIBE

oracle@db21:/opt/oracle/product/19.0.0/dbhome_1/network/admin$ sqlplus sys as sysdba

Connected to:

Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production

Version 19.6.0.0.0

 SQL> shutdown immediate;

Database closed.

Database dismounted.

ORACLE instance shut down.

 SQL> startup mount;

ORACLE instance started.

Total System Global Area 2147478488 bytes

Fixed Size                  8874968 bytes

Variable Size            1258291200 bytes

Database Buffers          872415232 bytes

Redo Buffers                7897088 bytes

Database mounted.

 SQL> alter database open read only;

 Database altered.

 SQL> exec DBMS_PDB.DESCRIBE ('/export/home/oracle/dgpdevl.xml');

 PL/SQL procedure successfully completed.

 SQL> shutdown immediate;

Database closed.

Database dismounted.

ORACLE instance shut down.

 Plug in the database to existing Oracle 19c CDB

 SQL> show pdbs

    CON_ID CON_NAME                       OPEN MODE  RESTRICTED

---------- ------------------------------ ---------- ----------

         2 PDB$SEED                       READ ONLY  NO

 SQL> create pluggable database dgpdevl using '/export/home/oracle/dgpdevl.xml' nocopy tempfile reuse;

Pluggable database created.

SQL> show pdbs

 CON_ID CON_NAME                       OPEN MODE  RESTRICTED

---------- ------------------------------ ---------- ----------

         2 PDB$SEED                       READ ONLY  NO

         3 DGPDEVL                        MOUNTED

 SQL> alter pluggable database open;

 Pluggable database altered.

 Execute noncdb_to_pdb.sql script

 SQL> alter session set container=DGPDEVL;

 Session altered.

 SQL> @?/rdbms/admin/noncdb_to_pdb.sql

….

….

 13:47:57 SQL> set tab OFF

13:47:57 SQL> set termout ON

13:47:57 SQL> set time OFF

SQL> set timing OFF

SQL> set trimout ON

SQL> set trimspool ON

SQL> set underline "-"

SQL> set verify OFF

SQL> set wrap ON

SQL> set xmloptimizationcheck OFF

 Verify PDB plug-in operation via PDB_PLUG_IN_VIOLATIONS

 SQL> select con_id, type, message, status

  from PDB_PLUG_IN_VIOLATIONS

 where status <>'RESOLVED'

 order by time; 

    CON_ID TYPE      MESSAGE                                                                                              STATUS

---------- --------- ---------------------------------------------------------------------------------------------------- ---------

         4 WARNING   Character set mismatch: PDB character set US7ASCII. CDB character set AL32UTF8.                      PENDING

         4 WARNING   Database option APS mismatch: PDB installed version NULL. CDB installed version 19.0.0.0.0.          PENDING

         4 WARNING   Database option CATJAVA mismatch: PDB installed version NULL. CDB installed version 19.0.0.0.0.      PENDING

         4 WARNING   Database option CONTEXT mismatch: PDB installed version NULL. CDB installed version 19.0.0.0.0.      PENDING

         4 WARNING   Database option DV mismatch: PDB installed version NULL. CDB installed version 19.0.0.0.0.           PENDING

         4 WARNING   Database option JAVAVM mismatch: PDB installed version NULL. CDB installed version 19.0.0.0.0.       PENDING

         4 WARNING   Database option OLS mismatch: PDB installed version NULL. CDB installed version 19.0.0.0.0.          PENDING

         4 WARNING   Database option ORDIM mismatch: PDB installed version NULL. CDB installed version 19.0.0.0.0.        PENDING

         4 WARNING   Database option OWM mismatch: PDB installed version NULL. CDB installed version 19.0.0.0.0.          PENDING

         4 WARNING   Database option SDO mismatch: PDB installed version NULL. CDB installed version 19.0.0.0.0.          PENDING

         4 WARNING   Database option XML mismatch: PDB installed version NULL. CDB installed version 19.0.0.0.0.          PENDING

         4 WARNING   Database option XOQ mismatch: PDB installed version NULL. CDB installed version 19.0.0.0.0.          PENDING

 

 Verify PDB is open in READ WRITE mode

 SQL> conn / as sysdba

Connected.

SQL> show pdbs

    CON_ID CON_NAME                       OPEN MODE  RESTRICTED

---------- ------------------------------ ---------- ----------

         2 PDB$SEED                       READ ONLY  NO

         3 DGPDEVL                        READ WRITE NO
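
Optionally, to have the PDB open automatically whenever the CDB restarts, you can save its state (a one-line addition to the walkthrough; the PDB name is the one used above):

SQL> alter pluggable database DGPDEVL save state;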


https://isqlplus.com/oracle/upgrade/upgrading-oracle-database-11g-to-oracle-database-19c-non-cdb-to-cdb/

https://mikedietrichde.com/2019/07/23/database-migration-from-non-cdb-to-pdb-upgrade-plug-in-convert/

https://oracle-base.com/articles/19c/upgrading-to-19c#create-new-cdb

HOW TO FIND UNUSED INDEXES

Enable monitoring on an index:

ALTER INDEX index_name MONITORING USAGE;

Monitoring Query:

SELECT * FROM v$object_usage;

Disable monitoring of an index:

ALTER INDEX index_name NOMONITORING USAGE;

To enable monitoring on all indexes:

SET heading off
SET feedback off
SET echo off
SET pages 10000
SPOOL start_monitoring.sql
SELECT 'alter index ' || owner || '.' || index_name || ' monitoring usage;'
FROM dba_indexes
WHERE owner NOT IN ('SYS', 'SYSTEM');
SPOOL off
@start_monitoring.sql

To stop monitoring on all indexes:

SET heading off
SET feedback off
SET echo off
SET pages 10000
SPOOL stop_monitoring.sql
SELECT 'alter index ' || owner || '.' || index_name || ' nomonitoring usage;'
FROM dba_indexes
WHERE owner NOT IN ('SYS', 'SYSTEM');
SPOOL off
@stop_monitoring.sql

Query to find unused indexes:

SELECT * FROM dba_object_usage WHERE used = 'NO';

Important: if you analyze an index, it will be marked as "used".
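
To see which schema and table each unused index belongs to (a small extension of the query above; DBA_OBJECT_USAGE is available from 12c onwards):

SELECT owner, table_name, index_name, start_monitoring
FROM   dba_object_usage
WHERE  used = 'NO'
ORDER  BY owner, table_name, index_name;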

Tuesday, March 10, 2020

Oracle 19c Feature

ORACLE 19C NEW FEATURE COMPARE EXECUTION PLANS WITH DBMS_XPLAN.COMPARE_PLANS
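
A minimal sketch of COMPARE_PLANS, assuming the documented CURSOR_CACHE_OBJECT and PLAN_OBJECT_LIST plan sources; the two SQL IDs are placeholders:

SET LONG 1000000 PAGESIZE 0

SELECT DBMS_XPLAN.COMPARE_PLANS(
         reference_plan    => CURSOR_CACHE_OBJECT('7m5h0wf6stq9u', NULL),
         compare_plan_list => PLAN_OBJECT_LIST(CURSOR_CACHE_OBJECT('0hxm4k9q2bcd1', NULL)),
         type              => 'TEXT')
FROM   dual;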

A new feature in Oracle 19c is the ability to use DBCA to not only perform a remote clone of a PDB, but also to relocate a PDB from one Container Database to another.


ORACLE 19C NEW FEATURE AUTOMATIC FLASHBACK OF STANDBY DATABASE

ORACLE 19C NEW FEATURE AUTOMATIC INDEXING

  • DBCA silent mode improvements. Use DBCA in silent mode to perform the following tasks:
        Clone a remote PDB.
        Relocate a PDB to another container database (CDB).
        Create a duplicate of an Oracle Database.
  • Dry-run validation of an Oracle Clusterware upgrade.
  • Automatic flashback of a standby database when the primary is flashed back.
  • DML redirection from an Active Data Guard standby to the primary.
  • Network file retention and size limits.
  • Automatic indexing feature (see the DBMS_AUTO_INDEX sketch after this list).
  • Image-based Oracle client installation.
  • AutoUpgrade for an Oracle database.
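
For example, automatic indexing is driven by the DBMS_AUTO_INDEX package (a sketch; note the feature is only supported on certain platforms such as Exadata and Oracle Cloud):

-- Turn automatic indexing on, or switch to report-only mode.
EXEC DBMS_AUTO_INDEX.CONFIGURE('AUTO_INDEX_MODE', 'IMPLEMENT');
-- EXEC DBMS_AUTO_INDEX.CONFIGURE('AUTO_INDEX_MODE', 'REPORT ONLY');

-- Review what automatic indexing has done (defaults to the last 24 hours).
SELECT DBMS_AUTO_INDEX.REPORT_ACTIVITY() FROM dual;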



https://gavinsoorma.com.au/knowledge-base/19c-new-feature-dbca-pluggable-database-remote-clone-and-relocate/
https://oracle-base.com/articles/19c/multitenant-dbca-pdb-remote-clone-19c

ORA-00600: internal error code, arguments: [kjxmgmb_nreq:!bat]

Issue:
We are running a two-node RAC with database version 11.2.0.4. This morning, after a kernel update/host reboot, one of the instances (the 2nd instance) refused to start up due to the following error:

ORA-00600: internal error code, arguments: [kjxmgmb_nreq:!bat], [17], [56], [9], [], [], [], [], [], [], [], []
LMS0 (ospid: 24928): terminating the instance due to error 484
System state dump requested by (instance=2, osid=24928 (LMS0)), summary=[abnormal instance termination].

Fixes:

Temporary fix:

1. After another reboot (a complete instance bounce, including the database), the instance finally started.

Permanent fix:

2. Apply patch 20250147 - ORA-600 [KJXMGMB_NREQ:!BAT].


Bug 20250147  - ORA-600 [kjxmgmb_nreq:!bat] can occur in RAC crashing the instance (Doc ID 20250147.8)


ORA-600 [kjxmgmb_nreq:!bat] (Doc ID 2211714.1)

Main Fix:

The LMS processes are not running in the real-time (RT) scheduling class, and the same applies to VKTM.

In 10gR2 and above the LMS process is intended to run in the real time scheduling class. In some instances we have seen this prevented due to incorrect ownership or permissions for the oradism executable which is stored in the $ORACLE_HOME/bin directory. See Document 602419.1 for more details on this.

Check the file permissions of '$ORACLE_HOME/bin/oradism' in the database home.

The permissions for 'oradism' must be 6550 with the setuid and setgid bits set, owned by root, e.g. (-r-sr-s--- 1 root oinstall 129 12 Feb 16 2008 oradism).

 REFERENCE:  RAC and Oracle Clusterware Best Practices and Starter Kit (Platform Independent) (Doc ID 810394.1)

$ ls -l $ORACLE_HOME/bin/oradism
-rwxr-x--- 1 oracle orainst 71780 Sept 24  2011 $ORACLE_HOME/bin/oradism

Exachk reports incorrect oradism ownership and permission (Doc ID 2009788.1)

Also, the exachk must be run as root user. If not run as root user, warnings such as this can be flagged incorrectly.

Ensure root user was used to run exachk. The user will be reported in the top summary of the html report.

If reported on the db home, you can do the following to change permissions.

Run the following commands with root permission to resolve the issue.

#cd $ORACLE_HOME/bin/

#chown root oradism

#chmod u+s oradism

You can make the changes with the db open.