
Thursday, November 28, 2019

Simple Python programs:

Print multiplication table of a number:
===========================
num = int(input("Enter the number: "))
print("Multiplication Table of", num)
for i in range(1, 11):
    print(num, "X", i, "=", num * i)

Print the multiples of a number:
=================
num = int(input("Enter the number: "))
print("Multiplication Table of", num)
for i in range(1, 11):
    print(num * i)

Print the sum of the first n natural numbers:
=======================
num = int(input("Enter the value of n: "))
hold = num
total = 0  # renamed from "sum" to avoid shadowing the built-in sum()
if num <= 0:
    print("Enter a whole positive number!")
else:
    while num > 0:
        total = total + num
        num = num - 1
    print("Sum of first", hold, "natural numbers is:", total)
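As a cross-check, the loop result can be compared with Gauss's closed-form formula n(n+1)/2 (a sketch, not part of the original program):

```python
def sum_loop(n):
    # Same counting loop as above, wrapped in a function
    total = 0
    while n > 0:
        total += n
        n -= 1
    return total

def sum_formula(n):
    # Gauss's closed form for the sum of the first n natural numbers
    return n * (n + 1) // 2

for n in (1, 5, 10, 100):
    assert sum_loop(n) == sum_formula(n)
print(sum_formula(100))  # 5050
```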

Factorial of Number:
================
def factorial(num):
    """Recursive function that calls itself
    to find the factorial of the given number."""
    if num <= 1:
        return 1
    else:
        return num * factorial(num - 1)
num = int(input("Enter a Number: "))
if num < 0:
    print("Factorial cannot be found for negative numbers")
elif num == 0:
    print("Factorial of 0 is 1")
else:
    print("Factorial of", num, "is: ", factorial(num))
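For larger inputs, an iterative version avoids Python's recursion depth limit; this sketch verifies it against the standard library's math.factorial:

```python
import math

def factorial_iterative(num):
    # Iterative alternative to the recursive version above
    result = 1
    for i in range(2, num + 1):
        result *= i
    return result

for n in (0, 1, 5, 10):
    assert factorial_iterative(n) == math.factorial(n)
print(factorial_iterative(5))  # 120
```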

Convert decimal to binary:
====================
def decimalToBinary(num):
    # This function converts a decimal number to binary and prints it
    if num > 1:
        decimalToBinary(num // 2)
    print(num % 2, end='')
number = int(input("Enter any decimal number: "))
decimalToBinary(number)
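A quick way to sanity-check the conversion is to build the binary string instead of printing it, then compare with Python's built-in bin() (a sketch, not part of the original program):

```python
def to_binary(num):
    # Same recursion as decimalToBinary, but returns a string
    # so the result can be compared with bin()
    if num > 1:
        return to_binary(num // 2) + str(num % 2)
    return str(num % 2)

for n in (1, 2, 10, 255):
    assert to_binary(n) == bin(n)[2:]
print(to_binary(10))  # 1010
```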







Monday, November 4, 2019

STARTUP Database failed ORA-38760 to turn on Flashback Database:

Error:

SQL> alter database open;
alter database open
*
ERROR at line 1:
ORA-38760: This database instance failed to turn on flashback database

Solution:
There are 2 possible solutions when a flashback log file was deleted manually:
(1) Turn off Flashback:
SQL> alter database flashback off;

After this, turn flashback on again if required.
SQL> alter database flashback on;

(2) In cases where a "Guaranteed Restore Point" is defined, RVWR will still try to write to the flashback log even though you have disabled it, and it will fail again with the same error:

Check whether flashback database is disabled. When the flashback status is checked after disabling flashback, it shows 'RESTORE POINT ONLY' instead of 'NO'.

SQL>  select flashback_on from v$database;

FLASHBACK_ON
------------------
RESTORE POINT ONLY

FLASHBACK_ON - possible values are as follows:

    YES                - Flashback is on
    NO                 - Flashback is off
    RESTORE POINT ONLY - Flashback is on but one can only flashback to guaranteed restore points

So the cause of the issue is a Guaranteed Restore Point created on the database.
The database still tries to write flashback data to the current flashback log because it has at least one Guaranteed Restore Point declared in the controlfile.

Find out the name of the Guaranteed Restore Point and drop it, so that the database does not try to write to the flashback log on startup:

There are 3 ways to find the restore point name:

2.1) Check the name from the v$restore_point view (this may also fail with the same error):

SQL> select * from v$restore_point;
SQL>select NAME,SCN,GUARANTEE_FLASHBACK_DATABASE,DATABASE_INCARNATION# from v$restore_point;

If this also errors out, try the next option.

2.2) Search for the restore point name in the alert log. In this case the customer was purging the alert log at the start of every year, so the restore point name could not be found there.

2.3) Dump the controlfile to get the restore point name:

SQL> oradebug setmypid
SQL> alter session set events 'immediate trace name controlf level 9';
SQL> oradebug tracefile_name

From the trace file of the controlfile dump, we can see the following information:

RESTORE POINT #1:
 restore point name: <restore point test>  guarantee flag: 1  incarnation: 2  next record 0        <<<< Name of restore point

Now we have name of Guaranteed Restore Point:

SQL> Drop restore point <restore point test>;

SQL> alter database open;

***This error may also occur if the FRA is full and thus a flashback log cannot be created. Check v$flash_recovery_area_usage and/or v$recovery_file_dest for space availability in the FRA.

Reference doc id:
STARTUP Database failed ORA-38760 to turn on Flashback Database (Doc ID 1554596.1)

Removing Managed Server Instance

The process is similar to adding a managed server.

It is done using the adProvisionEBS.pl and txkSetAppsConf.pl scripts.

Execute adProvisionEBS.pl with the option ebs-delete-managedserver. This will delete the managed server and also update the respective context variables.

Run txkSetAppsConf.pl with the option configoption=removeMS. This removes details of the deleted managed server from the OHS configuration files mod_wl_ohs.conf and apps.conf.

Restart the HTTP server using the adapcctl.sh script.

Managing Managed Server Configurations:

By default, the application tier has only a single instance each of the oacore, oafm, forms and forms-c4ws services.

To increase WebLogic performance, we can add more managed servers.

Use the run filesystem.

There should not be any active ADOP cycle.

Managed server addition and removal should be done only through the scripts adProvisionEBS.pl and txkSetAppsConf.pl.

Never use the WebLogic Administration Console for this.

1. Execute the following command to add a new managed server. This will create a managed server and add a new entry to the context file for starting and stopping the new managed server via the adstrtal and adstpall scripts.

$ perl <AD_TOP>/patch/115/bin/adProvisionEBS.pl \

ebs-create-managedserver -contextfile=<CONTEXT_FILE> \

-managedsrvname=<MANAGED_SERVER_NAME> -servicetype=<SERVICE_TYPE> \

-managedsrvport=<MANAGED_SERVER_PORT> -logfile=<LOGFILE>

$ perl <AD_TOP>/patch/115/bin/adProvisionEBS.pl \

ebs-create-managedserver -contextfile=<CONTEXT_FILE> \

-managedsrvname=oacore_server2 -servicetype=oacore \

-managedsrvport=7203 -logfile=<APPLRGF>/TXK/addMS_oacoreserver2.log

After the script completes, the new managed server oacore_server2 is created.

2. Perform the following steps on all application tier nodes participating in the same cluster where this managed server is added:

Source the run file system.

Execute the following command to add details of the newly added managed servers into the OHS configuration files mod_wl_ohs.conf and apps.conf on the current node:

$ perl <FND_TOP>/patch/115/bin/txkSetAppsConf.pl \

-contextfile=<CONTEXT_FILE> \

-configoption=addMS \

-oacore=<host>.<domain>:<port> \

-oafm=<host>.<domain>:<port> \

-forms=<host>.<domain>:<port> \

-formsc4ws=<host>.<domain>:<port>

where

The argument contextfile accepts the complete path to the context file.

The arguments oacore, oafm, forms, formsc4ws accept a comma-separated list of managed server details in the following format:

<host>.<domain>:<port>

host and domain are the hostname and domain name of the newly added node; port is the port of the new managed server whose reference is to be added.

For example, if the managed server oacore_server2 has been added on host 'testserver' and domain 'example.com' with port 7205, the following command should be executed: 

$ perl <FND_TOP>/patch/115/bin/txkSetAppsConf.pl -contextfile=<CONTEXT_FILE> \

-configoption=addMS -oacore=testserver.example.com:7205

3. perl $AD_TOP/bin/adSyncContext.pl -contextfile=$CONTEXT_FILE

This script is used to explicitly pull the values of the WebLogic Server and Oracle HTTP Server configuration parameters and synchronize the corresponding context variable values accordingly.

4. Start up the new managed server:

$ sh <ADMIN_SCRIPTS_HOME>/admanagedsrvctl.sh start <MANAGED SERVER NAME>

5. If Oracle HTTP Server is enabled on the node, restart it as follows:

On UNIX:

$ sh <ADMIN_SCRIPTS_HOME>/adapcctl.sh stop

$ sh <ADMIN_SCRIPTS_HOME>/adapcctl.sh start

6. Run fs_clone:

During the next adop prepare, the Configuration Change Detector identifies that the addition has been made and the managed servers are automatically synced up from the run file system to the patch file system. The synchronization also gets done when fs_clone is executed.

adop cleanup phase hung forever:

Error:

ORA-04020: deadlock detected while trying to lock object SYS.DBMS_SYS_SQL SQL: begin sys.ad_grants.cleanup; end;


Solution:
Connect to SQL*Plus as APPS:

SQL> select count(1)
  from dba_tab_privs
  where table_name='DBMS_SYS_SQL'
  and privilege='EXECUTE'
  and grantee='APPS';

If the query returns a non-zero count, run the following command: "exec sys.ad_grants.cleanup;"

SQL> select count(1)
  from dba_tab_privs
  where table_name='DBMS_SYS_SQL'
  and privilege='EXECUTE'
  and grantee='APPS';

Verify that the count is now 0.

Doc id: Adop Cleanup Issue: "[ERROR] [CLEANUP] ORA-04020: deadlock detected " (Doc ID 2424333.1)


Sunday, November 3, 2019

database character set

A character set determines what languages can be represented in the database.
Oracle recommends Unicode AL32UTF8 as the database character set. Unicode is the universal character set that supports most of the currently spoken languages of the world.

As AL32UTF8 is a multibyte character set, database operations on character data may be slightly slower when compared to single-byte database character sets, such as WE8MSWIN1252.


AL16UTF16 is only available as a choice for the National Character Set. If you want the database itself to be Unicode you can choose from UTF8 and AL32UTF8. AL32UTF8 is generally preferred because of its ongoing support of the Unicode Standard.

In most cases (about 99% of the time), if you are using AL32UTF8 as your database character set, you won't make use of your national character set at all, so it doesn't matter.

If your database character set were AL32UTF8, there would be no advantage of using UTF-8 as your national character set. Occasionally, there may be a benefit to using UTF-16 for your national character set because that would potentially allow you to store Asian characters in less space (most Asian characters require 3 bytes to encode in UTF-8 and 2 bytes to encode in UTF-16).
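The byte-count difference can be demonstrated in a few lines of Python (using one sample CJK character; exact sizes vary by character):

```python
# Most CJK characters take 3 bytes in UTF-8 but only 2 in UTF-16
ch = "\u4e2d"  # the CJK character for "middle"
utf8_len = len(ch.encode("utf-8"))
utf16_len = len(ch.encode("utf-16-be"))  # -be variant avoids the 2-byte BOM
print(utf8_len, utf16_len)  # 3 2
```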

select * from nls_database_parameters where parameter='NLS_CHARACTERSET';

SELECT * FROM NLS_DATABASE_PARAMETERS;

SELECT UNIQUE VALUE FROM V$NLS_VALID_VALUES WHERE PARAMETER ='CHARACTERSET' AND isdeprecated = 'FALSE';

SELECT NLS_CHARSET_ID('WE8MSWIN1252') FROM dual;

SELECT * FROM nls_database_parameters WHERE parameter LIKE '%CHARACTERSET%';


Saturday, November 2, 2019

Oracle DBA Interview Questions

Question related to Oracle database:

What is an Oracle Instance?
What information is stored in Control File?

When you start an Oracle DB which file is accessed first?

What is RDBMS?


Question related to Memory:

What is the Job of SMON, PMON processes?
What is a Checkpoint?
Which Process reads data from Datafiles?
Which Process writes data in Datafiles?
What is a Shared Pool?
What is the PGA_AGGREGATE_TARGET parameter?
Large Pool is used for what?
What is DB Buffer Cache Advisor?
Can you change SHARED_POOL_SIZE online?
When creating a new database, how would you handle the memory allocation?
Name some memory areas Oracle uses, and state their purpose?
What is difference between Multithreaded/Shared Server and Dedicated Server?
How does 32-bit Oracle compare with 64-bit in terms of memory restrictions?
What are the effects of using bind variables?
How would you ensure that a table’s blocks stayed in the SGA?
Why would a DBA pin packages into memory?
Explain what the parameter session_cached_cursors is used for?
What is kept in the Database Buffer Cache?


Question related to Transactions:

Explain how Oracle achieves read consistency?
Explain an ORA-01555 / Snapshot too old error and how you might prevent them?
What happens when a commit is issued?

A commit is an “end transaction” marker and makes permanent all changes performed in the transaction. A transaction is a sequence of SQL statements that Oracle Database treats as a single unit.

What happens when a rollback happens?




Some Backup and Recovery Questions:


What is written in Redo Log Files?
What is the maximum number of Redo Logfiles one can have in a Database?
Explain the differences between an export, a hot backup and a cold backup and the benefits associated with each?
What is COMPRESS and CONSISTENT setting in EXPORT utility?
What is the difference between Direct Path and Conventional Path loading?
What files are required to be included in a complete hot/online backup?
Explain the differences between ARCHIVELOG mode and NOARCHIVELOG mode and the benefits and disadvantages of each?
Can you take partial backups if the Database is running in NOARCHIVELOG mode?
Can you take Online Backups if the database is running in NOARCHIVELOG mode?
How do you bring the database in ARCHIVELOG mode from NOARCHIVELOG mode?
You cannot shut down the database even for a few minutes. In which mode should you run the database?
How can you backup a control file?
What are redo logs? Explain their importance?
What happens during an instance recovery?
Explain what effect “NOLOGGING” commands have on the database? How would this affect standby databases and point in time recovery?
A large amount of data has been deleted by a user of an application. The data can’t be re-entered via the application because it’s not all available. What options would you investigate?
A datafile has been deleted while the database was open and being used. What would you do?
Scenario: An RMAN backup is multiplexing all of the datafiles to one tape drive. (Filesperset>1 channels >1). What are the implications? How would you improve the situation?
Can you import objects from Oracle ver. 7.3 to 9i?
Scenario: A busy database only has a full backup to tape each day using RMAN. 250 GB of archivelogs are generated per day. What are the implications? How would you improve the situation?
Where should you place Archive logfiles, in the same disk where DB is or another disk?
Can you take online backup of a Control file if yes, how?
What is a Logical Backup?
Should you take the backup of Logfiles if the database is running in ARCHIVELOG mode?
Why do you take tablespaces in Backup mode?
What is the advantage of RMAN utility?
How RMAN improves backup time?
Can you take Offline backups using RMAN?
How do you see information about backups in RMAN?
What is a Recovery Catalog?
Should you place Recovery Catalog in the Same DB?
Can you use RMAN without Recovery catalog?
Can you take Image Backups using RMAN?
Can you use Backupsets created by RMAN with any other utility?
Where RMAN keeps information of backups if you are using RMAN without Catalog?
You have taken a manual backup of a datafile using the OS. How will RMAN know about it?
You want to retain only last 3 backups of datafiles. How do you go for it in RMAN?
Which is more efficient Incremental Backups using RMAN or Incremental Export?
Can you start and shutdown DB using RMAN?
How do you recover from the loss of datafile if the DB is running in NOARCHIVELOG mode?
You lose one datafile and it does not contain important objects. The important objects are in other datafiles which are intact. How do you proceed in this situation?
You lost some datafiles and you don’t have any full backup, and the database was running in NOARCHIVELOG mode. What can you do now?
How do you recover from the loss of a datafile if the DB is running in ARCHIVELOG mode?
You lose one datafile and the DB is running in ARCHIVELOG mode. You have a full database backup that is 1 week old and a partial backup of this datafile which is just 1 day old. From which backup should you restore this file?
You lost the controlfile. How do you recover from this?
The current logfile gets damaged. What can you do now?
What is a Complete Recovery?
What is Cancel Based, Time based and Change Based Recovery?
Some user has accidentally dropped one table and you realize this after two days. Can you recover this table if the DB is running in ARCHIVELOG mode?
Do you have to restore Datafiles manually from backups if you are doing recovery using RMAN?
A database is running in ARCHIVELOG mode since last one month. A datafile is added to the database last week. Many objects are created in this datafile. After one week this datafile gets damaged before you can take any backup. Now can you recover this datafile when you don’t have any backups?
How do you recover from the loss of a controlfile if you have backup of controlfile?
Only some blocks are damaged in a datafile. Can you just recover these blocks if you are using RMAN?
Some datafiles were there on a secondary disk and that disk has become damaged and it will take some days to get a new disk. How will you recover from this situation?
Have you faced any emergency situation? Tell us how you resolved it?
At one time you lost the parameter file accidentally and you don’t have any backup. How will you recreate a new parameter file with the parameters set to their previous values?


Some Lock and Latches Questions:

What are latches used for?

What happens when a session has to wait for a latch?

Explain what deadlocks are and how to avoid them?

When do indexes help to reduce lock contention?

Some Storage Questions:

What is Automatic Management of Segment Space setting?

How do you control number of Datafiles one can have in an Oracle database?

How many Maximum Datafiles can there be in an Oracle Database?

Can you make a Datafile auto extendible? If yes, how?

What is a Locally Managed Tablespace?

Can you redefine a table Online?

Explain the relationship between a data block, an extent and a segment?

What is the difference between Range Partitioning and Hash Partitioning?

What is the difference between a TEMPORARY tablespace and a PERMANENT tablespace?

How do you move tables from one tablespace to another tablespace?

How do you add a datafile to a tablespace?

How do you resize a datafile?

What view would you use to look at the size of a datafile?

What view would you use to determine free space in a (non-tempfile) tablespace?

What is PCT Increase setting?

Describe what the PCT_FREE setting does.

What are chained rows and migrated rows? What could you do to limit their effect on performance?

Some Performance Questions:


When should you rebuild indexes?

Can you build indexes online?

Can you see Execution Plan of a statement?

How would you determine what sessions are connected and what resources they are waiting for?

How can you gather statistics on a table?

How often should you collect statistics for a table?

How do you collect statistics for a table, schema and Database?

Can you make collection of Statistics for tables automatic?

How can you enable a trace for a session?

How would you go about generating an EXPLAIN plan?

How would you go about increasing the buffer cache hit ratio?

Name some Oracle performance tools or sources of statistics?

How would you begin investigating a performance problem, beginning from a user’s complaint?

What is Cost Based Optimization?

What is STATSPACK tool?

Can you assign Priority to users?

Some Data Warehouse Questions:

What characterises a data warehouse as opposed to an OLTP database?

Which Oracle features are commonly useful to data warehouse databases?

Briefly describe how each of the features from the previous question work?

What tools might be used for ETL?


 Some Development Support Questions:


List and describe typical environments used in the development life of an application? (e.g. Production, Development, QA, etc.)

How would you manage the promotion of changes through the environments to ensure success when released into production?

How can a DBA help developers to design an application?

Explain the differences between a FUNCTION, PROCEDURE, PACKAGES and anonymous PL/SQL blocks?

After an import of a schema has finished, what should you check before notifying the users that the database can be used?

Give two types of referential integrity constraints?

How would you create a copy of a database with new name on a different server?

You have just compiled a PL/SQL package but got some errors. How would you view the errors?

On which columns you should create Indexes?

What types of Indexes are available in Oracle?

What is B-Tree Index?

A table has only a few rows. Should you create indexes on this table?

A column has many repeated values. Which type of index should you create on this column, if you have to?

What is an Index Organized Table?

What is a Global Index and Local Index?


Some Questions about RAC:

Briefly describe what RAC is, and what the intended benefits are?

What are two essential RAC hardware components?

What are two methods of RAC load balancing?

What does cache fusion mean?

What is the voting disk for?

What is the hangcheck timer for in Linux?

What performance issues should be considered when migrating an application to RAC?

Some Replication Questions:

Describe some types of Oracle Replication?

What is the difference between a complete refresh and a fast refresh?


Some Unix/Linux Questions:

How do you see how many instances are running?

How do you automate starting and shutting down of databases in UNIX?

How do you list the files in a UNIX directory while also showing hidden files?

You have written a script to take backups. How do you make it run automatically every week?

How do you execute a UNIX command in the background?

What is OERR utility?

How do you see Virtual Memory Statistics in Linux?

How do you see how much hard disk space is free in Linux?

What is SAR?

What is SHMMAX?

How large should the swap partition be relative to the size of RAM?

What is DISM in Solaris?

How do you see how many memory segments are acquired by Oracle Instances?

How do you see which segment belongs to which database instances?

What is VMSTAT, IOSTAT and NETSTAT?

How would you change all occurrences of a value using vi?

How do you set Kernel Parameters in Red Hat Linux, AIX and Solaris?

Give two kernel parameters that affect an Oracle install?

How do you remove Memory segments?

What is the difference between Soft Link and Hard Link?

What is stored in oratab file?

What unix command will control the default file permissions when files are created?

Give the command to display space usage on the UNIX file system?

Explain the read, write, and execute permissions on a UNIX directory?

How do you see how many processes are running in UNIX?

How do you kill a process in UNIX?

Can you change priority of a Process in UNIX?

Briefly, how do you install Oracle software on UNIX?

Miscellaneous Questions:


How do you delete duplicate rows in a table?

Which default Database roles are created when you create a Database?

Name three files that can be used to configure SQL*Net for server and client?

What problems are associated with the use of generic user ids?

Where would you look for errors from the database engine?

An application is generating an ORA-00600 error in the database. What would you do about it?

Give two examples of how you might determine the structure of the table “DEPT”?

What is the difference between a pfile and an spfile?

Compare TRUNCATE and DELETE for a table?

Give the stages of instance startup to a usable state where normal users may access it?

When creating a user, what are the minimum permissions that must be granted to allow them to connect to the database?

How would you find out what a particular pfile does?

What methods can you use to compact an index?

A table has been identified as needing reorganization. How would you plan for it? How would you implement it?

A user can’t log in to a remote Oracle database via an application using SQL*Net from their PC. What would you do to determine the cause?

A database is not shutting down 15 minutes after the shutdown immediate command was issued. What would you do?

An error appears about a table that can’t extend due to insufficient space in the tablespace. What would you do?

You have been asked to write the requirements for an automatic “health check” system for a database. What things would you include?

You have been presented with a UNIX database server, and have been logged in as the root user. How would you find the databases and obtain access to them?

Can you audit SELECT statements?

What does DBMS_FGA package do?

You want users to change their passwords every 2 months. How do you enforce this?

Can you disable and enable Primary key? 

Monday, October 21, 2019

OMF (Oracle Managed Files)

OMF (Oracle Managed Files) is a database feature that simplifies tablespace creation. Using OMF, Oracle automatically assigns datafile names to tablespaces and automatically creates and deletes the required datafiles at the operating-system level. It was introduced in Oracle 9i.

These are the parameters for this feature:

db_create_file_dest and db_create_online_log_dest_n.

db_create_file_dest specifies where the datafiles will be placed.
db_create_online_log_dest_n specifies where the online logs will be placed. Use 1, 2, 3 in place of n when you want to have 3 members in each group.

Example:
db_create_file_dest='/oracle/oradata'
db_create_online_log_dest_1='/oracle/oradata'
db_create_online_log_dest_2='/oracle/oradata'

How to convert NON-OMF to OMF files ( Oracle Managed File conversion - NON-ASM ) (Doc ID 2182089.1)

Wednesday, October 16, 2019

demilitarized zones (DMZ)

A DMZ is simply a place that is under your control but outside of your internal network. It is a place to put servers that you want outsiders to reach without letting them get to your internal network. By limiting access, you can also fine-tune monitoring.

However, you may want to expose a portion of these internal applications outside your enterprise for non-employee users such as customers and vendors. For example, Oracle E-Business Suite (the flagship Oracle ERP application) has modules like iRecruitment, iSupplier, iStore, etc.


In Oracle Applications, how the nodes (FND_NODES) are exposed:

>To expose a node to the public, use the profile “Node Trust Level”.

>Set the node to Public/Private (Normal -> private, External -> public).

>Set the “Responsibility Trust Level” profile to decide whether to expose an Application Responsibility inside or outside the firewall.

Internal Applications Tier
The internal applications tier is the server configured for internal users to access Oracle E-Business Suite. It runs the following major application services:

Web and Forms Services
WebLogic Administration service, Node Manager, Oracle HTTP Server, OPMN, WebLogic Managed Servers
Concurrent Manager Services
Reports and Discoverer Services
External Applications Tier
The external applications tier is the server configured for external users for accessing Oracle E-Business Suite. It runs the following application service:

Oracle HTTP Server
WebLogic components like node manager, managed servers etc.

OPMN
Oracle Process Manager and Notification Server (OPMN) is installed and configured on every tier designated to run the web application.  OPMN provides an integrated way to manage all Oracle Application Server components.  OPMN consists of two main pieces:  the Process Manager and the Notification Server. The Process manager (PM) is the centralized process management mechanism in Oracle Application Server and is used to manage the Oracle HTTP Server. The PM starts, restarts, stops, and monitors every process it manages. It also performs death-detection and automatic restart of the processes. Oracle Notification Server (ONS) is the transport mechanism for failure, recovery, startup, and other related notifications between components in Oracle Application Server.

OHS
Oracle HTTP Server (OHS) is installed and configured on every tier that is designated to run the web application. It provides the key infrastructure required for serving the static and dynamic content generated by Oracle E-Business Suite products.

WebGate is an out-of-box client which enforces OAM policies on HTTP resources. Typically, it is installed on a webserver like Apache and traps all incoming HTTP traffic before it hits core Apache. In this fashion, WebGate can enforce OAM policies on the HTTP resources residing on the HTTP server.

Oracle id related reference:
Oracle E-Business Suite 11i Configuration in a DMZ (Metalink Note 287176.1)
Oracle E-Business Suite Release 12 Configuration in a DMZ (Metalink Note 380490.1)
Oracle E-Business Suite Release 12.2 Configuration in a DMZ (Doc ID 1375670.1)


Sunday, October 6, 2019

Recover a table:

In previous releases point in time recovery of a table or table partition was only possible by manually creating a point in time clone of the database.

For TSPITR, we need to specify the auxiliary destination where RMAN will create a temporary database by restoring the controlfile, SYSTEM tablespace, UNDO tablespace, SYSAUX tablespace and, finally, the tablespace that needs to be recovered.

Once the temporary database is restored and recovered, RMAN automatically exports the contents of the tablespace to be recovered from the temporary database to a dump file and imports this dump into the target (main) database where the table was dropped.


rman target /
RMAN> run
2> {
3> recover tablespace MYTS until time "to_date('2013-05-04:11:38:00','YYYY-MM-DD:HH24:MI:SS')" auxiliary destination '+FRA';
4> }


Once the import is done successfully, RMAN automatically deletes the temporary database that it had created earlier.

Now, let's connect to the main database and check whether we can access the dropped table. But before that, you need to bring the tablespace online.

Recovering a dropped table with flashback: The recyclebin feature introduced in Oracle 10g allows you to recover dropped tables using the flashback table ... to before drop command. With recyclebin, Oracle does not automatically delete dropped tables. Instead, Oracle renames dropped tables and their associated objects, giving them system-generated recyclebin names that begin with BIN$.

SQL> select object_name, original_name, type from recyclebin;

OBJECT_NAME                     ORIGINAL_NAME            TYPE
------------------------------  -----------------------  -------
BIN$ABCSD5                      TESTING                  TABLE

SQL> select * from "BIN$ABCSD5";
SQL> flashback table testing to before drop;
SQL> select index_name from user_indexes where table_name = 'TESTING';
SQL> alter index "recyclebin_name" rename to original_name;


New in Oracle 12c is the Multitenant Architecture, as well as the ability to restore one table from an RMAN backup.


RECOVER TABLE TEST.T1 UNTIL SCN 1853267 AUXILIARY DESTINATION '/u01/aux' REMAP TABLE 'TEST'.'T1':'T1_PREV';
recover table hari.emp until time "to_date('2018-08-14 21:53:42','yyyy-mm-dd:hh24:mi:ss')" auxiliary destination '/u01/fra';


RMAN> run {
recover table example.test_restore of pluggable database PDB2
until time "to_date('12-01-2016 17:16:00','mm-dd-yyyy hh24:mi:ss')"
auxiliary destination '/oradata/CDB2/aux';
}

=================

1.Flashback features:

SHOW RECYCLEBIN;
flashback table <table> to before drop;
FLASHBACK TABLE HR.REG_HIST TO BEFORE DROP;

2.Database Point-in-Time Recovery:

3. Tablespace point-in-time recovery (TSPITR) :

4. Oracle 12c Table Point-in-time Recovery:

rman target /

RMAN> recover table emp.emp until time "to_date('2010-05-16 02:21:27','yyyy-mm-dd hh24:mi:ss')" auxiliary destination '/u02/RC_BKP';

RMAN> recover table mm.test until time "to_date('00/17/2013 21:01:15','mm/dd/yyyy hh24:mi:ss')" auxiliary destination '/u03/app/oracle/ux' remap table mm.test:test_temp;

Redo logs Details:

Oracle writes to only one online redo log group at a time. Once the online redo log(s) in that group are filled, Oracle switches to writing the next online redo log group, and so on in a circular fashion.

Each online redo log is assigned a unique sequence number.

We can multiplex each online redo log group. This means that each redo log group can consist of more than one online redo log file. Each file is known as a member. Each member should be located on a different disk, to protect the group from losing all of its members in the event a disk failure should occur. Oracle writes to those members in parallel, to ensure that the database is always recoverable while maintaining performance.

The online redo logs are first created when the database is created, and the database cannot live without them. If all members of the active redo log group are lost, the database crashes and, worse yet, there will be data loss.

Redo logs can have the following statuses (select status from v$log;):

CURRENT – Current redo log. This implies that the redo log is active. The redo log could be open or closed.
ACTIVE – Log is active but is not the current log. It is needed for crash recovery. It may be in use for block recovery. It may or may not be archived.
INACTIVE – Log is no longer needed for instance recovery. It may be in use for media recovery. It might or might not be archived.
CLEARING – Log is being re-created as an empty log after an ALTER DATABASE CLEAR LOGFILE statement. After the log is cleared, the status changes to UNUSED.
CLEARING_CURRENT – Current log is being cleared of a closed thread. The log can stay in this status if there is some failure in the switch such as an I/O error writing the new log header.
UNUSED – Online redo log has never been written to. This is the state of a redo log that was just added, or just after a RESETLOGS, when it is not the current redo log.
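The circular reuse of redo log groups described above can be modelled in a few lines of Python (an illustrative sketch, not Oracle internals): the group number wraps around while the sequence number only ever grows.

```python
# Toy model of circular redo group reuse: one CURRENT group at a time,
# each log switch assigns the next sequence number and moves on cyclically.

def simulate_switches(n_groups, n_switches):
    seq = 1        # each redo log gets a unique, ever-increasing sequence#
    current = 0    # index of the group currently being written (CURRENT)
    history = []
    for _ in range(n_switches):
        history.append((current + 1, seq))     # (group#, sequence#)
        seq += 1
        current = (current + 1) % n_groups     # wrap around: circular writing
    return history

h = simulate_switches(3, 5)
# group numbers cycle 1,2,3,1,2 while sequence numbers keep increasing
assert [g for g, _ in h] == [1, 2, 3, 1, 2]
assert [s for _, s in h] == [1, 2, 3, 4, 5]
```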

Alter system switch logfile;

Switch the logfile to change its STATUS from CURRENT to ACTIVE or INACTIVE.

"alter system checkpoint" makes the current active redo log file members inactive by flushing dirty buffers to the data files and recording the checkpoint SCN in the data file headers and control file, so that the group can then be dropped.

ALTER SYSTEM SWITCH LOGFILE vs ALTER SYSTEM ARCHIVE LOG CURRENT:

Both force a log switch, but they behave differently.
ALTER SYSTEM ARCHIVE LOG CURRENT is the preferred command for backup scripts.

The reason: when you issue ALTER SYSTEM SWITCH LOGFILE, you get the prompt back immediately. This command performs a database checkpoint, switches to the next log, signals the archiver in the background to copy the logfile to the archive destination, and returns the prompt. It does not wait for the archive to complete.

Whereas ALTER SYSTEM ARCHIVE LOG CURRENT does all of the above but does not return the prompt until the archive is complete. This command is safer because it waits for the OS to acknowledge (ACK) that the redo log has been successfully written.

If you are running RAC, ALTER SYSTEM ARCHIVE LOG CURRENT will switch the logs on all RAC nodes (instances), whereas ALTER SYSTEM SWITCH LOGFILE will only switch the logfile on the instance where you issue the command. Hence, ALTER SYSTEM ARCHIVE LOG CURRENT is a best practice for RAC systems.

Alter system switch logfile tells the background processes to perform the archiving of the current redo log and returns control immediately to the session issuing the statement.

Alter system archive log current means your server/shadow process performs the archiving, and control is only returned to the session once the archiving is complete.

SWITCH LOGFILE:
It issues a database checkpoint.
It immediately starts writing to the next redo log.
In the background, it tells the ARCH background process to copy the "old" redo log file to the archive log destination.

ARCHIVE LOG CURRENT waits for the archiver to complete its writing.

SWITCH LOGFILE does not wait for the archiver to finish writing the online redo log to the archive log destination, which is why it returns fast.
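The asynchronous-versus-synchronous difference between the two commands can be modelled as an ordered event log (illustrative only — there are no real background processes here):

```python
# SWITCH LOGFILE returns the prompt before archiving finishes;
# ARCHIVE LOG CURRENT returns it only after archiving completes.

def switch_logfile(log):
    log.append("checkpoint")
    log.append("switch to next log")
    log.append("prompt returned")           # control comes back immediately
    log.append("archiver copies old log")   # happens in the background later

def archive_log_current(log):
    log.append("checkpoint")
    log.append("switch to next log")
    log.append("archiver copies old log")   # session waits for this to finish
    log.append("prompt returned")

a, b = [], []
switch_logfile(a)
archive_log_current(b)
# the prompt comes back before archiving with SWITCH LOGFILE...
assert a.index("prompt returned") < a.index("archiver copies old log")
# ...and only after archiving with ARCHIVE LOG CURRENT
assert b.index("archiver copies old log") < b.index("prompt returned")
```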

Few related queries:

set lines 300 pages 200
col member for a85
col status for a10
col type for a10
col GROUP# for 9999
col IS_RECOVERY_DEST_FILE for a5

select group#, thread#, members, status, bytes from v$log;
select * from v$log;
select * from v$logfile;

select member, a.group#, a.status, b.status
from v$log a, v$logfile b
where a.group# = b.group#
order by a.group#, member;



Recovering from the LOSS of REDO log files >>https://databaseinternalmechanism.com/oracle-backup-recovery/recovering-from-the-loss-of-redo-log-files/


Friday, October 4, 2019

Interview Questions and Answers - ASM

What are different types of redundancies in ASM & explain?
When you will use external redundancy and what are the advantages of it?

How to copy file to/from ASM from/to filesystem?

By using the ASMCMD cp command.

How to find out the databases, which are using the ASM instance?

ASMCMD> lsct
SQL> select DB_NAME from V$ASM_CLIENT;

What is Automatic Storage Management (ASM)?

Automatic Storage Management (ASM) is a feature introduced in Oracle 10g to simplify the storage of Oracle datafiles, controlfiles and logfiles. ASM is an integrated file system and volume manager expressly built for Oracle database files. An Oracle ASM instance is built on the same technology as an Oracle Database instance: it has a System Global Area (SGA) and background processes similar to those of Oracle Database. However, because Oracle ASM performs fewer tasks than a database, an Oracle ASM SGA is much smaller than a database SGA. In addition, Oracle ASM has a minimal performance effect on a server. Oracle ASM instances mount disk groups to make Oracle ASM files available to database instances; Oracle ASM instances do not mount databases.

What are ASM Background Processes in Oracle?

ASMB – Oracle background process. In a DB instance, keeps a (bequeath) persistent connection to the local ASM instance. Provides heartbeat and ASM statistics. During a disk group rebalancing operation, ASM communicates AU changes to the DB via this connection.
RBAL – Re-Balance. RBAL is the ASM process that performs rebalancing of disk resources controlled by ASM. In a DB instance, it opens and mounts disk groups from the local ASM instance.
ARBx – Actual Rebalance. ARBx slaves perform the rebalancing operations; their number is configured by ASM_POWER_LIMIT.
PSPx – Oracle background processes. In an ASM instance, process spawners.
GMON – Oracle background process. In an ASM instance, disk group monitor.

O00x – Oracle background processes. Slaves used to connect from the DB to the ASM instance for 'short operations'.

What are ASM instance initialization parameters?

INSTANCE_TYPE – Set to ASM or RDBMS depending on the instance type. The default is RDBMS.

DB_UNIQUE_NAME – Specifies a globally unique name for the database. This defaults to +ASM but must be altered if you intend to run multiple ASM instances.

ASM_POWER_LIMIT – The maximum power for a rebalancing operation on an ASM instance. The valid values range from 1 to 11, with 1 being the default. The higher the limit, the more resources are allocated, resulting in faster rebalancing operations. This value is also used as the default when the POWER clause is omitted from a rebalance operation.

ASM_DISKGROUPS – The list of disk groups that should be mounted by an ASM instance during instance startup, or by the ALTER DISKGROUP ALL MOUNT statement. ASM configuration changes are automatically reflected in this parameter.

ASM_DISKSTRING – Specifies a value that can be used to limit the disks considered for discovery. Altering the default value may improve the speed of disk group mount time and the speed of adding a disk to a disk group. Changing the parameter to a value which prevents the discovery of already mounted disks results in an error. The default value is NULL allowing all suitable disks to be considered.

What is ASM Disks ?
The physical disks are known as ASM disks, while the files that reside on the disks are known as ASM files. The disks in a disk group are referred to as ASM disks. On Windows operating systems, an ASM disk is always a partition.

When an ASM instance starts, it automatically discovers all available ASM disks. Discovery is the process of determining every disk device to which the ASM instance has been given I/O permissions (by some operating system mechanism), and of examining the contents of the first block of such disks to see if they are recognized as belonging to a disk group. ASM discovers disks in the paths that are listed in an initialization parameter, or if the parameter is NULL, in an operating system–dependent default path.

What is ASM diskgroups?
The primary component of ASM is the disk group. A disk group consists of a grouping of disks that are managed together as a unit.

What is Failure groups?

They are defined within a disk group to support the required level of redundancy. For two-way mirroring you would expect a disk group to contain two failure groups so individual files are written to two locations.

A Failure Group is a subset of the disks in a Diskgroup, which could fail at the same time because they
share a common piece of hardware. The failure of that common piece of hardware must be tolerated.

**A Failure Group is a subset of the disks in a Diskgroup.They are defined within a disk group to support the required level of redundancy.Failure Groups are used to place mirror copies of data.

Failure groups are used to determine which ASM disks to use for storing redundant copies of data. For example, if 2-way mirroring is specified for a file, ASM automatically stores redundant copies of file extents in separate failure groups. Failure groups apply only to normal and high redundancy disk groups. You define the failure groups in a disk group when you create or alter the disk group.

What is stripping and mirroring?

Striping is spreading data across multiple disks so that IO is spread across them, increasing throughput. It improves read/write performance but does not provide failover support.
ASM offers two types of striping, with the choice depending on the type of database file. Coarse striping uses a stripe size of 1MB, and you can use coarse striping for every file in your database, except for the control files, online redo log files, and flashback files. Fine striping uses a stripe size of 128KB. You can use fine striping for control files, online redo log files, and flashback files.
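Round-robin striping with the two stripe sizes can be sketched in Python (a simplification — real ASM stripes within allocation units and extents):

```python
# Sketch of ASM-style striping: a file is cut into fixed-size stripes that
# are laid out round-robin across the disks of the disk group.

def stripe_layout(file_kb, stripe_kb, n_disks):
    n_stripes = -(-file_kb // stripe_kb)             # ceiling division
    return [i % n_disks for i in range(n_stripes)]   # disk index per stripe

# coarse striping: 1 MB (1024 KB) stripes; fine striping: 128 KB stripes
coarse = stripe_layout(file_kb=4096, stripe_kb=1024, n_disks=4)
fine = stripe_layout(file_kb=4096, stripe_kb=128, n_disks=4)
assert coarse == [0, 1, 2, 3]     # 4 stripes spread over 4 disks
assert len(fine) == 32            # finer stripes, more pieces in parallel
assert fine[:5] == [0, 1, 2, 3, 0]
```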

Mirroring means redundancy. It may add performance benefit for read operations but overhead for write operations. It’s basic purpose is to provide fail over support.
If you specify mirroring for a file, then Oracle ASM automatically stores redundant copies of the file extents in separate failure groups.

There are three ASM mirroring options/There are three types of ASM disk groups:

High Redundancy – In this configuration, for each primary extent, there are two mirrored extents. For Oracle Database Appliance this means, during normal operations, there would be three extents (one primary and two secondary) containing the same data, thus providing a "high" level of protection. Since ASM distributes the partnering extents in a way that prevents all copies of an extent from becoming unavailable due to a component failure in the IO path, this configuration can sustain at least two simultaneous disk failures on Oracle Database Appliance (which should be rare but is possible).

******
Oracle ASM provides triple mirroring by default. A loss of two Oracle ASM disks in different failure groups is tolerated.

A high redundancy disk group can tolerate the failure of two failure groups. If one or two failure groups fail, the disk group remains mounted and serviceable, and ASM performs a rebalance of the surviving disks to restore redundancy for the data in the failed disks. If more than two failure groups fail, ASM dismounts the disk group.

Normal Redundancy – In this configuration, for each primary extent, there is one mirrored (secondary) extent. This configuration protects against at least one disk failure. Note that in the event a disk fails in this configuration, although there is typically no outage or data loss, the system operates in a vulnerable state, should a second disk fail while the old failed disk replacement has not completed. Many Oracle Database Appliance customers thus prefer the High Redundancy configuration to mitigate the lack of additional protection during this time.

**********
Oracle ASM provides two-way mirroring by default, which means that all files are mirrored so that there are two copies of every extent. A loss of one Oracle ASM disk is tolerated. You can optionally choose three-way or unprotected mirroring.

A file specified with HIGH redundancy (three-way mirroring) in a NORMAL redundancy disk group provides additional protection from a bad disk sector, not protection from a disk failure.

A normal redundancy disk group can tolerate the failure of one failure group. If only one failure group fails, the disk group remains mounted and serviceable, and ASM performs a rebalance of the surviving disks (including the surviving disks in the failed failure group) to restore redundancy for the data in the failed disks. If more than one failure group fails, ASM dismounts the disk group.

External Redundancy – In this configuration there are only primary extents and no mirrored extents. This option is typically used in traditional non-appliance environments when the storage sub-system may have existing redundancy, such as hardware mirroring or other types of third-party mirroring, in place. Oracle Database Appliance does not support External Redundancy.

What is a diskgroup?

A disk group consists of multiple disks and is the fundamental object that ASM manages. Each disk group contains the metadata that is required for the management of space in the disk group. The ASM instance manages the metadata about the files in a disk group in the same way that a file system manages metadata about its files. However, the vast majority of I/O operations do not pass through the ASM instance. In a moment we will look at how file I/O works with respect to the ASM instance.

*****
Oracle ASM does not provide mirroring redundancy and relies on the storage system to provide RAID functionality. Any write error causes a forced dismount of the disk group. All disks must be located to successfully mount the disk group.

An external redundancy disk group cannot tolerate the failure of any disks in the disk group. Any kind of disk failure causes ASM to dismount the disk group.

What are different types of redundancies in ASM & explain?

* Normal redundancy

* High redundancy

* External redundancy

The redundancy level controls how many disk failures are tolerated without dismounting the disk group or losing data.

External redundancy

Oracle ASM does not provide mirroring redundancy and relies on the storage system to provide RAID functionality. Any write error causes a forced dismount of the disk group. All disks must be located to successfully mount the disk group.

An external redundancy disk group cannot tolerate the failure of any disks in the disk group. Any kind of disk failure causes ASM to dismount the disk group.

Normal redundancy

Oracle ASM provides two-way mirroring by default, which means that all files are mirrored so that there are two copies of every extent. A loss of one Oracle ASM disk is tolerated. You can optionally choose three-way or unprotected mirroring.

A file specified with HIGH redundancy (three-way mirroring) in a NORMAL redundancy disk group provides additional protection from a bad disk sector, not protection from a disk failure.

A normal redundancy disk group can tolerate the failure of one failure group. If only one failure group fails, the disk group remains mounted and serviceable, and ASM performs a rebalance of the surviving disks (including the surviving disks in the failed failure group) to restore redundancy for the data in the failed disks. If more than one failure group fails, ASM dismounts the disk group.

High redundancy

Oracle ASM provides triple mirroring by default. A loss of two Oracle ASM disks in different failure groups is tolerated.

A high redundancy disk group can tolerate the failure of two failure groups. If one or two failure groups fail, the disk group remains mounted and serviceable, and ASM performs a rebalance of the surviving disks to restore redundancy for the data in the failed disks. If more than two failure groups fail, ASM dismounts the disk group.

******
Normal redundancy - for 2-way mirroring, requiring two failure groups, when ASM allocates an extent for a normal redundancy file, ASM allocates a primary copy and a secondary copy. ASM chooses the disk on which to store the secondary copy in a different failure group other than the primary copy.
High redundancy - for 3-way mirroring, requiring three failure groups, in this case the extent is mirrored across 3 disks.
External redundancy - to not use ASM mirroring. This is used if you are using hardware mirroring or third party redundancy mechanism like RAID, Storage arrays.
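The placement rule summarized above — every copy of an extent lands in a distinct failure group — can be sketched like this (illustrative only, not ASM's actual allocator):

```python
# Redundancy-aware extent allocation: each copy of an extent must be stored
# in a different failure group, so the group count limits the redundancy.

def allocate_extent(failure_groups, copies):
    # copies = 2 for normal redundancy, 3 for high redundancy
    if copies > len(failure_groups):
        raise ValueError("not enough failure groups for this redundancy")
    return failure_groups[:copies]  # one copy per distinct failure group

# normal redundancy: primary + one secondary in a different failure group
assert allocate_extent(["FG1", "FG2"], copies=2) == ["FG1", "FG2"]
# high redundancy: three copies across three distinct failure groups
assert allocate_extent(["FG1", "FG2", "FG3"], copies=3) == ["FG1", "FG2", "FG3"]
# two failure groups cannot satisfy high redundancy
try:
    allocate_extent(["FG1", "FG2"], copies=3)
    assert False, "should have raised"
except ValueError:
    pass
```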

What is ASM Rebalancing?

Any change in the storage configuration, like disk addition or dropping, will trigger an Oracle ASM rebalance operation. The main objective of the rebalance operation is to always provide an even distribution of file extents and space usage across all disks in the diskgroup. Rebalancing is performed on all database files on a per-file basis.
RBAL calculates the estimated time and work required to perform the task and then messages the ARBx processes to actually handle the request. The number of ARBx processes invoked is directly determined by the asm_power_limit.

select * from v$asm_operation;

****************
ASM enables you to change the storage configuration without having to take the database offline. It automatically rebalances—redistributes file data evenly across all the disks of the disk group—after you add disks to or drop disks from a disk group.

Effects of Adding and Dropping Disks from a Disk Group

ASM automatically rebalances whenever disks are added or dropped. For a normal drop operation (without the FORCE option), a disk is not released from a disk group until data is moved off of the disk through rebalancing. Likewise, a newly added disk cannot support its share of the I/O workload until rebalancing completes. It is more efficient to add or drop multiple disks at the same time so that they are rebalanced as a single operation. This avoids unnecessary movement of data.
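The even-spread goal of a rebalance can be sketched numerically (a toy model — real rebalancing moves individual extents and minimizes data movement):

```python
# Toy rebalance: when disks are added, extents are redistributed so every
# disk in the group ends up with an (as near as possible) even share.

def rebalance(extents_per_disk, new_disks):
    total = sum(extents_per_disk)
    n = len(extents_per_disk) + new_disks
    target, extra = divmod(total, n)
    # each disk gets `target` extents; the first `extra` disks get one more
    return [target + (1 if i < extra else 0) for i in range(n)]

# 3 disks with 8 extents each; adding a 4th rebalances to an even spread
assert rebalance([8, 8, 8], new_disks=1) == [6, 6, 6, 6]
# no extents are lost or created, only moved
assert sum(rebalance([10, 2, 0], new_disks=0)) == 12
```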

For a drop operation, when rebalance is complete, ASM takes the disk offline momentarily, and then drops it, setting disk header status to FORMER.

You can add or drop disks without shutting down the database. However, a performance impact on I/O activity may result.

Tuning Rebalance Operations

The V$ASM_OPERATION view provides information that can be used for adjusting ASM_POWER_LIMIT and the resulting power of rebalance operations. The V$ASM_OPERATION view also gives an estimate in the EST_MINUTES column of the amount of time remaining for the rebalance operation to complete. You can see the effect of changing the rebalance power by observing the change in the time estimate.

SET pagesize 299
SET lines 2999
SELECT GROUP_NUMBER,
       OPERATION,
       STATE,
       POWER,
       ACTUAL,
       EST_MINUTES
FROM gv$asm_operation;

What processes does the rebalancing?


RBAL, ARBx

What is the use of ASM_POWER_LIMIT?

ASM_POWER_LIMIT is a dynamic parameter which controls the speed of rebalancing data across disks. The value can range from 1 (lowest) to 11 (highest).

What is an incarnation number?

An incarnation number is a part of ASM filename syntax. It is derived from the time stamp. Once the file is created, its incarnation number doesn’t change.

What is the use of an incarnation number in Oracle ASM filename?

Incarnation number distinguishes between a new file that has been created using the same file number and another file that has been deleted
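The role of the incarnation number can be illustrated with a toy file table (the counter here stands in for the timestamp-derived value Oracle actually uses):

```python
import itertools

# File numbers get reused, so a (file_number, incarnation) pair is needed
# to tell a brand-new file apart from a deleted one with the same number.

class FileTable:
    _incarnation = itertools.count(1)  # stand-in for a timestamp-derived value

    def __init__(self):
        self.files = {}  # file_number -> incarnation

    def create(self, file_number):
        inc = next(self._incarnation)  # fixed at creation, never changes
        self.files[file_number] = inc
        return (file_number, inc)

ft = FileTable()
first = ft.create(261)   # file number 261, first incarnation
ft.files.pop(261)        # file deleted
second = ft.create(261)  # number reused for a brand-new file
assert first[0] == second[0]   # same file number...
assert first[1] != second[1]   # ...but different incarnations
```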

What is an oracle flex ASM?

Oracle Flex ASM is a feature that enables an ASM instance to run on separate physical servers from the database servers.

What is the use of asmadmin?

asmadmin is the operating system group that holds users who have the SYSASM privilege. This privilege is needed for operations like mounting and dismounting disk groups and storage administration.

What is the purpose of asmoper operating system group?
The asmoper operating system group is used for users that have the privilege to start up and stop the Oracle ASM instance. The database privilege for these users is SYSOPER for ASM.

What is the difference between asmdba and asmoper?
The users belonging to the asmdba group have the SYSDBA privilege at the ASM level, a high administrative privilege for Oracle ASM. Whereas asmoper is given the SYSOPER privilege, which is lower than asmdba.

What are different types of stripings in ASM & their differences?

Fine-grained striping
Coarse-grained striping

What happens if you miss the "+" sign while adding a datafile in an ASM disk group?

Without the "+" sign, Oracle treats the file name as an ordinary operating system path, so the datafile is created on the local file system instead of inside the ASM disk group.

What is allocation unit and what is default value of au_size and how to change?

Every ASM disk is divided into allocation units (AU). An AU is the fundamental unit of allocation within a disk group. A file extent consists of one or more AU. An ASM file consists of one or more file extents.
CREATE DISKGROUP disk_group_2 EXTERNAL REDUNDANCY DISK '/dev/sde1' ATTRIBUTE 'au_size' = '32M';
lsdg
select NAME,ALLOCATION_UNIT_SIZE from v$asm_diskgroup;
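The allocation arithmetic is simple ceiling division — a sketch using the default 1 MB AU and the 32M value from the example above:

```python
# Allocation-unit arithmetic: space is handed out in whole AUs, so the
# space a file occupies is its size rounded up to the AU boundary.

def aus_needed(file_bytes, au_bytes):
    return -(-file_bytes // au_bytes)  # ceiling division

MB = 1024 * 1024
assert aus_needed(100 * MB, 1 * MB) == 100   # default 1 MB AU
assert aus_needed(100 * MB, 32 * MB) == 4    # 32M AU: 4 * 32 MB allocated
assert aus_needed(1, 1 * MB) == 1            # even 1 byte takes a whole AU
```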


What can be the various header status that an ASM disk can assume?

MEMBER :

Disks that belong to a disk group, that is, disks that have a disk group name in the disk header, show a header status of MEMBER.

CANDIDATE :

Disks that were discovered, but that have not yet been assigned to a disk group, have a status of CANDIDATE

PROVISIONED :

PROVISIONED status implies that an additional platform-specific action has been taken by an administrator to make the disk available for Oracle ASM. These disks are discovered by ASM

FORMER :

Disks that previously belonged to a disk group and were dropped cleanly from the disk group have a status of FORMER.

FOREIGN :

When adding a disk, the FORCE option must be used if Oracle ASM recognizes that the disk was managed by Oracle. Such a disk appears in the V$ASM_DISK view with a status of FOREIGN. In this case, you can only add the disk to a disk group by using the FORCE keyword.

What is  ASM: Backup and restore DG metadata?

Overview – ASMCMD md_backup and md_restore

ASMCMD in 11g is extended to include ASM disk group metadata backup and restore functionality.
This provides the ability to recreate a pre-existing ASM disk group with the same disk paths, disk names, failure groups, attributes, templates and alias directory structure.
In 10g you have to manually recreate the ASM disk group and any required user directories/templates.
In 11g we can take a backup of ASM diskgroup metadata.
The md_backup command creates a backup file containing metadata for one or more disk groups.
By default all the mounted disk groups are included in the backup file, which is saved in the current working directory.

ASM diskgroup metadata backup is performed using the md_backup command. If no diskgroup is specified, md_backup backs up the metadata of all mounted diskgroups.
Backup All Mounted Diskgroup
ASMCMD> md_backup /home/oragrid/dg_bkp
Backup Specific Diskgroup
ASMCMD> md_backup /home/oragrid/dg_bkp_DG_TST2_DATA -G DG_TST2_DATA
ASMCMD> md_backup /home/oragrid/dg_bkp_DG_FLEX_DATA -G DG_FLEX_DATA
Disk group metadata to be backed up: DG_FLEX_DATA

How to Restore ASM DiskGroup Metadata
A backup taken using md_backup can only be restored using md_restore.
Restoring Diskgroup with Same Name as Before
ASMCMD> md_restore /home/oragrid/dg_bkp_DG_FLEX_DATA --full -G DG_FLEX_DATA
Restoring Diskgroup with Different Name
Restoring the diskgroup DG_FLEX_DATA as DG_FLEX_FRA
ASMCMD>md_restore /home/oragrid/dg_bkp_DG_FLEX_DATA --newdg -o 'DG_FLEX_DATA:DG_FLEX_FRA'

**ASMCMD – New commands in 11gR1 (Doc ID 451900.1)

Adding New Disk To ASM Diskgroup >>https://dbaclass.com/article/adding-new-disk-asm-diskgroup/
http://www.nazmulhuda.info/how-to-add-disk-in-asm-disk-group
https://dbaclass.com/article/adding-new-disk-asm-diskgroup/
https://www.thegeekdiary.com/how-to-add-new-disk-to-an-existing-diskgroup-on-rac-cluster-or-standalone-asm-configuration/



Thursday, September 12, 2019

Cloud Service Models:

OCI has three Service Models mainly.

Infrastructure as a Service (IaaS) :(Sometimes it’s called Hardware-as-a-Service).
IaaS companies provide off-site server, storage, and networking hardware, which you rent and access over the Internet.
IaaS gives a virtual machine which we have to maintain. The difference between IaaS and having a physical server room is we don’t have to buy any physical computers and we can have servers in various parts of the world.

Platform as a Service (PaaS):sometimes called middleware.
In PaaS, the cloud service provider hosts and manages a platform (or framework) on which the customer can develop, build, test, and deploy their own custom applications.




Software as a Service (SaaS):
It refers to cloud-based software that is hosted online by a company and is available for purchase on a subscription basis and is delivered via the internet.

Point to remember:

>>IaaS is the most basic layer, PaaS builds on IaaS, and SaaS is built on top of PaaS.

>>IaaS helps build the infrastructure of a cloud-based technology. PaaS helps developers build custom apps via an API that can be delivered over the cloud. And SaaS is cloud-based software companies can sell and use.

Oracle E-Business Suite Cloud Manager:

EBS Cloud admin Tool is now replaced with EBS Cloud Manager tool.

It is a tool for managing Oracle E-Business Suite environments on Oracle Cloud Infrastructure through a graphical user interface.

It is used to provision a new environment, provision an environment from a backup of an on-premises Oracle E-Business Suite instance as part of a lift and shift process, back up a previously provisioned Cloud environment and provision a new environment from that backup, or clone a previously provisioned environment and delete environments that you no longer need.

Saturday, August 31, 2019

Inactive Account Lock

In the Oracle 12.2 database, the parameter INACTIVE_ACCOUNT_TIME was introduced for user profiles.
The INACTIVE_ACCOUNT_TIME profile parameter locks a user account that has not logged in to the database in the specified number of days. The default value for INACTIVE_ACCOUNT_TIME is 35. The minimum setting is 15 and the maximum is 24855.

Thursday, August 29, 2019

Last Login Time of User:

In releases prior to 12c, to get the last login time of a user, we had to enable "AUDIT SESSION" and then query the AUD$ table; in 12c we can get it from the SPARE6 column of the USER$ table.

USER$ table columns in 11gR2:

CTIME: Date & Time when user was created

PTIME: Date & Time when user password was last changed

LTIME: Date & Time when the user account was locked

LCOUNT: Number of failed login attempts

Below column added in 12c:

SPARE6: Last login time of the user.

Oracle Database 12c Feature:

1. Last Login Time of User:

2.Inactive Account Lock:

3.Invisible Columns in Oracle Database 12c:

4.Parameter MAX_STRING_SIZE to EXTENDED :Store More in VARCHAR2.

5.READ Privilege:

6.Unified Auditing :

7.ASMCMD Commands for Password File Management:

8.Exporting View Data:

9.No Logging Option in Data Pump Import:
A new feature has been added in Data Pump in Oracle 12c. We can import data with the nologging option, i.e. without generating redo. We sometimes face issues while importing big tables, as the import generates a lot of archive logs.
TRANSFORM=DISABLE_ARCHIVE_LOGGING:Y – This parameter can be used to make the impdp nologging.

TRANSFORM=DISABLE_ARCHIVE_LOGGING:Y:TABLE

TRANSFORM=DISABLE_ARCHIVE_LOGGING:Y:INDEX

impdp dumpfile=dbaclass.dmp logfile=dbaclass.log directory=DUMP tables=test.TEST TRANSFORM=DISABLE_ARCHIVE_LOGGING:Y

Previously, if we were loading data into an existing partitioned table using impdp, then despite mentioning the parallel option, partitions were getting loaded one by one, which slowed down the import process.

TRUST_EXISTING_TABLE_PARTITIONS — tells Data Pump to load partition data in parallel into existing tables. You should use this option when you are using Data Pump to create the table from the definition in the export database before the table data import is started. This is done as part of a migration when the metadata is static and can be moved before the databases are taken off line in order to migrate the data. Moving the metadata separately minimizes downtime. If you use this option and if other attributes of the database are the same (for example, character set), then the data from the export database goes to the same partitions in the import database.

impdp dumpfile=test1_%U.dmp logfile=imp_test1.log directory=EXPDP table_exists_action=APPEND data_options=TRUST_EXISTING_TABLE_PARTITIONS parallel=8

10. Rename and Replace ASM Disk(s) in a Diskgroup:
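Sketches of the two 12c commands (diskgroup, disk names, and device path are illustrative; RENAME DISK requires the diskgroup to be mounted in restricted mode):

```sql
ALTER DISKGROUP data MOUNT RESTRICTED;
ALTER DISKGROUP data RENAME DISK 'DATA_0001' TO 'DATA_0099';
-- Replace a failed disk with a new device in a single command
ALTER DISKGROUP data REPLACE DISK DATA_0002 WITH '/dev/asmdisk5' POWER 4;
```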

11. Added Administrative Privileges:
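The three task-specific administrative privileges added in 12c can be granted like this (user names illustrative):

```sql
GRANT SYSBACKUP TO bkp_adm;   -- backup and recovery duties
GRANT SYSDG     TO dg_adm;    -- Data Guard administration
GRANT SYSKM     TO km_adm;    -- transparent data encryption key management
```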

12. Adding, Dropping, Truncating Multiple Partitions:
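A sketch of the 12c multi-partition maintenance syntax (table name and partition bounds are illustrative):

```sql
ALTER TABLE sales ADD
  PARTITION p_2019_q3 VALUES LESS THAN (TO_DATE('01-10-2019','DD-MM-YYYY')),
  PARTITION p_2019_q4 VALUES LESS THAN (TO_DATE('01-01-2020','DD-MM-YYYY'));

ALTER TABLE sales TRUNCATE PARTITION p_2019_q3, p_2019_q4;
ALTER TABLE sales DROP PARTITION p_2019_q3, p_2019_q4;
```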

13. Merging and Splitting Multiple Partitions:
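A sketch of splitting one partition into several, and merging several back into one, each as a single DDL in 12c (names and bounds illustrative):

```sql
ALTER TABLE sales SPLIT PARTITION p_all INTO
  (PARTITION p1 VALUES LESS THAN (100),
   PARTITION p2 VALUES LESS THAN (200),
   PARTITION p3);

ALTER TABLE sales MERGE PARTITIONS p1, p2, p3 INTO PARTITION p_all;
```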

14. Use of Sequence Value in Column DEFAULT:
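A minimal example (names illustrative):

```sql
CREATE SEQUENCE order_seq;
CREATE TABLE orders (
  id       NUMBER DEFAULT order_seq.NEXTVAL,
  customer VARCHAR2(30));
-- The id column is populated from the sequence automatically
INSERT INTO orders (customer) VALUES ('ACME');
```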

15. Cascaded Truncate:
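A one-line sketch; this only works when the child table's foreign key is defined with ON DELETE CASCADE:

```sql
TRUNCATE TABLE parent_tab CASCADE;
```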

16. New Row Limiting Clause in SELECT:
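An example of the new clause, e.g. a top-N / pagination query (schema illustrative):

```sql
SELECT employee_id, salary
FROM   hr.employees
ORDER  BY salary DESC
OFFSET 5 ROWS FETCH NEXT 10 ROWS ONLY;
```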



R12.2 Apache and Weblogic log locations:

Apache Logs

cd $IAS_ORACLE_HOME/instances/*/diagnostics/logs/OHS/EBS_web_*/

OPMN Log

cd $IAS_ORACLE_HOME/instances/*/diagnostics/logs/OPMN/opmn/

Weblogic Logs

cd $IAS_ORACLE_HOME/../wlserver_10.3/common/nodemanager

 $EBS_DOMAIN_HOME/servers/oa*/logs/

 $EBS_DOMAIN_HOME/servers/forms*/logs/
 $EBS_DOMAIN_HOME/servers/AdminServer/logs/
 $EBS_DOMAIN_HOME/sysman/log/

Wednesday, August 28, 2019

ORA-04031 unable to allocate XXX bytes of shared memory

The error ORA-4031 on the shared pool can indicate one of two things:

- insufficient space for the shared pool
OR
- although there is enough memory, it is fragmented and no contiguous chunk can be allocated to satisfy the latest memory request.
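A quick way to see how much shared pool memory is currently free before deciding between the two causes:

```sql
SELECT pool, name, ROUND(bytes/1024/1024) AS mb
FROM   v$sgastat
WHERE  pool = 'shared pool' AND name = 'free memory';
```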

SQL> alter system set "_enable_shared_pool_durations" = false scope=spfile;
System altered.

Tuesday, August 27, 2019

DUALFS

DUALFS - a new feature introduced in the latest R12.2 with AD-TXK Delta 7. It creates both file systems, fs1 and fs2, during the cloning process.

perl adcfgclone.pl appsTier dualfs

RAC_OFF (relink the Oracle binary with the RAC option disabled):


cd $ORACLE_HOME/rdbms/lib

make -f ins_rdbms.mk rac_off
make -f ins_rdbms.mk ioracle

As the oracle user, run the following. The output should be '0' for a non-RAC binary.

cd $ORACLE_HOME/rdbms/lib

nm -r libknlopt.a | grep -c kcsm.o

Saturday, August 24, 2019

Index Unique Scans:

Whenever "WHERE" condition clause guarantees that only one row would be returned Oracle optimizer chooses Index Unique Scans.

Index Unique Scans returns only 1 rowid.

Index Unique Scan is used when all columns of a unique (B-tree) index or an index created as a result of a primary key constraint are specified with equality conditions.

Index Fast Full Scans:

It is similar to an Index Full Scan.
It reads index blocks in unsorted order, as they exist on disk.
This kind of scan does not use the index to probe the table; it reads the index instead of the table.

Thursday, August 8, 2019

fs_clone phase details

Recreates the patch edition file system as an exact copy of the run edition file system.
It is a stand-alone command used for file system cloning.
It can be used to synchronize the run and patch file systems.
It must be run from the run file system, before the next prepare phase is run.
It is useful if the APPL_TOPs have become badly unsynchronized.
It is a heavyweight process: it takes a backup of the entire current patch APPL_TOP and then clones the run APPL_TOP to create a new patch APPL_TOP.
As well as being resource-intensive, this method is very time-consuming and requires several manual actions by the user; it should therefore be used only when necessary.
It can be re-run with force=yes to restart from the beginning with the same session id, or with force=no to resume from where it left off.
It is an alternative to the PREPARE phase.

PREPARE  Vs fs_clone

Do fs_clone (run from the run file system) and the prepare phase (run at the start of a patching cycle) perform the same task?

The PREPARE phase only synchronizes the APPL_TOP (application stack), so if you have applied any patches using adpatch or adop, PREPARE will synchronize those as well. The important point to note is that PREPARE does not synchronize the technology stack: if you have applied any tech stack patches using OPatch or Smart Update, PREPARE will not sync them. So if you applied tech stack patches on the run file system instead of the patch file system and start another online patching cycle without running fs_clone, all your technology patches could be lost, because the patch file system does not contain them; once you do cutover in the new patching cycle, the old run file system is replaced by the patch file system.

>>Prepare  – Starts a new patching cycle.

>>Prepare phase internally runs fs_clone if it was not run in the previous patching cycle.

>>fs_clone can only be executed when there is no active patching cycle in the system.

fs_clone might need to be run in some of the scenarios below:

The previous online patching cycle was aborted.
Manual changes were made to the run edition file system.
FMW or middle-tier technology components were patched.
After applying a big application RUP patch.
Loss or corruption of the patch edition file system.

Restart failed fs_clone from the beginning:

adop phase=fs_clone force=yes


Oracle E-Business Suite Release 12.2: Online Patching FAQ (Doc ID 1583902.1)
How To Automatically Set the Current Run or Patch Edition / File System for EBS 12.2 (Doc ID 1545584.1)

Wednesday, July 24, 2019

Huge Archive/Redo data generated

Redo logs store all changes made to the database. Every instance of an Oracle Database has an associated redo log to protect the database in case of an instance failure.

Archive logs are generated by log switches, so their volume depends on the redo generated. Redo is generated by any change in the database: insert, delete or update statements.

If you commit frequently, the amount of redo you generate goes up as well (and overall processing runs slower).

If huge archive log generation is filling the mount point, as an immediate measure we can create enough room to accommodate new archive logs by compressing or moving files.

In parallel, we can search for the sessions generating high redo.

Query to find top redo-generating sessions while they are still connected:

select s.username, s.osuser, s.status,s.sql_id,  sr.*  from
  (select sid, round(value/1024/1024) as "RedoSize(MB)"
          from v$statname sn, v$sesstat ss
          where sn.name = 'redo size'
                and ss.statistic# = sn.statistic#
          order by value desc) sr,
   v$session s
where sr.sid = s.sid
and   rownum <= 10;

select sid,value from v$sesstat natural join v$statname where name='redo size' order by 2;

If the session is no longer connected and we want to know which session generated the most redo, we can proceed as follows:

1. Take an AWR report and check the section "Segments by DB Block Changes" for the segments with the most db_block_changes. Then search the section 'Complete List of SQL Text' for the SQL touching those segments.
or
2. The queries below can be helpful:

SELECT to_char(begin_interval_time,'YYYY_MM_DD HH24:MI') snap_time,
        dhsso.object_name,
        SUM(db_block_changes_delta)
  FROM dba_hist_seg_stat dhss,
         dba_hist_seg_stat_obj dhsso,
         dba_hist_snapshot dhs
  WHERE dhs.snap_id = dhss.snap_id
    AND dhs.instance_number = dhss.instance_number
    AND dhss.obj# = dhsso.obj#
    AND dhss.dataobj# = dhsso.dataobj#
    AND begin_interval_time BETWEEN to_date('2018_04_04 12','YYYY_MM_DD HH24')
                                           AND to_date('2018_06_04 12','YYYY_MM_DD HH24')
  GROUP BY to_char(begin_interval_time,'YYYY_MM_DD HH24:MI'),
           dhsso.object_name;

 
SELECT to_char(begin_interval_time,'YYYY_MM_DD HH24:MI') snap_time,
        SUM(db_block_changes_delta)
  FROM dba_hist_seg_stat dhss,
         dba_hist_seg_stat_obj dhsso,
         dba_hist_snapshot dhs
  WHERE dhs.snap_id = dhss.snap_id
    AND dhs.instance_number = dhss.instance_number
    AND dhss.obj# = dhsso.obj#
    AND dhss.dataobj# = dhsso.dataobj#
    AND dhsso.object_name = 'OWN_OBJECT'
  GROUP BY to_char(begin_interval_time,'YYYY_MM_DD HH24:MI');
 
 
SELECT to_char(begin_interval_time,'YYYY_MM_DD HH24:MI'),
         dbms_lob.substr(sql_text,4000,1),
         dhss.instance_number,
         dhss.sql_id,executions_delta,rows_processed_delta
  FROM dba_hist_sqlstat dhss,
         dba_hist_snapshot dhs,
         dba_hist_sqltext dhst
  WHERE UPPER(dhst.sql_text) LIKE '%OWN_OBJECT%'
    AND dhss.snap_id=dhs.snap_id
    AND dhss.instance_Number=dhs.instance_number
    AND dhss.sql_id = dhst.sql_id;

SELECT instance_number, to_char(sample_time,'yyyy_mm_dd hh24:mi:ss'),
         user_id,
         program
  FROM dba_hist_active_sess_history
  WHERE sql_id = 'jy5i4p6t05v'
    AND snap_id BETWEEN 53234 AND 53245;

Related doc ids:

Diagnosing excessive redo generation [ID 199298.1]
Excessive Archives / Redo Logs Generation Troubleshooting [ID 832504.1]
Troubleshooting High Redo Generation Issues [ID 782935.1]
How to Disable (Temporary) Generation of Archive Redo Log Files [ID 177218.1]
SQL: How to Find Sessions Generating Lots of Redo or Archive logs [ID 167492.1]
How to Avoid Generation of Redolog Entries [ID 188691.1]
Using LogMiner, How to determine the cause of lots of redo generation [ID 300395.1]
How to Setup LogMiner [ID 111886.1]
Example of How To Resize the Online Redo Logfiles [ID 1035935.6]
Archiver Best Practices [ID 45042.1]
STARTUP ERRORS ora-00824 cannot set sga_target with statistics_level=BASIC [ID 461100.1]
Master Note: Troubleshooting Redo Logs and Archiving [Article ID 1507157.1]
=========================================================================
High redo generation is always a consequence of certain activity in the database.
The main cause of high redo generation is usually high DML activity during a certain period of time. It is good practice to first examine modifications at the database level (parameters, maintenance operations, ...) and at the application level (deployment of a new application, code changes, an increase in users, ...).

What we need to examine:
Is supplemental logging enabled? The amount of redo generated with supplemental logging enabled is quite high compared to when it is disabled.
What Causes High Redo When Supplemental Logging is Enabled (Doc ID 1349037.1)

Are a lot of indexes being used? Reducing the number of indexes, or using the NOLOGGING attribute, will reduce redo considerably.

Do all operations really need LOGGING? From the application we can reduce redo by using the NOLOGGING clause. Note that only the following operations can make use of NOLOGGING mode:
- direct load (SQL*Loader)
- direct-load INSERT
- CREATE TABLE ... AS SELECT
- CREATE INDEX
- ALTER TABLE ... MOVE PARTITION
- ALTER TABLE ... SPLIT PARTITION
- ALTER INDEX ... SPLIT PARTITION
- ALTER INDEX ... REBUILD
- ALTER INDEX ... REBUILD PARTITION
- INSERT, UPDATE, and DELETE on LOBs in NOCACHE NOLOGGING mode stored out of line
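For example, a direct-load CTAS and a direct-path insert can both run in NOLOGGING mode (table names illustrative):

```sql
-- The table's NOLOGGING attribute suppresses full redo for direct loads
CREATE TABLE sales_copy NOLOGGING AS SELECT * FROM sales;
INSERT /*+ APPEND */ INTO sales_copy SELECT * FROM sales;
COMMIT;
```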
To confirm whether a table or index has NOLOGGING set, issue one of the following statements:
select table_name, logging from all_tables where table_name = '<TABLE_NAME>';
-or-
select index_name, logging from all_indexes where index_name = '<INDEX_NAME>';

Do tables have triggers that might cause some indirect DML on other tables?

Is Auditing enabled the contributor for this excessive redo generation?

Are tablespaces in hot backup mode?

select a.tablespace_name
from sys.dba_data_files a, sys.v_$backup b
where b.status = 'ACTIVE'
and b.file# = a.file_id
group by a.tablespace_name;

Examine the log switches:

select lg.group#,lg.bytes/1024/1024 mb, lg.status, lg.archived,lf.member
from v$logfile lf, v$log lg where lg.group# = lf.group# order by 1, 2;
select to_char(first_time,'YYYY-MON-DD') "Date", to_char(first_time,'DY') day,
to_char(sum(decode(to_char(first_time,'HH24'),'00',1,0)),'999') "00",
to_char(sum(decode(to_char(first_time,'HH24'),'01',1,0)),'999') "01",
to_char(sum(decode(to_char(first_time,'HH24'),'02',1,0)),'999') "02",
to_char(sum(decode(to_char(first_time,'HH24'),'03',1,0)),'999') "03",
to_char(sum(decode(to_char(first_time,'HH24'),'04',1,0)),'999') "04",
to_char(sum(decode(to_char(first_time,'HH24'),'05',1,0)),'999') "05",
to_char(sum(decode(to_char(first_time,'HH24'),'06',1,0)),'999') "06",
to_char(sum(decode(to_char(first_time,'HH24'),'07',1,0)),'999') "07",
to_char(sum(decode(to_char(first_time,'HH24'),'08',1,0)),'999') "08",
to_char(sum(decode(to_char(first_time,'HH24'),'09',1,0)),'999') "09",
to_char(sum(decode(to_char(first_time,'HH24'),'10',1,0)),'999') "10",
to_char(sum(decode(to_char(first_time,'HH24'),'11',1,0)),'999') "11",
to_char(sum(decode(to_char(first_time,'HH24'),'12',1,0)),'999') "12",
to_char(sum(decode(to_char(first_time,'HH24'),'13',1,0)),'999') "13",
to_char(sum(decode(to_char(first_time,'HH24'),'14',1,0)),'999') "14",
to_char(sum(decode(to_char(first_time,'HH24'),'15',1,0)),'999') "15",
to_char(sum(decode(to_char(first_time,'HH24'),'16',1,0)),'999') "16",
to_char(sum(decode(to_char(first_time,'HH24'),'17',1,0)),'999') "17",
to_char(sum(decode(to_char(first_time,'HH24'),'18',1,0)),'999') "18",
to_char(sum(decode(to_char(first_time,'HH24'),'19',1,0)),'999') "19",
to_char(sum(decode(to_char(first_time,'HH24'),'20',1,0)),'999') "20",
to_char(sum(decode(to_char(first_time,'HH24'),'21',1,0)),'999') "21",
to_char(sum(decode(to_char(first_time,'HH24'),'22',1,0)),'999') "22",
to_char(sum(decode(to_char(first_time,'HH24'),'23',1,0)),'999') "23" ,
count(*) Total from v$log_history group by to_char(first_time,'YYYY-MON-DD'), to_char(first_time,'DY')
order by to_date(to_char(first_time,'YYYY-MON-DD'),'YYYY-MON-DD')
This will give us an idea of the times when the high peaks of redo are happening.

Examine the AWR report:
Next step will be examining the AWR from the hour where we have had the highest number of log switches, and confirm with the redo size that these log switches are actually caused by a lot of redo generation.
In the AWR we can also see the SQL with the most gets/executions, to get an idea of the activity happening in the database and generating redo, as well as the segments with the biggest number of block changes and the sessions performing those changes. Another way to find these sessions is described in SQL: How to Find Sessions Generating Lots of Redo or Archive logs (Doc ID 167492.1)
To find these segments we can also use queries:
SELECT to_char(begin_interval_time,'YY-MM-DD HH24') snap_time,
dhso.object_name,
sum(db_block_changes_delta) BLOCK_CHANGED
FROM dba_hist_seg_stat dhss,
dba_hist_seg_stat_obj dhso,
dba_hist_snapshot dhs
WHERE dhs.snap_id = dhss.snap_id
AND dhs.instance_number = dhss.instance_number
AND dhss.obj# = dhso.obj#
AND dhss.dataobj# = dhso.dataobj#
AND begin_interval_time BETWEEN to_date('11-01-28 13:00','YY-MM-DD HH24:MI') -- set to the hour with the most log switches from the query above
AND to_date('11-01-28 14:00','YY-MM-DD HH24:MI')                             -- keep the interval to 1 hour
GROUP BY to_char(begin_interval_time,'YY-MM-DD HH24'),
dhso.object_name
HAVING sum(db_block_changes_delta) > 0
ORDER BY sum(db_block_changes_delta) desc ;
-- Then : What SQL was causing redo log generation :
SELECT to_char(begin_interval_time,'YYYY_MM_DD HH24') WHEN,
dbms_lob.substr(sql_text,4000,1) SQL,
dhss.instance_number INST_ID,
dhss.sql_id,
executions_delta exec_delta,
rows_processed_delta rows_proc_delta
FROM dba_hist_sqlstat dhss,
dba_hist_snapshot dhs,
dba_hist_sqltext dhst
WHERE upper(dhst.sql_text) LIKE '%<segment_name>%' -- update the segment name from the previous query's result
AND ltrim(upper(dhst.sql_text)) NOT LIKE 'SELECT%'
AND dhss.snap_id=dhs.snap_id
AND dhss.instance_number=dhs.instance_number
AND dhss.sql_id=dhst.sql_id
AND begin_interval_time BETWEEN to_date('11-01-28 13:00','YY-MM-DD HH24:MI') -- update the time frame as required
AND to_date('11-01-28 14:00','YY-MM-DD HH24:MI');                            -- update the time frame as required
Finally, to troubleshoot further and find the exact commands recorded in that time frame, we can use LogMiner to mine the archived logs from the period concerned. We can look in v$archived_log to find the archived logs generated during that time frame.
How To Determine The Cause Of Lots Of Redo Generation Using LogMiner (Doc ID 300395.1)

1) Query V$SESS_IO
This view contains the column BLOCK_CHANGES, which indicates how many blocks have been changed by the session.
High values indicate a session generating lots of redo. The query you can use is:

select s.sid, s.serial#, s.username, s.program, i.block_changes
from v$session s, v$sess_io i
where s.sid = i.sid
order by 5 desc, 1,2,3,4;

Run the query multiple times and examine the delta between each occurrence of BLOCK_CHANGES.
Large deltas indicate high redo generation by the session.

2) Query V$TRANSACTION
This view contains information about the amount of undo blocks and undo records accessed by the transaction
(as found in the USED_UBLK and USED_UREC columns). The query you can use is:

SELECT s.sid, s.serial#, s.username, s.program, t.used_ublk, t.used_urec
FROM v$session s, v$transaction t
WHERE s.taddr = t.addr
ORDER BY 5 desc, 6 desc, 1, 2, 3, 4;

Run the query multiple times and examine the delta between each occurrence of USED_UBLK and USED_UREC.
Large deltas indicate high redo generation by the session.
