
Saturday, June 29, 2019

To Enable FND debug:

1. Navigate to the responsibility: System Administrator > Profile > System
a. Find the FND profile options for the user who is submitting the process to be debugged.
User: the user submitting the process
Profile: FND:%Debug%
b. Set the following profile values at the user level:
FND: Debug Log Enabled : Yes
FND: Debug Log Filename : NULL
FND: Debug Log Level : STATEMENT
FND: Debug Log Module : %
c. Save.

2. Get the current log sequence in the FND table:
select max(log_sequence) from fnd_log_messages ; -- start

3. Reproduce the issue and make sure you get the "FRM-40735: PRE-FORM TRIGGER RAISED Unhandled Exception ORA-04063" error message.

4. Get the new max sequence in the FND log:
select max(log_sequence) from fnd_log_messages ; -- end

5. Disable FND debug by resetting the above profiles.

6. Spool the output of the following query to an Excel file to capture the FND debug log.
Run this script, replacing <start> and <end> with the sequences obtained in steps 2 and 4.

set linesize 4000
set verify off
set feedback off
set pagesize 999
set markup html on entmap on spool on preformat off

spool val_excess_report.xls

select module, message_text
from fnd_log_messages
where log_sequence between <start> and <end>;

spool off;





Friday, June 28, 2019

SQL*Plus not showing headings when running query:

SQL*Plus does not show the column headings when running a query, and even setting "set heading on" changes nothing.

Result as below:

SQL> select instance_name,status from v$instance;
mydb          OPEN

Edit the following file (SQL*Plus runs it on startup):
$ORACLE_HOME/sqlplus/admin/glogin.sql

Add these lines (note: SET PAGESIZE 0 suppresses headings entirely, so use a non-zero value):
SET LINESIZE 150
SET PAGESIZE 999

Difference between count(*) and count(col)

select count(*) from hrtable;
select count(name) from hrtable;

count(*) counts all rows in the table, regardless of the values in the rows.
count(name) counts only the rows where "name" is not NULL.
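
A minimal sketch of the difference, using a hypothetical two-row version of hrtable:

```sql
-- Hypothetical demo data: one of the two rows has a NULL name.
CREATE TABLE hrtable (id NUMBER, name VARCHAR2(30));
INSERT INTO hrtable VALUES (1, 'Alice');
INSERT INTO hrtable VALUES (2, NULL);

SELECT COUNT(*)    FROM hrtable;  -- returns 2: every row is counted
SELECT COUNT(name) FROM hrtable;  -- returns 1: the row with a NULL name is skipped
```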

Oracle Version:

Oracle Release Number Format:

12.1.0.1.0
 ┬ ┬ ┬ ┬ ┬
 │ │ │ │ └───── Platform-Specific Release Number
 │ │ │ └────────── Component-Specific Release Number
 │ │ └─────────────── Fusion Middleware Release Number
 │ └──────────────────── Database Maintenance Release Number
 └───────────────────────── Major Database Release Number

To know database version query:
select version from v$instance;
select banner from v$version;
select comp_name,version,status from dba_registry;

Workflow:
select * from WF_RESOURCES where NAME='WF_VERSION';

Oracle applications:
select release_name from fnd_product_group;

Forms, sql, pls, reports:

adident Header 'filename'

strings file_name |grep $Header

strings -a aprcvmtb.pls | grep 'CREATE'

or use this:
SELECT    'strings $'
       || DECODE (app_short_name,
                  'OFA', 'FA',
                  'SQLGL', 'GL',
                  'SQLAP', 'AP',
                  app_short_name
                 )
       || '_TOP/'
       || subdir
       || '/'
       || filename
       || ' | grep Header'
  FROM ad_files
 WHERE filename = '<your file name>';

Database objects
set long 10000
select text from user_source where name='&package_name' and text like '%$Header%';

SELECT text FROM dba_source WHERE name = 'ARPT_SQL_FUNC_UTIL' AND text LIKE '%Header:%';

select name, text from dba_source where text like '%.pls%' and line < 10;

select VIEW_NAME, TEXT from USER_VIEWS where VIEW_NAME = '&VIEW_NAME';


PDB Start Automatically:

In Oracle 12c, on startup of the CDB the PDBs are left in MOUNTED state and must be opened manually:

To open pdb:

alter pluggable database <pdb name> open;


As of 12.1.0.2, Oracle introduced "PDB State Management Across CDB Restart", which automatically puts all pluggable databases into a previously saved state on database startup.

Query to check current saved state of the PDBs:

SQL> SELECT con_name, instance_name, state FROM cdb_pdb_saved_states;

Command to save the states:

alter pluggable database <pdb name> save state;

To discard the saved state:

alter pluggable database <pdb name> discard state;

We can also create a trigger in the CDB to automatically open the pluggable databases on startup:

CREATE TRIGGER open_all_pdbs
AFTER STARTUP ON DATABASE
BEGIN
     EXECUTE IMMEDIATE 'ALTER PLUGGABLE DATABASE ALL OPEN';
END open_all_pdbs;
/

Wednesday, June 26, 2019

ORA-19815: WARNING: db_recovery_file_dest_size

Error: ORA-19815: WARNING: db_recovery_file_dest_size of 429496729600 bytes is 100.00% used, and has 0 remaining bytes available.

Solution:
col name     format a32
col size_mb  format 999,999,999
col used_mb  format 999,999,999
col pct_used format 999

select
  name,
  ceil( space_limit / 1024 / 1024) size_mb,
  ceil( space_used / 1024 / 1024) used_mb,
  decode( nvl( space_used, 0),0, 0,
  ceil ( ( space_used / space_limit) * 100) ) pct_used
from
   v$recovery_file_dest
order by
 name desc;

ALTER SYSTEM SET DB_RECOVERY_FILE_DEST_SIZE =600G SCOPE=BOTH;
Alternative version, which also estimates the next size to add:
Set linesize 300
col name for a32
col size_m for 999,999,999
col used_m for 999,999,999
col pct_used for 999

SELECT name
, ceil( space_limit / 1024 / 1024) SIZE_M
, ceil( space_used / 1024 / 1024) USED_M
, decode( nvl( space_used, 0),
0, 0
, ceil ( ( space_used / space_limit) * 100) ) PCT_USED,
(space_limit + ((space_limit*20)/100))/1024/1024 as "Next space to ADD(MB)"
FROM v$recovery_file_dest
ORDER BY name;

alter system set db_recovery_file_dest_size=737280M scope=both sid='*';

Tuesday, June 18, 2019

PMON Process related to Oracle Home:

Log in to the DB host:
[hostname ~]$ ps -ef|grep -i pmon
oracle 21681     1  0 Jun13 ?        00:00:30 ora_pmon_SID


ORACLE_SID is SID

$ ls -lrt /proc/21681/cwd

lr-x------   2 oracle  dba  0 Jun 13 19:31 cwd -> /u01/opt/app/product/10.2.0.4/db_1/dbs/

ORACLE_HOME is /u01/opt/app/product/10.2.0.4/db_1

apachectl startssl: execing httpd

While starting the Apache service, we were getting the error below:


$ ./adapcctl.sh start

You are running adapcctl.sh version 120.7.12010000.2

Starting OPMN managed Oracle HTTP Server (OHS) instance ...
opmnctl: opmn is already running.
opmnctl: starting opmn managed processes...
================================================================================
opmn id=hostname:10525
    0 of 1 processes started.

ias-instance id=SID_hostname.hostname.domainname.com
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
--------------------------------------------------------------------------------
ias-component/process-type/process-set:
    HTTP_Server/HTTP_Server/HTTP_Server/

Error
--> Process (index=2,uid=18683685,pid=2255)
    failed to start a managed process after the maximum retry limit
    Log:
    /SID/mtlog/SID_hostname/logs/ora/10.1.3/opmn/HTTP_Server~1.log


adapcctl.sh: exiting with status 0


When checked log file : /SID/mtlog/SID_hostname/logs/ora/10.1.3/opmn/HTTP_Server~1.log

-------
19/06/17 13:42:39 Start process
--------
/SID/inst/apps/SID_hostname/ora/10.1.3/Apache/Apache/bin/apachectl startssl: execing httpd
Syntax error on line 17 of /SID/inst/apps/SID_hostname/ora/10.1.3/Apache/Apache/conf/mod_osso.conf:
Unable to open /SID/inst/apps/SID_hostname/ora/10.1.3/Apache/Apache/conf/osso/SID_hostname_osso.conf.

Solution:

Disable SSO by commenting out the include below in the httpd.conf file.

$ grep -i sso httpd.conf

# Include the configuration files needed for mod_osso
#include "/SID/inst/apps/SID_hostname/ora/10.1.3/Apache/Apache/conf/mod_osso.conf"

$

Next solution:

1. Backup the $CONTEXT_FILE
2. Edit the parameter  "s_mod_osso_conf_comment" in the $CONTEXT_FILE using editContext utility,  or manually using a text editor
3. Set the value for the above parameter to "#"
4. Save the changes and run Autoconfig
5. Start the Apache service to re-test the issue

Start the HTTP component:

adopmnctl.sh startproc process-type=HTTP_Server

Saturday, June 15, 2019

Oracle Basic commands

Check status of cluster resources:

crsctl stat res -t
crsctl status resource -t

Check status of local RAC Background Processes:

crsctl stat res -t -init

Check the status of the OCR:
[root@Rac1 ~]# cd /u01/app/11.2.0/grid/bin/
[root@Rac1 bin]# ./ocrcheck

[root@Rac1 bin]# cat /etc/oracle/ocr.loc

 Get information about the voting disk:
 [root@Rac1 bin]# ./crsctl query css votedisk

 Find the active and software version of the Grid Install:
[root@Rac1 bin]# ./crsctl query crs activeversion
Oracle Clusterware active version on the cluster is [11.2.0.1.0]
[root@Rac1 bin]# ./crsctl query crs softwareversion
Oracle Clusterware version on node [rac1] is [11.2.0.1.0]

Check nodes in cluster:
[root@Rac1 bin]# olsnodes -n
rac1 1
rac2 2

CRS Status:
cd $ORA_CRS_HOME/bin
./crsctl check crs
CRS-4638: Oracle High Availability Services is online
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online

Add database:
srvctl add database -d <DBNAME> -o <ORACLE_HOME>

Add instance:
srvctl add instance -d <DBNAME> -i INSTANCE1 -n <NODE1>
srvctl add instance -d <DBNAME> -i INSTANCE2 -n <NODE2>

Add Spfile:
srvctl modify database -d <DBNAME> -spfile +DATA/<DBNAME>/PARAMETERFILE/spfile<DBNAME>.ora

Add Listener:
srvctl add listener -l <LISTENER_NAME> -p <PORT> -o <ORACLE_HOME>

Config database:
srvctl config database -d <DBNAME>







Thursday, June 13, 2019

Use of the V$PWFILE_USERS:

The V$PWFILE_USERS view lists all users who have been granted the SYSDBA or SYSOPER privilege. Its columns are:

USERNAME: The name of the user as recognized by the password file.
SYSDBA: If TRUE, the user can log on with the SYSDBA system privilege.
SYSOPER: If TRUE, the user can log on with the SYSOPER system privilege.
SYSASM: If TRUE, the user can log on with the SYSASM system privilege.

From 11g there is a new column in v$pwfile_users for the SYSASM privilege.

select * from v$pwfile_users;

* Use this query to identify the users whose password changes require copying the password file (e.g. to standbys).
* For any user not appearing in the query output, a password change replicates automatically.

SYS password change in Oracle 12cR2 and 18c:

From Oracle 12cR2 onwards, a password file must be configured in order to change the SYS account password.

The following error is thrown when attempting to change the SYS password:

SQL> alter user sys identified by welcome ;

alter user sys identified by welcome
*
ERROR at line 1:
ORA-01994: Password file missing or disabled

Solution:

1. Set parameter in parameter file:

remote_login_passwordfile=exclusive

2. Create a password file:

orapwd file=orapwSID password=welcome1 entries=3

3. Verification query:
col username for a10
select username,sysdba,sysoper from v$pwfile_users;

4. Now again change password:

SQL> alter user sys identified by welcome1 ;

5. Remove or rename the password file later.

6. Change the parameter back to remote_login_passwordfile=none and restart the database.

Index Full Scans:

An index full scan reads the entire index in order. An index full scan can eliminate a separate sorting operation because the data in the index is ordered by index key.

The database uses single-block I/O rather than multiblock I/O.

Situations where the optimizer chooses an Index Full Scan:

  • A predicate references a column in the index. This column need not be the leading column.
  • No predicate is specified and all columns in the table and in the query are in the index.
  • No predicate is specified and at least one indexed column is not null.
  • A query includes an ORDER BY on indexed non-nullable columns.
Example:

SELECT department_id, department_name FROM departments ORDER BY department_id;

SQL_ID  94t4a20h8what, child number 0
-------------------------------------
select department_id, department_name from departments order by department_id

Plan hash value: 4179022242

--------------------------------------------------------------------------------
|Id | Operation                   | Name        |Rows|Bytes|Cost(%CPU)|Time    |
--------------------------------------------------------------------------------
| 0 | SELECT STATEMENT            |             |    |     | 2 (100)|          |
| 1 |  TABLE ACCESS BY INDEX ROWID| DEPARTMENTS | 27 | 432 | 2   (0)| 00:00:01 |
| 2 |   INDEX FULL SCAN           | DEPT_ID_PK  | 27 |     | 1   (0)| 00:00:01 |
--------------------------------------------------------------------------------

For each index entry, the database obtains the rowid from the entry, and then retrieves the table row specified by the rowid.

Table Access by Rowid:

A rowid is an internal representation of the storage location of data. The rowid of a row specifies the data file and data block containing the row, and the location of the row within that block. Locating a row by its rowid is the fastest way to retrieve a single row, because it specifies the exact location of the row in the database.

To access a table by rowid, database performs the following steps:

1. Obtains the rowids of the selected rows, either from the statement's WHERE clause or through a scan of one or more indexes. (Table access may be needed for columns in the statement that are not present in the index.)

2. Locates each selected row in the table based on its rowid.

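As a hedged sketch of those two steps (the employees table and the rowid value are illustrative only):

```sql
-- Step 1: obtain the rowid of the selected row (employees is assumed to exist).
SELECT ROWID, last_name FROM employees WHERE employee_id = 100;

-- Step 2: locate the row directly by the rowid returned above
-- ('AAAR5sAAFAAAABSAAA' is a made-up placeholder value).
SELECT last_name FROM employees WHERE ROWID = 'AAAR5sAAFAAAABSAAA';
```
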
Full Table Scan (FTS):

In this scan, all formatted blocks of a table below the High Water Mark (HWM) are scanned sequentially, and every row is examined to determine whether it satisfies the query's WHERE clause. Since the blocks are scanned sequentially, Oracle can speed up the scan by issuing I/O read calls larger than a single block. The size of each read call is determined by the DB_FILE_MULTIBLOCK_READ_COUNT parameter.

Because Oracle can use multiblock I/O when accessing a large number of blocks with an FTS, an FTS is sometimes cheaper than an index range scan.

Situations where the optimizer chooses a Full Table Scan:

  • Lack of an index.
  • The query predicate applies a function to the indexed column.
  • SELECT COUNT(*) with B-tree indexes and NULLs.
  • Large amount of data.
  • Small table.
  • High degree of parallelism.
  • An FTS hint.
  • The table statistics are stale.
  • The query predicate does not use the leading edge of a B-tree index.
  • The query is unselective.
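
For illustration, an FTS can be forced with the FULL hint (hr.employees from the sample schema is assumed; the hint must reference the table alias):

```sql
-- Forces a full table scan even if an index on salary exists.
SELECT /*+ FULL(e) */ employee_id, last_name, salary
FROM   hr.employees e
WHERE  salary > 10000;
```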

Explain Plan Interpretation:

Explain Plan:
An explain plan is a representation of the access path chosen when a query is executed. When a query is processed, it goes through the phases below:

1. User and server processes are created once the connection is established.
2. Syntactic – syntax checks of the query.
3. Semantic – checks that the objects mentioned in the query exist and are accessible.
4. View Merging – rewrites the query as joins on the base tables, as opposed to using views.
5. Statement Transformation – rewrites the query, transforming some complex constructs into simpler ones where appropriate (e.g. subquery unnesting, IN/OR transformation). Some transformations use rules while others are costed based upon statistics.
6. Optimization – determines the optimal access path for the query. The Cost Based Optimizer (CBO) uses statistics to analyze the relative costs of accessing objects.
7. Query Evaluation Plan (QEP) generation.
8. QEP execution.

Notes:
Parsing covers steps 1 to 7.
The access path produced in step 7 is the explain plan.

Once the access path is determined, it is stored in the library cache under a hashed representation of the query. On subsequent executions, Oracle hashes the statement and looks up that hash value in the library cache; the same access path is reused until the query is re-parsed.

Term to understand:

Row Source – A row source is a software function that implements specific operations (such as a table scan or a hash join) and returns a set of rows.
Predicates – The where clause of a query
Tuples – rows
Driving Table – This is the row source that we use to seed the query. If this returns a lot of rows then this can have a negative effect on all subsequent operations
Probed Table – This is the object we look up data in after we have retrieved the relevant key data from the driving table.

Optimizer Access path:

Oracle reads blocks of data using either single-block I/O (the smallest amount) or multiblock I/O (constrained by operating system limits). Logically, Oracle reads the data using the following access paths:

Full Table Scan (FTS)
Table Access by Rowid
Index Full Scan
Index Fast Full Scan
Index Look-up (unique & non-unique)

Wednesday, June 12, 2019

Lock, Block and deadlock

Lock:
A lock is acquired when a process accesses a piece of data that another concurrent process may need at the same time. By locking the piece of data we ensure that we can act on it the way we expect.
For example, if we read the data, we usually like to ensure that we read the latest data. If we update the data, we need to ensure no other process is updating it at the same time, etc.
Locking is the mechanism that SQL Server uses in order to protect data integrity during transactions.

Block:
A block (or blocking lock) occurs when two processes need access to the same piece of data concurrently: one process locks the data and the other must wait for it to complete and release the lock. As soon as the first process completes, the blocked process resumes. The blocking chain is like a queue: once the blocking process is complete, the next processes can continue. In a normal server environment, infrequent blocking locks are acceptable.
Deadlock:
Deadlock occurs when one process is blocked and waiting for a second process to complete its work and release locks, while the second process at the same time is blocked and waiting for the first process to release the lock.
In a deadlock situation, the processes are already blocking each other so there needs to be an external intervention to resolve the deadlock. For that reason, SQL Server has a deadlock detection and resolution mechanism where one process needs to be chosen as the “deadlock victim” and killed so that the other process can continue working. The victim process receives a very specific error message indicating that it was chosen as a deadlock victim and therefore the process can be restarted via code based on that error message.
Deadlocks are resolved by SQL Server and do not need manual intervention.

Tuesday, June 11, 2019

Difference between Physical Standby and Logical Standby Database:

Physical standby:

A physical standby's schema matches the source database exactly; it is a block-for-block identical copy of the primary database.


Archived redo logs are shipped directly to the standby database, which always runs in "recover" mode. Upon arrival, the archived redo logs are applied directly to the standby database.

It uses standard recovery techniques of an Oracle database to apply redo data.


Logical standby:

Logical standby database does not have to match the schema structure of the source database.

It uses the SQL Apply engine, which transforms redo data received from the primary into logical SQL statements and then executes those statements against the standby database.

Logical standby uses LogMiner techniques to transform the archived redo logs into native DML statements (insert, update, delete).  This DML is transported and applied to the standby database.

Logical standby tables can be open for SQL queries (read only), and all other standby tables can be open for updates.


Logical standby database can have additional materialized views and indexes added for faster performance.

Analyzing AWR Report in Oracle:

AWR - Automatic workload repository:
It is a collection of persistent system performance statistics owned by SYS, residing in the SYSAUX tablespace. By default, snapshots are generated every 60 minutes and retained for 7 days. Each snapshot has a unique ID known as "snap_id". Snapshot details can be found in "dba_hist_snapshot".
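
The available snapshots and their IDs can be listed directly from the repository:

```sql
-- Each row is one AWR snapshot; a pair of snap_ids bounds an AWR report.
SELECT snap_id, begin_interval_time, end_interval_time
FROM   dba_hist_snapshot
ORDER  BY snap_id DESC;
```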

If we have a database performance issue (rather than a database machine issue), then the AWR report is the place to look. AWR is not used for real-time performance monitoring like the v$ views; it is used for historical analysis of performance. AWR complements, but does not replace, real-time monitoring.

Once the AWR report is generated, the next task is to analyze it. By going through the AWR report we can easily diagnose issues like a slow database, high wait events, slow queries and more. Even though the report is lengthy, reading the relevant parts of the AWR report helps troubleshoot issues quickly.


If a particular query is not performing well, look at the execution plan of the query, the stats of the underlying tables, etc. In that case AWR won't help much.

General Tips before getting an AWR Report:

1. Collect Multiple AWR Reports: It's always suggested to have two AWR Reports, one for good time (when database was performing well), second when performance is poor. This way we can easily compare good and bad report to find out the culprit.

2. Stick to a Particular Time: Always take the report for the window when the database was performing slowly. Be specific: if the database was slow today from 10 AM until 2 PM, get a report for those four hours.

3. Split Large AWR Reports into Smaller Reports: Instead of one report covering a long period (e.g. 4 hours), it is better to have four reports of one hour each. This helps isolate the problem.
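
The report itself is generated with the script shipped in every Oracle Home (run from SQL*Plus as a privileged user):

```sql
-- @? expands to $ORACLE_HOME; the script prompts for the report type
-- (html/text), days of snapshots to list, begin/end snap IDs and a file name.
@?/rdbms/admin/awrrpt.sql
```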

Now, let's analyze the generated report. Here is the approach, with a few tips.
Analyze AWR Report Steps:

Note: The Report is obtained using OEM with Oracle Release 12.1.0.1.0

1. Database Details:

Here, just cross-check the database name, instance and database version against the database having the performance issue. RAC = NO if it is not a RAC DB.

2. Host Configuration:

This gives the host name, platform, CPUs, sockets, RAM, etc.
The important thing to verify is that the configuration (CPU and memory) did not change when the performance degraded.

3. Snap Shot Detail:

These are the details of the snapshots taken: snap start time and end time, with the difference shown as "Elapsed".
DB Time = session time spent in the database.

DB Time = CPU time + non-idle wait time.

In systems with multiple concurrent active sessions, DB time can be larger than elapsed time; a DB time much larger than elapsed time is not, by itself, a concern.

4. Load Profile:

Here are few important stats to look into.

First is "DB CPU(s)" per second. Before that, let's understand how DB CPU works: if the system has 8 cores, then per wall-clock second there are 8 CPU seconds available.

So, if "DB CPU(s)" per second in this report > the core count in Host Configuration (#2), the environment is CPU bound, and either needs more CPUs or needs further checking to see whether this happens all the time or only for a fraction of the time.

In this case, machine has 8 cores and DB CPU(s) per second is 0.1. So, this is not a CPU bound case.

Next, look at Parses and Hard parses. If the ratio of hard parses to parses is high, the database is performing excessive hard parsing; in that case, look at parameters like cursor_sharing and at bind variable usage at the application level.

In the report above we have no issue with parses.
Parse issues usually occur as a result of:
           Bad bind variable usage
           Insufficient memory
They are also co-indicated by a low percentage of memory available for multiple SQL executions.
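
Parse activity can also be cross-checked live, outside AWR, from the cumulative instance statistics:

```sql
-- Counts are cumulative since instance startup; a high hard-parse share
-- of total parses points at cursor_sharing / bind variable problems.
SELECT name, value
FROM   v$sysstat
WHERE  name IN ('parse count (total)', 'parse count (hard)');
```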


5. Instance Efficiency Percentages:

Note: DB tuning must never be driven by hit ratios alone. They only provide additional information to help understand how the instance is operating.

Meanings of particular Hit Ratios:

a.       Buffer Nowait%: Shows the % of times when data buffers were accessed directly without any wait time.

b.       Buffer Hit Ratio : Measures how many times a  required block was found in memory rather than having to execute an expensive read operation on disk to get the block.

c.        Library Hit%: Shows the % of times when SQL statements and PL/SQL packages were found in the shared pool.

d.       Execute to Parse %: Shows how often parsed SQL statements are reused without re-parsing.

e.        Parse CPU to Parse Elapsed %: Gives the ratio of CPU time spent to parse SQL statements.

f.        Redo NoWait %: Shows whether the redo log buffer has sufficient size.

g.        In-memory Sort %: Shows the percentage of times when sorts are performed in memory instead of using temporary tablespaces.

h.       Soft Parse %: Shows how often sessions issued a SQL statement that was already in the shared pool and could reuse an existing version of it.

i.         Latch Hit %: Shows how often latches were acquired without having to wait.

j.         % Non-Parse CPU : Shows the percentage of how much CPU resources were spent on the actual SQL execution.

In these statistics, you have to look at

Soft Parse % ~ 100% indicates that SQL statements are actively re-used.

"% Non-Parse CPU" ~ 100% means most CPU resources are used for operations other than parsing, which is good for database health.

Parse CPU to Parse Elapsed %: if it is very low, it reveals that Oracle waits for some resource during parsing of SQL statements; this should be investigated further.

Rule of thumb: always minimize the number of hard parses. This reduction minimizes the CPU overhead spent performing costly parse work.

6. Top 10 Foreground Events by Total Wait Time:

This is another of the most important stats to consider when looking at an AWR report for a database performance issue. This section is critical because it shows the database events that might constitute the bottleneck for the system. It lists the top 10 foreground events arranged by total wait time.




Here, first check the wait class: if the wait class is User I/O, System I/O, Other, etc., this could be fine, but if the wait class is "Concurrency" there could be a serious problem. Next, look at Total Wait Time (sec), which shows how long the DB waited in this class, and then at Wait Avg (ms). If Total Wait Time (sec) is high but Wait Avg (ms) is low, you can ignore it. If both are high, or Wait Avg (ms) is high, it needs further investigation.

In the above screen shot, most of the resource are taken by Log file sync = 91.9% DB time and the wait class is “Commit”

In this report the event is "log file sync" which has high Waits, huge % DB time and large values in Total Wait Time (sec)  and Wait  Avg(ms) with wait class as commit. So, here you have to investigate further.

Note : Use highest Total Wait Time(sec) to guide investigation.

Next are Wait Classes by Total Wait Time, Host CPU, Instance CPU, IO Profile, Memory Statistics and Cache Sizes, which are self-explanatory.
Points to remember while checking these results:
  • Watch the number of CPUs.
  • Pay attention to changes in memory size.
  • An idle CPU can be a bad thing.
  • Always look at wait I/O versus CPU usage.
  • If the system is I/O bound, the CPU will be idle!

7. Shared Pool Statistics:



In general, the Memory Usage % statistic should be ~70% after the DB has been running a long time. If it is quite low, memory is being wasted. In our report, Memory Usage % is ~50%, which is fine.

If shared pool consumption reaches ~90% or above, the system may be experiencing overhead while aging out old shared memory structures such as cursors and PL/SQL programs. This places additional overhead on the CPU for re-parsing and aging out; the size of the shared pool should be increased appropriately to eliminate such overhead.

The "% SQL with executions > 1" statistic indicates how many SQL statements are executed more than once. It measures how well the applications are tuned and how well they make use of bind variables.


8. Time Model Statistics:

This is a detailed breakdown of system resource consumption. Stats are ordered by Time (s) and % of DB Time.




Sometimes the sum of all % of DB time may be > 100%. Why?

Because these timings are cumulative: for example, "SQL execute elapsed time" includes sub-components such as "parse time elapsed" and "hard parse elapsed time". So if "hard parse elapsed time" takes a large %, investigate further, and so on.

Look for the stat that takes an abnormal % of DB time.

9. Operating System Statistics - Detail:

This is the OS-level information: the load status on the system is shown here.




This report shows the system was 97-98% idle at the time the report was taken, so there is no resource crunch at the system level. If instead you find a very high busy, user or sys % (and correspondingly low idle %), investigate what is causing it.

Next, a very crucial part of the AWR report is SQL Statistics, which has the details of all SQL queries executed during the report interval.



We will explore a few of them to understand how to analyze these reports. Let's start with:

10. SQL Ordered by Elapsed Time:

As the name suggests, this lists SQL queries ordered by elapsed time within the reported interval.




In this report, look for a query with low executions and a high Elapsed Time per Exec (s); such a query is a candidate for troubleshooting or optimization.

Important point: sometimes Executions may show 0. This doesn't mean the query was not executing; it may still have been running when the AWR report was taken, so its completion was not captured in the report.

11. SQL Ordered by CPU Time:

In this report, SQL queries are listed by the CPU they consumed, i.e. the queries causing the highest load on the system. The top few queries are candidates for optimization.

From the above, look for the queries using the highest CPU time.

There are many other stats in an AWR report that may need to be considered; here I have listed the ones most commonly used for performance analysis.



TKProf output understanding:

A TKProf output file has mainly the following sections:

1. Tabular Statistics
2. Row Source Operations
3. Wait Event Information

1. Tabular Statistics in details:


Before interpreting these statistics, we should understand the following terms.

Row names and their meanings:

PARSE: Translates the SQL statement into an execution plan, including checks for proper security authorization and checks for the existence of tables, columns, and other referenced objects.
EXECUTE: Actual execution of the statement by Oracle. For INSERT, UPDATE, and DELETE statements, this modifies the data. For SELECT statements, this identifies the selected rows.
FETCH: Retrieves rows returned by a query. Fetches are only performed for SELECT statements.

Column names and their meanings:

count: It is the number of times a call (parse/execute/fetch) was performed.
cpu: Total CPU time in seconds
elapsed: Total elapsed time in seconds
disk: Total number of data blocks physically read from the data files on disk.
query: Total number of buffers retrieved in consistent mode. This column is incremented when a buffer is read in consistent mode, including blocks read from rollback segments in order to roll back a block.
current: Total number of buffers retrieved in current mode.
rows: Total number of rows processed by the SQL statement.

*** The sum of the query and current columns is the total number of buffers accessed, i.e. LIOs (logical I/Os).

In this section, first look for any operation taking a high amount of CPU time, or a high number of block reads (the query column), especially if the block-read LIOs are very high compared to the number of rows fetched. Another important thing is to check for heavy hard parsing, visible from the parse call count and the number of times the statement was hard parsed (misses in library cache). Together these give a sense of whether something is wrong with the query and, if so, where. I will try to include some example tabular statistics in subsequent posts for scenarios with library cache or buffer cache problems.

2. Row Source Operations:

From the Tabular Statistics we understand whether there is a problem with the query; we can then refer to the Row Source Operations to see which operations are costly.


Here cr is consistent reads, pr is physical reads, pw is physical writes, and time is the time in microseconds.
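A Row Source Operations section looks like this (an illustrative example; the object name and figures are invented):

```text
Rows     Row Source Operation
-------  ---------------------------------------------------
    100  TABLE ACCESS FULL EMP (cr=50000 pr=120 pw=0 time=1224000 us)
```

In a sample like this, a full table scan doing 50,000 consistent reads to return 100 rows would point at a missing or unusable index.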

3. Wait Event Information:




This is another important section of the TKProf output file, as it gives us the details of the wait events on which the query is waiting. From this information we can either try to reduce the cause of the waits or, using the row source information, see whether an alternate access method can be used.

From the Tabular Statistics we know whether there is a problem and get a hint about the problem area, and from the Row Source Operations we know the most costly operation; now, from the wait event information, we can get the details of where the query is waiting.
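The wait event section of a TKProf report has this shape (illustrative example, figures invented):

```text
Elapsed times include waiting on following events:
  Event waited on                             Times   Max. Wait  Total Waited
  ----------------------------------------   Waited  ----------  ------------
  db file scattered read                          45        0.05          0.80
  SQL*Net message to client                       11        0.00          0.00
  SQL*Net message from client                     11        0.02          0.15
```

A dominant 'db file scattered read' wait, for example, is typical of multiblock reads coming from full scans.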

TKProf Interpretation (9i and above) (Doc ID 760786.1)
https://expertoracle.com/2017/11/24/db-tuning-basics-6-trace-and-tkprof-part-2-generating-tkprof/
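As a reminder, a TKProf report is generated from a raw trace file with the tkprof utility; a typical invocation (the trace file name here is just an example) is:

```shell
tkprof orcl_ora_12345.trc tkprof_out.txt sys=no sort=prsela,exeela,fchela
```

sys=no excludes recursive SYS statements, and the sort options put the statements with the highest parse/execute/fetch elapsed times at the top of the report.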

Weblogic job interview questions and answers:

What is Weblogic?

WebLogic is a J2EE application server and also an HTTP web server from Oracle, available for Unix, Linux, Microsoft Windows, and other platforms. WebLogic supports Oracle, DB2, Microsoft SQL Server, and other JDBC-compliant databases.

What are the basic components of Weblogic Server?

The basic weblogic components are as follows:
1) Domains
2) Admin Server
3) Managed Server
4) Node Manager
5) Weblogic Server Cluster

What is a Domain in Weblogic Server?

A domain is a logically related group of WebLogic Server resources that is managed as a single unit. Every domain contains exactly one Administration Server and may contain Managed Servers and clusters.

What is a Server?

A server is a WebLogic Server instance, a Java process that hosts applications and resources such as data sources and JMS destinations.

What is the Admin Server?

The Administration Server is the central server of a domain, used to configure, manage, and monitor all the other resources in the domain. There is one Administration Server per domain.

What is a Managed Server?

A Managed Server is any additional server instance in the domain. Applications and resources are typically deployed to Managed Servers, while the Administration Server is reserved for administration.

How do the Admin Server and Managed Servers interact?

The Administration Server stores the master copy of the domain configuration, including the configuration for all Managed Servers in the domain. Each Managed Server stores a local copy of the domain configuration file. When a Managed Server starts, it connects to the Administration Server to synchronize the configuration. When the configuration is changed, the Administration Server sends the changed configuration to the Managed Servers.

What is MSI mode in Weblogic? How can you enable and disable this option?

MSI stands for Managed Server Independence. By default, Managed Servers can function independently of the Administration Server: a Managed Server instance can start in MSI mode when the Administration Server is unavailable. To start a Managed Server in MSI mode:
Ensure that the Managed Server's root directory contains the config subdirectory. If the config subdirectory does not exist, copy it from the Administration Server's root directory. Then start the Managed Server at the command line or by using a script.
MSI mode is enabled or disabled from the Administration Console:
Environment > Servers > Server_Name > Tuning > Advanced > Managed Server Independence Enabled check box

Difference between Weblogic Development and Production Mode?

Development Mode:
1) The default JDK for development domain is Sun Hotspot
2) You can use the demo certificates for SSL
3) Auto deployment is enabled
4) Server instances rotate their log files on startup
5) Admin Server uses an automatically created boot.properties during startup
6) The default maximum capacity for JDBC Datasource is 15
Production Mode:
1) The default JDK for production domain is JRockit
2) If you use the demo certificates for SSL a warning is displayed
3) Auto deployment is disabled
4) Server instances rotate their log files when it reaches 5MB
5) Admin Server prompts for username and password during startup
6) The default maximum capacity for JDBC Datasource is 25

How to change from Development Mode to Production Mode in Weblogic 10.3?

One way to change the Weblogic startup mode from development to production is to edit setDomainEnv.cmd (setDomainEnv.sh on Unix), which resides in the $root_domain/bin folder.
1. Look for the line that sets the PRODUCTION_MODE script variable. Set it to false to start the server in development mode, or to true to start it in production mode:
set PRODUCTION_MODE=true
2. Save your changes and exit the text editor.

What is the boot.properties file in Weblogic Server?

boot.properties is the file used by the Admin or a Managed Server during startup for the username and password. It exists under the domain_home/servers/server_name/security directory.
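Before the first startup the file contains the credentials in plain text (the values below are examples only):

```text
username=weblogic
password=welcome1
```

On the next startup, WebLogic Server encrypts both values in place.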

What are the installation modes for Weblogic Server?

There are 3 types of weblogic installation:
1) Graphical mode
2) Console mode
3) Silent mode

What is config.xml?

It is the central configuration repository for a domain. It contains the configuration of the domain, such as entries for the servers, clusters, data sources, etc. It is located under domain/config.
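A heavily trimmed sketch of what the file contains (the real file carries XML namespaces and many more elements; the server names and ports here are just examples):

```xml
<domain>
  <name>base_domain</name>
  <server>
    <name>AdminServer</name>
    <listen-port>7001</listen-port>
  </server>
  <server>
    <name>ManagedServer_1</name>
    <listen-port>7003</listen-port>
  </server>
</domain>
```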

How do we provide User Credentials while Starting Weblogic Server?

When creating a domain, the Configuration Wizard prompts you to provide the username and password for an initial administrative user. If the domain is created in development mode, the wizard saves the username and encrypted password in a boot identity file.
A WebLogic Server instance can refer to a boot identity file during its startup process. If a server instance does not find such a file, it prompts you to enter credentials.
If you create a domain in production mode, or if you want to change user credentials in an existing boot identity file, you can create a new boot identity file.

Can we start a Managed Server if the Administration Server is not available?

Yes. A Managed Server can start in Managed Server Independence (MSI) mode, using its local copy of the domain configuration, when the Administration Server is unavailable.

What is a Weblogic Server Cluster?


A group of WebLogic Managed Server instances that work together to provide high availability and scalability for applications is called a cluster. The WebLogic Servers within a cluster can run on the same machine or on different machines, and this is also called a Managed Server cluster. All the servers in a cluster must be in the same domain and at the same version level, though they can run on different machines.

Advantages of clustering?

WebLogic clustering offers the following important benefits:
Scalability: The capacity of an application deployed on a WebLogic Server cluster can be increased dynamically to meet demand. You can add server instances to a cluster without interruption of service; the application continues to run without impact to clients and end users.
Load balancing: The ability to distribute requests across all members of the cluster, according to the workload on each server.
High availability: A mix of features that ensure applications and services are available even if a server or machine fails. Clients can continue to work with little or no disruption in a highly available environment. WebLogic achieves high availability using a combination of features: replication, failover, and migratable services.

Difference between Multicast and Unicast in Weblogic?

Multicast: Multicast is a broadcast UDP option for sending a packet/announcement to a group that is listening on a specific multicast address and port. There is a defined range of valid multicast addresses (224.0.0.1 to 239.255.255.255), and everyone listening on the given address hears the announcement, much like followers seeing a Twitter post. One limitation of multicast is that the TTL (time to live) across machines/subnets/routers needs to be adjusted, and the routers must be configured to retransmit the multicast packet across subnets. In a multicast cluster there is no cluster master: every server pings every other server to announce its existence, so many more messages are sent than with unicast, which makes the method more complex.

Unicast: Unicast is more of a point-to-point UDP option that sends the packet to a specific member rather than to everyone. Unicast is like a private conversation between two individuals, while multicast is a shout to a group or room. Both are UDP based, so there can be losses, unlike TCP, which handles retransmissions on message loss. But unicast can span routers without TTL concerns and without everyone hearing the announcement, so network admins generally prefer unicast over multicast. In a unicast cluster there is a cluster master, and each server pings the cluster master to announce that it is alive.











Monday, June 10, 2019

Reset the AdminServer Password in WebLogic 11g and 12c:

IF YOU KNOW CURRENT PASSWORD

Start the Admin Server and log into /console.
Go to page: Home > Summary of Security Realms > myrealm > Users and Groups > weblogic, and click on the Passwords tab.
Enter the new Password.
Restart the server.
If you get a weblogic.security.SecurityInitializationException error, perform these additional steps on every Managed Server (or eventually the Admin Server, too):

Go to folder <DOMAIN_HOME>/servers/AdminServer/security
Edit the boot.properties file and change the password to the value already entered on the Admin Console. Do this for all the servers in the domain.
Start the Admin Server (Weblogic Server will encrypt the password for you).
Optionally, you can force a Managed Server to connect to the embedded LDAP server on the Administration Server, instead of connecting to the local replicated LDAP server. Follow these steps:

Go to page: Domain > Security > Embedded LDAP page on the Admin Console.
Enable MasterFirst.
Restart the server.


IF YOU DON'T KNOW CURRENT PASSWORD


To reset the password, follow these steps:


Make sure the Weblogic Server instance is stopped.

Make a backup of the LDAP folder of the admin server as well as managed servers (you may rename those folders):

<WL_HOME>/user_projects/domains/<DOMAIN_NAME>/servers/<SERVER_NAME>/data/ldap

Set your environment variables by running setDomainEnv.sh (UNIX) or setDomainEnv.cmd (Windows). For example, on UNIX:
. ./setDomainEnv.sh (Notice the space between the dots)


Create a new initialization file for the default authenticator by running the following command that creates a new DefaultAuthenticatorInit.ldift file in the $DOMAIN_HOME/security subdirectory:
java weblogic.security.utils.AdminAccount <ADMIN-USERNAME> <ADMIN-PASSWORD> <DOMAIN_HOME>/security

Note: AdminAccount should be run on the Admin Server, not one of the Managed Servers.


Remove the initialized status file DefaultAuthenticatormyrealmInit.initialized from the <DOMAIN_HOME>/servers/AdminServer/data/ldap/ subdirectory:
cd <WL_HOME>/user_projects/domains/<DOMAINNAME>/servers/AdminServer/data/ldap
rm DefaultAuthenticatormyrealmInit.initialized

NOTE: In some cases, it has been necessary to delete the ldap directory for this process to work.

Go to folder <DOMAIN_HOME>/servers/AdminServer/security

Edit the boot.properties file and change the password to the value already used on the previous step. Do this for all the servers in the domain.

Start Weblogic Server (Weblogic Server will encrypt the password for you).


How to Change the WebLogic Server Administrator Password [ID 1082299.1]

RMAN related queries

1. Query to check RMAN backup details:

col STATUS format a9
col hrs format 999.99
select
SESSION_KEY, INPUT_TYPE, STATUS,
to_char(START_TIME,'mm/dd/yy hh24:mi') start_time,
to_char(END_TIME,'mm/dd/yy hh24:mi')   end_time,
elapsed_seconds/3600                   hrs
from V$RMAN_BACKUP_JOB_DETAILS
order by session_key;
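The output looks like this (illustrative rows, values invented):

```text
SESSION_KEY INPUT_TYPE    STATUS    START_TIME     END_TIME            HRS
----------- ------------- --------- -------------- -------------- --------
         10 DB FULL       COMPLETED 06/20/19 22:00 06/20/19 23:15     1.25
         12 ARCHIVELOG    COMPLETED 06/21/19 02:00 06/21/19 02:05      .08
```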


oacore services down and not starting:

oacore was down with error:
./adopmnctl.sh status -l

OC4JGroup:default_group          | OC4J:oacore        |     N/A | Down     |        N/A |      N/A |       N/A | N/A

Solution:
1. Start the component individually.

./adopmnctl.sh startproc ias-component=OC4JGroup:default_group

2. Kill the process relevant to the corresponding JVM and re-start it individually:

ps ux

./adopmnctl.sh startproc ias-component=OC4JGroup:default_group

3. Stop and re-start all components:

./adopmnctl.sh stopall
./adopmnctl.sh startall