
Friday, April 8, 2022

Parallel concurrent Processing (PCP) in EBS R12

 What is PCP?

PCP allows running the concurrent managers across multiple nodes.


How to Configure Parallel Concurrent Processing (PCP)

Backup all .ora files

Take a backup of all .ora files such as the tnsnames.ora, listener.ora and sqlnet.ora files, where they exist, under the 10.1.2 and 10.1.3 ORACLE_HOME locations on each node.

Edit Context file

Stop the application services, then edit the applications context file as follows.

Set the values of the following variables:

> APPLDCP to ON

> s_applcsf to <shared location between apps servers>

> s_appltmp to <shared location between apps servers>
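
In the applications context file these variables appear as XML entries. A hedged sketch of what the edited entries might look like (the shared paths are placeholders, and exact tag names can vary between EBS versions):

```xml
<APPLDCP oa_var="s_appldcp">ON</APPLDCP>
<APPLCSF oa_var="s_applcsf">/mnt/shared/applcsf</APPLCSF>
<APPLTMP oa_var="s_appltmp">/mnt/shared/appltmp</APPLTMP>
```

The key point is that s_applcsf and s_appltmp must resolve to the same shared filesystem path on every apps node, so that log and output files written by a manager on one node are readable after a failover to another.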

Edit parameters in spfile (for the transaction manager)

As a user with SYSDBA privilege:

alter system set "_lm_global_posts"=true scope=spfile;

alter system set "_immediate_commit_propagation"=true scope=spfile;

Edit utl_file_dir variable

From database node as sysdba

ALTER SYSTEM SET utl_file_dir='<shared temp location>' SCOPE=BOTH SID='*';

Bounce the database to reflect the changes made.
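
The bounce can be done per instance with SQL*Plus as SYSDBA; for example:

```sql
-- as sysdba, on each database node
shutdown immediate;
startup;
```

Since the parameters above were set with scope=spfile, they only take effect after this restart.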

Execute AutoConfig

Execute AutoConfig on all concurrent processing nodes by running $INST_TOP/admin/scripts/adautocfg.sh.

Check the tnsnames.ora and listener.ora configuration files

Check the tnsnames.ora and listener.ora configuration files, located in $INST_TOP/ora/10.1.2/network/admin. Ensure that the required FNDSM and FNDFS entries are present for all other concurrent nodes.
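
A quick, illustrative way to confirm the entries exist on each node (diagnostic fragment; adjust the path for your instance):

```shell
# list the FNDSM and FNDFS aliases registered for the concurrent nodes
grep -E 'FNDSM|FNDFS' $INST_TOP/ora/10.1.2/network/admin/tnsnames.ora
```

Each concurrent processing node should show FNDSM and FNDFS entries for every other node; if any are missing, re-run AutoConfig before proceeding.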

Start the Applications

Log on to Oracle E-Business Suite Release 12 using the SYSADMIN account, and choose the System Administrator Responsibility.

• Navigate to Install > Nodes screen, and ensure that each node in the cluster is registered.

• Set up the primary and secondary node names

Navigate to Concurrent > Manager > Define, and set up the primary and secondary node names for all the concurrent managers according to the desired configuration for each node workload.

Verify that the Internal Monitor for each node is defined properly, with the correct primary node specification and work shift details. For example, Internal Monitor: Host1 must have its primary node set to host1. Also ensure that the Internal Monitor manager is activated: this can be done from Concurrent > Manager > Administrator. 

Set the profile option 'Concurrent: PCP Instance Check' to OFF if database instance-sensitive failover is not required (i.e., for a non-RAC database). When set to 'ON', a concurrent manager will fail over to a secondary application tier node if the database instance to which it is connected becomes unavailable for some reason.

Set Up Transaction Managers  (Only R12)

If you are already using transaction managers and wish to have them fail over, perform the steps below:

  - Shut down the application services (servers) on all nodes

  - Shut down all the database instances cleanly in the Oracle RAC environment, using the command: 

 - SQL>shutdown immediate;

 - Edit the $ORACLE_HOME/dbs/<context_name>_ifile.ora and add the following parameters:

        _lm_global_posts=TRUE

        _immediate_commit_propagation=TRUE

 - Start the instances on all database nodes.

 - Start up the application services (servers) on all nodes.

 - Log on to Oracle E-Business Suite Release 12 using the SYSADMIN account, and choose the System Administrator responsibility. Navigate to Profile > System, change the profile option 'Concurrent: TM Transport Type' to 'QUEUE', and verify that the transaction manager works across the Oracle RAC instance.

- Navigate to Concurrent > Manager > Define screen, and set up the primary and secondary node names for transaction managers.

- Restart the concurrent managers.

- If any of the transaction managers are in a deactivated status, activate them from Concurrent > Manager > Administrator.

 Set Up Load Balancing on Concurrent Processing Nodes (Only Applicable in case of RAC)

If you wish to have PCP use the load balancing capability of RAC, perform the steps below. Connections will be load balanced using the <service_name>_balance alias and will connect to all the RAC nodes.

   - Edit the applications context file through the Oracle Applications Manager interface, and set the value of Concurrent Manager TWO_TASK (s_cp_twotask) to the load balancing alias (<service_name>_balance).

- Execute AutoConfig by running $INST_TOP/admin/scripts/adautocfg.sh on all concurrent nodes.
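
The load balancing alias that s_cp_twotask points to is an ordinary TNS entry. A hedged sketch (service name, hosts, and port are placeholders for your environment):

```
PROD_balance =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (LOAD_BALANCE = YES)
      (FAILOVER = YES)
      (ADDRESS = (PROTOCOL = TCP)(HOST = dbhost1)(PORT = 1521))
      (ADDRESS = (PROTOCOL = TCP)(HOST = dbhost2)(PORT = 1521))
    )
    (CONNECT_DATA = (SERVICE_NAME = PROD))
  )
```

With LOAD_BALANCE=YES, new manager connections are spread across the listed addresses rather than pinned to a single instance.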

Is RAC Mandatory to Implement PCP?

  - No, RAC is not mandatory for PCP. If you have two or more application nodes, you can enable PCP; but PCP works better in conjunction with RAC to handle all the failover scenarios.

How PCP Works with RAC?

 - In a RAC-enabled environment, PCP uses the cp_two_task environment variable to connect to a DB RAC node. This can be set to point one CM node at one RAC node, or set to connect to all the RAC nodes in the cluster.

What happens when one of the RAC nodes goes down with PCP enabled?

 - When 'Concurrent: PCP Instance Check' is set to ON and the cp_two_task value is set to a SID (i.e., one CM node always connects to only one RAC node), then if one DB node goes down, PCP identifies the DB failure and shifts all the concurrent managers to another applications node where the database is available.

What happens when one of the PCP nodes goes down?

 - IMON identifies the failure and, through FNDSM (the Service Manager), initiates the ICM on a surviving node (if the ICM was running on the failed node); the ICM then starts all the managers.

What are Primary and Secondary Nodes in PCP?

 - It is a requirement to define the primary and secondary nodes to distribute load across the servers. If they are not defined, all the managers will start by default on the node where the ICM is running.

How Does Failback Happen in PCP?

 - Once the failed node comes back online, IMON detects it and the ICM fails back all the managers defined on that node. 

What happens to requests running during failover in PCP?

 - It is important to note that RAC and PCP do not support any DML commands, and TAF and FAN are not supported with E-Business Suite.

 - When a request is running and the CM goes down, the request keeps the status Running/Normal but has no associated process ID. When the ICM starts on another node, it checks all Running/Normal requests and verifies their OS process IDs; if it does not find a process ID, it resubmits the request.

 - This behavior is normal even in a non-PCP environment.

 - The Internal Concurrent Manager (ICM) will only restart a request if the following conditions are met:

 The ICM got the manager's database lock for the manager that was running the request

 The phase of the request is "running" (phase_code = 'R')

 The program for this request is set to "restart on failure"

 All of the above requirements have been met AND at least one of the following:

           a.  The ICM is just starting up (i.e., it has just spawned on a given node and is going through initialization code before the main loop)

           b.  The node of the concurrent manager for which we got the lock is down

           c.  The database instance (TWO_TASK) defined for the node of that concurrent manager is down (this is not applicable if one is using a "balance" alias as TWO_TASK on that node)
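
To see which requests are candidates for this check, the running phase can be queried directly. A hedged sketch against the standard FND schema (column availability may vary slightly by release):

```sql
-- requests currently in the Running phase (phase_code = 'R'),
-- with the OS process ID the ICM verifies on restart
SELECT request_id, phase_code, os_process_id
  FROM fnd_concurrent_requests
 WHERE phase_code = 'R';
```

A Running/Normal row with no live OS process behind its os_process_id is exactly the case the ICM resubmits.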

How does PCP identify when a node goes down?

  - There are two types of failures that PCP recognizes.

 Is the node pingable? 

 Issues an operating system ping on the machine name - timeout or available.


 Is the database available? 

 Queries V$THREAD and V$INSTANCE for a status of open or closed.
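
The database-side check amounts to something like the following (standard dynamic performance views):

```sql
-- instance status: OPEN when the instance is available
SELECT instance_name, status FROM v$instance;

-- per-thread status in RAC: OPEN or CLOSED
SELECT thread#, status FROM v$thread;
```

A CLOSED thread or an unreachable instance is what triggers the failover behavior described below.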

 - When either of the above failures occurs, the following example illustrates the failover and failback of managers.

 Primary node = HOST1 - Managers assigned to primary node are ICM (FNDLIBR-cpmgr) , FNDCRM

 Secondary node = HOST2 - Manager assigned to secondary node is Standard Manager (FNDLIBR)

 When HOST1 becomes unavailable, both ICM and FNDCRM are migrated over to HOST2.

 This is viewable from Administer Concurrent Manager form in System Administrator Responsibility.

 The $APPLCSF/log/.mgr logfile will also reflect that HOST1 is being added to the unavailable list.

 On HOST2, after pmon cycle, FNDICM, FNDCRM, and FNDLIBR are now migrated and running.

 (Note: FNDIMON and FNDSM run independently on each concurrent processing node. FNDSM is not a persistent process, and FNDIMON is a persistent process local to each node.)

 Once HOST1 becomes available, FNDICM and FNDCRM are migrated back to the original primary node for successful failback.

In summary, in a successful failover and failback scenario, all managers should fail over to their secondary node, and once the node or instance becomes available again, all managers should fail back to their primary node.

How PCP Works Internally

1) The ICM contacts the TNS listener. The TNS listener must be started on all the CM nodes.

2) The TNS listener spawns the Service Manager (FNDSM). Each CM node will have a Service Manager (FNDSM) started.

3) The ICM communicates with the Service Manager (FNDSM).

4) The Service Manager spawns the various manager and service processes.

5) If the ICM crashes due to a node failure, the Internal Monitor on a surviving node detects it.

6) The Internal Monitor will spawn the ICM locally when it detects that the ICM is down.
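
On a healthy PCP node the processes above are visible from the OS. An illustrative diagnostic check (process names as described in this article; output depends on your environment):

```shell
# service manager, internal monitor, ICM/standard managers, and CRM on this node
ps -ef | grep -E 'FNDSM|FNDIMON|FNDLIBR|FNDCRM' | grep -v grep
```

After a failover you would expect the FNDLIBR and FNDCRM processes from the failed node to reappear in this listing on the surviving node.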
