We recently had a very challenging issue after we upgraded our EBS 12.1.3 database from 11.2.0.3 to 12.1.0.2. On random occasions, threads on the Output Post Processor started to hang / wait for something, and eventually this caused the concurrent managers to get completely stuck with nothing being processed anymore.
The only way to recover from this was to bounce the concurrent managers during the daytime! Later we figured out that bouncing only the Output Post Processor was enough, which made the situation a bit easier to handle.
The Output Post Processor (OPP) is the process in EBS which handles turning request output into PDF documents. As most of our user-facing documents are PDFs, they all go through XML Publisher and the OPP.
After the upgrade we randomly started seeing one or two requests getting stuck in the OPP phase, and if we let them run long enough they would use up the available OPP threads, so all other requests would get stuck waiting for a free OPP thread.
Our OPP setup at the time was 2 processes with 10 threads each. We had been running this setup for around 10 years, so we didn’t believe it was the issue. Also, the requests could start to wait at any given time of the day, under no particular load on the system.
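For reference, if you want to confirm how many OPP processes your own instance is targeting, something like the sketch below works. It is only a hedged example using python-oracledb: the connection details are placeholders, and FNDCPOPP is the standard internal queue name of the Output Post Processor, which is worth verifying in your environment. The threads per process are configured separately on the OPP manager definition, so this only covers the process count.

# Sketch: check the OPP queue definition with python-oracledb.
# Connection details are placeholders; verify the queue name in your instance.
import oracledb

conn = oracledb.connect(user="apps", password="apps_password",
                        dsn="dbhost:1521/EBSDB")
with conn.cursor() as cur:
    cur.execute("""
        select concurrent_queue_name, max_processes, running_processes
        from   fnd_concurrent_queues
        where  concurrent_queue_name = :queue_name
    """, queue_name="FNDCPOPP")
    for name, target, running in cur:
        print(f"{name}: target {target} processes, {running} currently running")
conn.close()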
At the time of the issue we couldn’t see any blocking sessions or anything similar. All the sessions on the database were basically waiting on SQL ID fnpyvpk41nd5s with the wait event “Streams AQ: waiting for messages in the queue”. On the server we could see that the XML file was created without issues, and there were no errors in the OPP log.
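If you are seeing the same behaviour, a minimal sketch along these lines lists the sessions sitting on that SQL ID and wait event so you can watch how many pile up. Again this is just an illustration: python-oracledb and the connection details are my assumptions, only the SQL ID and wait event come from our case.

# Sketch: list database sessions stuck on the AQ wait event we observed.
# Connection details are placeholders for a privileged database account.
import oracledb

conn = oracledb.connect(user="system", password="system_password",
                        dsn="dbhost:1521/EBSDB")
with conn.cursor() as cur:
    cur.execute("""
        select sid, serial#, module, seconds_in_wait
        from   gv$session
        where  sql_id = :sql_id
        and    event = 'Streams AQ: waiting for messages in the queue'
    """, sql_id="fnpyvpk41nd5s")
    for sid, serial, module, wait_seconds in cur:
        print(f"SID {sid},{serial} module={module} waiting {wait_seconds}s")
conn.close()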
Because of the wait event, we tried the following actions (and a lot of others too):
- Purged and rebuilt the AQ OPP tables: How To Purge FND_AQ Tables (Doc ID 1156523.1)
- Rebuilt FND objects
- Reduced the number of OPP threads per process from 10 to 5 (this actually made the problem worse)
- Enabled XDO debug to see more of what was happening inside the OPP, but as the stuck requests never started to create the document, we didn’t see anything in the log files either
At the same time we were manually terminating requests which started to hang, to prevent the whole OPP from getting stuck. As we run over 100,000 requests per day, staying on top of the issue this way became quite a time-consuming task. In the end, support recommended applying patch 18329573 from “How to upgrade the JDBC driver to 11.2.0.3 in EBS R12.1.3?” (Doc ID 2196404.1), which upgrades the JDBC drivers on the EBS application server to version 11.2.0.3.
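As an aside about that firefighting period: a check along the lines of the sketch below is one way to spot hanging requests before they fill up all the OPP threads. This is only a rough illustration, not what we ran: the python-oracledb connection details and the two-hour threshold are placeholders I picked, and since requests stuck at the OPP stage typically still show in the Running phase, the output needs a sanity check before terminating anything.

# Sketch: flag concurrent requests that have been in the Running phase
# suspiciously long, as a starting point for finding ones stuck at OPP.
# The two-hour threshold and connection details are arbitrary placeholders.
import oracledb

conn = oracledb.connect(user="apps", password="apps_password",
                        dsn="dbhost:1521/EBSDB")
with conn.cursor() as cur:
    cur.execute("""
        select request_id, status_code,
               round((sysdate - actual_start_date) * 24, 1) as hours_running
        from   fnd_concurrent_requests
        where  phase_code = 'R'
        and    actual_start_date < sysdate - :hours / 24
        order  by actual_start_date
    """, hours=2)
    for request_id, status, hours in cur:
        print(f"Request {request_id} (status {status}) has been running {hours}h")
conn.close()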
After applying this patch and restarting the OPP, the issue was gone! This patch doesn’t seem to be on any checklist for upgrading the EBS database to 12.1.0.2, but it definitely should be!
I’m also wondering: if the drivers are being upgraded to 11.2.0.3, what version were the previous drivers on the EBS 12.1.3 application server?
Since there wasn’t much information on MOS about the issue, I’m putting it here in the hope that it helps somebody someday and makes it easier for you to get on top of the issue! For us it took over two weeks to figure this out.
Comments
Thanks for this note - we've been seeing a similar OPP issue for some time, after a similar upgrade path. I also found MOS notes relating to the JDK 1.7.0 upgrade and the requirement for the JDBC driver upgrade to 11.2.0.3. Definitely some holes in the documentation for upgrades!