Micro Focus Application Performance Management – Tips And Tricks

Table of Contents

5. Application Performance Management – Tips And Tricks – Apr 2021

1. Issues due to configuration of BSM / OMi Load Balancer

We are updating our monitoring environment to BSM / OMi 9.26 per the Micro Focus recommended changes for filing season 2019, and we are facing issues with the OV policy, which needs to be updated accordingly. We are currently working in our pre-production environment prior to promotion to production.

We have configured load balancing for BSM per the installation guide. When we reconfigure our data collectors (SiS, OML) to use the load balancer we cannot confirm the load balancer is actually being used.

Our expectation is that when we shut down one of the gateways, we should not experience a loss of event flow. However, when we shut down a gateway, the event flow stops until we re-enable that gateway.


I found that the root cause of the issue is in fact the OV policy configuration. After configuring the Load Balancer, we also need to update the configuration of the Operations Agent. The following values must be set:


MANAGER= -> Load balancer FQDN

MANAGER_ID= -> output of ovcoreid -ovrg server

We can run the following command to find out which Manager ID needs to be used in the agent configuration: ovcoreid -ovrg server

Run this on both the DPS and the GW and verify that the ID is the same on both, and different from the one in the agent's current configuration. Once we have this information, we can change the ID and restart the agent for the change to take effect.
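Putting the pieces together, the reconfiguration can be scripted. A minimal sketch, assuming the standard Operations Agent ovconfchg syntax with the sec.core.auth namespace, and an illustrative load-balancer FQDN; verify both against your own environment before running anything:

```python
# Sketch: build the Operations Agent commands that point a data collector
# (SiS, OML) at the load balancer. The ovconfchg namespace and the sample
# FQDN/core ID below are assumptions -- verify them in your environment.

def agent_reconfig_commands(lb_fqdn, manager_id):
    """Return the commands to run on each data collector."""
    return [
        # point MANAGER at the load balancer instead of a single gateway
        f"ovconfchg -ns sec.core.auth -set MANAGER {lb_fqdn}",
        # MANAGER_ID must be the core ID reported by: ovcoreid -ovrg server
        f"ovconfchg -ns sec.core.auth -set MANAGER_ID {manager_id}",
        # restart the agent so the change takes effect
        "ovc -restart",
    ]

for cmd in agent_reconfig_commands("lb.example.com",
                                   "0a1b2c3d-0000-0000-0000-000000000000"):
    print(cmd)
```

The commands are only assembled here, not executed; review them, then run them on each collector.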

2. Headers added to HTTP requests by BPM

While working with BPM, I noticed that when BPM replays a script, additional headers are added to the HTTP request; for example, mercuryorganizationname.

What are these headers? When are they added? How many are added? Is it possible to configure the headers used? Can individual headers be limited or excluded? A lot of questions were going through my head.


It turns out that when an application in BSM has "Enable Diagnostics / TV Breakdown" checked in Admin -> End User Management, additional headers are added to the HTTP request so that the Diagnostics/TV probe can recognize that the request is coming from BPM.

These headers cannot be modified. If "Diagnostics/TransactionVision breakdown" is selected, all of them are added; in total, 20 headers are added to the HTTP request.
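On the receiving side, this makes synthetic BPM traffic easy to recognize. A sketch, assuming only that the header names share the "mercury" prefix seen in the example mercuryorganizationname (the full set of 20 names is product-defined and not reproduced here):

```python
# Sketch: classify an incoming request as BPM synthetic traffic by looking
# for BPM-added headers. Only the "mercury" name prefix is taken from the
# example above (mercuryorganizationname); the real set has 20 headers.

def is_bpm_request(headers):
    """True if any header name suggests the request was replayed by BPM."""
    return any(name.lower().startswith("mercury") for name in headers)

real_user = {"User-Agent": "Mozilla/5.0", "Accept": "text/html"}
synthetic = {"User-Agent": "Mozilla/5.0", "mercuryorganizationname": "acme"}

print(is_bpm_request(real_user))   # False
print(is_bpm_request(synthetic))   # True
```

This is handy, for example, for excluding synthetic transactions from real-user analytics on the application side.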

3. BSM – About the Partitioning and Purging Manager (PM)

In this solution, we will get an overview of the Partitioning and Purging Manager (PM) and see when exactly it purges data from the databases.


Here is a summary, as far as my understanding and working experience goes.

As you know, we have tables and views in the profile database. Many of the tables, though not all, have a "partner" view. For example:

  1. BPM_TRANS – a view that points to one or more tables in the BSM profile database, for example

  2. BPM_TRANS_10000 – an underlying table.

The view is adjusted whenever the underlying database table(s) change, so later it might point to BPM_TRANS_20000, or to BPM_TRANS_10000 AND BPM_TRANS_20000.

Note: Whenever you read from the table, use the view to get to the current and active data. This assumes we are using a DBMS which allows native partitioning (for example MS SQL Enterprise Edition).

Each database table, for example BPM_TRANS_10000, can have one or more native partitions, which in the end contain the data, for example BPM_TRANS_10000.P228.

Each profile database has two catalog tables maintaining this, PM_CATALOG and PM_NATIVE_CATALOG. They keep track of the tables and native partitions (in BAC 8 I think this was called a view partition) which make up BPM_TRANS and BPM_TRANS_10000.

To see this, you can run the following query:

Select * FROM PM_CATALOG where PM_ENTITY='BPM_TRANS' (output modified and split into three lines)

PM_START_DATE            PM_END_DATE              PM_STATE  PM_START_NUMBER       PM_END_NUMBER
2015-08-16 01:00:00.000  2015-11-18 01:00:00.000  ACTIVE    1439679600000.000000  1447804800000.000000

(output modified)
BPM_TRANS_10000  P228  ACTIVE  1439679600000.000000  1439852400000.000000  NUMERIC
BPM_TRANS_10000  P229  ACTIVE  1439852400000.000000  1440025200000.000000  NUMERIC
BPM_TRANS_10000  P273  ACTIVE  1447459200000.000000  1447632000000.000000  NUMERIC
BPM_TRANS_10000  P274  ACTIVE  1447632000000.000000  1447804800000.000000  NUMERIC

When data is now added, BSM checks which partition to use, based either on PMN_START_DATE and PMN_END_DATE or on PM_START_NUMBER and PMN_END_NUMBER (this is decided based on the PMN_COLUMN_TYPE), and inserts the data there.

If BSM tries to add a really old entry, for example one dated 2015-06-01, to BPM_TRANS_10000, it will fail with the ugly message: No partition found that may hold data for the specified date: (2015-06-01 …), table: BPM_TRANS DB: …

This makes sense, as the table only starts at 2015-08-16 01:00:00.000. The same applies if BSM tries to insert based on the number; the number used here is the epoch time in milliseconds, for example:

PM_START_DATE 2015-08-16 01:00:00.000 == PM_START_NUMBER 1439679600000.000000


 1439679600 = 2015-08-16 01:00:00 GMT+2:00 DST
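Since the PM_*_NUMBER values are just epoch times in milliseconds, the mapping can be checked with any epoch converter, or directly:

```python
# Convert a PM_*_NUMBER value (epoch milliseconds) to a readable date
# in a given UTC offset (GMT+2 during DST in this example).
from datetime import datetime, timezone, timedelta

def pm_number_to_date(pm_number, utc_offset_hours=2):
    """PM catalog numbers are epoch milliseconds; render them as local time."""
    tz = timezone(timedelta(hours=utc_offset_hours))
    dt = datetime.fromtimestamp(pm_number / 1000.0, tz)
    return dt.strftime("%Y-%m-%d %H:%M:%S")

# PM_START_NUMBER 1439679600000 == PM_START_DATE 2015-08-16 01:00:00 (GMT+2)
print(pm_number_to_date(1439679600000))  # 2015-08-16 01:00:00
```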

As you can see, the native partitions are consecutive: P228 ends at number 1439852400000.000000, P229 starts at number 1439852400000.000000, and so on.
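The lookup BSM performs on insert can be sketched as follows; the catalog numbers are taken from the (abridged) query output above, and the function name is illustrative:

```python
# Sketch: pick the native partition whose [start, end) number range covers
# the value being inserted; raise a "No partition found"-style error when
# the data is older than the oldest partition.

PARTITIONS = [  # (name, start_number, end_number) from the abridged output
    ("P228", 1439679600000, 1439852400000),
    ("P229", 1439852400000, 1440025200000),
    ("P273", 1447459200000, 1447632000000),
    ("P274", 1447632000000, 1447804800000),
]

def find_partition(number):
    for name, start, end in PARTITIONS:
        if start <= number < end:
            return name
    raise ValueError(f"No partition found that may hold data for: {number}")

print(find_partition(1439679600000))   # P228
# An entry dated around 2015-06-01 (1433113200000) falls before P228:
try:
    find_partition(1433113200000)
except ValueError as e:
    print(e)
```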

Did I miss something?

Oh yes. The above should explain a little how data is put into the DB. Now PM comes into the game: PM checks whether the oldest native partition is older than the "Keep Data For" configuration for the table in question. If it is, PM drops that native partition and updates the table, for example BPM_TRANS_10000, to make it aware of the removed native partition. It also updates PM_CATALOG / PM_NATIVE_CATALOG (this increases the start date and/or start number).

After that, PM creates native partitions in advance, usually 6 hours ahead of time, and updates the table, for example BPM_TRANS_10000, to make it aware of the new native partition. It also updates PM_CATALOG / PM_NATIVE_CATALOG (this increases the end date and/or end number).

As an example for the deletion:

On my test system, the BPM_TRANS "Keep Data For" configuration is set under

BSM -> Admin -> Platform ->  Setup and Maintenance -> Data Partitioning and Purging

Business Process Monitor

 BPM_TRANS  Raw transaction response time and availability data  3 months

PM checks if the end date / end number of the oldest partition

 BPM_TRANS_10000 – P228 – 1439852400000.000000

is older than 3 months

 1439852400 = 2015-08-18 01:00:00 GMT+2:00 DST

now is

 1447749228 = 2015-11-17 09:33:48 GMT+1:00 (CET)

so the end date is not reached yet, and thus the native partition is NOT deleted,

but tomorrow it will …
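The retention check PM performs can be reproduced with the numbers above. A sketch, approximating the "3 months" window (2015-08-18 to 2015-11-18) as 92 days:

```python
# Sketch: PM drops the oldest native partition once its END number is
# older than "Keep Data For". The 3-month window is approximated as 92
# days here (2015-08-18 -> 2015-11-18 is exactly 92 days).

DAY_MS = 86_400_000

def partition_expired(end_number_ms, now_ms, keep_days):
    return end_number_ms < now_ms - keep_days * DAY_MS

p228_end = 1439852400000          # 2015-08-18 01:00 GMT+2 (DST)
now      = 1447749228000          # 2015-11-17 09:33:48 CET

print(partition_expired(p228_end, now, 92))            # False: not yet
print(partition_expired(p228_end, now + DAY_MS, 92))   # True: tomorrow it will
```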

If you check the PM logs on the DPS (<HPBSM>\log\pmanager), for example pmanager.log, you can follow the flow:

2015-11-17 09:45:20,655 [PMCycleSchedulerTimer] (PMEngine.java:160) INFO  – Starting new cycle

-> PM wakes up

2015-11-17 09:45:20,795 [PMCycleSchedulerTimer] (PMEngine.java:133) INFO  – PM is handling database: ‘VM402_BSM_PROFDEF’;

-> it works on profile db VM402_BSM_PROFDEF

2015-11-17 09:45:20,796 [PMCycleSchedulerTimer] (PMEngine.java:134) INFO  – Server time at the start of the cycle: Tue Nov 17 09:45:20 CET 2015

-> current time to compare all DB times with

2015-11-17 09:45:22,460 [PMCycleSchedulerTimer] (PMHideNativePartitionAction.java:49) INFO  – Hiding partition (in DB: VM402_BSM_PROFDEF, DBHOST: swvm402.bgr.hp.com)

2015-11-17 09:45:22,461 [PMCycleSchedulerTimer] (PMHideNativePartitionAction.java:51) INFO  – TABLE: M_HR01F1_F_90000


 ID:     12


 CREATED: 1447710321000 (Mon Nov 16 22:45:21 CET 2015)

 START:  1447732800000 (Tue Nov 17 05:00:00 CET 2015)

 END:    1447740000000 (Tue Nov 17 07:00:00 CET 2015)


-> this native partition has reached its "end of life": the data in it is older than the "Keep Data For" setting allows for this table,

the native partition status is changed from ACTIVE to HIDDEN so that it can be dropped / deleted in the next cycle

2015-11-17 09:45:22,471 [PMCycleSchedulerTimer] (PMDropPartitionAction.java:32) INFO  – Dropping partition:

2015-11-17 09:45:22,471 [PMCycleSchedulerTimer] (PMDropPartitionAction.java:33) INFO  – TABLE: RUM_TCP_APP_STAT_20000


 ID:     529


 CREATED: 1445002372000 (Fri Oct 16 15:32:52 CEST 2015)

 START:  1445025600000 (Fri Oct 16 22:00:00 CEST 2015)

 END:    1445061600000 (Sat Oct 17 08:00:00 CEST 2015)


-> this native partition was in state HIDDEN already, and now is dropped / deleted

2015-11-17 09:45:23,576 [PMCycleSchedulerTimer] (PMAddNativePartitionAction.java:63) INFO  – Creating new native partition (in DB: VM402_BSM_PROFDEF, DBHOST: swvm402.bgr.hp.com)

2015-11-17 09:45:23,577 [PMCycleSchedulerTimer] (PMAddNativePartitionAction.java:65) INFO  – TABLE: HI_STATUS_CHANGE_60000


 ID:     356


 CREATED: 1447749920000 (Tue Nov 17 09:45:20 CET 2015)

 START:  1447772400000 (Tue Nov 17 16:00:00 CET 2015)

 END:    1447783200000 (Tue Nov 17 19:00:00 CET 2015)


-> a new native partition is added to a table

2015-11-17 09:45:42,332 [PMCycleSchedulerTimer] (PMEngine.java:310) INFO  – Done

2015-11-17 09:45:42,333 [PMCycleSchedulerTimer] (PManager.java:204) INFO  – Going to sleep until the next cycle.

PM has completed its task and now pauses for an hour. It should now be possible to answer the initial question, i.e. when exactly does the pmanager purge data from the database? If "Keep Data For" is set to something other than "Infinite", PM deletes the oldest native partition based on that date, as explained above. The smallest entity PM can delete is a native partition.

Note: PM configuration settings such as "Keep Data For" are kept in the Management DB table PM_CONFIG.

4. SiteScope 11.40 / APM 9.40 – SiteScope cannot sync topology to APM 9.40

SiteScope is not able to sync topology to APM 9.40 after APM 9.30 was upgraded to APM 9.40 (see QCCR1I125008: Discovery scripts are not getting downloaded in SiteScope 11.33 IP1). The error shows that the uCMDB server responds with a login page, because the mam-collectors servlet is configured for HTTPS while the probe uses HTTP.

Starting with uCMDB version 10.30, by default the HTTPS protocol is enabled for UCMDB server, with the HTTP protocol being disabled. APM 9.40 comes with uCMDB version 10.32.130.

Apart from this, SiteScope 11.40 works fine with APM; however, SiteScope is unable to synchronize topology with APM.

 <SiteScope>\log\discovery.log shows

2017-10-04 12:53:02,176 [AsyncConfigurationSender] (DiscoveryClient.java:785) ERROR – Error Downloading domain scope from server. the server response:

B Server</title> <link rel="stylesheet" type="text/css" href="./ucmdb-ui/static/act/stylesheets/login_hp.css"/> … <div class="version">HPE Universal CMDB 10.32 …</div> … <span>© Copyright 1998-2017 Hewlett Packard Enterprise Development LP</span> … </body></html>

(The full HTML of the HPE Universal CMDB 10.32 login page is abridged here; the point is that the server returned its login page instead of the requested domain scope document.)

The folder <SiteScope>\discovery\scripts\ is empty except for the directory custom, which contains only the file placeholder.txt.


We need to map the mam-collectors and cm servlets to HTTP. Since this is not a plain uCMDB server with a UD probe, but APM 9.40 with RTSM (based on uCMDB 10.32) and SiteScope 11.40, the steps are slightly different from those described for standalone uCMDB. Follow the steps below and also pay attention to the notes.

Step 1: In <HPBSM>\odb\conf\settings.override.properties,

change "# jetty.connections.http.enabled=true" to "jetty.connections.http.enabled=true" (i.e. uncomment the line) and save the file.

Note: Step 1 seems not to be required in all installations.
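Step 1 (and the similar Step 3 below) amounts to uncommenting or adding one key in a properties file, which can be scripted if you maintain many servers. A minimal sketch; the file name and key come from the steps above, the helper itself is illustrative:

```python
# Sketch: ensure "key=value" is present and uncommented in a Java-style
# properties file, e.g. jetty.connections.http.enabled=true in
# <HPBSM>\odb\conf\settings.override.properties.
from pathlib import Path

def set_property(path, key, value):
    p = Path(path)
    lines = p.read_text().splitlines() if p.exists() else []
    out, found = [], False
    for line in lines:
        stripped = line.lstrip("# ").strip()
        if stripped.startswith(key + "="):
            out.append(f"{key}={value}")   # uncomment / overwrite the key
            found = True
        else:
            out.append(line)
    if not found:                          # the line might not exist: add it
        out.append(f"{key}={value}")
    p.write_text("\n".join(out) + "\n")

set_property("settings.override.properties",
             "jetty.connections.http.enabled", "true")
print(Path("settings.override.properties").read_text().strip())
```

Take a backup of the original file first; the same helper covers Step 3's appilog.agent.probe.protocol=http change.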

Step 2: Access the UCMDB JMX console: https://<ucmdb machine name or IP address>:8443/jmx-console. Select the service Ports Management Services and invoke the HTTPSetEnable method. The result should be: Mbean: UCMDB:service=Ports Management Services. Method: HTTPSetEnable operation succeeded. Will take effect after restart.

Invoke the PortsDetails method, and ensure that the value in the Is Enabled column for the HTTP protocol is True. The result should be: Mbean: UCMDB:service=Ports Management Services. Method: PortsDetails

Protocol  Port number  Is Enabled

HTTP 21212 true

HTTPS 8443 true

HTTPS – Requires client authentication 8444 true

Return to Ports Management Services. To map the Data Flow Probe connector to server authentication mode, invoke the mapComponentToConnectors method with the following parameters: componentName: mam-collectors, isHTTP: true, all other flags: false. The result should be: Mbean: UCMDB:service=Ports Management Services. Method: mapComponentToConnectors operation succeeded. Component mam-collectors is now mapped to: HTTP ports.

Return to Ports Management Services.

To map the Confidential Manager connector to server authentication mode, invoke the mapComponentToConnectors method with the following parameters: componentName: cm, isHTTP: true, all other flags: false. The result should be: Mbean: UCMDB:service=Ports Management Services. Method: mapComponentToConnectors operation succeeded. Component cm is now mapped to: HTTP ports.

Note: If you want to use multiple authentication methods, make sure you check the ports used by each of them and set them to true (when mapping both cm and mam-collectors). 

Step 3: In <SiteScope>\discovery\discovery_agent.properties, change appilog.agent.probe.protocol to appilog.agent.probe.protocol=http and save the file.

  (Note: the line might not exist, in which case it needs to be added.)

Note: Step 3 seems not to be required in all installations.

Restart SiteScope and restart APM. In a web browser, access the uCMDB Server / RTSM with the HTTP protocol on port 8080.

For example:  http://<server name or IP address>.<domain name>:8080 where <server name or IP address>.<domain name> represents the fully qualified domain name (FQDN) of the DPS

After SiteScope and APM are up and running, one should see roughly 962 items in the folder <SiteScope>\discovery\scripts\. There should be no errors in discovery.log or in bac_integration.log. A hard sync from SiteScope might be required to trigger the re-synchronization.

Note: This does not happen on all APM 9.40 installations. These combinations seem to work without any problems:

1) fresh APM 9.40 + fresh SiteScope 11.40

2) fresh APM 9.40 + SiteScope 11.40 updated from SiteScope 11.33

While this combination fails:

– APM 9.40 with APM being updated from APM 9.30 + SiteScope 11.40

– APM 9.40 with APM being updated from APM 9.30 + SiteScope 11.40 with SiteScope being updated from SiteScope 11.33

Note: Another, easier way to resolve the issue is to access the APM GUI, go to Administration -> Platform -> Infrastructure Settings and select Foundations: RTSM.

Under RTSM – Web Components to Connectors Mapping Settings, change the settings CM Port Mapping and Mam-Collectors Port Mapping from "HTTP" to "HTTP,HTTPS".

After saving the changes, one needs to restart APM.

This enables APM to also listen for HTTPS traffic for cm and mam-collectors.

5. APM  9.40 – SLA Report shows “No SLAs exist for this Provider”

The SLA Report shows "No SLAs exist for this Provider" in APM 9.40. APM Performance Management -> Service Level Management -> SLA reports -> SLA Status shows the info message: "No SLAs exist for this provider".

However, when you query the database using other OOTB reports, you can see that the data for those time periods has been collected. The raw data reports show the data, but the status is not updated.


Upon analysis, we found that this is a known issue, which is resolved by a hotfix. The hotfix can be requested via Support.

6. Can’t configure Secure LDAP and find valid cert path in APM 9.40

We cannot get secure LDAP configured; APM does not find a valid certification path. This is a new APM 9.40 installation on Windows 2012 R2 with IIS 8.x, with 2 GWs and 2 DPSs, and no load balancer yet.

We are able to access APM on the primary GW over HTTP (port 80) using the IE11 browser.

Unfortunately, we are unable to configure LDAP using the URL ldaps://<ldapURL>:3269/DC=<value>,DC=<value>??sub without getting an error, even after importing the LDAP certificates into APM per the Platform Administration steps. Here is the error:

sun.security.validator.ValidatorException: PKIX path building failed: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target


First, the LDAP certificate chain used for authentication management needs to be fixed, as this certificate has no valid path. The access URL was specified in Infrastructure Settings: access has to go through the load balancer, and once that was set up, direct access to the Gateways remained possible.

The reverse proxy configuration was also enabled, and this is what blocked the access. After removing the Reverse Proxy settings, access through the Gateways was possible, but not through the load balancer.

HTTPS was not set up on the BSM Gateways.

Re-running the configuration wizard finally resolved the issue, and we were able to access the environment again.

7. BSM  9.26 – DBVerify is not working – Verifier found failed Permission checks

We came across an issue where DBVerify is not working in BSM 9.26: the verifier found failed permission checks. We checked DBVerify.log and found a "Management" admin permission issue, as well as Oracle "Max pooled statements = 0" in jdbc.driver.properties, both of which needed to be rectified as soon as possible.


Here is the Error:

<date / time>  [main] (PermissionCheckTask.java:105) INFO – Verifier found failed Permission checks:

Problems for user = ” ,database = ” ,host = ‘DB_HOST’ :


Missing roles (you must have at least one of the following):



Assign the SELECT_CATALOG_ROLE to the database user (in Oracle: GRANT SELECT_CATALOG_ROLE TO <user>), otherwise the check will not pass.

8. Issues while trying to execute a perl script in APM 9.30

There is an issue with an EUM alert in APM 9.30 on Linux when trying to execute a Perl script: the script should run when an alert is triggered, but it does not work.


Follow the steps written below.

Step 1: In the alert configuration, the user should remove all spaces between parameters and use double quotes.

It will look like: /apps/KPscripts/AutoTicketServiceNow.pl"Monitoring_Engineering""Tivoli_-_IBM_Infrastructure_Monitoring""<>"

* no space, not even after the .pl script name.

Step 2: Ensure that the user executing the above Perl script has permission to execute it. Try to execute the above command (with spaces) as the same user and ensure it runs fine.
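The permission check in Step 2 can be automated before wiring the script into the alert. A sketch; the path is the illustrative one from Step 1 and the helper name is made up:

```python
# Sketch: verify the alert script exists and is executable by the current
# user. The script path below is illustrative.
import os

def check_alert_script(path):
    problems = []
    if not os.path.isfile(path):
        problems.append(f"{path}: file not found")
    elif not os.access(path, os.X_OK):
        problems.append(f"{path}: not executable by this user (chmod +x?)")
    return problems

# An empty list means the script looks runnable for this user.
print(check_alert_script("/apps/KPscripts/AutoTicketServiceNow.pl"))
```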

Step 3: Verify that the problematic Perl command executes successfully.

It might be required to remove some libraries from the script (like LibXML) to allow it to execute correctly.

9. APM 9.30 – BPM Event forwarding issues during Staging

We came across BPM event forwarding issues during the APM 9.30 Staging upgrade procedure.


The issue boiled down to APM 9.30 still having active connections to the 9.26 BUS. This caused the 9.26 BUS cluster to believe the 9.30 cluster was also part of its own cluster, and hence data was routed to this system.

To forcefully prevent the APM 9.30 HornetQ from making connections to 9.26, we manually removed all 9.26 HornetQ details from the PROPERTIES table in the 9.30 MANAGEMENT DB.

(These old HornetQ details are present due to the data replication process during the upgrade and are used by the SDR process.) We also removed seed.properties, topazinfra.ini and encryption.properties from /SDR/conf on the APM GW.

After removing those 9.26 HornetQ entries from the 9.30 DB, we restarted 9.30 and checked the BUS console on 9.26 again: there were no longer any active connections to 9.30.

10. APM 9.40 – SNMP alerts can be sent out from primary or secondary DPS

Did you know that SNMP alerts can be sent out from either the primary or the secondary DPS? Yes, both are possible. APM EUM SNMP alerts are sent to ServiceNow. We are receiving events, but for some alerts the events arrive in ServiceNow with the source ID of the primary DPS, and for other alerts with the source ID of the secondary DPS.

If HAC runs on the primary DPS, then per our understanding the source should be the primary DPS in all events.

Please let us know why we are getting events with different sources each time.


The alerts engine does not rely on a specific DPS, nor does it depend on where the HAC (High Availability Controller) is located. The fact that HAC is running on the primary DPS doesn’t necessarily mean that all functionalities will be performed only on the primary.

The alert engine is not a HAC service and is present on both DPSs, even if there has been no failover transition to the secondary.

It is considered an expected behavior to have alerts be sent from the secondary DPS to compensate for errors or high load on the primary DPS, and we do not consider that this behavior is wrong. That’s one of the goals of the High Availability system.

If the primary DPS is heavily loaded in terms of the number of alert notifications (only up to 12,000 notifications per day are supported), for example when it has to generate and send 2,000 alert notifications within 2 hours, it needs an extra hand with processing; during such a period we see alerts being sent from the secondary DPS as well.
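The burst scenario above can be put into numbers: 2,000 notifications in 2 hours is well above the average hourly rate implied by the 12,000-per-day ceiling, which is when the secondary DPS steps in. Illustrative arithmetic only:

```python
# Illustrative arithmetic for the burst scenario described above.
DAILY_CAP = 12_000                 # supported notifications per day
avg_per_hour = DAILY_CAP / 24      # sustainable average rate

burst = 2_000 / 2                  # 2,000 notifications within 2 hours
print(avg_per_hour)                # 500.0
print(burst)                       # 1000.0
print(burst > avg_per_hour)        # True -> secondary DPS helps out
```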