Micro Focus Server Automation – Tips and Tricks – May 2021

Table of Contents

1. Questions on Build of old 10.20 to new 2018.08
2. Software Policy questions
3. Account Lockout Threshold capabilities
4. Error executing cora command in SA 10.60
5. Problems with HPSA Jobs after Upgrade
6. Agent install failed on the satellite crsauapz3pa0
7. How do I use “nohup” in an SA script with 10.60?
8. Platform installer error
9. Exadata support for HPSA/DMA
10. Support for HPE ProLiant servers for Intelligent Provisioning

1. Questions on Build of old 10.20 to new 2018.08.

We are currently running HPSA 10.20 on old Red Hat 5 hardware, and the hardware needs to be replaced soon with new machines running Red Hat 7. For this, SA 2018.08 must be installed fresh, NOT as an upgrade.

Apart from this, users also have questions such as:

1) Do we need to do anything with the old and new Words after scanning devices into the new environment?

2) What about our current library “/var/opt/opsware/word”?

3) How do we get that over to the new Core from 10.20?

4) Can I resync the old SA 10.20 Word to the new 2018.08 Word?

Solution

After researching this problem, we could not reach a firm conclusion on whether any documented steps or reference cases exist. I took it to the backline team to get you a better answer and discussed alternatives with them, since there is no substantial documentation that supports what you want to do. We considered the CBT process as an alternative, but it has never been used in this way, and we do not know how the huge files stored in the software repository (Word) might impact the new environment. Because this would amount to a customization, requesting a PSO service would normally be the next step; however, the backline pointed out that even this would not be a good idea. At this point, the best recommendation is to start from scratch and follow the installation guide. I understand that this is not the answer you wanted, but it avoids any problem or inconvenience in the future with something that is not supported and that we cannot take further.

In short, the user wanted to know whether there are any steps or instructions to move one component (the Word and Model Repository) from the old environment to a new one. I checked with the backline team on whether it is possible, but these actions could cause trouble and could affect SA performance.

So, the answer is no.

2. Software Policy questions

Several agents needed to be re-installed from scratch. So, we completely removed them from the core server, uninstalled the agents on the endpoints, and then re-installed them. However, some of the agents had software policies attached to them, which have now been removed. Can those be re-attached without remediation? Or, if they are remediated, will that affect the current installs on the systems?

Solution

The user’s SA version was unsupported, but we went the extra mile. The guides say nothing about what the user was asking, so I talked with the backline team; the feedback I received from them was the following.

Please take into account that this is only a suggestion, since we do not know how it might impact an old version (10.01, unsupported) compared to a new one.

The user agreed with the following answer:

1. If the server kept the same MID (machine ID) and registered itself correctly, the policy should show as attached after you run bs_software (software registration) on the managed server (see the sketch after this list).

2. They recommend attaching the software policy without remediation, then running a compliance check to see whether remediation could impact the SA environment. If the server is compliant, there is no impact. If it is not, there may be an impact, and it also depends on the software: some software detects its installed version and skips the install, while other software simply replaces the existing installation. It all depends on what is in the policy.
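As a minimal sketch of checking point 1, assuming shell access to a Unix managed server (the paths below are typical agent defaults and are assumptions; they vary by agent version and platform):

cat /etc/opt/opsware/agent/mid                # machine ID (MID) the core uses to identify this server
/opt/opsware/agent/pylibs/cog/bs_software     # assumed location; manually triggers a software registration

If the MID changed during the re-install, the core treats the endpoint as a new server and the old policy attachments will not reappear on their own.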

3. Account Lockout Threshold capabilities

Users are facing problems with the Account Lockout Threshold check: it does not check for pam_tally2.so. Is there a way to use pam_tally2.so, or to somehow edit the pam_tally.so check?

Solution

I started researching as soon as I received the complaint. I do not know whether this will resolve the issue completely, but if it happens again, take this as a reference. It may require an Idea Exchange (enhancement request) to take this further, since the request would need to point out where the PAM configuration checked by SA is defined.

pam_tally2 comes in two parts: pam_tally2.so and pam_tally2. The former is the PAM module and the latter, a stand-alone program. pam_tally2 is an (optional) application that can be used to interrogate and manipulate the counter file.

It can display users’ counts, set individual counts, or clear all counts. Setting artificially high counts may be useful for blocking users without changing their passwords. For example, one might find it useful to clear all counts every midnight from a cron job.
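A few illustrative invocations of the stand-alone utility (the user name is hypothetical):

pam_tally2                            # list users with non-zero failure counts
pam_tally2 --user jsmith              # display jsmith’s failure count
pam_tally2 --user jsmith --reset      # clear jsmith’s count

And a typical module line as it might appear in a PAM configuration file such as /etc/pam.d/password-auth (illustrative values, not an SA-generated configuration):

auth required pam_tally2.so deny=5 unlock_time=900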

4. Error executing cora command in SA 10.60

Suppose you have an SA 10.60 server running on a Linux 5.9 server, along with a database. Using the command rpm -Uvh, we updated the RPM package OPSWtools-1.0.37-1.noarch.rpm.

The installation went smoothly, but when the cora tool was run, it produced the following output:

[root@mz-fl-ap-042 cora]# /opt/opsware/support/bin/cora
Gathering ... PreAmble - 0ms ... CustomConfFiles - 49ms ... Rollup - 64ms ... MeshState - 101ms ... WindowsUtilities - 168ms ... HPLNSettings - 230ms ... LDAP - 364ms ... AutoCommJobStatistics - 527ms ... VaultLag - 509ms ... VaultTxnRcvMetrics - 501ms ... AuditParamContent - 548ms ... DBInitOra - 569ms ... SoftwareRepositoryMirroring - 597ms ... AgentCertExpiry - 677ms ... MeshGatewayTest - 773ms ... DBNLSParameters - 774ms ... DBOSStats - 788ms ... DBTableSpace - 784ms ... FacilityParameters - 1005ms ... TNSNamesOra - 1072ms ... WindowsPatchDBVersion - 1121ms ... VaultTxnSendMetrics - 1144ms ... MeshGateways - 1379ms ... CapacityPlanning - 1361ms ... SessionStats - 1606ms ... RealmParameters - 1738ms ... ServerReachability - 2s ... MeshHardware - 2s ... DBInfo - 2s ... SAVersion - 2s
Traceback (most recent call last):
  File "<string>", line 1, in <module>
  File "py_loader0", line 9, in <module>
  File "py_loader1", line 44, in <module>
  File "py_loader1", line 29, in load_mod
  File "cora.py", line 204, in <module>
  File "cora.py", line 190, in main
  File "model.py", line 2355, in gather
  File "model.py", line 124, in run
  File "model.py", line 2256, in gatherWorker
  File "model.py", line 649, in gather
  File "./crypto/crypto.py", line 32, in load_cert
  File "./crypto/_keystore.py", line 225, in load_cert
  File "/opt/opsware/pylibs2/M2Crypto/X509.py", line 663, in load_cert_pkcs12
    return m2.x509_read_from_pkcs12(pkcs_path, pass_salt, 0)[0]
Exception: Thread 'SAVersion' threw an exception: Error parsing PKCS#12 file: 47377917901120:error:23076071:PKCS12 routines:PKCS12_parse:mac verify failure:p12_kiss.c:119:
... ServiceLevelParameters - 6s ... VaultStats - 8s
[root@mz-fl-ap-042 cora]#

Solution

The issue will be fixed in OPSWtools 1.0.38.
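In the meantime, a quick way to confirm which build is installed, and to apply the fix once it ships (the 1.0.38 file name is an assumption based on the current naming scheme):

rpm -q OPSWtools
rpm -Uvh OPSWtools-1.0.38-1.noarch.rpm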

5. Problems with HPSA Jobs after Upgrade

Users have been facing issues with their health check script, which has run every Monday at 5 AM for years. This week it failed with the following error message.

ID:   HPSA-1620

Code:   wayscripts.proxyFail

Details:   The Command Engine cannot proxy a command from one facility to another.

Parameters:

dc_ids: {0}

Cause: When communicating with servers in another facility, the Command Engine proxies commands from the source facility to the servers’ facility. The command proxy failed.

Action: This may represent a networking issue between the two facilities or SA may be misconfigured. Contact your SA administrator for further assistance.

Solution

These errors typically arise when scripts are involved. Since the failure appears at the same scheduled time each week, please do the following:

Step 1: Open Jobs and Sessions.

Step 2: Under Recurring Schedules, check which job is scheduled to run on Monday at 5 AM.

Step 3: Reschedule that job.

Following these steps should resolve the problem.
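If the error persists after rescheduling, a basic connectivity check between the two facilities can help rule out a network problem (the gateway host name is a placeholder, and port 2001 is assumed to be the gateway tunnel port; substitute your mesh’s actual values):

ping <remote-facility-gateway>
nc -zv <remote-facility-gateway> 2001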

6. Agent install failed on the satellite crsauapz3pa0

There were problems during the installation of the agent: the install did not complete successfully on the satellite.

Solution

Please follow the steps below to complete the installation:

Step 1: On the Satellite (crsauapz3pa0):

echo 'opswgw.gw_list: 10.206.164.23:3001' > /etc/opt/opsware/agent/opswgw.args

Step 2: Re-attempt the agent install:

/tmp/opsware-agent-75.0.79005.0-linux-7SERVER-X86_64 --coreinstall --no_opsw_gw --crypto_dir /tmp

Step 3: Now start the SA services:

/etc/init.d/opsware-sas start
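As an optional verification (a hedged sketch; netstat may not be present on minimal installs), confirm that the services came up and that a connection to the gateway configured in Step 1 is established:

/etc/init.d/opsware-sas status
netstat -tn | grep 3001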

7. How do I use “nohup” in an SA script with 10.60?

We wrote scripts in 10.21 which used “nohup” to background long-running processes and exit.

For example, you can run an ad hoc UNIX shell script like this:

nohup sleep 600 &

In 10.21 this script exits immediately, leaving the sleep command in the background. With 10.60 the behavior is different: we wait the full 10 minutes! How can we tell SA to exit and leave the nohup’d command running as it did in 10.21?

Solution

To investigate the issue, I conducted a test in my lab to observe the behaviour.

For example, a good script to test with is this:

#!/bin/bash
nohup bash -c "sleep 120" &
exit 0

What it does is:

• Starts a 2-minute sleep in the background under nohup.

• Then returns exit code 0.

10.20:

As we can see in the two screenshots below, the first for script execution on 10.20 and the second on 10.60, the behavior is different. In 10.20 it starts the nohup command and immediately moves on to the exit 0 step, so it finishes in 3 seconds.

10.60:

In 10.60 it waits for the nohup command to finish and only then reaches exit 0, so it takes 2 minutes plus a couple of seconds. I also found a reference case on this behavior change between 10.20 and 10.60, with full and detailed internal feedback from our labs. It has already been raised as an Enhancement Request, and the case includes an explanation of how to change the scripts, which worked for me when I tested it in the lab.
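A minimal sketch of the usual fix, offered as an assumption based on common agent behavior rather than confirmed SA documentation: the 10.60 agent appears to wait until the script’s stdout and stderr are closed, and a nohup’d child inherits them, so detaching the child’s output streams lets the wrapper exit immediately.

#!/bin/bash
# Redirect the background command’s output so it no longer holds the
# script’s stdout/stderr open; the wrapper can then exit right away.
nohup bash -c "sleep 120" >/dev/null 2>&1 &
exit 0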

8. Platform installer error

The platform installer failed when running mktemp to create a temporary directory in the user’s home directory:

Traceback (most recent call last):

File "./installer/composite_step.py", line 66, in run

File "./installer/ttlg_step.py", line 24, in run

File "./installer/remote_file_system.py", line 38, in make_temp_folder

RuntimeError: Could not create a temporary folder on server with ID = 87230001

Solution

The installer actually attempts to run “mktemp -d” remotely on the satellite as root. For example:

[root@<satellite> ~]# mktemp -d

/tmp/tmp.nfSFytkHz5

[root@<satellite> ~]# ls -lstr /tmp/

total 0

0 drwx------. 2 root root 6 Nov 12 15:24 tmp.nfSFytkHz5

But this step can also fail. If the satellite is unreachable at that moment, the installer cannot execute the mktemp command on it, so take one communication path at a time and check the results. The satellite could also have a duplicate server ID, so before going further, check for any mismatch in the ID, hostname, and so on.
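As a hedged manual check (this assumes direct SSH access to the satellite, which the installer itself does not use, and the host name is a placeholder):

ssh root@<satellite-host> 'mktemp -d; hostname; cat /etc/opt/opsware/agent/mid'

Comparing the MID reported here with the server record in the SA client can help reveal a duplicate-ID situation.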

9. Exadata support for HPSA/DMA

Will the Exadata platform be supported for HPSA/DMA/Oracle?

Solution

We can use the SA agent on the Exadata appliance since it is based on Oracle Linux 7. DMA Ultimate will be able to run the regular Oracle DB workflows against the Oracle DB hosted on Exadata. Depending on the Oracle DB version included in Exadata, the level of support from DMA Ultimate will vary. Details on the supported workflows for each version can be found in the public documentation at this link:

microfocus.com > itom > Database Middleware Automation:10.61 > Ultimate > OracleWF

If the Oracle DB version in the Exadata stack is 19c, I need to mention that it has been certified by Micro Focus only on RHEL 7. Considering that Oracle Linux is binary compatible with RHEL, we don’t expect issues. Having said that, we will want to run some sanity tests on Oracle 19c installed on Oracle Linux 7.

Support for Oracle 19c is certified on DMA version 10.61, and solution packs can be downloaded from this location: marketplace.microfocus.com > itom > content > dma ultimate solution packs.

10. Support for HPE ProLiant servers for Intelligent Provisioning

Will the Intelligent Provisioning option still be supported for the new generation of HPE ProLiant servers (Gen10 and later)?

Based on the following info:

“1) Will the Intelligent Provisioning option still be supported for the new generation of HPE ProLiant (Gen10 and later)? I am almost sure the answer to this is yes. I searched the manuals and they mention the following: Embedded OS booting eliminates the need to configure an SA network boot infrastructure without sacrificing automation.

This is only available for HPE ProLiant Gen8 or newer servers and is also known as Intelligent Provisioning. Even though Gen10 is not specified, the documentation does mention newer servers. I also searched similar cases, and Gen10 has been used as well. We will search a little bit more just to double-confirm this answer.”

Solution

Users should know that support for ProLiant Gen10 is still experimental; there is no documentation or official support for it. Intelligent Provisioning is no longer part of ProLiant Gen10, so all the boot infrastructure built in SA for Gen9 is unusable for Gen10.

Support for iLO5 is available in the NGUI under Unprovisioned Servers > “Add iLO Device”.