Document Properties
Kbid
2H8388
Last Modified
08-Dec-2022
Added to KB
26-Jul-2017
Public Access
Everyone
Status
Online
Doc Type
References
Product
  • IOM 3.0
  • IOM 3.1
  • IOM 3.2
  • IOM 3.3
  • IOM 3.4
  • IOM 3.5
  • IOM 3.6
  • IOM 3.7
  • IOM 4.0
  • IOM 4.1
  • IOM 4.2
  • IOM 4.3
  • IOM 4.4
Process - Update OMS Node 1.0

Product Version
2.2
Product To Version
-
Status
New
Labels
-

Process

Description

The process update_oms_node updates an IOM node to a newer IOM version. Since IOM updates mostly require database migrations, the update process forces a downtime of the whole system. For the same reason, you should only execute updates for the whole cluster. The process update_oms_node works for all server groups that may be used for OMS nodes: oms_single_node, oms_ha_node and oms_azure_node (see the inventory sketch following the example below).

Example: update OMS nodes
ANSIBLE_LIBRARY=<path to Ansible4IOM>/modules/ \
ANSIBLE_ROLES_PATH=<path to Ansible4IOM>/roles/ \
ansible-playbook -i <path to inventory file> <path to Ansible4IOM>/processes/update_oms_node.yml
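
The server group assignment is made in the Ansible inventory file. The following sketch is for illustration only: the host name oms-host1 is a placeholder, while the group name is one of the three groups supported by the process.

inventory file (sketch)
# Minimal sketch, assuming a single-node installation; oms-host1 is a placeholder.
[oms_single_node]
oms-host1

[all:vars]
# information required to access repo to download IOM package
OMS_VERSION=2.2.1.0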

Steps

The process consists of the following steps:

  1. ftpd_cleanupservice
    1. Disable/stop the pure-ftpd service (to avoid successful health-checks in case the process was called on a node belonging to a running installation of IOM).
  2. oms_cleanupservice
    1. Stop/disable IOM servers.
  3. gluster_detach or azurefile_detach or dummyfs_detach (depending on the assigned server group).
    Only shared filesystems contain data created at runtime. To protect the content of these filesystems, they are detached before cleaning up OMS.
    1. azurefile (oms_azure_node only)
      1. Execute pre_sharedfs_detach_hook.
      2. Unmount shares, remove entries from fstab.
    2. gluster (oms_ha_node only)
      1. Execute pre_sharedfs_detach_hook.
      2. Unmount gluster filesystems, remove entries from fstab.
    3. dummyfs (oms_single_node only)
      1. Execute pre_sharedfs_detach_hook.
  4. oms_cleanupcrontab
    1. Remove log-handling from crontab.
  5. oms_cleanupservice
    1. Stop/disable IOM service.
  6. wildfly_cleanup
    1. Remove the Wildfly installation directory.
  7. oms_cleanupdirs
    1. Execute pre_oms_cleanup_hook.
    2. Remove all IOM directories.
  8. java_install
  9. ntpd_install
    1. Install and set up the ntpd service.
  10. pgrepo_install
    1. Install the PostgreSQL repository.
  11. pgclient_install
    1. Install the pgclient yum package.
  12. gluster_attach or azurefile_attach or dummyfs_attach (depending on assigned server group)
    Some of the shared directories contain subdirectories that are created during the installation process. Therefore, shared filesystems have to be attached before installing IOM.
    1. azurefile (oms_azure_node only)
      1. Create mount points and mount shares.
      2. Execute post_sharedfs_attach_hook.
    2. gluster (oms_ha_node only)
      1. Create mount points and mount gluster volumes.
      2. Execute post_sharedfs_attach_hook.
    3. dummyfs (oms_single_node only)
      1. Execute post_sharedfs_attach_hook.
  13. oms_extract
    1. Create user and group (via meta dependency).
    2. Prepare filesystem, download IOM package.
    3. Execute pre_oms_extract_hook.
    4. Integrate IOM package into filesystem.
    5. Execute post_oms_extract_hook.
  14. wildfly_extract
    1. Stop/disable service.
    2. Clean up a partially installed Wildfly.
    3. Download Wildfly package and extract it.
  15. oms_installation_properties
    1. Update installation.properties file with values found in inventory.
  16. oms_cluster_properties
    1. Update cluster.properties file with values found in inventory.
  17. oms_dump
    1. Execute pre_oms_dump_hook (see the hook sketch following this list).
    2. Load initial dump from filesystem, migrate DB.
    3. Execute post_oms_dump_hook.
  18. oms_reconfig_for_detach_from_cluster (backend server of groups oms_ha_node and oms_azure_node only)
    1. Increment the port-offset setting in installation.properties by one.
    2. Set the thread count to 0 in quartz-cluster.properties.
  19. oms_service
    1. Set up and start the OMS service directly (not via the watchdog).
  20. oms_initialization
    1. Execute pre_oms_initialization_hook.
    2. Create Wildfly admin user.
    3. Initial configuration of Wildfly (load initSystem.std.*.cli).
  21. oms_configuration
    1. Execute pre_oms_configuration_hook.
    2. Apply system.std.*.properties.
    3. Apply JMS load balancing.
    4. Apply cluster.properties.
    5. Execute post_oms_configuration_hook.
  22. oms_deploy
    1. Execute pre_oms_deploy_hook.
    2. Redeploy all artifacts and wait for Wildfly to get ready again.
    3. Execute post_oms_deploy_hook.
  23. oms_crontab
    1. Add log-handling to crontab.
  24. oms_reconfig_for_attach_to_cluster (backend server of groups oms_ha_node and oms_azure_node only)
    1. Restore original port-offset in installation.properties.
    2. Restore original thread count in quartz-cluster.properties.
  25. oms_watchdog
    1. Update watchdog.properties.
    2. Stop OMS service.
    3. Update systemd unit file of watchdog service.
    4. Reload, enable, start watchdog service.
  26. ftpd_service
    1. Execute pre_ftpd_start_hook.
    2. Start/enable FTPd service.
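
The hooks executed by the steps above allow custom logic to be injected into the process. How a hook is wired to custom code is defined by Ansible4IOM and not documented here; the following task file is only a minimal sketch, assuming a hook can be mapped to an Ansible task list. It backs up the database before the dump/migration step of oms_dump. The database name, the dump path and the executing user are placeholders.

pre_oms_dump_hook task file (sketch)
# Illustration only: database name oms_db, dump path and user are assumptions.
- name: Back up the OMS database before the dump/migration step
  command: pg_dump --format=custom --file=/tmp/oms_pre_update.dump oms_db
  become: yes
  become_user: postgres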

Background Information

IOM does not support an in-place update process that transforms a host running an older version of IOM into one running a newer version. Instead, the installation has to be replaced completely. The complete replacement of the installation has to ensure the following:

  • The configuration before and after update has to be identical.
  • Runtime data must not be harmed. It has to remain accessible after the update process has finished.
  • The database must not be harmed. The database content has to be migrated to the new version.

Additional Configuration Changes

Since Process - Reconfigure OMS Node 1.0 can only reconfigure a limited set of configuration settings, the process update_oms_node can also be used to change configuration settings other than OMS_VERSION.

Process - Reconfigure OMS Node gives an overview of the configuration changes supported by the different processes.

Installation Process - Avoid Side Effects of Parallel Execution of Backend Servers

During the update process, the load balancer must not forward any requests to the IOM application servers. Since the load balancer uses health-check requests sent to the application servers to determine which application servers to use, this goal can be reached by marking all nodes as unhealthy. This is realized by stopping the FTPd service during the update process.
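
How the health-check is implemented depends on the load balancer configuration and is outside the scope of this process. As a minimal sketch, assuming the load balancer simply probes the FTP port (pure-ftpd listens on TCP port 21 by default), such a check could look as follows; the host name is a placeholder:

health-check (sketch)
# Exits with 0 only while the FTPd service accepts connections on port 21.
nc -z -w 2 oms-host1 21 && echo healthy || echo unhealthy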

Since the update process can run in parallel on different nodes, backend servers are not under the control of the watchdog during this time. Hence, it has to be ensured that these backend servers do not execute jobs and do not receive and handle JMS messages.

Technically, this behavior is realized by the roles oms_reconfig_for_attach_to_cluster and oms_reconfig_for_detach_from_cluster. When detached from the cluster, the backend server will not run any jobs and will not receive JMS messages. Even if it is running in parallel to other backend servers, the detached backend server will not influence the cluster at all.
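
The effect of the detach role can be illustrated with the two affected configuration files. This is a sketch only: the actual port-offset key in installation.properties may be named differently, while org.quartz.threadPool.threadCount is the standard Quartz property for the worker thread count. Both values are adjusted and restored automatically by the roles.

detached state (sketch)
# installation.properties: port-offset incremented by one while detached,
# e.g. an original value of 0 becomes 1 (key name assumed).
JBOSS_PORT_OFFSET=1
# quartz-cluster.properties: no job worker threads while detached.
org.quartz.threadPool.threadCount=0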

Examples

Update to New Patch Version of IOM

Make sure your custom code and configuration are compatible with the new version of IOM to be installed. Then simply update the value of OMS_VERSION in your inventory file, e.g. from 2.2.0.0 to 2.2.1.0.

inventory file
...
[all:vars]

# information required to access repo to download IOM package
OMS_VERSION=2.2.1.0
...

Now the process update_oms_node can be executed.

Example: update OMS node
ANSIBLE_LIBRARY=<path to Ansible4IOM>/modules/ \
ANSIBLE_ROLES_PATH=<path to Ansible4IOM>/roles/ \
ansible-playbook -i <path to inventory file> <path to Ansible4IOM>/processes/update_oms_node.yml

Change Password of Wildfly Admin User

According to Reference - Ansible4IOM Variables 1.0, a change of Wildfly's admin user password is not supported by Process - Reconfigure OMS Node 1.0, but it is supported by the current update process. Since the variable JBOSS_ADMIN_PASSWD, defined in roles/oms_config/defaults, is a hash (a dictionary with one entry per server type), it cannot be set in the inventory file directly; the inventory file supports setting simple variables only. To set complex variables, you have to use files located in the directory group_vars. Depending on the installation type you are using, you have to create a file within group_vars that is named exactly like the server group assignment you are using (oms_single_node, oms_ha_node, oms_azure_node).

The following example shows the settings that are necessary to update an Azure-Cloud installation of IOM. Frontend and backend servers will use different passwords.

group_vars/oms_azure_node
JBOSS_ADMIN_PASSWD: {
  backend: "mySecretBackendPasswd",
  frontend: "mySecretFrontendPasswd"
}
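
Since group_vars files are YAML, the same setting can equivalently be written in YAML block style:

group_vars/oms_azure_node (equivalent block style)
JBOSS_ADMIN_PASSWD:
  backend: "mySecretBackendPasswd"
  frontend: "mySecretFrontendPasswd"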
 

Now the process update_oms_node can be executed.

Example: update OMS node
ANSIBLE_LIBRARY=<path to Ansible4IOM>/modules/ \
ANSIBLE_ROLES_PATH=<path to Ansible4IOM>/roles/ \
ansible-playbook -i <path to inventory file> <path to Ansible4IOM>/processes/update_oms_node.yml

Disclaimer

The information provided in the Knowledge Base may not be applicable to all systems and situations. Intershop Communications will not be liable to any party for any direct or indirect damages resulting from the use of the Customer Support section of the Intershop Corporate Web site, including, without limitation, any lost profits, business interruption, loss of programs or other data on your information handling system.
