Process - Setup OMS Node 1.0

1 Process

Description

The process setup_oms_node installs a new IOM node. It works for all server-groups that may be used for OMS nodes: oms_single_node, oms_ha_node and oms_azure_node.

Note

Before setting up a new IOM node, a corresponding DB account has to be created.

Also see Process - Setup or Reconfigure Database Account 1.0.

Example: setup oms nodes
ANSIBLE_LIBRARY=<path to Ansible4IOM>/modules/ \
ANSIBLE_ROLES_PATH=<path to Ansible4IOM>/roles/ \
ansible-playbook -i <path to inventory file>/inventory <path to Ansible4IOM>/processes/setup_oms_node.yml

1.1 Steps

The process consists of the following steps:

  1. ftpd_cleanupservice
    1. Disable/stop the pure-ftpd service in order to avoid successful health-checks in case setup was called on a node belonging to a running installation of IOM.
  2. azurefile_install or gluster_install (depending on assigned server-group)
    1. Azure file (oms_azure_node only):
      1. Install/update cifs_utils package.
    2. Gluster (oms_ha_node only):
      1. Install/update GlusterFS and LVM packages.
      2. Set up LVM to provide bricks for GlusterFS.
      3. Set up Gluster volumes.
  3. java_install
  4. ntpd_install
    1. Install and set up the ntpd service.
  5. pgrepo_install
    1. Install the PostgreSQL repository.
  6. pgclient_install
    1. Install pgclient yum package.
  7. gluster_attach or azurefile_attach or dummyfs_attach (depending on assigned server-group)
    Some of the shared directories contain sub-directories that are created during the installation process. Therefore, shared filesystems have to be attached before installing IOM.
    1. Azurefile (oms_azure_node only)
      1. Create mount points and mount shares.
      2. Execute post_sharedfs_attach_hook.
    2. Gluster (oms_ha_node only)
      1. Create mount points and mount Gluster volumes.
      2. Execute post_sharedfs_attach_hook.
    3. dummyFS (oms_single_node only)
      1. Execute post_sharedfs_attach_hook.
  8. oms_extract
    1. Create user and group (via meta dependency).
    2. Prepare filesystem, download IOM package.
    3. Execute pre_oms_extract_hook.
    4. Integrate IOM package into filesystem.
    5. Execute post_oms_extract_hook.

  9. wildfly_extract
    1. Stop/disable service.
    2. Clean up a partly installed Wildfly.
    3. Download Wildfly package and extract it.
  10. oms_installation_properties
    1. Update installation.properties file with values found in inventory.
  11. oms_cluster_properties
    1. Update cluster.properties file with values found in inventory.
  12. oms_dump
    1. Execute pre_oms_dump_hook.
    2. Load initial dump from filesystem, migrate DB.
    3. Execute post_oms_dump_hook.
  13. oms_reconfig_for_detach_from_cluster (backend server of groups oms_ha_node and oms_azure_node only)
    1. Increment setting of port-offset in installation.properties by one.
    2. Set thread count to 0 in quartz-cluster.properties.
  14. oms_service
    1. Set up and start the OMS service directly (not via the watchdog).
  15. oms_initialization
    1. Execute pre_oms_initialization_hook.
    2. Create Wildfly admin user.
    3. Initial configuration of Wildfly (load initSystem.std.*.cli).
  16. oms_configuration
    1. Execute pre_oms_configuration_hook.
    2. Apply system.std.*.properties.
    3. Apply JMS load balancing.
    4. Apply cluster.properties.
    5. Execute post_oms_configuration_hook.
  17. oms_deploy
    1. Execute pre_oms_deploy_hook.
    2. Redeploy all artifacts and wait for Wildfly to get ready again.
    3. Execute post_oms_deploy_hook.
  18. oms_crontab
    1. Add log-handling to crontab.
  19. ftpd_install
    1. Stop pure-ftpd service.
    2. Install and configure pure-ftpd.
    3. Execute pre_ftpd_start_hook.
    4. Start/enable pure-ftpd service.
  20. ftp_virtual_user
    1. Set up virtual users {{is_oms_media_user}} and {{is_oms_pdf_user}}.
  21. oms_reconfig_for_attach_to_cluster (backend-server of groups oms_ha_node and oms_azure_node only)
    1. Restore original port-offset in installation.properties.
    2. Restore original thread count in quartz-cluster.properties.
  22. oms_watchdog
    1. Update watchdog.properties.
    2. Stop OMS service.
    3. Update systemd unit file of watchdog service.
    4. Reload, enable, start watchdog service.
  23. ftpd_service
    1. Execute pre_ftpd_start_hook.
    2. Start/enable ftpd service.
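The steps above are executed in order by the setup_oms_node.yml playbook for every host matched by the inventory. When testing the process, standard ansible-playbook options such as --limit (restrict execution to a single host) and -v (verbose output) can be helpful. A sketch, with paths and the host name as placeholders:

```shell
# Run the setup process for a single node only (host name and all paths are
# placeholders; adjust them to your environment).
ANSIBLE_LIBRARY=<path to Ansible4IOM>/modules/ \
ANSIBLE_ROLES_PATH=<path to Ansible4IOM>/roles/ \
ansible-playbook -i <path to inventory file>/inventory \
    --limit <hostname of new node> -v \
    <path to Ansible4IOM>/processes/setup_oms_node.yml
```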

1.2 Background Information

The current process automates the steps noted in Guide - Setup Intershop Order Management 2.2.

The setup process is mainly controlled by variables defined in roles/oms_config/defaults. Watchdog-specific configurations are controlled by variables defined in roles/oms_watchdog/defaults. When setting up distributed installations (on premise or in the Azure cloud), the configuration of shared filesystems is essential. The shared filesystem is controlled by variables defined in roles/gluster_config/defaults (for distributed on-premise installations) and roles/azurefile_config/defaults (for distributed Azure cloud installations). The following sections explain the configuration options in more detail.

1.2.1 Configuration Values at roles/oms_config/defaults

Configuration values defined at roles/oms_config/defaults are the most important ones for controlling the installation of IOM. The variables defined there completely cover the settings of installation.properties and cluster.properties. Additionally, there are some more variables that are not reflected by an IOM property (e.g. naming of services, IDs of users and groups, etc.).

Have a look at roles/oms_config/defaults for information about the available configuration options. Additionally, Reference - Ansible4IOM Variables 1.0 gives an overview of the available options/variables for each process.

To set up a working IOM installation, only a few variables have to be overwritten in the inventory:

  • is_oms_jms_hostlist - List of IP/hostname and port combinations of all backend-servers
  • OMS_VERSION - Version of IOM to be installed
  • OMS_REPO_URL - URL of maven repository to download IOM package from
  • OMS_REPO_USER - Name of repository user
  • OMS_REPO_PASSWD - Password of repository user
  • OMS_JAVA_HOME - Path to the Java installation; an Oracle JRE has to be installed in advance
  • is_oms_db_name - Name of database to be used
  • is_oms_db_user - Name of DB account to be used
  • is_oms_db_pass - Password of DB account
  • is_oms_db_hostlist - List of IP/hostname and port combinations of all DB server nodes
  • is_oms_smtp_host - IP/hostname of mail server
  • is_oms_mail_* - Addresses to be used when sending mails

1.2.2 Configuration Values at roles/oms_watchdog/defaults

Most properties required to configure the IOM watchdog are mapped to Ansible variables defined in roles/oms_watchdog/defaults. These variables can be changed and are applied during the process setup_oms_node.

If other settings have to be changed, the watchdog.properties file has to be modified directly within a hook. Please use pre_oms_configuration_hook or post_oms_configuration_hook. An overview of the configuration options is given in Guide - IOM Watchdog 2.2 - 2.11.
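Such a change can be implemented with a small lineinfile task inside one of these hooks. The following sketch uses a made-up property name, and it assumes watchdog.properties resides in the directory referenced by OMS_ETC[OMS_SERVER_TYPE]; check your installation for the actual location and keys.

```yaml
# Hook task (sketch) adjusting a watchdog.properties setting that is not
# mapped to an Ansible variable. File location and property name are
# assumptions; adapt them to your installation.
- name: set a custom property in watchdog.properties
  lineinfile:
    dest: "{{OMS_ETC[OMS_SERVER_TYPE]}}/watchdog.properties"
    regexp: '^[ \t]*some\.watchdog\.property'
    line: 'some.watchdog.property=value'
    state: present
  become: true
  become_user: "{{OMS_USER}}"
```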

Note

To set up a working IOM installation, no changes are required.

1.2.3 Controlling deployment.properties

Changes to deployment.properties are not supported directly. To make changes to deployment.properties, you have to write custom Ansible code to be executed in a hook, preferably pre_oms_deploy_hook.

To set up a working IOM installation without any customization artifacts, no changes are required.

1.2.4 Controlling system.std.*.properties

Settings in system.std.*.properties are not reflected by corresponding Ansible variables. To make changes to system.std.*.properties, modify the properties file directly in hooks. Use pre_oms_configuration_hook to make sure changes are applied automatically by the setup process.
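As a sketch, such a hook task could look like the following. The property name custom.setting is a placeholder, and the assumption that the file lives in the directory referenced by OMS_ETC[OMS_SERVER_TYPE] should be verified against your installation.

```yaml
# pre_oms_configuration_hook.yml (sketch): change a setting in
# system.std.*.properties. Property name and file location are assumptions.
- name: adjust custom.setting in system.std.{{OMS_SERVER_TYPE}}.properties
  lineinfile:
    dest: "{{OMS_ETC[OMS_SERVER_TYPE]}}/system.std.{{OMS_SERVER_TYPE}}.properties"
    regexp: '^[ \t]*custom\.setting'
    line: 'custom.setting=value'
    state: present
  become: true
  become_user: "{{OMS_USER}}"
```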

Note

To set up a working IOM installation, no changes are required.

1.2.5 Configuration of Shared Filesystem

When setting up a distributed IOM installation, all application servers and FTP servers have to be connected by a shared filesystem. More information can be found in Guide - Intershop Order Management - Technical Overview.

The following directories have to be shared, regardless of the technology used for sharing:

  • $OMS_VAR/communication/messages
  • $OMS_VAR/importarticle
  • $OMS_VAR/jobs
  • $OMS_VAR/pdfhost
  • $OMS_VAR/mediahost

1.2.5.1 GlusterFS

When setting up a distributed IOM installation using an assignment to the server-group oms_ha_node, GlusterFS is used to share filesystems. The setup is controlled by variables defined in roles/gluster_config/defaults. You need a separate disk device to configure GlusterFS.

The following variables have to be set in the inventory in order to set up a working distributed IOM installation:

  • GLUSTER_HOST_LIST - list of all IPs/hostnames, which are part of the Gluster
  • GLUSTER_DEVICE - name of device to set up LVM and Gluster on it
  • GLUSTER_FILESYSTEMS - list of filesystems to set up
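GLUSTER_FILESYSTEMS is a complex variable and, like AZUREFILE_FILESYSTEMS in the Azure case, is therefore best defined in a group_vars file. The following sketch is illustrative only: the host names, the device name, and the entry structure are assumptions; the authoritative structure is documented in roles/gluster_config/defaults.

```yaml
# group_vars/oms_ha_node (sketch; all values and the exact structure of the
# GLUSTER_FILESYSTEMS entries are assumptions, see roles/gluster_config/defaults)
GLUSTER_HOST_LIST: [ "iom-be-01.myprivate.net", "iom-be-02.myprivate.net" ]
GLUSTER_DEVICE: "/dev/sdc"
GLUSTER_FILESYSTEMS: [
  {
    path: "{{OMS_VAR.backend}}/jobs",
    volume: "jobs"
  },
  {
    path: "{{OMS_VAR.backend}}/importarticle",
    volume: "importarticle"
  }
]
```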

1.2.5.2 Azure File

When setting up a distributed IOM installation in the Azure cloud using an assignment to the server-group oms_azure_node, Azure File is used to share filesystems. The setup is controlled by variables defined in roles/azurefile_config/defaults. You have to create the shares in advance; the setup process only mounts the shares to the right places.

The following variables have to be set in the inventory in order to set up a working distributed IOM installation in the Azure cloud:

  • AZUREFILE_ACCOUNT - Storage account name
  • AZUREFILE_ENDPOINT_DOMAIN - Domain part of Azure file service endpoint
  • AZUREFILE_ACCESSKEY - Access key of storage account
  • AZUREFILE_FILESYSTEMS - List of filesystems to set up

1.2.5.3 dummyFS

When setting up an IOM standalone installation by assigning the server to the group oms_single_node, dummyFS roles are used instead of the Gluster or Azure File roles. The only purpose of the dummyFS roles is to provide hooks that are called whenever processes request detaching or attaching of shared filesystems.

This is very important when an IOM standalone installation has to be enabled for the process update_oms_node. During the update process, all data created at runtime have to be protected against deletion. On distributed IOM installations this is realized by unmounting/mounting the shared filesystems. The corresponding functionality is provided by the azurefile/gluster_detach and azurefile/gluster_attach roles. These roles provide the hooks post_sharedfs_attach_hook and pre_sharedfs_detach_hook, which are also provided by the roles dummyfs_detach and dummyfs_attach.

These hooks provide the ability to set up an IOM standalone installation that protects runtime data during the update process. For example, it is possible to use separate filesystems for directories containing runtime data, which are mounted/unmounted in post_sharedfs_attach_hook and pre_sharedfs_detach_hook.
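A minimal sketch of such an attach hook, using Ansible's mount module (the device and filesystem type are placeholders, and the choice of the pdfhost directory is just an example):

```yaml
# post_sharedfs_attach_hook.yml (sketch): mount a dedicated filesystem that
# holds runtime data of a standalone node. Device and fstype are placeholders.
- name: mount runtime-data filesystem
  mount:
    path: "{{OMS_VAR.frontend}}/pdfhost"  # example runtime-data directory
    src: /dev/sdc1                        # placeholder device
    fstype: xfs
    state: mounted
  become: true
```

A matching pre_sharedfs_detach_hook.yml would use the same task with state: unmounted, so the data survives the update process untouched.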

1.2.6 Installation Process Avoids Side Effects of Parallel Execution of Backend-Servers

Since the setup process can run in parallel on different nodes, or in parallel to live nodes, the backend application server must not be connected to the cluster during the setup process. It must not execute jobs, and it must not receive or handle JMS messages.

Technically this behavior is realized by roles oms_reconfig_for_attach_to_cluster and oms_reconfig_for_detach_from_cluster. When detached from the cluster, the backend server will not run any jobs and will not receive JMS messages. Even if it is running in parallel to other backend servers, the detached backend server will not influence the cluster at all.

2 Examples

2.1 Minimum Configuration of Standalone Installation

The precondition for setting up a minimal standalone installation of IOM is the availability of a DB account prepared for usage by IOM. Also see Process - Setup or Reconfigure Database Account 1.0. Another precondition is an already installed Oracle JDK/JRE. According to the section Background Information above, the inventory file has to have the following content.

inventory file
...
[all:vars]

# information required to access repo to download IOM package
OMS_VERSION=2.2.0.0
OMS_REPO_URL=https://repository.intershop.de/oms-releases/
OMS_REPO_USER=MyRepoAccount
OMS_REPO_PASSWD=MySecretPassword
 
# information required to access DB account
is_oms_db_name=OmsDB
is_oms_db_user=OmsDBUser
is_oms_db_pass=OmsDBUsersPassword
is_oms_db_hostlist=db.myprivate.net
 
# information about Java installation
OMS_JAVA_HOME=/opt/java
 
# information about JMS communication
# not required for standalone server
# is_oms_jms_hostlist=
 
# mail configuration
is_oms_smtp_host=smtp.myprivate.net
is_oms_mail_external_from=oms@mypublic.net
is_oms_mail_internal_from=oms@mypublic.net
is_oms_mail_internal_to=operations@mypublic.net
...

Now the process setup_oms_node can be executed.

Example: setup standalone OMS node
ANSIBLE_LIBRARY=<path to Ansible4IOM>/modules/ \
ANSIBLE_ROLES_PATH=<path to Ansible4IOM>/roles/ \
ansible-playbook -i <path to inventory file> <path to Ansible4IOM>/processes/setup_oms_node.yml

2.2 Adding a Custom Deployment Artifact to Backend Server

Adding a custom deployment artifact requires two steps.

  1. Provide the artifact to the machine where it should be deployed
  2. Modify deployment.*.properties and the deployment process itself

A deployment artifact is usually part of a bigger project containing more files, e.g. SQL scripts, mail templates, etc. In the first step, the project package has to be downloaded and extracted. The deployment artifact has to be placed in a directory where it can be accessed by the second step.

The current section concentrates on the second step only. It is assumed that the deployment artifact is located at $OMS_VAR/customization and named project-app.ear.

The variable OMS_APP in installation.properties defines the search path for deployment artifacts. According to roles/oms_config/defaults, the default value of OMS_APP already contains $OMS_VAR/customization. For this reason, no change of OMS_APP is necessary.
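The first step, providing project-app.ear on the node, is not covered here. Assuming the artifact is available on the Ansible control machine, it could be sketched with a simple copy task; the source path is a placeholder, and OMS_VAR.backend is used under the assumption that the artifact is deployed on a backend or standalone server.

```yaml
# Sketch: place the customization artifact in $OMS_VAR/customization, where
# OMS_APP already searches for deployment artifacts. Source path is a
# placeholder.
- name: provide project-app.ear on the node
  copy:
    src: <local path>/project-app.ear
    dest: "{{OMS_VAR.backend}}/customization/project-app.ear"
    owner: "{{OMS_USER}}"
    group: "{{OMS_GROUP}}"
    mode: '0644'
  become: true
```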

The only remaining task is to add project-app.ear to the deployment.*.properties file. Since changes to deployment.*.properties are not supported directly by Ansible4IOM, custom Ansible code has to be added to pre_oms_deploy_hook, which is executed right before the deployment process. Depending on the scope of the hook (project or installation), the file pre_oms_deploy_hook.yml has to be stored in the directory global_hooks or installation_hooks. Also see Concept - Ansible4IOM Server Configuration Management 1.0 - 1.1.

Note

It is important to use the variables OMS_SERVER_TYPE, OMS_ETC[OMS_SERVER_TYPE], and OMS_USER (see roles/oms_config/defaults) for a robust implementation of the hook. This way, the custom code will still work even if the IOM installation is customized further.
pre_oms_deploy_hook.yml (valid for IOM >= v2.2.0.0)
- name: insert project-app.ear into deployment.{{OMS_SERVER_TYPE}}.properties
  lineinfile:
    dest: "{{OMS_ETC[OMS_SERVER_TYPE]}}/deployment.{{OMS_SERVER_TYPE}}.properties"
    regexp: '^[ \t]*project-app'
    insertafter: '^[ \t]*bakery.base-app'
    line: 'project-app.ear'
    state: present
  when:
    - ( OMS_SERVER_TYPE == "standalone" ) or ( OMS_SERVER_TYPE == "backend" )
  become: true
  become_user: "{{OMS_USER}}"

Now the process setup_oms_node can be executed.

Example: setup of OMS node
ANSIBLE_LIBRARY=<path to Ansible4IOM>/modules/ \
ANSIBLE_ROLES_PATH=<path to Ansible4IOM>/roles/ \
ansible-playbook -i <path to inventory file> <path to Ansible4IOM>/processes/setup_oms_node.yml

2.3 Basic Azure File Configuration

The following directories have to be shared between different Azure nodes:

  • $OMS_VAR/communication/messages
  • $OMS_VAR/importarticle
  • $OMS_VAR/jobs
  • $OMS_VAR/pdfhost
  • $OMS_VAR/mediahost

The first thing to do is to set up the corresponding shares in the Azure cloud. The setup of shares is not covered by Ansible4IOM. You might do this manually via the Azure Portal, or even better, with an automated setup of the infrastructure. As a result of this preparation, the following information is required to configure the setup process of IOM:

  • Name of Azure file account -> AZUREFILE_ACCOUNT
  • Azure file access key -> AZUREFILE_ACCESSKEY
  • Domain part of service endpoint -> AZUREFILE_ENDPOINT_DOMAIN
  • Names of Azure file shares -> AZUREFILE_FILESYSTEMS

AZUREFILE_FILESYSTEMS is a list of hashes. Unfortunately, complex variables cannot be defined directly in the inventory file. Instead, you have to define them in the file oms_azure_node (named after the group used for IOM Azure nodes) in the directory group_vars. The other variables can be defined in the inventory file, but it is a good idea to define them in the oms_azure_node file too.

It is important to use the following variables for a robust configuration: OMS_VAR.frontend, OMS_VAR.backend, OMS_USER_ID, OMS_USER, OMS_GROUP_ID, OMS_GROUP (see roles/oms_config/defaults). This way, the configuration will still work even if the IOM installation is customized further.

group_vars/oms_azure_node
AZUREFILE_ACCOUNT: "iomha1"
AZUREFILE_ENDPOINT_DOMAIN: "iomha1.file.core.windows.net"
AZUREFILE_ACCESSKEY: "A49Lcp0obrL2XeU6JFhkq/AWcENeBzowbpEGH/CE1234567895ATTrl4z35UJOvNpLLcQ2Ypfz5lcCCReFpG4Q=="
AZUREFILE_FILESYSTEMS: [
  {          
    path: "{{OMS_VAR.frontend}}/mediahost",
    user_id: "{{OMS_USER_ID}}",
    user: "{{OMS_USER}}",
    group_id: "{{OMS_GROUP_ID}}",
    group: "{{OMS_GROUP}}",
    share: "mediahost"
  },
  {
    path: "{{OMS_VAR.frontend}}/pdfhost",
    user_id: "{{OMS_USER_ID}}",
    user: "{{OMS_USER}}",
    group_id: "{{OMS_GROUP_ID}}",
    group: "{{OMS_GROUP}}",
    share: "pdfhost"
  },
  {
    path: "{{OMS_VAR.backend}}/communication/messages/",
    user_id: "{{OMS_USER_ID}}",
    user: "{{OMS_USER}}",
    group_id: "{{OMS_GROUP_ID}}",
    group: "{{OMS_GROUP}}",
    share: "messages"
  },
  {
    path: "{{OMS_VAR.backend}}/importarticle/",
    user_id: "{{OMS_USER_ID}}",
    user: "{{OMS_USER}}",
    group_id: "{{OMS_GROUP_ID}}",
    group: "{{OMS_GROUP}}",
    share: "importarticle"
  },
  {
    path: "{{OMS_VAR.backend}}/jobs/",
    user_id: "{{OMS_USER_ID}}",
    user: "{{OMS_USER}}",
    group_id: "{{OMS_GROUP_ID}}",
    group: "{{OMS_GROUP}}",
    share: "jobs"
  }
]

Now the process setup_oms_node can be executed.

Example: setup of OMS node
ANSIBLE_LIBRARY=<path to Ansible4IOM>/modules/ \
ANSIBLE_ROLES_PATH=<path to Ansible4IOM>/roles/ \
ansible-playbook -i <path to inventory file> <path to Ansible4IOM>/processes/setup_oms_node.yml

Disclaimer

The information provided in the Knowledge Base may not be applicable to all systems and situations. Intershop Communications will not be liable to any party for any direct or indirect damages resulting from the use of the Customer Support section of the Intershop Corporate Web site, including, without limitation, any lost profits, business interruption, loss of programs or other data on your information handling system.
