Note
Also see Process - Setup or Reconfigure Database Account 1.0.
ANSIBLE_LIBRARY=<path to Ansible4IOM>/modules/ \
ANSIBLE_ROLES_PATH=<path to Ansible4IOM>/roles/ \
ansible-playbook -i <path to inventory file>/inventory <path to Ansible4IOM>/processes/setup_oms_node.yml
The process consists of the following steps:
Execute post_oms_extract_hook.
The current process automates the steps noted in Guide - Setup Intershop Order Management 2.2.
The setup process is mainly controlled by variables defined in roles/oms_config/defaults. Watchdog-specific configurations are controlled by variables defined in roles/oms_watchdog/defaults. For distributed installations (on-premise or in the Azure cloud), the configuration of shared filesystems is essential. The shared filesystem is controlled by variables defined in roles/gluster_config/defaults (for distributed on-premise installations) and roles/azurefile_config/defaults (for distributed Azure cloud installations). The following sections explain the configuration options in more detail.
Configuration values defined in roles/oms_config/defaults are the most important ones for controlling the installation of IOM. The variables defined there completely cover the settings of installation.properties and cluster.properties. In addition, there are some more variables that are not reflected by an IOM property (e.g. naming of services, IDs of users and groups, etc.).
Have a look at roles/oms_config/defaults to get information about the available configuration options. Additionally, Reference - Ansible4IOM Variables 1.0 gives an overview of the available options/variables for each process.
To set up a working IOM installation, only a few variables have to be overwritten in the inventory:
is_oms_jms_hostlist
- List of IP/hostname and port combinations of all backend servers
OMS_VERSION
- Version of IOM to be installed
OMS_REPO_URL
- URL of the Maven repository to download the IOM package from
OMS_REPO_USER
- Name of the repository user
OMS_REPO_PASSWD
- Password of the repository user
OMS_JAVA_HOME
- Path to Java; the Oracle JRE has to be installed in advance
is_oms_db_name
- Name of the database to be used
is_oms_db_user
- Name of the DB account to be used
is_oms_db_pass
- Password of the DB account
is_oms_db_hostlist
- List of IP/hostname and port combinations of all DB server nodes
is_oms_smtp_host
- IP/hostname of the mail server
is_oms_mail_*
- Addresses to be used when sending mails
Most properties required to configure the IOM watchdog are mapped to Ansible variables defined in roles/oms_watchdog/defaults. These variables can be changed and are applied during the process setup_oms_node.
If other settings have to be changed, the watchdog.properties file has to be modified directly within a hook. Please use pre_oms_configuration_hook or post_oms_configuration_hook. An overview of the available configuration options is given in Guide - IOM Watchdog 2.2 - 2.11.
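The following is only a minimal sketch of such a hook; the property name and the path to watchdog.properties are placeholders and not taken from the actual IOM documentation. It assumes that the hook file is a plain list of tasks, as in the deployment hook example further below.
# post_oms_configuration_hook.yml - minimal sketch, placeholders only
- name: set an additional watchdog property not covered by Ansible variables
  lineinfile:
    dest: "<path to watchdog.properties>"
    regexp: '^[ \t]*some\.watchdog\.property'
    line: 'some.watchdog.property=some-value'
    state: present
  become: true
  become_user: "{{OMS_USER}}"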
Note
Changes to deployment.properties are not supported directly. To make changes to deployment.properties, you have to write custom Ansible code that is executed in a hook, preferably pre_oms_deploy_hook.
To set up a working IOM installation without any customization artifacts, no changes are required.
Settings in system.std.*.properties are not reflected by corresponding Ansible variables. To make changes to system.std.*.properties, modify the properties file directly in hooks. Use pre_oms_configuration_hook to make sure the changes are applied automatically by the setup process.
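A minimal sketch of such a hook is shown below; the path to the properties file and the property keys/values are placeholders only, not actual IOM properties.
# pre_oms_configuration_hook.yml - minimal sketch, placeholders only
- name: append custom properties to a system.std properties file
  blockinfile:
    dest: "<path to system.std properties file>"
    marker: "# {mark} properties managed by pre_oms_configuration_hook"
    block: |
      my.custom.property=some-value
      my.other.property=other-value
  become: true
  become_user: "{{OMS_USER}}"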
Note
When setting up a distributed IOM installation, all application-servers and FTP servers have to be connected by a shared filesystem. More information can be found in Guide - Intershop Order Management - Technical Overview.
The following directories have to be shared, independently of the technology used for sharing.
When setting up a distributed IOM installation using an assignment to the server group oms_ha_node, GlusterFS will be used to share filesystems. The setup is controlled by variables defined in roles/gluster_config/defaults. A separate disk device is required to configure GlusterFS.
The following variables have to be set in the inventory in order to set up a working distributed IOM installation:
GLUSTER_HOST_LIST
- List of all IPs/hostnames which are part of the Gluster
GLUSTER_DEVICE
- Name of the device to set up LVM and Gluster on
GLUSTER_FILESYSTEMS
- List of filesystems to set up
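A minimal sketch of how these variables might be defined, e.g. in a group_vars file for the oms_ha_node group (analogous to the Azure example further below); the host names and the device name are purely illustrative, and the exact format of the variables, especially of GLUSTER_FILESYSTEMS, has to be checked against roles/gluster_config/defaults.
# group_vars/oms_ha_node - illustrative values only
# exact variable format has to be verified against roles/gluster_config/defaults
GLUSTER_HOST_LIST: [ "iom-ha1.myprivate.net", "iom-ha2.myprivate.net" ]
GLUSTER_DEVICE: "/dev/sdc"
# GLUSTER_FILESYSTEMS is a list of complex entries; copy the default definition
# from roles/gluster_config/defaults and adapt it to the directories to be shared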
When setting up a distributed IOM installation in the Azure cloud using an assignment to the server group oms_azure_node, the Azure file service will be used to share filesystems. The setup is controlled by variables defined in roles/azurefile_config/defaults. You have to create the shares in advance; the setup process only mounts the shares to the right places.
The following variables have to be set in the inventory in order to set up a working distributed IOM installation in the Azure cloud:
AZUREFILE_ACCOUNT
- Storage account name
AZUREFILE_ENDPOINT_DOMAIN
- Domain part of the Azure file service endpoint
AZUREFILE_ACCESSKEY
- Access key of the storage account
AZUREFILE_FILESYSTEMS
- List of filesystems to set up
When setting up an IOM standalone installation by assigning the server to the group oms_single_node, dummyFS roles are used instead of the Gluster or Azure file roles. The only purpose of the dummyFS roles is to provide hooks, which are called whenever a detach or attach of shared filesystems is requested by processes.
This is very important when an IOM standalone installation has to be enabled for the process update_oms_node. During the update process, all data created at runtime have to be protected against deletion. On distributed IOM installations this is realized by unmounting/mounting the shared filesystems. The corresponding functionality is provided by the azurefile/gluster_detach and azurefile/gluster_attach roles. These roles provide the hooks post_sharedfs_attach_hook and pre_sharedfs_detach_hook, which are also provided by the roles dummyfs_detach and dummyfs_attach.
These hooks provide the ability to set up an IOM standalone installation that is able to protect runtime data during the update process. For example, it is possible to use separate filesystems for directories containing runtime data, which are mounted/unmounted in post_sharedfs_attach_hook and pre_sharedfs_detach_hook, as sketched below.
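A minimal sketch of this approach is shown below; the mount point, device, and filesystem type are assumptions for illustration only, and the hook files are assumed to be plain task lists as in the other hook examples.
# pre_sharedfs_detach_hook.yml - unmount the filesystem holding runtime data
# before the update (mount point is a placeholder)
- name: unmount runtime-data filesystem
  mount:
    path: "/var/opt/oms-runtime-data"
    state: unmounted
  become: true

# post_sharedfs_attach_hook.yml - mount the filesystem again after the update
# (device, mount point and filesystem type are placeholders)
- name: mount runtime-data filesystem
  mount:
    path: "/var/opt/oms-runtime-data"
    src: "/dev/vg_oms/lv_runtime"
    fstype: "xfs"
    state: mounted
  become: true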
Since the setup process can run in parallel on different nodes, or the setup can run in parallel to live nodes, the backend application server must not be connected to the cluster during the setup process. It must not execute jobs and it must not receive and handle JMS messages.
Technically this behavior is realized by roles oms_reconfig_for_attach_to_cluster and oms_reconfig_for_detach_from_cluster. When detached from the cluster, the backend server will not run any jobs and will not receive JMS messages. Even if it is running in parallel to other backend servers, the detached backend server will not influence the cluster at all.
A precondition for setting up a minimal standalone installation of IOM is the availability of a DB account prepared for usage by IOM. Also see Process - Setup or Reconfigure Database Account 1.0. Another precondition is an already installed Oracle JDK/JRE. According to the section Background Information above, the inventory file has to have the following content:
...
[all:vars]
# information required to access repo to download IOM package
OMS_VERSION=2.2.0.0
OMS_REPO_URL=https://repository.intershop.de/oms-releases/
OMS_REPO_USER=MyRepoAccount
OMS_REPO_PASSWD=MySecretPassword

# information required to access DB account
is_oms_db_name=OmsDB
is_oms_db_user=OmsDBUser
is_oms_db_pass=OmsDBUsersPassword
is_oms_db_hostlist=db.myprivate.net

# information about Java installation
OMS_JAVA_HOME=/opt/java

# information about JMS communication
# not required for standalone server
# is_oms_jms_hostlist=

# mail configuration
is_oms_smtp_host=smtp.myprivate.net
is_oms_mail_external_from=oms@mypublic.net
is_oms_mail_internal_from=oms@mypublic.net
is_oms_mail_internal_to=operations@mypublic.net
...
Now the process setup_oms_node can be executed.
ANSIBLE_LIBRARY=<path to Ansible4IOM>/modules/ \
ANSIBLE_ROLES_PATH=<path to Ansible4IOM>/roles/ \
ansible-playbook -i <path to inventory file> <path to Ansible4IOM>/processes/setup_oms_node.yml
Adding a custom deployment artifact requires two steps.
A deployment artifact is mostly part of a bigger project containing more files, e.g. SQL scripts, mail templates, etc. In the first step, the project package has to be downloaded and extracted. The deployment artifact has to be placed in a directory where it is ready to be accessed by the second step.
The current section concentrates on the second step only. It is assumed that the deployment artifact is located at $OMS_VAR/customization and named project-app.ear.
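If the artifact still has to be placed there, a staging task could look like the following minimal sketch, e.g. executed in pre_oms_deploy_hook; the source path and the destination path are placeholders only.
# hypothetical staging task, e.g. part of pre_oms_deploy_hook.yml
- name: copy project-app.ear to $OMS_VAR/customization (illustrative only)
  copy:
    src: "<path to extracted project package>/project-app.ear"
    dest: "<path of $OMS_VAR>/customization/project-app.ear"
    owner: "{{OMS_USER}}"
    group: "{{OMS_GROUP}}"
    mode: "0644"
  become: true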
The variable OMS_APP in installation.properties defines the search path for deployment artifacts. According to roles/oms_config/defaults, the default value of OMS_APP already contains $OMS_VAR/customization. For this reason, no change of OMS_APP is necessary.
The only thing to do is to add project-app.ear to the deployment.*.properties file. Since changes to deployment.*.properties are not supported directly by Ansible4IOM, custom Ansible code has to be added to pre_oms_deploy_hook, which is executed right before the deployment process. Depending on the scope of the hook (project or installation), the file pre_oms_deploy_hook.yml has to be stored in the directory global_hooks or installation_hooks. Also see Concept - Ansible4IOM Server Configuration Management 1.0 - 1.1.
- name: insert project-app.ear into deployment.{{OMS_SERVER_TYPE}}.properties
  lineinfile:
    dest: "{{OMS_ETC[OMS_SERVER_TYPE]}}/deployment.{{OMS_SERVER_TYPE}}.properties"
    regexp: '^[ \t]*project-app'
    insertafter: '^[ \t]*bakery.base-app'
    line: 'project-app.ear'
    state: present
  when:
    - ( OMS_SERVER_TYPE == "standalone" ) or ( OMS_SERVER_TYPE == "backend" )
  become: true
  become_user: "{{OMS_USER}}"
Now process setup_oms_node can be executed.
ANSIBLE_LIBRARY=<path to Ansible4IOM>/modules/ \
ANSIBLE_ROLES_PATH=<path to Ansible4IOM>/roles/ \
ansible-playbook -i <path to inventory file> <path to Ansible4IOM>/processes/setup_oms_node.yml
The directories that have to be shared between the different Azure nodes correspond to the paths used in the AZUREFILE_FILESYSTEMS example below.
The first thing to do is to set up the corresponding shares in the Azure cloud. The setup of shares is not covered by Ansible4IOM. You can do this manually via the Azure Portal, or even better with an automated setup of the infrastructure. As a result of this preparation, the following information is required to configure the setup process of IOM:
AZUREFILE_ACCOUNT
AZUREFILE_ACCESSKEY
AZUREFILE_ENDPOINT_DOMAIN
AZUREFILE_FILESYSTEMS
AZUREFILE_FILESYSTEMS is a list of hashes. Unfortunately, complex variables cannot be defined directly in the inventory file. Instead, you have to define them in the file oms_azure_node (named after the group used for IOM Azure nodes) in the directory group_vars. The other variables can be defined in the inventory file, but it is a good idea to define them in the oms_azure_node file, too.
It is important to use the following variables for a robust configuration: OMS_VAR.frontend, OMS_VAR.backend, OMS_USER_ID, OMS_USER, OMS_GROUP_ID, OMS_GROUP (see roles/oms_config/defaults). This way, the configuration will still work even if the installation of IOM is customized further.
AZUREFILE_ACCOUNT: "iomha1" AZUREFILE_ENDPOINT_DOMAIN: "iomha1.file.core.windows.net" AZUREFILE_ACCESSKEY: "A49Lcp0obrL2XeU6JFhkq/AWcENeBzowbpEGH/CE1234567895ATTrl4z35UJOvNpLLcQ2Ypfz5lcCCReFpG4Q==" AZUREFILE_FILESYSTEMS: [ { path: "{{OMS_VAR.frontend}}/mediahost", user_id: "{{OMS_USER_ID}}", user: "{{OMS_USER}}", group_id: "{{OMS_GROUP_ID}}", group: "{{OMS_GROUP}}", share: "mediahost" }, { path: "{{OMS_VAR.frontend}}/pdfhost", user_id: "{{OMS_USER_ID}}", user: "{{OMS_USER}}", group_id: "{{OMS_GROUP_ID}}", group: "{{OMS_GROUP}}", share: "pdfhost" }, { path: "{{OMS_VAR.backend}}/communication/messages/", user_id: "{{OMS_USER_ID}}", user: "{{OMS_USER}}", group_id: "{{OMS_GROUP_ID}}", group: "{{OMS_GROUP}}", share: "messages" }, { path: "{{OMS_VAR.backend}}/importarticle/", user_id: "{{OMS_USER_ID}}", user: "{{OMS_USER}}", group_id: "{{OMS_GROUP_ID}}", group: "{{OMS_GROUP}}", share: "importarticle" }, { path: "{{OMS_VAR.backend}}/jobs/", user_id: "{{OMS_USER_ID}}", user: "{{OMS_USER}}", group_id: "{{OMS_GROUP_ID}}", group: "{{OMS_GROUP}}", share: "jobs" } ]
Now process setup_oms_node can be executed.
ANSIBLE_LIBRARY=<path to Ansible4IOM>/modules/ \
ANSIBLE_ROLES_PATH=<path to Ansible4IOM>/roles/ \
ansible-playbook -i <path to inventory file> <path to Ansible4IOM>/processes/setup_oms_node.yml