Guide - Setup Intershop Order Management 2.16

1 Introduction

The present guide is addressed to administrators who want to install IOM 2.16 in their Linux-based infrastructure. It enables them to understand what artifacts are contained in the IOM 2.16 delivery and how they can be installed, configured and deployed.

The document describes how to install IOM 2.16.

For a technical overview of typical installation scenarios, please see the references.

1.1 Glossary

Wording | Description
CLI | Command Line Interface, a tooling for WildFly management
FTP | File Transfer Protocol
HA | High availability
ICI | The abbreviation for Intershop Commerce Insight, Intershop's reporting and analytics solution
ICM | The abbreviation for Intershop Commerce Management
IOM | The abbreviation for Intershop Order Management
JBoss | Synonym for WildFly (former name of the WildFly application server)
JDBC | Java Database Connectivity
JDK | Java Development Kit
OLTP | Online transaction processing
OMS | The abbreviation for Order Management System, the technical name of IOM
OS | Operating System
URL | Uniform Resource Locator
WildFly | The application server that IOM runs on

1.2 References

1.3 Additional References

2 Prerequisites

2.1 Java Development Kit

The WildFly application server hosting and running IOM requires an installed Java development kit (JDK) of at least version 11.

The JAVA_HOME global environment variable has to point to the installation directory of the JDK.

Note

JAVA_HOME will be covered in installation.properties. The PATH will be set automatically by set_env.sh.

2.2 Mail Server

The IOM requires an existing mail server that processes internal and external e-mails sent from IOM via the SMTP protocol.

The mail server's host and port must also be known for the later configuration.
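A hypothetical pre-flight check can verify that the mail server accepts TCP connections before IOM is configured. Host and port below are placeholders; the `/dev/tcp` redirection requires bash:

```shell
# Hypothetical pre-flight check: can we open a TCP connection to the mail
# server? Host and port are placeholders; /dev/tcp requires bash.
SMTP_HOST=localhost
SMTP_PORT=25
if timeout 3 bash -c "exec 3<>/dev/tcp/$SMTP_HOST/$SMTP_PORT" 2>/dev/null; then
  STATUS="reachable"
else
  STATUS="not reachable"
fi
echo "SMTP server $SMTP_HOST:$SMTP_PORT is $STATUS"
```

This only tests basic reachability; SMTP authentication, if any, is configured later via the cluster properties.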

2.3 PostgreSQL Server

The IOM requires a PostgreSQL database hosted by a PostgreSQL database server, which can reside on its own host.

To make the database server fit for IOM, certain configuration steps beyond a standard installation are necessary. For setup and initialization steps please refer to section Database Setup and Initialization.

2.3.1 Support

Intershop does not offer PostgreSQL support beyond general recommendations for its use as the relational database for IOM.

A list of companies offering professional Postgres support can be found at PostgreSQL: Professional Services.

The PostgreSQL community also has some excellent mailing lists, pgsql-general being the most active, see PostgreSQL: PostgreSQL Mailing List Archives.

2.3.2 Operating System

The IOM only supports PostgreSQL servers hosted on a Linux-based OS.

2.3.3 Version

We recommend PostgreSQL 11. The IOM version 2.16 is also compatible with PostgreSQL 9.5+.

3 Definition of Properties

The IOM uses four major .properties files which are explained below.

3.1 Installation Properties

The local environment of IOM is defined in $OMS_ETC/installation.properties.

installation.properties defines shell variables, which are read by $OMS_HOME/bin/set_env.sh to provide the environment for all scripts and programs belonging to the IOM system.

$OMS_HOME/bin/set_env.sh provides the content of installation.properties as simple shell variables. Additionally, it adds some exported variables, e.g., PATH and some variables required by 3rd party programs (e.g., the content of JBOSS_JAVA_OPTS will be exported as JAVA_OPTS to be available for standalone.sh).

Variable name | Description | Default/Exemplary Value

OMS_USER

The OS user that installs and runs IOM

oms
OMS_HOME

The base location of the extracted IOM release package. The default value makes it easy to run a frontend and backend server.

set_env.sh adds $OMS_HOME/bin to PATH.

OMS_HOME is exported by set_env.sh.

OMS_HOME is passed to WildFly and can be accessed there as ${installation.OMS_HOME}.

/opt/$OMS_USER.$OMS_SERVER_TYPE
OMS_ETC

OMS_ETC is set by set_env.sh implicitly to the directory where the installation.properties file is located.

OMS_ETC is exported by set_env.sh and is not listed within the installation.properties.

-
OMS_VAR

The location of operational data files for IOM

OMS_VAR is passed to WildFly and can be accessed there as ${installation.OMS_VAR}.

OMS_VAR is exported by set_env.sh.

/var/opt/$OMS_USER.$OMS_SERVER_TYPE
OMS_SHARE

The location of shared data files of IOM.

OMS_SHARE is passed to WildFly and can be accessed there as ${installation.OMS_SHARE}.

OMS_SHARE is exported by set_env.sh.

/var/opt/$OMS_USER.share
OMS_LOG

The location of logs written by WildFly, IOM and scripts.

OMS_LOG is passed to WildFly and can be accessed there as ${installation.OMS_LOG}.

OMS_LOG is exported by set_env.sh.

/var/opt/$OMS_USER.log
OMS_APP

The location of IOM artifacts deployable into the application server. A list of directories can be passed here; entries have to be separated by a colon ":".

$OMS_HOME/application:$OMS_VAR/customization
SERVER_ID

Identifier of the current IOM application server. Must not be empty. It has to be unique for every application server of the IOM cluster.

SERVER_ID is used for the following purposes:

  • Added to log entries to identify the application server that wrote the entry
  • Added to the log file name to identify the application server that created the log file
  • Used to identify cache instances
  • Appended to session IDs to enable sticky sessions and session failover
  • Used to identify servers in the cluster status
  • Used to identify servers when processing failover from an active to a standby server

If left empty, set_env.sh raises an error.

SERVER_ID is exported by set_env.sh.

SERVER_ID is passed to WildFly and can be accessed there as ${installation.SERVER_ID}. Additionally, it is used to initialize ${jboss.node.name}.

$(hostname)_$OMS_SERVER_TYPE
JAVA_HOME

The location of the JDK that WildFly uses to run

set_env.sh adds $JAVA_HOME/bin to PATH.

JAVA_HOME is exported by set_env.sh.

$OMS_HOME/java
JBOSS_HOME

The installation location of the WildFly application server that IOM uses to run. Every instance of IOM requires its own WildFly installation. Intershop recommends following the naming pattern of OMS_HOME for WildFly, in order to easily run frontend and backend servers in parallel on a single machine.

set_env.sh adds $JBOSS_HOME/bin to PATH.

JBOSS_HOME is exported by set_env.sh.

/opt/wildfly.$OMS_SERVER_TYPE
JBOSS_BIND_ADDRESS

Bind address to be used for management and public interface.

Note

Change the IP if you do not want to bind JBoss on all interfaces.

0.0.0.0
JBOSS_BIND_ADDRESS_PRIVATE

Bind address to be used by WildFly's JGroups subsystem, needed for cluster communication, see Guide - Intershop Order Management - Technical Overview. The IP address used for cluster communication must be a private interface.

Note

You need to change the default value if you want to set up a cluster of IOM server nodes.

127.0.0.1
JBOSS_PORT_OFFSET

When running more than one server on the same machine and the same bind-address, the listening ports of both servers have to differ.

To do so, JBOSS_PORT_OFFSET has to be set on one server to increase all port numbers by the defined offset.

set_env.sh provides the variable JBOSS_MGMT_PORT (not exported), which is set depending on the value of JBOSS_PORT_OFFSET.


JBOSS_JAVA_OPTS

These Java options are used when the WildFly application server is started, and they are used by jboss-cli.sh as well.
Configuration options can be used to configure memory usage, garbage collection, etc.

set_env.sh appends $JBOSS_JAVA_OPTS to predefined JAVA_OPTS.

JAVA_OPTS is exported by set_env.sh.

-Xms512M -Xmx2048M

JBOSS_ADMIN_USER

This is the name of the IOM WildFly user that will be created to manage the application server. Used to configure WildFly for IOM and for deployments of IOM artifacts.

omsadmin

JBOSS_ADMIN_PASSWD

This is the password for the IOM WildFly user that is used to manage the application server. Please change the value.

not_yet_a_secret
WATCHDOG_JAVA_OPTS

These JAVA options are applied to the Java-based Watchdog program.

WATCHDOG_JAVA_OPTS is not exported by set_env.sh.
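The interplay between installation.properties and set_env.sh described above can be sketched as follows. This is not the real set_env.sh, just a minimal illustration of the mechanism, using a temporary file in place of $OMS_ETC/installation.properties; the management base port 9990 (WildFly's default) and all property values are assumptions:

```shell
# Not the real set_env.sh -- a minimal sketch of the mechanism described
# above, using a temporary file in place of $OMS_ETC/installation.properties.
TMP=$(mktemp -d)
cat > "$TMP/installation.properties" <<'EOF'
OMS_USER=oms
OMS_SERVER_TYPE=backend
JBOSS_PORT_OFFSET=100
JBOSS_JAVA_OPTS="-Xms512M -Xmx2048M"
EOF

# read the properties as plain shell variables
. "$TMP/installation.properties"

# derive and export variables as the table above describes
export OMS_HOME="/opt/$OMS_USER.$OMS_SERVER_TYPE"
export JAVA_OPTS="$JBOSS_JAVA_OPTS"

# assumption: the management port is WildFly's default 9990 plus the offset
JBOSS_MGMT_PORT=$((9990 + JBOSS_PORT_OFFSET))   # provided, but not exported

echo "$OMS_HOME mgmt:$JBOSS_MGMT_PORT"
```

This illustrates why scripts that source set_env.sh see OMS_HOME, JAVA_OPTS, and the derived management port without reading the properties file themselves.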


3.2 Cluster Properties

Cluster properties are WildFly system properties, which define the machine-independent configuration of an IOM cluster.

These properties are located in $OMS_ETC/cluster.properties.

PostgreSQL-related properties are read by set_env.sh and exported as environment variables.

Adjust cluster.properties to the real values used by your OMS cluster. For example, you have to enter the access information for the database so the IOM application server can access it.

Property | Description | Exemplary Value
is.oms.db.hostlist

A comma-separated list of database servers. Each server entry consists of hostname and port, separated by a colon. Setting the port is optional. If not set, standard port 5432 will be used (see Guide - Intershop Order Management - Technical Overview)

First hostname in list exported by set_env.sh as PGHOST

Port of the first entry in list exported by set_env.sh as PGPORT

localhost:5432
is.oms.db.name

The database name to connect at the PostgreSQL server

Exported by set_env.sh as PGDATABASE

oms_db
is.oms.db.user

PostgreSQL user name to connect as

Exported by set_env.sh as PGUSER

oms_user
is.oms.db.pass

The password to be used when connecting to the PostgreSQL server

Exported by set_env.sh as PGPASSWORD

OmsDB
is.oms.db.cache

Enable/disable the database cache. Only the values enabled and disabled are allowed. A production system should always enable the use of the DB cache.

enabled
is.oms.xmlbinder.cache

Use caching for the JAXB context while un-/marshaling or validating XML files. Only the values enabled and disabled are allowed. A production system should always enable the use of the JAXB context cache.

enabled

is.oms.smtp.host

The host of the mail server IOM uses to send e-mails

localhost

is.oms.smtp.port

The port of the mail server IOM uses to send e-mails

25
is.oms.smtp.user

OPTIONAL The user name for mail server authentication


is.oms.smtp.pass

OPTIONAL The user password for mail server authentication


is.oms.mail.external.from

The sender address for external e-mails (e.g., e-mails sent to the shop customers)

noreply@youraddress.com

is.oms.mail.internal.from

The sender address for internal e-mails (e.g., to report errors via e-mail)

noreply@youraddress.com

is.oms.mail.internal.to

The recipient for internal e-mails

operations@youraddress.com

is.oms.mail.internal.cc

The carbon copy recipient for internal e-mails

is.oms.mail.internal.bcc

The blind carbon copy recipient for internal e-mails

is.oms.mail.businessoperations.to

The recipient for business operations e-mails

businessoperations@youraddress.com
is.oms.mail.resources.base.url

OPTIONAL
The base path for e-mail resources that are loaded from the e-mail client, e.g., images or stylesheets. Also see Concept - IOM Customer E-mails.


is.oms.dir.var

The base path of the file system where IOM reads and writes its operational data. The default value references the value defined in installation.properties.
Do not change the value here, in order to keep the configuration consistent.

${installation.OMS_VAR}

is.oms.jboss.base.url

The publicly accessible base URL of IOM, which could be the DNS name of the load balancer, etc.

For ICM it is used at the IOM connector, e.g., for the return label service.

http://localhost:8080/
is.oms.validation.pattern.phone

Validation pattern for phone numbers. If not set, the default value will be used.

(^$)|(^[+]?[0-9. ()/-]{8,25}$)
is.oms.validation.pattern.email

Validation pattern for e-mail addresses. If not set, the default value will be used.

Note

The character '\' in the regular expression requires escaping (\ => \\). Otherwise, the property would not be set correctly.

Desired expression

^[A-Za-z0-9._%+-]+@[A-Za-z0-9][A-Za-z0-9.-]*\.[A-Za-z]{2,9}$

requires the following escaped expression

^[A-Za-z0-9._%+-]+@[A-Za-z0-9][A-Za-z0-9.-]*\\.[A-Za-z]{2,9}$

is.oms.validation.pattern.password

Validation pattern for passwords. If not set, the default value will be used.

Note

The character '\' in the regular expression requires escaping (\ => \\). Otherwise, the property would not be set correctly.


Desired expression

^(?=[^\s]*[a-zA-Z])(?=[^\s]*[\d])[^\s]{8,}$

requires the following escaped expression

^(?=[^\\s]*[a-zA-Z])(?=[^\\s]*[\\d])[^\\s]{8,}$

is.oms.validation.pattern.password.hint

The displayed note explaining the password rules to OMT users; it can be customized. If not set, the default value will be used.

The password must include a letter, a number and must contain at least 8 characters.
is.oms.healthcheck.enabled

OPTIONAL

Enable/disable the health check. It is always activated unless this parameter is set to "false".

true

is.oms.healthcheck.cachelivetime

OPTIONAL

Health checks are performed by a Java timer and are no longer triggered by REST requests.

Maximum age in seconds for which a health check found within the cache is considered to be valid.
The server status will be returned as 503 after that, assuming that the health check timer has stopped or is hanging.

  • Default: 11 seconds
  • Minimum: is.oms.healthcheck.recurringtime + 6

10

is.oms.healthcheck.recurringtime

OPTIONAL

Health check recurring interval in seconds.

  • Default: 5 seconds
  • Minimum: 1 second

When using the Watchdog, this value should be less than the property watchdog.cycle.

5
is.oms.sharedfs.healthcheck

Enable/disable health check for shared file system.

Checks the ability to write/delete files in directory $OMS_SHARE/.healthcheck. This special directory inside $OMS_SHARE was chosen to have a real indicator for the shared file system. If you set up your system manually, you have to create the .healthcheck directory manually inside $OMS_SHARE.

If you do not set up a clustered IOM (single IOM node without shared file system), you have to disable this health check.

enabled
is.oms.jwt.secret

The shared secret for JSON Web Token (JWT) creation/validation. JWTs will be generated with the HMAC algorithm (HS256).

Note

Intershop strongly recommends changing the default shared secret used for JSON Web Token creation/validation in the cluster properties.

To secure the JWT, a key of the same size as the hash output or larger must be used with the JWS HMAC SHA-2 algorithms (i.e., 256 bits for "HS256"), see JSON Web Algorithms (JWA) | 3.2. HMAC with SHA-2 Functions.

length_must_be_at_least_32_chars
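How set_env.sh derives the PG* environment variables from the database properties can be sketched like this. This is a simplified illustration, not the real script; the property values are exemplary, and the real script falls back to port 5432 when no port is given in the host list:

```shell
# Simplified sketch (not the real set_env.sh) of how the PG* variables are
# derived from the database properties; file content is exemplary.
TMP=$(mktemp -d)
cat > "$TMP/cluster.properties" <<'EOF'
is.oms.db.hostlist=dbhost1:5432,dbhost2:5432
is.oms.db.name=oms_db
is.oms.db.user=oms_user
EOF

HOSTLIST=$(sed -n 's/^is\.oms\.db\.hostlist=//p' "$TMP/cluster.properties")
FIRST=${HOSTLIST%%,*}        # first server entry in the list
export PGHOST=${FIRST%%:*}   # hostname of the first entry
export PGPORT=${FIRST##*:}   # port of the first entry
export PGDATABASE=$(sed -n 's/^is\.oms\.db\.name=//p' "$TMP/cluster.properties")
export PGUSER=$(sed -n 's/^is\.oms\.db\.user=//p' "$TMP/cluster.properties")

echo "$PGHOST:$PGPORT $PGDATABASE $PGUSER"
```

With these variables exported, psql can be called without explicit connection arguments, as in the dump import later in this guide.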

3.3 Deployment Properties

Deployment properties define which artifacts of the IOM should be deployed to the WildFly application server.

The properties are located in $OMS_ETC/deployment.cluster.properties. The order of entries within this .properties file is important, as it reflects the order of deployments.

The table below shows the entries in $OMS_ETC/deployment.cluster.properties:

Cluster server
bakery.base-app-2.16.0.0.ear
process-app-2.16.0.0.ear
bakery.control-app-2.16.0.0.war
bakery.impex-app-2.16.0.0.war
bakery.communication-app-2.16.0.0.ear
bakery.omt-app-2.16.0.0.war
oms.rest.communication-app-2.16.0.0.war
gdpr-app-2.16.0.0.war
rma-app-2.16.0.0.war
transmission-app-2.16.0.0.war
order-state-app-2.16.0.0.war
oms.monitoring-app-2.16.0.0.war

3.4 System Properties

System properties are defined in $OMS_ETC/system.std.cluster.properties. This file contains WildFly-specific configuration settings. Mostly there is no need to adapt any properties defined in this file, with one exception: the webservices subsystem of WildFly is responsible for the delivery of WSDL files. For the correct creation of links within WSDL files, a proper configuration of this subsystem is required.

Adjust the following properties in system.std.cluster.properties to get properly working WSDL requests.

Property | Description | Default/Exemplary Value

/subsystem=webservices:wsdl-host

Hostname to be used for links within wsdl-responses. The client has to be able to follow these links, hence the hostname configured here has to be the publicly visible hostname of your IOM system.

"${jboss.bind.address.unsecure:127.0.0.1}"

/subsystem=webservices:wsdl-port

Port number to be used for http-links within wsdl-responses. The client has to be able to follow these links, hence the port configured here has to be the publicly visible http-port of your IOM system.

"8080"
/subsystem=webservices:wsdl-secure-port

Port number to be used for https-links within wsdl-responses. The client has to be able to follow these links, hence the port configured here has to be the publicly visible https-port of your IOM system.

"8443"

/subsystem=webservices:wsdl-uri-scheme

URI scheme to be used for links within wsdl-responses. The client has to be able to follow these links, hence the URI scheme configured here has to be the publicly available scheme of your IOM system.

"http"
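Putting the four properties together, a fragment of system.std.cluster.properties for a system reachable under a hypothetical public hostname might look like this. Hostname, ports, and scheme are placeholders; keep the key/value syntax of the existing entries in the file:

```properties
# hypothetical public endpoint: https://iom.example.com
/subsystem=webservices:wsdl-host="iom.example.com"
/subsystem=webservices:wsdl-port="80"
/subsystem=webservices:wsdl-secure-port="443"
/subsystem=webservices:wsdl-uri-scheme="https"
```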

4 Database Setup and Initialization

Only for new installations

Database Setup and Initialization is only required for a new IOM installation. If you migrate the application from an older version, you can skip this section.

4.1 User

Define a dedicated OS user for the PostgreSQL service (referred to as the "postgres OS user" in the following descriptions).

4.2 Cluster Initialization

4.2.1 Data Directory

The PostgreSQL data directory contains all of the data files for the database. The variable PGDATA is used to reference this directory. It must be prepared prior to the initialization and belongs exclusively to the Postgres OS user.

You should not use this directory for private data, nor add symbolic links to it. However, you will probably want a separate file system on battery-backed RAID for it. Just make sure not to use the root folder of that file system as the data directory, and include the major version in the path.

This facilitates maintenance and Postgres major upgrades, e.g., /iomdata/pg_11/data.

This directory must belong to the Postgres OS user.

Example to prepare the PGDATA directory
# as root
mkdir /iomdata/pg_11/data
chown <postgres OS user>:<group> /iomdata/pg_11/data

# as the postgres OS user
# add PGDATA=/iomdata/pg_11/data to the user environment

4.2.2 Initialization (initdb)

The initdb step of the standard installation process needs special consideration to work for an IOM database. initdb will create a new PostgreSQL database cluster and its superuser, see PostgreSQL 11 | initdb.

It must be called as the Postgres OS user.

There are a few options to choose from during a Postgres initialization for IOM:

  1. Consider using data-checksums (PostgreSQL 11 | initdb | --data-checksums).
  2. Make sure to use a UTF8 encoding. Depending on your operating system, you may need to replace the string "UTF8" with "UTF-8" (in all places).

    Note

    No change of encoding: This parameter cannot be changed after the cluster initialization.

    The command to perform initdb may change according to the OS, the Postgres version and to the way you installed it.
    For YUM installations refer to YUM_Installation.

    Examples:

    Example
    # Postgres 11, YUM installation on Red Hat 7
    # for more info type /usr/pgsql-11/bin/postgresql-11-setup --help 
    
    # as root
    export PGSETUP_INITDB_OPTIONS="--encoding=UTF8 --locale=en_US.UTF-8 --data-checksums -U postgres -W"
    /usr/pgsql-11/bin/postgresql-11-setup initdb postgresql-11
    
    # without YUM, as root
    ...../pgsql-11/bin/initdb --encoding UTF8 --locale=en_US.UTF-8 --data-checksums -D /iomdata/pg_11/data -U postgres -W
    

4.3 Further Recommendations

4.3.1 Client Authentication

The access permissions must be defined in $PGDATA/pg_hba.conf.

Use MD5 as auth-method to prevent passwords from being sent in clear text across the connection.

You cannot use ident for TCP/IP connections, otherwise, the JDBC driver connection from IOM to the database will not work. See PostgreSQL 11 | Chapter 20. Client Authentication for details.
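A hypothetical pg_hba.conf entry following this recommendation could look as below; database name, user, and address range are placeholders matching the exemplary cluster.properties values:

```text
# $PGDATA/pg_hba.conf -- allow the IOM application servers to connect
# with md5 password authentication (values are placeholders)
# TYPE   DATABASE   USER       ADDRESS          METHOD
host     oms_db     oms_user   192.168.1.0/24   md5
```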

4.3.2 Database Server Configuration

The ideal configuration mainly depends on the server resources and on the activity; therefore we can only give general guidelines. The configuration ranges indicated below may not be applicable in all cases, especially on small systems. These values are intended for a mid-size system with about 32 GB RAM and 24 cores.

To achieve the best performance, almost all of the data (tables and indexes) required for the ongoing workload should be able to reside within the file system cache. Monitoring the I/O activity will help to identify insufficient memory resources.

The IOM is built with Hibernate as the API between the application logic and the database. This mainly results in strong OLTP activity with a large number of tiny SQL statements. Larger statements occur during import/export jobs and for some OMT search requests.

The following main parameters in $PGDATA/postgresql.conf should be adapted, see PostgreSQL 11 | Chapter 19. Server Configuration.

You can consider PGConfig 2.0 as a guideline (using the OLTP Model).

Some aspects of data reliability are discussed here PostgreSQL 11 | Chapter 30. Reliability and the Write-Ahead Log. Understanding VACUUM is also essential when configuring/monitoring Postgres, see PostgreSQL 11 | Chapter 24. Routine Database Maintenance Tasks.

Parameter | Description

max_connections

The number of concurrent connections from the application is controlled by the xa-datasource configuration in WildFly. Some connections will take place beside this pool, mainly for job tasks like import/export. Make sure that max_connections is set higher here than in WildFly. Also note that highly concurrent connections will negatively impact performance. It is more efficient to queue requests than to process them all in parallel.

max_prepared_transactions

Required for IOM installations. Set its value to about 150% of max_connections.

shared_buffers

Between 1/4 and 1/3 of the total RAM, but not more than about 8 GB, otherwise the cache management will use too many resources. The remaining RAM is more valuable as file system cache.

work_mem

A higher work_mem can increase performance significantly. The default is way too low. Consider using 100-400 MB.

maintenance_work_mem

Increase the default similarly to work_mem to favor quicker vacuums. With IOM this parameter will be used almost exclusively for this task (unless you also set autovacuum_work_mem). Consider something like 2% of your total RAM per autovacuum_max_workers, e.g., 32 GB RAM * 2% * 3 workers = 2 GB.

vacuum_cost_*

The feature can stay disabled at the beginning. You should keep an eye on the vacuum activity under high load.

wal_level

Depends on your backup, recovery and failover strategy; should be at least archive.

wal_sync_method

Depends on your platform, check PostgreSQL 11 | 19.5. Write Ahead Log | wal_sync_method (enum).

max_wal_size

8 (small system) - 128 (large system)

max_parallel_workers (since Postgres 9.6)

0

checkpoint_completion_target

Use 0.8 or 0.9.

archive_* and REPLICATION

Depends on your backup & failover strategy.

random_page_cost

The default (4) is usually too high. Better choose 2.5 or 3.

effective_cache_size

Indicates the expected size of the file system cache. On a dedicated server it should be about total_RAM - shared_buffers - 1 GB.

log_min_duration_statement

Set it between 1 and 5 seconds to help track long-running queries.

log_filename

Better use an explicit name to help when communicating, e.g., pg-IOM_host_port-%Y%m%d_%H%M.log.

log_rotation_age

Set it to 60 min. or less.

log_line_prefix

Better use a more verbose format than the default, e.g., %m|%a|%c|%p|%u|%h|.

log_lock_waits

Activate it (=on).

stats_temp_directory

Better redirect it to a RAM disk.

log_autovacuum_min_duration

Set it to a few seconds to monitor the vacuum activity.

idle_in_transaction_session_timeout (since Postgres 9.6)

Set it to a large value, e.g., 9 hours, to clean up possible leftover sessions. An equivalent parameter exists for the WildFly connection pool, where it is set to 3 hours by default.

timezone

Must match the timezone of the application servers, e.g., Europe/Berlin.
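Pulling these recommendations together, a postgresql.conf fragment for the mid-size reference system (~32 GB RAM, 24 cores) might look as follows. max_connections is an assumed value for illustration; the other values follow the rules above:

```properties
# postgresql.conf fragment for ~32 GB RAM / 24 cores (illustrative values)
max_connections = 200                  # higher than the WildFly pool size
max_prepared_transactions = 300        # ~150% of max_connections
shared_buffers = 8GB                   # capped at about 8 GB
work_mem = 200MB                       # within the 100-400 MB range
maintenance_work_mem = 2GB             # 32 GB * 2% * 3 autovacuum workers
checkpoint_completion_target = 0.9
random_page_cost = 2.5
effective_cache_size = 23GB            # total RAM - shared_buffers - 1 GB
log_min_duration_statement = 3s
log_lock_waits = on
idle_in_transaction_session_timeout = '9h'
timezone = 'Europe/Berlin'
```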

4.4 Create User and Database

The following steps describe the setup of a new database.

To perform the next steps, you need to be able to use the psql command and be able to access the database server via its superuser (usually postgres).

Connect
su - <postgres OS user>
# set variables
IS_OMS_DB_HOST=<first host from is.oms.db.hostlist>
IS_OMS_DB_PORT=<port of first host from is.oms.db.hostlist>
 
# connect to the database server as the super user
psql -U postgres -h $IS_OMS_DB_HOST -p $IS_OMS_DB_PORT -d postgres

Note

All subsequent statements have to be performed within the psql console opened at the step before.

4.4.1 Create User

IOM connects with its own dedicated database user. Create the user:

Create user
-- create user 
CREATE USER "<value of your is.oms.db.user>" PASSWORD '<value of your is.oms.db.pass>';

4.4.2 Remove the Public Schema

IOM does not make use of the public schema.
Better delete it (if existing) as this is a potential target for diverse attack attempts:

Drop public schema
DROP SCHEMA IF EXISTS public;

4.4.3 Dedicated Tablespace (Optional)

The database initialization dump does not expect a given tablespace; all objects will be placed in the user's default tablespace. If your data directory is located on an adequate file system, you can keep the Postgres default tablespace, which is located in $PGDATA/base. If you want to define a dedicated tablespace, i.e., on a dedicated file system, you should set it as the default for the user and for the database prior to using the provided initialization dump (provided that the tablespace has been created):

Default tablespace
-- set default table space
ALTER USER "<value of your is.oms.db.user>" SET default_tablespace = 'tblspc_iom';
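The statement above covers the user's default only. Once the database has been created (see below), the same default can also be set at database level; this is a hypothetical counterpart assuming the same tablespace name tblspc_iom:

```sql
-- hypothetical: database-level counterpart, to be run after CREATE DATABASE
ALTER DATABASE "<value of your is.oms.db.name>" SET default_tablespace = 'tblspc_iom';
```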

Also see Postgres tablespaces: PostgreSQL: Documentation: 11: CREATE TABLESPACE

4.4.4 Create Database

IOM uses its own dedicated database.

  1. Create the database:

    Create database
    -- create database
    CREATE DATABASE "<value of your is.oms.db.name>"
      WITH OWNER = "<value of your is.oms.db.user>"
       ENCODING = 'UTF8'         
       TABLESPACE = pg_default   -- or your dedicated tablespace 
       LC_COLLATE = 'en_US.UTF-8' 
       LC_CTYPE = 'en_US.UTF-8'   
       CONNECTION LIMIT = -1; 
  2. Set database search_path:

    Set search path
    -- set search_path
    ALTER DATABASE "<value of your is.oms.db.name>" SET search_path = customer, oms, omt, product, system, admin;
  3. Exit the psql console:

    Exit console
    -- exit console
    postgres=# \q

5 Installation of IOM

5.1 Preparation of Operating System and File System

5.1.1 Create OS User and Group on the Host

Intershop strongly recommends creating a dedicated group and user for the IOM installation on the host system. It is recommended to place the user's home directory outside the IOM installation. The following script uses the same value for user name and group name:

Create user credentials
# as root set variables
# default: oms
# use "oms" if you do not want to customize the installation layout
OMS_USER=<name of user to own and run IOM>
 
# use "/home/$OMS_USER" as home-directory
OMS_USER_HOME=<home directory of OMS user>
 
# add group
groupadd $OMS_USER

# add user
# users home is created at default location
useradd -g $OMS_USER -d $OMS_USER_HOME -m $OMS_USER

# set password
passwd $OMS_USER

5.1.2 Create OMS Home Directory

  1. Log in as root.

  2. Create directory $OMS_HOME, set owner and group.

    Create $OMS_HOME, set owner and group
    # as root
    # use "/opt/$OMS_USER.$OMS_SERVER_TYPE" if you do not want to customize the installation layout
    OMS_HOME=<installation directory of IOM>
     
    mkdir -p $OMS_HOME
    chown $OMS_USER:$OMS_USER $OMS_HOME

5.1.3 Extract the Release Package

  1. Log in as $OMS_USER. 
  2. Place and extract the IOM release package in $OMS_HOME.

    Extract package
    # as $OMS_USER at directory $OMS_HOME
    # extract the IOM release package
    tar -xvzf IOM-2.16.0.0.tgz

    This creates the main directories such as etc, var, bin, lib, and application.

5.1.4 Edit Installation Properties

For the following tasks, the local environment of IOM has to be defined in $OMS_HOME/etc/installation.properties. See installation.properties for more information.

5.1.5 Distribute the Release Package Contents

According to the Linux File System Hierarchy Standard, the OMS installation is distributed over different file systems. The variable $OMS_VAR and the command line switch --etc of integrate.sh define the directories where the corresponding parts of OMS are placed. There is no need to change the default values unless you want to adapt the installation layout to your own needs.

  1. Edit etc/installation.properties as $OMS_USER if you want to adapt the installation layout to your own needs.
    No changes are necessary in this step if you want to use the default layout.
    1. Change OMS_USER to the name of the user who should own and run IOM.
    2. Change OMS_HOME to the name of the directory where the IOM software has been placed.
    3. Change OMS_VAR to the name of the directory where to place var-data of IOM.
    4. Change OMS_SHARE to the name of the directory where to place shared data of IOM.
    5. Change OMS_LOG to the name of the directory where to place log data of IOM.
  2. Prepare the directories $OMS_SHARE, $OMS_VAR, $OMS_LOG and $ETC_TARGET before distributing OMS to these locations. Set the placeholders to the values defined in installation.properties.

  3. As this manual describes the setup process of a single IOM machine, $OMS_SHARE does not need to be mounted on a real shared file system. For this kind of installation, a local directory is fully sufficient. In this case, the health check of the shared file system must be disabled (see the description of the property is.oms.sharedfs.healthcheck above).

    Setup user and directories
    # as root at $OMS_HOME
     
    # read installation.properties
    . etc/installation.properties
     
    # use "/etc/opt/$OMS_USER.$OMS_SERVER_TYPE" if you do not want to customize the installation layout
    ETC_TARGET=<directory, where to place etc-data>
    
    # prepare the directories 
    mkdir -p "$ETC_TARGET" 
    chown $OMS_USER:$OMS_USER "$ETC_TARGET" 
    mkdir -p "$OMS_VAR" 
    chown $OMS_USER:$OMS_USER "$OMS_VAR"
    mkdir -p "$OMS_LOG"
    chown $OMS_USER:$OMS_USER "$OMS_LOG"
    mkdir -p "$OMS_SHARE" 
    chown $OMS_USER:$OMS_USER "$OMS_SHARE"
  4. Distribute the release package parts by running the integrate.sh script.

    Execute integrate.sh
    # as $OMS_USER at $OMS_HOME
     
    # set variable
    # use "/etc/opt/$OMS_USER.$OMS_SERVER_TYPE" if you do not want to customize the installation layout
    ETC_TARGET=<directory, where to place etc-data>
     
    # copy etc- and var-data to configured locations
    bin/integrate.sh --etc="$ETC_TARGET"

    This will copy the relevant parts to the target directory defined by the --etc=... command line parameter and to $OMS_VAR as defined in installation.properties. It also creates symlinks from $OMS_HOME/etc -> $ETC_TARGET as well as from $OMS_HOME/var -> $OMS_VAR.

  5. Check the newly created symlinks in the $OMS_HOME directory and delete the temporary directory ETC.VAR.SRC.*, which contains backups of etc- and var-directories.
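The layout that integrate.sh leaves behind can be illustrated with a small simulation in a temporary directory; the paths are placeholders, the real targets come from installation.properties and the --etc switch:

```shell
# Simulated in a temporary directory: the symlink layout that integrate.sh
# leaves behind (real targets come from installation.properties and --etc).
TMP=$(mktemp -d)
mkdir -p "$TMP/oms_home" "$TMP/etc_target" "$TMP/oms_var"

# $OMS_HOME/etc -> $ETC_TARGET and $OMS_HOME/var -> $OMS_VAR
ln -s "$TMP/etc_target" "$TMP/oms_home/etc"
ln -s "$TMP/oms_var"    "$TMP/oms_home/var"

[ -L "$TMP/oms_home/etc" ] && [ -L "$TMP/oms_home/var" ] && echo "layout ok"
```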

5.2 Initialize Database with Dump

Only for new installations

If you migrate the application from an older version, continue with section Database Migration.

Import the initial dump for IOM that contains the basic necessary database configuration. This dump can be found in the IOM delivery package at $OMS_HOME/postgres/dumps.

  1. Edit etc/cluster.properties as $OMS_USER if you do not want to use the default PostgreSQL configuration. If you want to use a PostgreSQL database at localhost, which was created with the default configuration, you do not need to make any changes in this step.

    1. Change /system-property=is.oms.db.hostlist to the hostname or IP and port of your PostgreSQL database.
    2. Change /system-property=is.oms.db.name to the name of your database dedicated to IOM.
    3. Change /system-property=is.oms.db.user to the name of the database user that should be used to connect to the PostgreSQL database.
    4. Change /system-property=is.oms.db.pass to the password of this user.
  2. Import the dump using the following commands.

    Install database dump
    # as $OMS_USER at $OMS_HOME
    # setup environment
    . bin/set_env.sh
     
    # unzip and install initial data dump
    gunzip -c postgres/dumps/OmsDB.initial.2.16.0.0.sql.gz | psql -U $PGUSER -h $PGHOST -p $PGPORT -d $PGDATABASE
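Before running the import, it can be worth verifying that the dump file is present and is an intact gzip archive, since a truncated download would otherwise fail halfway through the import. A minimal pre-flight sketch (it creates a throwaway gzip file so it runs anywhere; in practice set DUMP to postgres/dumps/OmsDB.initial.2.16.0.0.sql.gz):

```shell
# Pre-flight check before the import; DUMP would normally point to the
# real dump, e.g. postgres/dumps/OmsDB.initial.2.16.0.0.sql.gz
DUMP=$(mktemp)
echo "SELECT 1;" | gzip > "$DUMP"

# fail early if the dump is missing or not a readable gzip archive
[ -r "$DUMP" ] || { echo "dump not found: $DUMP"; exit 1; }
gunzip -t "$DUMP" && echo "dump archive OK"
```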

5.3 Database Migration

Refer to Guide - IOM Database Migration if you have to migrate the database.

This is the case when migrating from an older version of IOM or if the IOM version you are installing has a higher version number than the dump you have installed before.

5.4 Install WildFly

5.4.1 Download and Place the WildFly Installation Package

  1. Download the latest version of WildFly 17.0 (e.g., 17.0.0.Final) as a TGZ archive from http://wildfly.org/downloads/ and place it in /tmp.
  2. Create the directory $JBOSS_HOME, set owner and group.

    Create JBOSS-HOME directory
    # as root do
    JBOSS_HOME=<set variable according to the settings in installation.properties>
    OMS_USER=<name of user to own and run IOM>
     
    mkdir -p $JBOSS_HOME
    chown $OMS_USER:$OMS_USER $JBOSS_HOME
  3. As $OMS_USER unpack the downloaded archive into $JBOSS_HOME:

    Extract WildFly appserver
    # as $OMS_USER at $OMS_HOME
    # setup environment
    . bin/set_env.sh
    
    # set variable
    WILDFLY_VERSION=17.0.0.Final
    
    # extract wildfly package to $JBOSS_HOME
    cd $JBOSS_HOME
    tar -xzf /tmp/wildfly-$WILDFLY_VERSION.tar.gz
    ( cd wildfly-$WILDFLY_VERSION; mv * .[a-zA-Z0-9]* .. )
    rmdir wildfly-$WILDFLY_VERSION

JBOSS_HOME

If you install WildFly exactly this way, no further changes in installation.properties are required. Otherwise, you have to adapt JBOSS_HOME.
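A quick sanity check that the archive was unpacked to the top level of $JBOSS_HOME (and not into a nested wildfly-17.0.0.Final directory) is to look for the server scripts directly under $JBOSS_HOME/bin. The sketch below builds a fake JBOSS_HOME so it runs without a real WildFly download; against a real installation, only the for loop is needed:

```shell
# Sanity check of the unpacked WildFly tree (sketch: a fake JBOSS_HOME is
# created here so the check runs without a real WildFly download)
JBOSS_HOME=$(mktemp -d)
mkdir -p "$JBOSS_HOME/bin"
touch "$JBOSS_HOME/bin/standalone.sh" "$JBOSS_HOME/bin/jboss-cli.sh"

# after a correct extraction these scripts sit directly under $JBOSS_HOME/bin
for f in standalone.sh jboss-cli.sh; do
    [ -e "$JBOSS_HOME/bin/$f" ] && echo "$f present"
done
```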

5.5 Prepare Configuration

5.5.1 Adapt installation.properties

For this step, set JBOSS_BIND_ADDRESS in installation.properties to an IP address that is reachable from the browser.
It is also possible to use the default value 0.0.0.0 here. This value represents "any address" and covers all available network interfaces.

5.5.2 Install WildFly Application Server as a System Service

The IOM release provides a systemd-unit template to install WildFly as a service. The expand_template.sh script fills the template from the environment, i.e., from the information stored in installation.properties. Before expanding the template, make sure you have updated installation.properties; at least the variable JAVA_HOME needs to be adapted.

Since different IOM application servers might run on a single machine, the service name for every server has to be unique.

  1. Edit installation.properties.
    1. JAVA_HOME has to be adapted to point to the directory holding the Java 11 installation.
    2. JBOSS_JAVA_OPTS can be adapted if you want to change the default memory-configuration, garbage-collection configuration, etc.
  2. Expand the systemd unit template with the current configuration.

    Expand system-unit template
    # as $OMS_USER setup environment
    . bin/set_env.sh
    
    # expand systemd-unit template
    expand_template.sh < $OMS_ETC/jboss-as.service.template > /tmp/jboss-as-$OMS_SERVER_TYPE.service
  3. Install WildFly as a service.

    Install WildFly as a service
    # as root set server type
    OMS_SERVER_TYPE=<server type of IOM>
     
    # copy expanded template
    cp /tmp/jboss-as-$OMS_SERVER_TYPE.service /etc/systemd/system
    
    # enable service
    systemctl enable jboss-as-$OMS_SERVER_TYPE
    
    # start service
    systemctl start jboss-as-$OMS_SERVER_TYPE
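As an illustration of the JBOSS_JAVA_OPTS setting mentioned in step 1, a memory and garbage-collection configuration in installation.properties might look like the following. The values are examples only, not recommendations, and need to be tuned to your hardware and load:

```shell
# Illustrative JBOSS_JAVA_OPTS (example values only; tune for your system)
JBOSS_JAVA_OPTS="-Xms2048m -Xmx4096m -XX:MaxMetaspaceSize=512m -XX:+UseG1GC"
echo "$JBOSS_JAVA_OPTS"
```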

5.6 Prepare WildFly as IOM Application Server

5.6.1 Configure WildFly

For the following steps, the WildFly application server needs to be running. WildFly will be configured for usage with IOM. 

  1. Edit installation.properties.
    It is recommended to change the default password defined by JBOSS_ADMIN_PASSWD.
  2. Create the admin user and configure WildFly.

    Configure WildFly
    # as $OMS_USER in $OMS_HOME setup environment
    . bin/set_env.sh
     
    # create admin user for WildFly management
    add-user.sh -u $JBOSS_ADMIN_USER -p $JBOSS_ADMIN_PASSWD
     
    # load initial configuration of IOM
    jboss-cli.sh -u=$JBOSS_ADMIN_USER -p=$JBOSS_ADMIN_PASSWD --controller=$JBOSS_BIND_ADDRESS:$JBOSS_MGMT_PORT -c --file="$OMS_ETC/initSystem.std.$OMS_SERVER_TYPE.cli"
     
    # set enhanced standard properties in the WildFly configuration
    cat $OMS_ETC/system.std.$OMS_SERVER_TYPE.properties | update_properties.sh --jboss-cli="jboss-cli.sh -u=$JBOSS_ADMIN_USER -p=$JBOSS_ADMIN_PASSWD -c --controller=$JBOSS_BIND_ADDRESS:$JBOSS_MGMT_PORT"
     
    # load $OMS_ETC/cluster.properties
    cat $OMS_ETC/cluster.properties | update_properties.sh --jboss-cli="jboss-cli.sh -u=$JBOSS_ADMIN_USER -p=$JBOSS_ADMIN_PASSWD -c --controller=$JBOSS_BIND_ADDRESS:$JBOSS_MGMT_PORT"
  3. Restart the WildFly application server.

    Restart WildFly
    # as root restart application server
    systemctl restart jboss-as-$OMS_SERVER_TYPE

5.6.2 Deploy IOM Artifacts

For the following steps, the WildFly application server and the PostgreSQL database need to be running. All deployment artifacts listed in $OMS_ETC/deployment.$OMS_SERVER_TYPE.properties will be deployed into the WildFly application server.

  1. Deploy all artifacts defined by deployment.$OMS_SERVER_TYPE.properties.

    Deploy IOM artifacts
    # as $OMS_USER in $OMS_HOME setup environment
    . bin/set_env.sh
     
    # deploy all artifacts defined by deployment.properties
    deploy.sh --jboss-cli="jboss-cli.sh -u=$JBOSS_ADMIN_USER -p=$JBOSS_ADMIN_PASSWD -c --controller=$JBOSS_BIND_ADDRESS:$JBOSS_MGMT_PORT"
  2. Check whether all artifacts have been deployed successfully:

    Get deployment status
    # as $OMS_USER in $OMS_HOME setup environment
    . bin/set_env.sh
     
    # get deployment status
    jboss-cli.sh -u=$JBOSS_ADMIN_USER -p=$JBOSS_ADMIN_PASSWD -c --controller=$JBOSS_BIND_ADDRESS:$JBOSS_MGMT_PORT "deployment-info" 

    Example output for a standalone server:

    Console output
    NAME                                             RUNTIME-NAME                                     PERSISTENT ENABLED STATUS 
    postgresql-jdbc4                                 postgresql-jdbc4                                 true       true    OK     
    bakery.base-app-2.16.0.0-SNAPSHOT.ear            bakery.base-app-2.16.0.0-SNAPSHOT.ear            true       true    OK     
    process-app-2.16.0.0-SNAPSHOT.ear                process-app-2.16.0.0-SNAPSHOT.ear                true       true    OK     
    bakery.control-app-2.16.0.0-SNAPSHOT.war         bakery.control-app-2.16.0.0-SNAPSHOT.war         true       true    OK     
    bakery.impex-app-2.16.0.0-SNAPSHOT.war           bakery.impex-app-2.16.0.0-SNAPSHOT.war           true       true    OK     
    bakery.communication-app-2.16.0.0-SNAPSHOT.ear   bakery.communication-app-2.16.0.0-SNAPSHOT.ear   true       true    OK     
    bakery.omt-app-2.16.0.0-SNAPSHOT.war             bakery.omt-app-2.16.0.0-SNAPSHOT.war             true       true    OK     
    oms.rest.communication-app-2.16.0.0-SNAPSHOT.war oms.rest.communication-app-2.16.0.0-SNAPSHOT.war true       true    OK     
    oms.monitoring-app-2.16.0.0-SNAPSHOT.war         oms.monitoring-app-2.16.0.0-SNAPSHOT.war         true       true    OK     
    gdpr-app-2.16.0.0-SNAPSHOT.war                   gdpr-app-2.16.0.0-SNAPSHOT.war                   true       true    OK     
    rma-app-2.16.0.0-SNAPSHOT.war                    rma-app-2.16.0.0-SNAPSHOT.war                    true       true    OK
    transmission-app-2.16.0.0-SNAPSHOT.war           transmission-app-2.16.0.0-SNAPSHOT.war           true       true    OK
    order-state-app-2.16.0.0-SNAPSHOT.war            order-state-app-2.16.0.0-SNAPSHOT.war            true       true    OK
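    In a longer listing it is easy to miss a failed artifact, so the deployment-info output can be scanned for rows whose STATUS column is not OK. A sketch (fed from an inline sample here; in practice, pipe the output of the jboss-cli.sh call above into awk instead):

```shell
# Scan a deployment-info listing for artifacts whose STATUS is not OK
# (sample data inlined via here-doc; pipe real jboss-cli.sh output instead)
failed=$(awk 'NR > 1 && $NF != "OK" { print $1 }' <<'EOF'
NAME             RUNTIME-NAME     PERSISTENT ENABLED STATUS
postgresql-jdbc4 postgresql-jdbc4 true       true    OK
broken-app.war   broken-app.war   true       false   FAILED
EOF
)

if [ -n "$failed" ]; then
    echo "not deployed: $failed"
else
    echo "all artifacts OK"
fi
```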
    

5.7 Install IOM Watchdog as a System Service

If you operate IOM as a highly available system, the WildFly application server must not run directly as a system service. Instead, the IOM Watchdog has to run as a system service, starting and stopping the WildFly application server depending on health checks performed on the application server, see Guide - Intershop Order Management - Technical Overview.

Since different IOM application servers may run on a single machine, the service name for every server has to be unique. 

The IOM release provides a systemd-unit template to install IOM Watchdog as a service. The expand_template.sh script fills the template from the environment, i.e., from the information stored in installation.properties. Before expanding the template, make sure you have updated installation.properties; at least the variable OMS_HOME needs to be up to date.

  1. Edit installation.properties.
    OMS_HOME has to be up to date.
  2. Expand the systemd unit template with the current configuration:

    Expand system-unit template
    # as $OMS_USER setup environment
    . bin/set_env.sh
    
    # expand systemd-unit template
    expand_template.sh < $OMS_ETC/oms-watchdog.service.template > /tmp/jboss-as-$OMS_SERVER_TYPE.service
  3. Install IOM Watchdog as a service.

    Install IOM Watchdog as a service
    # fill the variable according to the settings in installation.properties
    OMS_SERVER_TYPE=<server type of IOM>
     
    # as root copy expanded template
    cp /tmp/jboss-as-$OMS_SERVER_TYPE.service /etc/systemd/system
    
    # enable service
    systemctl enable jboss-as-$OMS_SERVER_TYPE
    
    # start service
    systemctl start jboss-as-$OMS_SERVER_TYPE

5.8 Configure Rotation Logs

The undertow subsystem is configured to write access logs to $OMS_LOG. It only provides daily log rotation. If you want to feed access logs to ICI (Intershop Commerce Insight) for performance analysis, you need hourly rotation. To overcome this limitation of the undertow rotation feature, OMS provides a simple shell script to rotate the logs: bin/logrotate.sh.

If you use the default configuration of watchdog.properties and want watchdog.log rotated in the same way as access_log.log, add watchdog.log to the logrotate.sh call, too.

This script has to be executed at the beginning of every hour by adding the following line to the crontab of $OMS_USER:

crontab
0 * * * *  . $OMS_HOME/bin/set_env.sh && logrotate.sh $OMS_LOG/access_log.log $OMS_LOG/watchdog.log
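The effect of an hourly rotation can be sketched as follows. This is only an illustration of the idea on a throwaway file; the actual logrotate.sh ships with IOM and may differ in naming and details:

```shell
# Minimal sketch of what an hourly rotation does (illustration only;
# the real logrotate.sh ships with IOM and may name files differently)
LOGDIR=$(mktemp -d)
LOG=$LOGDIR/access_log.log
echo "hit" > "$LOG"

# move the current file aside under an hourly timestamp, start a fresh one
STAMP=$(date +%Y-%m-%d-%H)
mv "$LOG" "$LOG.$STAMP"
: > "$LOG"
```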


Disclaimer

The information provided in the Knowledge Base may not be applicable to all systems and situations. Intershop Communications will not be liable to any party for any direct or indirect damages resulting from the use of the Customer Support section of the Intershop Corporate Web site, including, without limitation, any lost profits, business interruption, loss of programs or other data on your information handling system.
