Guide - Setup Intershop Order Management 2.15

1 Introduction

This guide is addressed to administrators who want to install IOM 2.15 in their Linux-based infrastructure. It enables them to understand which artifacts are contained in the IOM 2.15 delivery and how they are installed, configured, and deployed.

The document describes how to install IOM 2.15.

For a technical overview of typical installation scenarios, please see the references.

1.1 Glossary

CLI: Command Line Interface, a tooling for WildFly management
FTP: File Transfer Protocol
HA: High Availability
ICI: Intershop Commerce Insight, Intershop's reporting and analytics solution
ICM: Intershop Commerce Management
IOM: Intershop Order Management
JBoss: Synonym for WildFly (former name of the WildFly application server)
JDBC: Java Database Connectivity
JDK: Java Development Kit
OLTP: Online Transaction Processing
OMS: Order Management System, the technical name of IOM
OS: Operating System
URL: Uniform Resource Locator
WildFly: The application server that IOM runs on

1.2 References

1.3 Additional References

2 Prerequisites

2.1 Java Development Kit

The WildFly application server hosting and running IOM requires an installed Java Development Kit (JDK), version 11 or higher.

The JAVA_HOME global environment variable has to point to the installation directory of the JDK.


JAVA_HOME is described in more detail in the installation properties (see Definition of Properties). PATH is set up automatically by the IOM environment scripts.
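Since the guide assumes JAVA_HOME points at a JDK of at least version 11, a small sanity check can help before continuing. This is only a sketch; the quoted version format of the "java -version" banner varies by vendor and is an assumption here:

```shell
# Sketch: extract the major version from a "java -version" banner line.
# The quoted version format ("11.0.2", "1.8.0_202") is an assumption.
parse_major() {
  printf '%s\n' "$1" | sed -n 's/.*version "\([0-9][0-9]*\)[."].*/\1/p'
}

# Usage against a live JDK (assumes JAVA_HOME is already set):
#   banner="$("$JAVA_HOME/bin/java" -version 2>&1 | head -n 1)"
#   [ "$(parse_major "$banner")" -ge 11 ] || echo "JDK 11+ required" >&2
```

Note that pre-9 JDKs report themselves as "1.x", so the parsed major version of a JDK 8 is 1, which also fails the check as intended.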

2.2 Mail Server

IOM requires an existing mail server that processes internal and external e-mails sent from IOM via the SMTP protocol.

The mail server's host and port also need to be known for the later configuration.

2.3 PostgreSQL Server

IOM requires a PostgreSQL database hosted by a PostgreSQL database server, which can reside on its own host.

To make the database server fit for IOM, certain configuration steps in a standard installation are necessary. For setup and initialization steps please refer to section Database Setup and Initialization.

2.3.1 Support

Intershop does not offer PostgreSQL support beyond general recommendations for its use as the relational database for IOM.

A list of companies offering professional Postgres support can be found at PostgreSQL: Professional Services.

The PostgreSQL community also runs some excellent mailing lists, pgsql-general being the most active; see PostgreSQL: PostgreSQL Mailing List Archives.

2.3.2 Operating System

IOM only supports PostgreSQL servers hosted on a Linux-based OS.

2.3.3 Version

We recommend PostgreSQL 11. IOM 2.15 is also compatible with PostgreSQL 9.5+.

3 Definition of Properties

The IOM uses four major .properties files which are explained below.

Installation Properties

The local environment of IOM is defined in a properties file located in $OMS_ETC/. It defines shell variables, which are read by a script in $OMS_HOME/bin/ to provide the environment for all scripts and programs belonging to the IOM system.

This script provides the properties as simple shell variables. Additionally, it adds some exported variables, e.g., PATH, and some variables required by 3rd-party programs (e.g., the content of JBOSS_JAVA_OPTS is exported as JAVA_OPTS).

The following variables are defined (default or exemplary values are given where known):

OMS_USER: The OS user that installs and runs IOM. Default: oms

OMS_HOME: The base location of the extracted IOM release package. The default value makes it easy to run a frontend and a backend server in parallel. $OMS_HOME/bin is added to PATH.

OMS_HOME is exported by

OMS_HOME is passed to WildFly and can be accessed there as ${installation.OMS_HOME}.


OMS_ETC: Implicitly set to the directory where the installation properties file is located.

OMS_ETC is exported, but is not listed within the properties file itself.


OMS_VAR: The location of operational data files for IOM.

OMS_VAR is passed to WildFly and can be accessed there as ${installation.OMS_VAR}.

OMS_VAR is exported by


OMS_SHARE: The location of shared data files of IOM.

OMS_SHARE is passed to WildFly and can be accessed there as ${installation.OMS_SHARE}.

OMS_SHARE is exported by


OMS_LOG: The location of logs written by WildFly, IOM, and scripts.

OMS_LOG is passed to WildFly and can be accessed there as ${installation.OMS_LOG}.

OMS_LOG is exported by

OMS_APP: The location of IOM artifacts deployable into the application server. A list of directories can be passed here; entries have to be separated by a colon ":". Default: $OMS_HOME/application:$OMS_VAR/customization

SERVER_ID: Identifier of the current IOM application server. Must not be empty. It has to be unique for every application server of the IOM cluster.

SERVER_ID is used for the following purposes:

  • Added to log entries to identify the application server that wrote the entry
  • Added to the log file name to identify the application server that created the log file
  • Used to identify cache instances
  • Appended to session IDs to enable sticky sessions and session failover
  • Used to identify servers in the cluster status
  • Used to identify servers when processing a failover from an active to a standby server

If left empty, an error is raised.

SERVER_ID is exported by

SERVER_ID is passed to WildFly and can be accessed there as ${installation.SERVER_ID}. Additionally, it is used to initialize ${}.


JAVA_HOME: The location of the JDK that WildFly uses to run. $JAVA_HOME/bin is added to PATH.

JAVA_HOME is exported by


JBOSS_HOME: The installation location of the WildFly application server that IOM uses. Every instance of IOM requires its own WildFly installation. Intershop recommends following the naming pattern of OMS_HOME for WildFly in order to easily run frontend and backend servers in parallel on a single machine. $JBOSS_HOME/bin is added to PATH.

JBOSS_HOME is exported by


JBOSS_BIND_ADDRESS: Bind address to be used for the management and public interfaces.


Change the IP if you do not want to bind JBoss on all interfaces.

Bind address to be used by WildFly's JGroups subsystem, needed for cluster communication; see Guide - Intershop Order Management - Technical Overview. The IP address used for cluster communication must be on a private interface.


You need to change the default value if you want to set up a cluster of IOM server nodes.

When running more than one server on the same machine with the same bind address, the listening ports of the servers have to differ.

To do so, JBOSS_PORT_OFFSET has to be set on one server to increase all port numbers by the defined offset. The environment additionally provides the variable JBOSS_MGMT_PORT (not exported), which is set depending on the value of JBOSS_PORT_OFFSET.
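As an illustration of the offset mechanics (the base port 9990 is WildFly's default management port; the helper function is hypothetical and not part of the IOM scripts):

```shell
# Sketch: derive the management port from a given port offset,
# assuming WildFly's default management port 9990.
mgmt_port() {
  offset=${1:-0}
  echo $((9990 + offset))
}

# A second server started with JBOSS_PORT_OFFSET=100 would then
# answer management requests on port 10090 instead of 9990.
```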


JBOSS_JAVA_OPTS: These Java options are used when the WildFly application server is started. They can be used to configure memory usage, garbage collection, etc. $JBOSS_JAVA_OPTS is appended to the predefined JAVA_OPTS, which is then exported.

Default: -Xms512M -Xmx2048M

JBOSS_ADMIN_USER: The name of the IOM WildFly user that will be created to manage the application server. Used to configure WildFly for IOM and for deployments of IOM artifacts. Default: omsadmin
JBOSS_ADMIN_PASSWD: The password for the IOM WildFly user that is used to manage the application server. Please change the value. Default: not_yet_a_secret

WATCHDOG_JAVA_OPTS: These Java options are applied to the Java-based Watchdog program.

WATCHDOG_JAVA_OPTS is not exported.

Cluster Properties

Cluster properties are WildFly system properties, which define the machine independent configuration of an IOM cluster.

These properties are located in $OMS_ETC/. PostgreSQL-related properties are additionally read and exported as environment variables.

Adjust the properties to the real values used by your OMS cluster. For example, you have to enter the access information for the database so that the IOM application server can access it.

is.oms.db.hostlist: A comma-separated list of database servers. Each server entry consists of hostname and port, separated by a colon. Setting the port is optional; if not set, the standard port 5432 will be used (see Guide - Intershop Order Management - Technical Overview).

The first hostname in the list is exported as PGHOST, the port of the first entry as PGPORT.
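The described derivation of PGHOST/PGPORT from the first hostlist entry can be sketched in shell as follows (the helper functions are illustrative, not the actual IOM scripts):

```shell
# Sketch: take the first entry of a hostlist like "dbhost1:5433,dbhost2"
# and split it into host and port; the port defaults to 5432 if omitted.
first_host() {
  echo "${1%%,*}" | cut -d: -f1
}
first_port() {
  entry=${1%%,*}
  case $entry in
    *:*) echo "${entry##*:}" ;;
    *)   echo 5432 ;;
  esac
}

# Example:
#   PGHOST=$(first_host "dbhost1:5433,dbhost2")   # dbhost1
#   PGPORT=$(first_port "dbhost1:5433,dbhost2")   # 5433
```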


The database name to connect to at the PostgreSQL server.

Exported as PGDATABASE.


is.oms.db.user: The PostgreSQL user name to connect as.

Exported as PGUSER.


is.oms.db.pass: The password to be used when connecting to the PostgreSQL server.

Exported as PGPASSWORD.

is.oms.db.cache: Enables/disables the database cache. Only the values enabled and disabled are allowed. A production system should always enable the use of the DB cache.
is.oms.xmlbinder.cache: Use caching for the JAXB context while un-/marshaling or validating XML files. Only the values enabled and disabled are allowed. A production system should always enable the use of the JAXB context cache.


is.oms.smtp.host: The host of the mail server IOM uses to send mail. Default: localhost
is.oms.smtp.port: The port of the mail server IOM uses to send mail. Default: 25

OPTIONAL: The user name for mail server authentication

OPTIONAL: The user password for mail server authentication

is.oms.mail.external.from: The sender address for external mails (e.g., mails sent to the shop customers)
is.oms.mail.internal.from: The sender address for internal mails (e.g., to report errors via mail)
is.oms.mail.internal.to: The recipient for internal mails
is.oms.mail.internal.cc: The carbon copy recipient for internal mails
is.oms.mail.internal.bcc: The blind carbon copy recipient for internal mails
is.oms.mail.businessoperations.to: The recipient for business operations mails. Exemplary value: businessoperations@youraddress.com

The base path for e-mail resources that are loaded from the e-mail client, e.g., images or stylesheets. See also Concept - IOM Customer E-Mails.


The base path of the file system where IOM reads and writes its operational data. The default value references the value defined in the installation properties.
You must not change the value here, in order to keep the configuration consistent.



The publicly accessible base URL of IOM, which could be the DNS name of the load balancer, etc.

For ICM it is used by the IOM connector, e.g., for the return label service.

is.oms.validation.pattern.phone: Validation pattern for phone numbers. If not set, the default value will be used. Default: (^$)|(^[+]?[0-9. ()/-]{8,25}$)

Validation pattern for e-mail addresses. If not set, the default value will be used.


The character '\' in the regular expression requires escaping (\ => \\). Otherwise, the property would not be set correctly.

Desired expression


requires the following escaped expression



Validation pattern for passwords. If not set, the default value will be used.


The character '\' in the regular expression requires escaping (\ => \\). Otherwise, the property would not be set correctly.

Desired expression


requires the following escaped expression


is.oms.validation.pattern.password.hint: The displayed note where you can explain the password rules for OMT users; it can be customized.
If not set, the default value will be used:
"The password must include a letter, a number and must contain at least 8 characters."


Enable/disable the health check. It is always activated, except when this parameter is set to "false".




Health checks are now performed using a Java timer, no longer from REST requests.

Maximum age in seconds for which a health check result found in the cache is considered valid.
After that, the server status will be returned as 503, assuming that the health check timer has stopped or is hanging.

  • Default: 11 seconds
  • Minimum: is.oms.healthcheck.recurringtime + 6




Health check recurring interval in seconds.

  • Default: 5 seconds
  • Minimum: 1 second

When using the Watchdog, this value should be less than the property watchdog.cycle.


Enable/disable health check for shared file-system.

Checks the ability to write/delete files in directory $OMS_SHARE/.healthcheck. This special directory inside $OMS_SHARE was chosen to have a real indicator for the shared file-system. If you set up your system manually, you have to create the .healthcheck directory manually inside $OMS_SHARE.

If you do not set up a clustered IOM (single IOM node without shared filesystem), you have to disable this health check.
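For a manual setup, the creation of the .healthcheck directory and a quick write/delete probe can be sketched like this (the probe file name is arbitrary; OMS_SHARE is assumed to be set by the IOM environment, with a temporary directory used only as a fallback for this sketch):

```shell
# Sketch: create the .healthcheck directory inside $OMS_SHARE and
# verify that files can be written to and deleted from it.
OMS_SHARE=${OMS_SHARE:-$(mktemp -d)}   # fallback only for this sketch
mkdir -p "$OMS_SHARE/.healthcheck"
probe="$OMS_SHARE/.healthcheck/probe.$$"
touch "$probe" && rm -f "$probe" && echo "shared filesystem is writable"
```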


The shared secret for JSON Web Token (JWT) creation/validation. JWTs are generated with the HMAC algorithm (HS256).


Intershop strongly recommends changing the default shared secret used for JSON Web Token creation/validation in the cluster properties.

To secure the JWT, a key of the same size as the hash output or larger must be used with the JWS HMAC SHA-2 algorithms (i.e., 256 bits for "HS256"); see JSON Web Algorithms (JWA) | 3.2. HMAC with SHA-2 Functions.
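One hedged way to generate a sufficiently long secret is the openssl CLI (assuming it is installed); 32 random bytes correspond to the 256 bits required for HS256:

```shell
# Sketch: generate a random 256-bit (32-byte) secret, base64-encoded,
# suitable as a JWT shared secret for HS256.
secret=$(openssl rand -base64 32)
echo "$secret"
```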


Deployment Properties

Deployment properties define which artifacts of the IOM should be deployed to the WildFly application server.

The properties are located in $OMS_ETC/. The order of entries within this .properties file is important, as it reflects the order of deployments.

The table below shows the entries of this file.

Cluster server

System Properties

System properties are defined in $OMS_ETC/. This file contains WildFly-specific configuration settings. Mostly there is no need to adapt any properties defined in this file, with one exception: the webservices subsystem of WildFly is responsible for the delivery of WSDL files. For the correct creation of links within WSDL files, a proper configuration of this subsystem is required.

Adjust the following properties to get properly working WSDL requests.



/subsystem=webservices:wsdl-host: Hostname to be used for links within WSDL responses. The client has to be able to follow these links, hence the hostname configured here has to be the publicly visible hostname of your IOM system.

/subsystem=webservices:wsdl-port: Port number to be used for http links within WSDL responses. The client has to be able to follow these links, hence the port configured here has to be the publicly visible http port of your IOM system.

/subsystem=webservices:wsdl-secure-port: Port number to be used for https links within WSDL responses. The client has to be able to follow these links, hence the port configured here has to be the publicly visible https port of your IOM system. Default: "8443"

/subsystem=webservices:wsdl-uri-scheme: URI scheme to be used for links within WSDL responses. The client has to be able to follow these links, hence the URI scheme configured here has to be the publicly available scheme of your IOM system. Default: "http"

4 Database Setup and Initialization

Only for new installations

Database setup and initialization is only required for a new IOM installation. If you migrate the application from an older version, you can skip this section.


Define a dedicated OS user for the PostgreSQL service (referred to as the "postgres OS user" in the following descriptions).

Cluster Initialization

Data Directory

The PostgreSQL data directory contains all of the data files for the database. The variable PGDATA is used to reference this directory. It must be prepared prior to the initialization and belongs exclusively to the Postgres OS user.

You should not use this directory for private data, nor add symbolic links to it. However, you will probably want a dedicated file system built on battery-backed RAID. Just make sure not to use its root folder as the data directory, and keep the major version within the path.

This facilitates maintenance and Postgres major upgrades, e.g., /iomdata/pg_11/data.

This directory must belong to the Postgres OS user.

Example to prepare the PGDATA directory
# as root
mkdir -p /iomdata/pg_11/data
chown <postgres OS user>:<group> /iomdata/pg_11/data

# as the postgres OS user
# add PGDATA=/iomdata/pg_11/data to the user environment

Initialization (initdb)

The initdb step of the standard installation process needs special consideration to work for one of IOM's databases. Initdb creates a new PostgreSQL database cluster and its superuser, see PostgreSQL 11 | initdb.

It must be called as the Postgres OS user.

There are a few options to choose from during a Postgres initialization for IOM:

  1. Consider using data-checksums (PostgreSQL 11 | initdb | --data-checksums).
  2. Make sure to use UTF8 encoding. Depending on your operating system, you may need to replace the string "UTF8" with "UTF-8" (in all places).


    No change of encoding: This parameter cannot be changed after the cluster initialization.

    The command to perform initdb may change according to the OS, the Postgres version, and the way you installed it.
    For YUM installations refer to YUM_Installation.


    # Postgres 11, YUM installation on Red Hat 7
    # for more info type /usr/pgsql-11/bin/postgresql-11-setup --help 
    # as root
    export PGSETUP_INITDB_OPTIONS="--encoding=UTF8 --locale=en_US.UTF-8 --data-checksums -U postgres -W"
    /usr/pgsql-11/bin/postgresql-11-setup initdb postgresql-11
    # without YUM, as root
    ...../pgsql-11/bin/initdb --encoding UTF8 --locale=en_US.UTF-8 --data-checksums -D /iomdata/pg_11/data -U postgres -W

Further Recommendations

Client Authentication

The access permissions must be defined in $PGDATA/pg_hba.conf.

Use md5 as auth-method to prevent passwords from being sent in clear text across the connection.

You cannot use ident for TCP/IP connections; otherwise, the JDBC driver connection from IOM to the database will not work. See PostgreSQL 11 | Chapter 20. Client Authentication for details.
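For illustration, an md5-based pg_hba.conf entry could look as follows. The database name, user, and network range are placeholders for your actual values:

```
# TYPE  DATABASE    USER        ADDRESS         METHOD
host    oms_db      oms_user    10.0.0.0/24     md5
```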

Database Server Configuration

The ideal configuration depends mainly on the server resources and on the activity. Therefore we can only give a general guideline. The configuration ranges indicated below may not be applicable in all cases, especially on small systems. These values are intended for a mid-size system with about 32 GB RAM and 24 cores.

To achieve the best performance, almost all of the data (tables and indexes) required for the ongoing workload should fit within the file system cache. Monitoring the I/O activity will help to identify insufficient memory resources.

IOM is built with Hibernate as the API between the application logic and the database. This results mainly in strong OLTP activity with a large number of tiny SQL statements. Larger statements occur during import/export jobs and for some OMT search requests.

The following main parameters in $PGDATA/postgresql.conf should be adapted, see PostgreSQL 11 | Chapter 19. Server Configuration.

You can consider PGConfig 2.0 as a guideline (using the OLTP Model).

Some aspects of data reliability are discussed here PostgreSQL 11 | Chapter 30. Reliability and the Write-Ahead Log. Understanding VACUUM is also essential when configuring/monitoring Postgres, see PostgreSQL 11 | Chapter 24. Routine Database Maintenance Tasks.


max_connections: The number of concurrent connections from the application is controlled by the xa-datasource configuration in WildFly.
Some connections will take place outside this pool, mainly for job tasks like import/export. Make sure that max_connections is set higher here than the pool size in WildFly.
Also note that highly concurrent connections negatively impact performance. It is more efficient to queue requests than to process them all in parallel.


Required for IOM installations. Set its value to about 150% of max_connections.
shared_buffers: Between 1/4 and 1/3 of the total RAM, but not more than about 8 GB; otherwise, the cache management will use too many resources. The remaining RAM is more valuable as file system cache.
work_mem: Higher work_mem can increase performance significantly. The default is way too low. Consider using 100-400 MB.
maintenance_work_mem: Increase the default similarly to work_mem to favor quicker vacuums. With IOM this parameter will be used almost exclusively for this task (unless you also set autovacuum_work_mem). Consider something like 2% of your total RAM per autovacuum_max_workers, e.g., 32 GB RAM * 2% * 3 workers = 2 GB.
vacuum_cost_*: The feature can stay disabled at the beginning. You should keep an eye on the vacuum activity under high load.
wal_level: Depends on your backup, recovery, and failover strategy; should be at least archive.
wal_sync_method: Depends on your platform, check PostgreSQL 11 | 19.5. Write Ahead Log | wal_sync_method (enum).


8 (small system) - 128 (large system)
(since Postgres 9.6)
checkpoint_completion_target: Use 0.8 or 0.9.
archive_* and REPLICATION: Depends on your backup & failover strategy.
random_page_cost: The default (4) is usually too high. Better choose 2.5 or 3.
effective_cache_size: Indicates the expected size of the file system cache. On a dedicated server it should be about total_RAM - shared_buffers - 1 GB.

Set it between 1 and 5 seconds to help track long-running queries.

log_filename: Better use an explicit name to help when communicating, e.g., pg-IOM_host_port-%Y%m%d_%H%M.log.
log_rotation_age: Set it to 60 min or less.
log_line_prefix: Better use a more verbose format than the default, e.g., %m|%a|%c|%p|%u|%h|.

Activate it (=on).

stats_temp_directory: Better redirect it to a RAM disk.
log_autovacuum_min_duration: Set it to a few seconds to monitor the vacuum activity.
(since Postgres 9.6) Set it to a large value, e.g., 9 hours, to clean up possible leftover sessions. An equivalent parameter exists for the WildFly connection pool, where it is set to 3 hours by default.
timezone: Must match the timezone of the application servers, e.g., Europe/Berlin.
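A few of the sizing rules from the table can be expressed as simple shell arithmetic (values in GB; the 1/4-of-RAM cap at about 8 GB and the total_RAM - shared_buffers - 1 GB rule come from the table above, while the helper functions themselves are only illustrative):

```shell
# Sketch: derive shared_buffers and effective_cache_size from total RAM.
shared_buffers_gb() {
  total=$1
  sb=$((total / 4))                   # about 1/4 of total RAM ...
  if [ "$sb" -gt 8 ]; then sb=8; fi   # ... but not more than about 8 GB
  echo "$sb"
}
effective_cache_gb() {
  total=$1
  echo $((total - $(shared_buffers_gb "$total") - 1))
}

# Example for the 32 GB reference system:
#   shared_buffers_gb 32     -> 8
#   effective_cache_gb 32    -> 23
```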

Create User and Database

The following steps describe the setup of a new database.

To perform the next steps, you need to be able to use the psql command and to access the database server as its superuser (usually postgres).

su - <postgres OS user>
# set variables
IS_OMS_DB_HOST=<first host from is.oms.db.hostlist>
IS_OMS_DB_PORT=<port of first host from is.oms.db.hostlist>
# connect to the database server as the super user
psql -U postgres -h $IS_OMS_DB_HOST -p $IS_OMS_DB_PORT -d postgres


All subsequent statements have to be performed within the psql console opened at the step before.

Create User

IOM connects with its own dedicated database user.

  1. Create user:

    Create user
    -- create user 
    CREATE USER "<value of your is.oms.db.user>" PASSWORD '<value of your is.oms.db.pass>';

Remove the Public Schema

IOM does not make use of the public schema.
It is better to delete it (if it exists), as it is a potential target for various attack attempts.

  1. Drop public schema

    Drop public schema
    DROP SCHEMA IF EXISTS public;

Dedicated Tablespace (Optional)

The database initialization dump does not expect a given tablespace; all objects will be placed in the user's default tablespace. If your data directory is located on an adequate file system, you can keep the default Postgres tablespace, which is located in $PGDATA/base. If you want to define a dedicated tablespace (i.e., on a dedicated file system), you should:

  1. Set it as the default for the user and for the database prior to using the provided initialization dump (provided the tablespace has been created):

    Default tablespace
    -- set default table space
    ALTER USER "<value of your is.oms.db.user>" SET default_tablespace = 'tblspc_iom';

Also see Postgres tablespaces: PostgreSQL: Documentation: 11: CREATE TABLESPACE
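If a dedicated tablespace is wanted, it has to be created before the ALTER statement above can reference it. A sketch, using the tablespace name from the example above; the directory is a placeholder that must already exist and belong to the postgres OS user:

```sql
-- run in the psql console as the superuser, before the ALTER USER above
CREATE TABLESPACE tblspc_iom
  OWNER "<value of your is.oms.db.user>"
  LOCATION '/iomdata/pg_11/tblspc';
```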

Create Database

IOM uses its own dedicated database.

  1. Create the database:

    Create database
    -- create database
    CREATE DATABASE "<value of your>"
      WITH OWNER = "<value of your is.oms.db.user>"
       ENCODING = 'UTF8'         
       TABLESPACE = pg_default   -- or your dedicated tablespace 
       LC_COLLATE = 'en_US.UTF-8' 
       LC_CTYPE = 'en_US.UTF-8'   
       CONNECTION LIMIT = -1; 
  2. Set database search_path:

    Set search path
    -- set search_path
    ALTER DATABASE "<value of your>" SET search_path = customer, oms, omt, product, system, admin;
  3. Exit the psql console:

    Exit console
    -- exit console
    postgres=# \q

5 Installation of IOM

Preparation of Operating System and File System

Create OS User and Group on the Host

Intershop strongly recommends creating a dedicated group and user for the IOM installation on the host system. It is recommended to place the user's home directory outside the IOM installation. The following script uses the same value for the user name and the group name:

Create user credentials
# as root set variables
# default: oms
# use "oms" if you do not want to customize the installation layout
OMS_USER=<name of user to own and run IOM>
# use "/home/$OMS_USER" as home-directory
OMS_USER_HOME=<home directory of OMS user>
# add group
groupadd $OMS_USER

# add user
# users home is created at default location
useradd -g $OMS_USER -d $OMS_USER_HOME -m $OMS_USER

# set password
passwd $OMS_USER

Create OMS Home Directory

  1. Log in as root.

  2. Create directory $OMS_HOME, set owner and group.

    Create $OMS_HOME, set owner and group
    # as root
    # use "/opt/$OMS_USER.$OMS_SERVER_TYPE" if you do not want to customize the installation layout
    OMS_HOME=<installation directory of IOM>
    mkdir -p $OMS_HOME

Extract the Release Package

  1. Log in as $OMS_USER. 
  2. Place and extract the IOM release package in $OMS_HOME.

    Extract package
    # as $OMS_USER at directory $OMS_HOME
    # extract the IOM release package
    tar -xvzf IOM-

    This creates the main directories: etc, var, bin, lib, application, etc.

Edit Installation Properties

For the following tasks, the local environment of IOM has to be defined in $OMS_HOME/etc/. See section Definition of Properties for more information.

Distribute the Release Package Contents

According to the Linux File System Hierarchy Standard, the OMS installation is distributed across different file systems. The variable $OMS_VAR and the command line switch --etc define the directories where the parts of OMS are placed. There is no need to change the default values unless you want to adapt the installation layout to your own needs.

  1. Edit etc/ as $OMS_USER if you want to adapt the installation layout to your own needs.
    You do not need to make any changes at the moment, if you want to use the default layout.
    1. Change OMS_USER to the name of the user, who should own and run IOM.
    2. Change OMS_HOME to the name of the directory, where the IOM software has been placed.
    3. Change OMS_VAR to the name of the directory, where to place var-data of IOM.
    4. Change OMS_SHARE to the name of the directory, where to place shared data of IOM.
    5. Change OMS_LOG to the name of the directory, where to place log data of IOM.
  2. Prepare the directories $OMS_SHARE, $OMS_VAR, $OMS_LOG and $ETC_TARGET before distributing OMS to these locations. Set the placeholders to the values defined in the installation properties.

  3. As this manual describes the setup process of a single IOM machine, $OMS_SHARE does not have to be mounted on a real shared file system. For this kind of installation, a local directory is fully sufficient. In this case, the health check of the shared file system must be disabled (see the description of the property is.oms.sharedfs.healthcheck above).

    Setup user and directories
    # as root at $OMS_HOME
    # read
    . etc/
    # use "/etc/opt/$OMS_USER.$OMS_SERVER_TYPE" if you do not want to customize the installation layout
    ETC_TARGET=<directory, where to place etc-data>
    # prepare the directories 
    mkdir -p "$ETC_TARGET" 
    mkdir -p "$OMS_VAR" 
    chown $OMS_USER:$OMS_USER "$OMS_VAR"
    mkdir -p "$OMS_LOG"
    chown $OMS_USER:$OMS_USER "$OMS_LOG"
    mkdir -p "$OMS_SHARE" 
  4. Distribute the release package parts by running the script.

    # as $OMS_USER at $OMS_HOME
    # set variable
    # use "/etc/opt/$OMS_USER.$OMS_SERVER_TYPE" if you do not want to customize the installation layout
    ETC_TARGET=<directory, where to place etc-data>
    # copy etc- and var-data to configured locations
    bin/ --etc="$ETC_TARGET"

    This will copy the relevant parts to the target directory defined by the --etc=... command line parameter, and to $OMS_VAR as defined in the installation properties. It also creates symlinks from $OMS_HOME/etc -> $ETC_TARGET as well as from $OMS_HOME/var -> $OMS_VAR.

  5. Afterward, check the newly created symlinks in the $OMS_HOME directory and delete the temporary directory ETC.VAR.SRC.*, which contains backups of the etc and var directories.

Initialize Database with Dump

Only for new installations

If you migrate the application from an older version, continue with the section Database Migration.

Import the initial dump for IOM that will contain the basic necessary database configuration. This dump can be found in the IOM delivery package at $OMS_HOME/postgres/dumps.

  1. Edit etc/ as $OMS_USER if you do not want to use the default PostgreSQL configuration. If you want to use a PostgreSQL database at localhost, which was created with the default configuration, you do not need to make any changes at the moment.

    1. Change /system-property=is.oms.db.hostlist to the hostname or IP and port of your PostgreSQL database.
    2. Change / to the name of your database dedicated to IOM.
    3. Change /system-property=is.oms.db.user to the name of the user, which should be used to connect to the PostgreSQL database.
    4. Change /system-property=is.oms.db.pass to the password of the user, who connects to the PostgreSQL database.
  2. The import is done using the following commands:

    Install database dump
    # as $OMS_USER at $OMS_HOME
    # setup environment
    . bin/
    # unzip and install initial data dump
    gunzip -c postgres/dumps/OmsDB.initial. | psql -U $PGUSER -h $PGHOST -p $PGPORT -d $PGDATABASE

Database Migration

Refer to Guide - IOM Database Migration (2.0 - 2.17) if you have to migrate the database.

This is the case when migrating from an older version of IOM or if the IOM version you are installing has a higher version number than the dump you have installed before.

Install WildFly

Download and Place the WildFly Installation Package

  1. Get the latest version of WildFly 17.0 from the WildFly download page (i.e., 17.0.0.Final) as a TGZ archive and place it into /tmp.
  2. Create directory $JBOSS_HOME, set owner and group.

    Create JBOSS-HOME directory
    # as root do
    JBOSS_HOME=<set variable according the settings in>
    OMS_USER=<name of user to own and run IOM>
    mkdir -p $JBOSS_HOME
  3. As $OMS_USER unpack the downloaded archive into $JBOSS_HOME:

    Extract WildFly appserver
    # as $OMS_USER at $OMS_HOME
    # setup environment
    . bin/
    # set variable
    # extract wildfly package to $JBOSS_HOME
    cd $JBOSS_HOME
    tar -xzf /tmp/wildfly-$WILDFLY_VERSION.tar.gz
    ( cd wildfly-$WILDFLY_VERSION; mv * .[a-zA-Z0-9]* .. )
    rmdir wildfly-$WILDFLY_VERSION


If you install WildFly exactly this way, no further changes are required. Otherwise, you have to adapt JBOSS_HOME.

Prepare Configuration


  1. Set JBOSS_BIND_ADDRESS in the installation properties to an IP address which is reachable by the browser.
    It is also possible to keep the default value here. This value represents "any address" and covers all available network interfaces.

Install WildFly Application Server as a System Service

The IOM release provides a systemd-unit template to install WildFly as a service. The script uses the environment, i.e., the information stored in the installation properties, to fill the template. Before expanding the template, make sure you have updated these properties. At least the variable JAVA_HOME needs to be adapted.

Since different IOM application servers might run on a single machine, the service name for every server has to be unique.

  1. Edit
    1. JAVA_HOME has to be adapted to point to the directory holding the Java 11 installation.
    2. JBOSS_JAVA_OPTS can be adapted, if you want to change the default memory-configuration, garbage-collection configuration, etc.
  2. Expand the systemd-unit template with the current configuration.

    Expand system-unit template
    # as $OMS_USER setup environment
    . bin/
    # expand systemd-unit template < $OMS_ETC/jboss-as.service.template > /tmp/jboss-as-$OMS_SERVER_TYPE.service
  3. Install WildFly as a service.

    Install Wildfly as a service
    # as root set server type
    OMS_SERVER_TYPE=<server type of IOM>
    # copy expanded template
    cp /tmp/jboss-as-$OMS_SERVER_TYPE.service /etc/systemd/system
    # enable service
    systemctl enable jboss-as-$OMS_SERVER_TYPE
    # start service
    systemctl start jboss-as-$OMS_SERVER_TYPE
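The name of the template-expansion helper is not preserved in this extract. For illustration only, a minimal stand-in can be sketched in shell; it assumes the template references variables as ${VAR}, which may differ from the actual IOM template syntax:

```shell
# Hypothetical stand-in for the elided template-expansion helper.
# Substitutes ${VAR} placeholders in a template with the values of the
# named environment variables; assumes this placeholder syntax and
# values that contain no '|' characters.
expand_template() {
    file="$1"; shift
    script=""
    for var in "$@"; do
        eval "val=\$$var"
        script="$script s|\${$var}|$val|g;"
    done
    sed -e "$script" "$file"
}
```

Hypothetical usage: `expand_template $OMS_ETC/jboss-as.service.template JAVA_HOME JBOSS_HOME > /tmp/jboss-as-$OMS_SERVER_TYPE.service`.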

Prepare WildFly as IOM Application Server

Configure WildFly

For the following steps, the WildFly application server needs to be running. WildFly will be configured for usage with IOM. 

  1. Edit the configuration file.
    It is recommended to change the default password defined by JBOSS_ADMIN_PASSWD.
  2. Create the admin user and configure WildFly.

    Configure WildFly
    # as $OMS_USER in $OMS_HOME setup environment
    . bin/
    # create admin user for WildFly management
    -u $JBOSS_ADMIN_USER -p $JBOSS_ADMIN_PASSWD
    # load initial configuration of IOM
    -u=$JBOSS_ADMIN_USER -p=$JBOSS_ADMIN_PASSWD --controller=$JBOSS_BIND_ADDRESS:$JBOSS_MGMT_PORT -c --file="$OMS_ETC/initSystem.std.$OMS_SERVER_TYPE.cli"
    # set enhanced standard properties in the WildFly configuration
    cat $OMS_ETC/system.std.$ | --jboss-cli=" -u=$JBOSS_ADMIN_USER -p=$JBOSS_ADMIN_PASSWD -c --controller=$JBOSS_BIND_ADDRESS:$JBOSS_MGMT_PORT"
    # load $OMS_ETC/
    cat $OMS_ETC/ | --jboss-cli=" -u=$JBOSS_ADMIN_USER -p=$JBOSS_ADMIN_PASSWD -c --controller=$JBOSS_BIND_ADDRESS:$JBOSS_MGMT_PORT"
  3. Restart the WildFly application server.

    Restart WildFly
    # as root restart application server
    systemctl restart jboss-as-$OMS_SERVER_TYPE
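After a restart it can take a while until the management interface accepts connections again. A small helper (not part of the IOM delivery, requires bash for the /dev/tcp pseudo-device) can wait for the management port before proceeding:

```shell
# Not part of the IOM delivery: wait until host:port accepts TCP
# connections, giving up after a timeout (in seconds). Requires bash,
# which provides the /dev/tcp pseudo-device.
wait_for_port() {
    host="$1"; port="$2"; timeout="${3:-60}"; waited=0
    # probe in a subshell so the descriptor is closed again immediately
    while ! (exec 3<>"/dev/tcp/$host/$port") 2>/dev/null; do
        waited=$((waited + 1))
        [ "$waited" -ge "$timeout" ] && return 1
        sleep 1
    done
}
```

Usage: `wait_for_port "$JBOSS_BIND_ADDRESS" "$JBOSS_MGMT_PORT" 120 && echo "management port is up"`.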

Deploy IOM Artifacts

For the following steps, the WildFly application server and the PostgreSQL database need to be running. All deployment artifacts listed in $OMS_ETC/deployment.$ will be deployed into the WildFly application server.

  1. Deploy all artifacts defined by deployment.$

    Deploy IOM artifacts
    # as $OMS_USER in $OMS_HOME setup environment
    . bin/
    # deploy all artifacts defined by deployment.$
    --jboss-cli=" -u=$JBOSS_ADMIN_USER -p=$JBOSS_ADMIN_PASSWD -c --controller=$JBOSS_BIND_ADDRESS:$JBOSS_MGMT_PORT"
  2. Check whether all artifacts have been deployed successfully:

    Get deployment status
    # as $OMS_USER in $OMS_HOME setup environment
    . bin/
    # get deployment status
    -u=$JBOSS_ADMIN_USER -p=$JBOSS_ADMIN_PASSWD -c --controller=$JBOSS_BIND_ADDRESS:$JBOSS_MGMT_PORT "deployment-info"

    Example output for standalone server:

    Console output
    NAME                                             RUNTIME-NAME                                     PERSISTENT ENABLED STATUS 
    postgresql-jdbc4                                 postgresql-jdbc4                                 true       true    OK     
    bakery.base-app-            bakery.base-app-            true       true    OK     
    process-app-                process-app-                true       true    OK     
    bakery.control-app-         bakery.control-app-         true       true    OK     
    bakery.impex-app-           bakery.impex-app-           true       true    OK     
    bakery.communication-app-   bakery.communication-app-   true       true    OK     
    bakery.omt-app-             bakery.omt-app-             true       true    OK     
    oms.monitoring-app-         oms.monitoring-app-         true       true    OK     
    gdpr-app-                   gdpr-app-                   true       true    OK     
    rma-app-                    rma-app-                    true       true    OK
    transmission-app-           transmission-app-           true       true    OK
    order-state-app-            order-state-app-            true       true    OK
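With a dozen artifacts it is easy to overlook a single failed line in this table. A small awk filter (not part of the IOM delivery) can print every deployment whose ENABLED column is not true or whose STATUS is not OK:

```shell
# Not part of the IOM delivery: filter the deployment-info table and
# print each deployment that is not enabled or not OK; exits non-zero
# if any such deployment was found. Skips the header and blank lines.
check_deployments() {
    awk 'NR > 1 && NF >= 5 && ($4 != "true" || $5 != "OK") { print $1; bad = 1 }
         END { exit bad }'
}
```

Usage: pipe the deployment-info output from the previous step into `check_deployments`.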

Install IOM Watchdog as a System Service

If you operate the IOM as a highly available system, the WildFly application server must not run directly as a system service. Instead, the IOM Watchdog has to run as a system service, starting and stopping the WildFly application server depending on health checks made on the application server, see Guide - Intershop Order Management - Technical Overview.

Since different IOM application servers may run on a single machine, the service name for every server has to be unique. 

The IOM release provides a systemd-unit template to install IOM Watchdog as a service. The script uses the environment, i.e., the information stored in the configuration file, to fill the template. Before expanding the template, make sure this configuration is up to date. At least the variable OMS_HOME needs to be up to date.

  1. Edit the configuration file.
    OMS_HOME has to be up to date.
  2. Expand the systemd-unit template with the current configuration:

    Expand systemd-unit template
    # as $OMS_USER setup environment
    . bin/
    # expand systemd-unit template
    < $OMS_ETC/oms-watchdog.service.template > /tmp/jboss-as-$OMS_SERVER_TYPE.service
  3. Install IOM Watchdog as a service.

    Install IOM Watchdog as a service
    # fill the variable according to the configured settings
    OMS_SERVER_TYPE=<server type of IOM>
    # as root copy expanded template
    cp /tmp/jboss-as-$OMS_SERVER_TYPE.service /etc/systemd/system
    # enable service
    systemctl enable jboss-as-$OMS_SERVER_TYPE
    # start service
    systemctl start jboss-as-$OMS_SERVER_TYPE

Configure Log Rotation

The undertow subsystem is configured to write access logs to $OMS_LOG. Undertow itself is only able to rotate logs daily. If you want to provide access logs to ICI (Intershop Commerce Insight) for performance analysis, hourly log rotation is required. To overcome this limitation of the undertow rotation feature, OMS provides a simple shell script to rotate the logs: bin/

If you use the default configuration and want watchdog.log rotated in the same way as access_log.log, you should add watchdog.log to the log rotation, too.

This script has to be executed at the beginning of every hour by adding the following line to the crontab of $OMS_USER:

0 * * * *  . $OMS_HOME/bin/ && $OMS_LOG/access_log.log $OMS_LOG/watchdog.log
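The name of the rotation script is not preserved in this extract. As an illustration only, the assumed behavior can be sketched as follows; the actual script shipped with IOM may work differently:

```shell
# Illustrative sketch only, not the script shipped with IOM: rotate
# each given log file by renaming it with an hourly timestamp suffix,
# e.g. access_log.log -> access_log.log.2024-01-01-13. This relies on
# the writing process reopening the file afterwards.
rotate_hourly() {
    stamp=$(date +%Y-%m-%d-%H)
    for logfile in "$@"; do
        [ -s "$logfile" ] || continue   # skip missing or empty logs
        mv "$logfile" "$logfile.$stamp"
    done
}
```

From cron it would be invoked as `rotate_hourly $OMS_LOG/access_log.log $OMS_LOG/watchdog.log`.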

