This guide is addressed to administrators who want to install IOM 2.12 in their Linux-based infrastructure. It explains which artifacts are contained in the IOM 2.12 delivery and how they are installed, configured, and deployed.
The document describes how to install IOM 2.12.
For a technical overview of typical installation scenarios, please see the references.
Wording | Description |
---|---|
CLI | Command Line Interface, tooling for WildFly management |
FTP | File Transfer Protocol |
HA | High availability |
ICI | The abbreviation for Intershop Commerce Insight, Intershop's reporting and analytics solution. |
ICM | The abbreviation for Intershop Commerce Management |
IOM | The abbreviation for Intershop Order Management |
JBoss | Synonym for WildFly (former name of the WildFly application server) |
JDBC | Java Database Connectivity |
JDK | Java Development Kit |
OLTP | Online transaction processing |
OMS | The abbreviation for Order Management System, the technical name of IOM |
OS | Operating System |
URL | Uniform Resource Locator |
WildFly | The application server that IOM runs on |
The WildFly application server hosting and running IOM requires an installed Java development kit (JDK) of at least version 11.
The JAVA_HOME global environment variable has to point to the installation directory of the JDK.

Note: JAVA_HOME will be covered in installation.properties. The PATH will be set automatically by set_env.sh.
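A quick way to verify the JDK installation on the command line is shown below; the installation path is only an example and must be adapted to your environment:

```
# example only: verify that JAVA_HOME points to a JDK 11 (or later) installation
export JAVA_HOME=/opt/jdk-11           # assumed installation directory
$JAVA_HOME/bin/java -version           # should report version 11 or higher
$JAVA_HOME/bin/javac -version          # present only in a JDK, not in a plain JRE
```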
The IOM requires an existing mail server that processes internal and external e-mails sent from IOM via the SMTP protocol. The server host and port must be known for later configuration.
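For a quick reachability check of the configured mail server you can use, e.g., netcat; hostname and port below are examples:

```
# example only: verify that the SMTP port of the mail server is reachable
nc -vz mail.example.com 25
```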
The IOM requires a PostgreSQL database hosted by a PostgreSQL database server, which can reside on its own host.
To make the database server fit for IOM, certain configuration steps in a standard installation are necessary. For setup and initialization steps please refer to section Database Setup and Initialization.
Intershop does not offer PostgreSQL support beyond general recommendations for its use as the relational database for IOM.
A list of companies offering professional Postgres support can be found at PostgreSQL: Professional Services.
Also the PostgreSQL community has some excellent mailing lists, pgsql-general being the most active, see PostgreSQL: PostgreSQL Mailing List Archives.
The IOM only supports PostgreSQL servers hosted on a Linux based OS.
We recommend PostgreSQL 10. IOM 2.12 is also compatible with PostgreSQL 9.5+.
The IOM uses four major property files which are explained below.
The local environment of IOM is defined in $OMS_ETC/installation.properties.
installation.properties defines shell variables, which are read by $OMS_HOME/bin/set_env.sh to provide the environment for all scripts and programs belonging to the IOM system.
$OMS_HOME/bin/set_env.sh provides the content of installation.properties as simple shell variables. Additionally, it adds some exported variables, e.g., PATH, and some variables required by 3rd party programs (e.g., the content of JBOSS_JAVA_OPTS will be exported as JAVA_OPTS to be available for standalone.sh).
Variable name | Description | Default/ Exemplary Value |
---|---|---|
OMS_USER | The OS user that installs and runs IOM | oms |
OMS_HOME | The base location of the extracted IOM release package. The default value makes it easy to run a frontend and backend server. set_env.sh adds $OMS_HOME/bin to PATH. OMS_HOME is exported by set_env.sh. OMS_HOME is passed to WildFly and can be accessed there as ${installation.OMS_HOME}. | /opt/$OMS_USER.$OMS_SERVER_TYPE |
OMS_ETC | OMS_ETC is set by set_env.sh implicitly to the directory where the installation.properties file is located. OMS_ETC is exported by set_env.sh and is not listed within the installation.properties. | - |
OMS_VAR | The location of operational data files of IOM. OMS_VAR is passed to WildFly and can be accessed there as ${installation.OMS_VAR}. OMS_VAR is exported by set_env.sh. | /var/opt/$OMS_USER.$OMS_SERVER_TYPE |
OMS_SHARE | The location of shared data files of IOM. OMS_SHARE is passed to WildFly and can be accessed there as ${installation.OMS_SHARE}. OMS_SHARE is exported by set_env.sh. | /var/opt/$OMS_USER.share |
OMS_LOG | The location of logs written by WildFly, IOM, and scripts. OMS_LOG is passed to WildFly and can be accessed there as ${installation.OMS_LOG}. OMS_LOG is exported by set_env.sh. | /var/opt/$OMS_USER.log |
OMS_APP | The location of IOM artifacts deployable into the application server. A list of directories can be passed here, entries have to be separated by colon ":". | $OMS_HOME/application:$OMS_VAR/customization |
SERVER_ID | Identifier of the current IOM application server. Must not be empty and has to be unique for every application server of the IOM cluster. If left empty, set_env.sh raises an error. SERVER_ID is exported by set_env.sh. SERVER_ID is passed to WildFly and can be accessed there as ${installation.SERVER_ID}. | $(hostname)_$OMS_SERVER_TYPE |
JAVA_HOME | The location of the JDK that WildFly uses to run. set_env.sh adds $JAVA_HOME/bin to PATH. JAVA_HOME is exported by set_env.sh. | $OMS_HOME/java |
JBOSS_HOME | The installation location of the WildFly application server that IOM uses to run. Every instance of IOM requires its own WildFly installation. Intershop recommends following the naming pattern of OMS_HOME for WildFly, in order to easily run frontend and backend servers in parallel on a single machine. set_env.sh adds $JBOSS_HOME/bin to PATH. JBOSS_HOME is exported by set_env.sh. | /opt/wildfly.$OMS_SERVER_TYPE |
JBOSS_BIND_ADDRESS | Bind address to be used for the management and public interfaces. Note: Change the IP if you do not want to bind JBoss on all interfaces. | 0.0.0.0 |
JBOSS_BIND_ADDRESS_PRIVATE | Bind address to be used by WildFly's JGroups subsystem, needed for cluster communication, see Guide - Intershop Order Management - Technical Overview. The IP address used for cluster communication must be a private interface. Note: You need to change the default value if you want to set up a cluster of IOM server nodes. | 127.0.0.1 |
JBOSS_PORT_OFFSET | When running more than one server on the same machine and the same bind address, the listening ports of both servers have to differ. To do so, JBOSS_PORT_OFFSET has to be set on one server to increase all port numbers by the defined offset. set_env.sh provides the variable JBOSS_MGMT_PORT (not exported), which is set depending on the value of JBOSS_PORT_OFFSET. | |
JBOSS_JAVA_OPTS | These Java options are used when the WildFly application server is started, and they are used by jboss-cli.sh too. set_env.sh exports the content of JBOSS_JAVA_OPTS as JAVA_OPTS. | -Xms512M -Xmx2048M |
JBOSS_ADMIN_USER | This is the name of the IOM WildFly user that will be created to manage the application server. Used to configure WildFly for IOM and for deployments of IOM artifacts. | omsadmin |
JBOSS_ADMIN_PASSWD | This is the password for the IOM WildFly user that is used to manage the application server. Please change the value. | not_yet_a_secret |
WATCHDOG_JAVA_OPTS | These Java options are applied to the Java-based Watchdog program. WATCHDOG_JAVA_OPTS is not exported by set_env.sh. | |
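For orientation, a minimal installation.properties for a single backend server could look like the following sketch; all values are examples derived from the defaults above and must be adapted to your environment:

```
# illustrative installation.properties for a single backend server (example values only)
OMS_USER=oms
OMS_HOME=/opt/oms.backend
OMS_VAR=/var/opt/oms.backend
OMS_SHARE=/var/opt/oms.share
OMS_LOG=/var/opt/oms.log
SERVER_ID=iom-host-01_backend
JAVA_HOME=/opt/oms.backend/java
JBOSS_HOME=/opt/wildfly.backend
JBOSS_BIND_ADDRESS=0.0.0.0
JBOSS_BIND_ADDRESS_PRIVATE=127.0.0.1
JBOSS_JAVA_OPTS="-Xms512M -Xmx2048M"
JBOSS_ADMIN_USER=omsadmin
JBOSS_ADMIN_PASSWD=change_me_please
```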
Cluster properties are WildFly system properties, which define the machine independent configuration of an IOM cluster.
These properties are located in $OMS_ETC/cluster.properties.
PostgreSQL related properties are read by set_env.sh and exported as environment variables.
Adjust cluster.properties to the actual values used by your OMS cluster. For example, you have to enter the database access information so that the IOM application server can access the database.
Property | Description | Exemplary Value |
---|---|---|
is.oms.db.hostlist | Comma-separated list of database servers. Each server entry consists of hostname and port, separated by a colon. Setting the port is optional; if not set, the standard port 5432 will be used (see Guide - Intershop Order Management - Technical Overview). The first hostname in the list is exported by set_env.sh as PGHOST, the port of the first entry as PGPORT. | localhost:5432 |
is.oms.db.name | Database name to connect to at the PostgreSQL server. Exported by set_env.sh as PGDATABASE. | oms_db |
is.oms.db.user | PostgreSQL user name to connect as. Exported by set_env.sh as PGUSER. | oms_user |
is.oms.db.pass | Password to be used when connecting to the PostgreSQL server. Exported by set_env.sh as PGPASSWORD. | OmsDB |
is.oms.db.cache | Enable/disable database cache Only values enabled and disabled are allowed. A production system should always enable the use of the DB cache. | enabled |
is.oms.xmlbinder.cache | Use caching for JAXB-context while un/marshalling or validating XML files. Only values enabled and disabled are allowed. A production system should always enable the use of the JAXB context cache. | enabled |
is.oms.smtp.host | The host of the mail server IOM uses to send mail | localhost |
is.oms.smtp.port | The port of the mail server IOM uses to send mail | 25 |
is.oms.smtp.user | OPTIONAL The user name for mail server authentication | |
is.oms.smtp.pass | OPTIONAL The user password for mail server authentication | |
is.oms.mail.external.from | The sender address for external mails (e.g., mails sent to the shop customers) | noreply@youraddress.com |
is.oms.mail.internal.from | The sender address for internal mails (e.g., to report errors via mail) | noreply@youraddress.com |
is.oms.mail.internal.to | The recipient for internal mails | operations@youraddress.com |
is.oms.mail.internal.cc | The carbon copy for internal mails | |
is.oms.mail.internal.bcc | The blind carbon copy for internal mails | |
is.oms.mail.businessoperations.to | The recipient for business operations mails | businessoperations@youraddress.com |
is.oms.mail.resources.base.url | OPTIONAL | |
is.oms.dir.var | The base path of the file system where IOM reads and writes its operational data. The default value references the value defined at installation.properties. | ${installation.OMS_VAR} |
is.oms.jboss.base.url | The publicly accessible base URL of IOM, which could be the DNS name of a load balancer, etc. For ICM it is used at the IOM connector, e.g., for the return label service. | http://localhost:8080/ |
is.oms.validation.pattern.phone | Validation pattern for phone numbers. If not set, the default value will be used. | (^$)|(^[+]?[0-9. ()/-]{8,25}$) |
is.oms.validation.pattern.email | Validation pattern for email addresses. If not set, the default value will be used. Note The character '\' in the regular expression requires an escaping (\ => \\). Otherwise the property would not be set correctly! | Desired expression ^[A-Za-z0-9._%+-]+@[A-Za-z0-9][A-Za-z0-9.-]*\.[A-Za-z]{2,9}$ requires following escaped expression ^[A-Za-z0-9._%+-]+@[A-Za-z0-9][A-Za-z0-9.-]*\\.[A-Za-z]{2,9}$ |
is.oms.validation.pattern.password | Validation pattern for passwords. If not set, the default value will be used. Note The character '\' in the regular expression requires an escaping (\ => \\). Otherwise the property would not be set correctly! | Desired expression ^(?=[^\s]*[a-zA-Z])(?=[^\s]*[\d])[^\s]{8,}$ requires following escaped expression ^(?=[^\\s]*[a-zA-Z])(?=[^\\s]*[\\d])[^\\s]{8,}$ |
is.oms.validation.pattern.password.hint | The displayed note, where you can explain the password rules for OMT users, can be customized. If not set, the default value will be used. | The password must include a letter, a number and must contain at least 8 characters. |
is.oms.healthcheck.enabled | OPTIONAL Enable/ disable health check. It will always be activated, except when this parameter is set to "false". | true |
is.oms.healthcheck.cachelivetime | OPTIONAL Health checks are performed using a Java timer and are no longer triggered by REST requests. Maximum age in seconds for which a health check result found within the cache is considered to be valid. | 10 |
is.oms.healthcheck.recurringtime | OPTIONAL Health check recurring interval in seconds. When using the Watchdog, this value should be less than the property watchdog.cycle. | 5 |
is.oms.sharedfs.healthcheck | Enable/disable the health check for the shared file system. Checks the ability to write/delete files in the directory $OMS_SHARE/.healthcheck. This special directory inside $OMS_SHARE was chosen to have a real indicator for the shared file system. If you set up your system manually, you have to create the .healthcheck directory manually inside $OMS_SHARE. If you do not set up a clustered IOM (single IOM node without shared file system), you have to disable this health check. | enabled |
is.oms.jwt.secret | Shared secret for JSON Web Token (JWT) creation/validation. JWTs will be generated with the HMAC algorithm (HS256). Note: Intershop strongly recommends changing the default shared secret used for JWT creation/validation in the cluster properties. To secure the JWT, a key of the same size as the hash output or larger must be used with the JWS HMAC SHA-2 algorithms (i.e., 256 bits for "HS256"), see JSON Web Algorithms (JWA), Section 3.2, HMAC with SHA-2 Functions. | length_must_be_at_least_32_chars |
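A minimal cluster.properties for a single-node installation with a local database could look like the following sketch; all values are examples and must be replaced by the real settings of your environment:

```
# illustrative cluster.properties (example values only)
is.oms.db.hostlist=localhost:5432
is.oms.db.name=oms_db
is.oms.db.user=oms_user
is.oms.db.pass=OmsDB
is.oms.db.cache=enabled
is.oms.xmlbinder.cache=enabled
is.oms.smtp.host=localhost
is.oms.smtp.port=25
is.oms.mail.external.from=noreply@youraddress.com
is.oms.mail.internal.from=noreply@youraddress.com
is.oms.mail.internal.to=operations@youraddress.com
is.oms.mail.businessoperations.to=businessoperations@youraddress.com
is.oms.dir.var=${installation.OMS_VAR}
is.oms.jboss.base.url=http://localhost:8080/
is.oms.sharedfs.healthcheck=disabled
is.oms.jwt.secret=please_generate_a_random_secret_of_at_least_32_chars
```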
Deployment properties define which artifacts of the IOM should be deployed to the WildFly application server.
The properties are located in $OMS_ETC/deployment.cluster.properties. The order of entries within this .properties file is important; it reflects the order of deployments.
The table below shows the entries in $OMS_ETC/deployment.cluster.properties:
Cluster server |
---|
bakery.base-app-2.12.0.0.ear |
process-app-2.12.0.0.ear |
bakery.control-app-2.12.0.0.war |
bakery.impex-app-2.12.0.0.war |
bakery.communication-app-2.12.0.0.ear |
bakery.omt-app-2.12.0.0.war |
oms.rest.communication-app-2.12.0.0.war |
oms.monitoring-app-2.12.0.0.war |
gdpr-app-2.12.0.0.war |
rma-app-2.12.0.0.war |
System properties are defined in $OMS_ETC/system.std.cluster.properties. This file contains WildFly-specific configuration settings. Usually there is no need to adapt any properties defined in this file, with one exception: the webservices subsystem of WildFly is responsible for the delivery of WSDL files. For correct creation of links within WSDL files, a proper configuration of this subsystem is required.
Adjust the following properties in system.std.cluster.properties to get properly working WSDL requests.
Property | Description | Default/Exemplary Value |
---|---|---|
/subsystem=webservices:wsdl-host | Hostname to be used for links within wsdl-responses. The client has to be able to follow these links, hence the hostname configured here has to be the publicly visible hostname of your IOM system. | "${jboss.bind.address.unsecure:127.0.0.1}" |
/subsystem=webservices:wsdl-port | Port number to be used for http-links within wsdl-responses. The client has to be able to follow these links, hence the port configured here has to be the publicly visible http-port of your IOM system. | "8080" |
/subsystem=webservices:wsdl-secure-port | Port number to be used for https-links within wsdl-responses. The client has to be able to follow these links, hence the port configured here has to be the publicly visible https-port of your IOM system. | "8443" |
/subsystem=webservices:wsdl-uri-scheme | URI scheme to be used for links within wsdl-responses. The client has to be able to follow these links, hence the URI scheme configured here has to be the publicly available scheme of your IOM system. | "http" |
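For example, assuming the IOM system is publicly reachable as iom.example.com behind an HTTPS-terminating load balancer, the values (following the key/value syntax of the existing entries in system.std.cluster.properties) might look like this; hostname, ports, and scheme are assumptions:

```
# example values only - adapt host, ports, and scheme to your environment
/subsystem=webservices:wsdl-host="iom.example.com"
/subsystem=webservices:wsdl-port="80"
/subsystem=webservices:wsdl-secure-port="443"
/subsystem=webservices:wsdl-uri-scheme="https"
```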
Only for new installations
Define a dedicated OS user for the PostgreSQL service (referred to as the "postgres OS user" in the following descriptions).
The PostgreSQL data directory contains all of the data files for the database. The variable PGDATA is used to reference this directory. It must be prepared prior to the initialization and belongs exclusively to the postgres OS user.
You should not use this directory for private data, nor add symbolic links to it. You will probably want a dedicated file system built on battery-backed RAID. Just make sure not to use its root folder as the data directory and to include the major version within the path.
This facilitates maintenance and Postgres major upgrades, e.g., /iomdata/pg_9.6/data.
This directory must belong to the postgres OS user.
```
# as root
mkdir /iomdata/pg_10/data
chown <postgres OS user>:<group> /iomdata/pg_10/data

# as the postgres OS user
# add PGDATA=/iomdata/pg_10/data to the user environment
```
The initdb step of the standard installation process needs special consideration to work for an IOM database. initdb will create a new PostgreSQL database cluster and its superuser, see PostgreSQL 10 | initdb.
It must be called as the postgres OS user.
There are a few options to choose from during a Postgres initialization for IOM:
Make sure to use UTF8 encoding. Depending on your operating system, you may need to replace the string "UTF8" with "UTF-8" (in all places).
Note: No change of encoding. This parameter cannot be changed after the cluster initialization.
The command to perform initdb may differ according to the OS, the Postgres version, and the way you installed it.
For YUM installations refer to YUM_Installation.
Examples:
```
# Postgres 10, YUM installation on Red Hat 7
# for more info type /usr/pgsql-10/bin/postgresql-10-setup --help
# as root
export PGSETUP_INITDB_OPTIONS="--encoding=UTF8 --locale=en_US.UTF-8 --data-checksums -U postgres -W"
/usr/pgsql-10/bin/postgresql-10-setup initdb postgresql-10

# without YUM, as root
...../pgsql-10/bin/initdb --encoding UTF8 --locale=en_US.UTF-8 --data-checksums -D /iomdata/pg_10/data -U postgres -W
```
The access permissions must be defined in $PGDATA/pg_hba.conf.
Use md5 as auth-method to prevent passwords from being sent in cleartext across the connection.
You cannot use ident for TCP/IP connections; otherwise the JDBC driver connection from IOM to the database will not work. See PostgreSQL 10 | Chapter 20. Client Authentication for details.
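An entry allowing the IOM application servers to connect with md5 authentication could look like the following sketch; database name, user, and network are examples:

```
# TYPE  DATABASE   USER       ADDRESS        METHOD
host    oms_db     oms_user   10.1.2.0/24    md5
```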
The ideal configuration depends mainly on the server resources and on the activity. Therefore we can only give a general guideline. The configuration ranges indicated below may not be applicable in all cases, especially on small systems. These values are intended for a mid-size system with about 32 GB RAM and 24 cores.
To achieve best performance, almost all of the data (tables and indexes) required for the ongoing workload should be able to reside within the file system cache. Monitoring the I/O activity will help to identify insufficient memory resources.
The IOM is built with Hibernate as the API between the application logic and the database. This results mainly in strong OLTP activity with a large number of tiny SQL statements. Larger statements occur during import/export jobs and for some OMT search requests.
The following main parameters in $PGDATA/postgresql.conf should be adapted, see PostgreSQL 10 | Chapter 19. Server Configuration.
You can consider PGConfig 2.0 as a guideline (using the OLTP Model).
Some aspects of data reliability are discussed here PostgreSQL 10 | Chapter 30. Reliability and the Write-Ahead Log. Understanding VACUUM is also essential when configuring/monitoring Postgres, see PostgreSQL 10 | Chapter 24. Routine Database Maintenance Tasks.
Parameter | Description |
---|---|
max_connections | The number of concurrent connections from the application is controlled by the |
max_prepared_transactions | Required for IOM installations. Set its value to about 150% of max_connections . |
shared_buffers | Between 1/4 and 1/3 of the total RAM, but not more than about 8 GB, otherwise the cache management will use too many resources. The remaining RAM is more valuable as file system cache. |
work_mem | Higher work_mem can increase performance significantly. The default is way too low. Consider using 100-400 MB. |
maintenance_work_mem | Increase the default similar as with work_mem to favor quicker vacuums. With IOM this parameter will be used almost exclusively for this task (unless you also set autovacuum_work_mem ).Consider something like 2% of your total RAM per autovacuum_max_workers . e.g., 32GB RAM * 2% * 3 workers = 2GB. |
vacuum_cost_* | The feature can stay disabled at the beginning. You should keep an eye on the vacuum activity under high load. |
wal_level | Depends on your backup, recovery and fail over strategy, should be at least archive . |
wal_sync_method | Depends on your platform, check PostgreSQL 10 | 19.5. Write Ahead Log | wal_sync_method (enum). |
max_wal_size | 8 (small system) - 128 (large system) |
max_parallel_workers (since Postgres 9.6) | 0 |
checkpoint_completion_target | Use 0.8 or 0.9 . |
archive_* and REPLICATION | Depends on your backup & fail over strategy |
random_page_cost | The default (4 ) is usually too high. Better choose 2.5 or 3 . |
effective_cache_size | Indicates the expected size of the file system cache. On a dedicated server: should be about total_RAM - shared_buffers - 1GB. |
log_min_duration_statement | Set it between |
log_filename | Better use an explicit name to help when communicating. E.g.: pg-IOM_host_port-%Y%m%d_%H%M.log . |
log_rotation_age | Set it to 60 min or less. |
log_line_prefix | Better use a more verbose format than the default. E.g.: %m|%a|%c|%p|%u|%h| . |
log_lock_waits | Activate it (=on). |
stats_temp_directory | Better redirect it to a RAM disk. |
log_autovacuum_min_duration | Set it to a few seconds to monitor the vacuum activity. |
idle_in_transaction_session_timeout (since Postgres 9.6) | Set it to a large value, e.g., 9 hours, to clean up possible left-over sessions. An equivalent parameter exists for the WildFly connection pool, where it is set to 3 hours by default. |
timezone | Must match the timezone of the application servers e.g., Europe/Berlin. |
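As a rough orientation for the mid-size reference system mentioned above, a postgresql.conf excerpt could look like the following sketch; the concrete numbers are assumptions derived from the table and must be validated against your own workload:

```
# illustrative postgresql.conf excerpt for a ~32 GB RAM / 24 core server - not a definitive recommendation
max_connections = 200
max_prepared_transactions = 300            # about 150% of max_connections
shared_buffers = 8GB
work_mem = 200MB
maintenance_work_mem = 2GB
wal_level = replica                        # "archive" is mapped to "replica" since 9.6
checkpoint_completion_target = 0.9
random_page_cost = 2.5
effective_cache_size = 23GB                # roughly total_RAM - shared_buffers - 1GB
log_min_duration_statement = 1000          # log statements slower than 1 second
log_lock_waits = on
log_autovacuum_min_duration = 5000         # 5 seconds
idle_in_transaction_session_timeout = '9h'
timezone = 'Europe/Berlin'
```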
The following steps describe the setup of a new database.
To perform the next steps, you need to be able to use the psql command and to access the database server via its superuser (usually postgres).
```
su - <postgres OS user>

# set variables
IS_OMS_DB_HOST=<first host from is.oms.db.hostlist>
IS_OMS_DB_PORT=<port of first host from is.oms.db.hostlist>

# connect to the database server as the super user
psql -U postgres -h $IS_OMS_DB_HOST -p $IS_OMS_DB_PORT -d postgres
```
Note
IOM connects with its own dedicated database user.
Create user:
```
-- create user
CREATE USER "<value of your is.oms.db.user>" PASSWORD '<value of your is.oms.db.pass>';
```
The database initialization dump does not expect a given tablespace; all objects will be placed in the user's default tablespace. When your data directory is located on an adequate file system, you can keep the Postgres default tablespace, which is located in $PGDATA/base. If you want to define a dedicated tablespace (i.e., on a dedicated file system), you should:
Set it as the default for the user and for the database prior to using the provided initialization dump (provided the tablespace has been created):
```
-- set default tablespace
ALTER USER "<value of your is.oms.db.user>" SET default_tablespace = 'tblspc_iom';
```
Also see Postgres table spaces: https://www.postgresql.org/docs/10/static/sql-createtablespace.html
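A minimal sketch for creating such a dedicated tablespace, assuming the name tblspc_iom and a directory on a dedicated file system owned by the postgres OS user:

```
-- example only: the directory must already exist and belong to the postgres OS user
CREATE TABLESPACE tblspc_iom LOCATION '/iomdata/pg_10/tblspc_iom';
```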
IOM uses its own dedicated database.
Create the database:
```
-- create database
CREATE DATABASE "<value of your is.oms.db.name>"
  WITH OWNER = "<value of your is.oms.db.user>"
  ENCODING = 'UTF8'
  TABLESPACE = pg_default -- or your dedicated tablespace
  LC_COLLATE = 'en_US.UTF-8'
  LC_CTYPE = 'en_US.UTF-8'
  CONNECTION LIMIT = -1;
```
Set the database search_path:
```
-- set search_path
ALTER DATABASE "<value of your is.oms.db.name>" SET search_path = customer, oms, omt, product, system, admin;
```
Exit the psql console:
```
-- exit console
postgres=# \q
```
Intershop strongly recommends creating a dedicated group and user for the IOM installation on the host system. It is recommended to place the user's home directory outside the IOM installation. The following script uses the same value for the user name and the group name:
```
# as root set variables
# default: oms
# use "oms" if you do not want to customize the installation layout
OMS_USER=<name of user to own and run IOM>
# use "/home/$OMS_USER" as home-directory
OMS_USER_HOME=<home directory of OMS user>

# add group
groupadd $OMS_USER

# add user
# users home is created at default location
useradd -g $OMS_USER -d $OMS_USER_HOME -m $OMS_USER

# set password
passwd $OMS_USER
```
Log in as root.
Create directory $OMS_HOME, set owner and group.
```
# as root
# use "/opt/$OMS_USER.$OMS_SERVER_TYPE" if you do not want to customize the installation layout
OMS_HOME=<installation directory of IOM>

mkdir -p $OMS_HOME
chown $OMS_USER:$OMS_USER $OMS_HOME
```
Place and extract the IOM release package in $OMS_HOME.
```
# as $OMS_USER at directory $OMS_HOME
# extract the IOM release package
tar -xvzf IOM-2.12.0.0.tgz
```
This creates the main directories: etc, var, bin, lib, application etc.
For the following tasks the local environment of IOM has to be defined in $OMS_HOME/etc/installation.properties. See installation.properties for more information.
According to the Linux File System Hierarchy Standard, the OMS installation will be distributed over different file systems. The variable $OMS_VAR and the command line switch --etc of integrate.sh define the directories where to place the corresponding parts of OMS. There is no need to change the default values unless you want to adapt the installation layout to your own needs.

In installation.properties, set:

- OMS_USER to the name of the user who should own and run IOM.
- OMS_HOME to the name of the directory where the IOM software has been placed.
- OMS_VAR to the name of the directory where to place var-data of IOM.
- OMS_SHARE to the name of the directory where to place shared data of IOM.
- OMS_LOG to the name of the directory where to place log data of IOM.

Prepare the directories $OMS_SHARE, $OMS_VAR, $OMS_LOG and $ETC_TARGET before distributing OMS to these locations. Set the placeholders to the values defined in installation.properties.
As this manual describes the setup process of a single IOM machine, $OMS_SHARE does not need to be mounted on a real shared file system. For this kind of installation, a local directory is fully sufficient. In this case, the health check of the shared file system must be disabled (see the description of the property is.oms.sharedfs.healthcheck above).
```
# as root at $OMS_HOME
# read installation.properties
. etc/installation.properties

# use "/etc/opt/$OMS_USER.$OMS_SERVER_TYPE" if you do not want to customize the installation layout
ETC_TARGET=<directory, where to place etc-data>

# prepare the directories
mkdir -p "$ETC_TARGET"
chown $OMS_USER:$OMS_USER "$ETC_TARGET"
mkdir -p "$OMS_VAR"
chown $OMS_USER:$OMS_USER "$OMS_VAR"
mkdir -p "$OMS_LOG"
chown $OMS_USER:$OMS_USER "$OMS_LOG"
mkdir -p "$OMS_SHARE"
chown $OMS_USER:$OMS_USER "$OMS_SHARE"
```
Distribute the release package parts by running the integrate.sh script.
```
# as $OMS_USER at $OMS_HOME
# set variable
# use "/etc/opt/$OMS_USER.$OMS_SERVER_TYPE" if you do not want to customize the installation layout
ETC_TARGET=<directory, where to place etc-data>

# copy etc- and var-data to configured locations
bin/integrate.sh --etc="$ETC_TARGET"
```
This will copy the relevant parts to the target directory defined by the --etc=... command line parameter and to $OMS_VAR as defined in installation.properties. It also creates symlinks from $OMS_HOME/etc -> $ETC_TARGET as well as from $OMS_HOME/var -> $OMS_VAR.
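To verify the result, you can check that the symlinks have been created and point to the configured locations:

```
# as $OMS_USER at $OMS_HOME
ls -ld etc var
# expected output (paths depend on your configuration):
#   etc -> <your ETC_TARGET>
#   var -> <your OMS_VAR>
```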
Only for new installations
Import the initial dump for IOM that will contain the basic necessary database configuration. This dump can be found in the IOM delivery package at $OMS_HOME/postgres/dumps.
Edit etc/cluster.properties as $OMS_USER if you do not want to use the default PostgreSQL configuration. If you want to use a PostgreSQL database at localhost, which was created with the default configuration, you do not need to make any changes at the moment. Otherwise, set:

- /system-property=is.oms.db.hostlist to the hostname or IP and port of your PostgreSQL database.
- /system-property=is.oms.db.name to the name of your database dedicated to IOM.
- /system-property=is.oms.db.user to the name of the user which should be used to connect to the PostgreSQL database.
- /system-property=is.oms.db.pass to the password of the user who connects to the PostgreSQL database.

The import is done using the following command:
```
# as $OMS_USER at $OMS_HOME
# setup environment
. bin/set_env.sh

# unzip and install initial data dump
gunzip -c postgres/dumps/OmsDB.initial.2.12.0.0.sql.gz | psql -U $PGUSER -h $PGHOST -p $PGPORT -d $PGDATABASE
```
Refer to Guide - IOM Database Migration (2.0 - 2.17) if you have to migrate the database.
This is the case when migrating from an older version of IOM or if the IOM version you are installing has a higher version number than the dump you have installed before.
Create directory $JBOSS_HOME, set owner and group.
```
# as root do
JBOSS_HOME=<set variable according to the settings in installation.properties>
OMS_USER=<name of user to own and run IOM>

mkdir -p $JBOSS_HOME
chown $OMS_USER:$OMS_USER $JBOSS_HOME
```
As $OMS_USER unpack the downloaded archive into $JBOSS_HOME:
```
# as $OMS_USER at $OMS_HOME
# setup environment
. bin/set_env.sh

# set variable
WILDFLY_VERSION=15.0.1.Final

# extract wildfly package to $JBOSS_HOME
cd $JBOSS_HOME
tar -xzf /tmp/wildfly-$WILDFLY_VERSION.tar.gz
( cd wildfly-$WILDFLY_VERSION; mv * .[a-zA-Z0-9]* .. )
rmdir wildfly-$WILDFLY_VERSION
```
Check JBOSS_HOME: if you installed WildFly exactly this way, no further changes in installation.properties are required. Otherwise you have to adapt JBOSS_HOME.
Set JBOSS_BIND_ADDRESS in installation.properties to an IP address which is reachable by the browser. The default 0.0.0.0 can be kept; this value represents "any address" and covers all available network interfaces.

The IOM release provides a systemd-unit template to install WildFly as a service. The expand_template.sh script uses the environment, hence the information stored in installation.properties, to fill the template. Before expanding the template, make sure you have updated installation.properties. At least the variable JAVA_HOME needs to be adapted.
Since different IOM application servers might run on a single machine, the service name for every server has to be unique.
- JAVA_HOME has to be adapted to point to the directory holding the Java 11 installation.
- JBOSS_JAVA_OPTS can be adapted if you want to change the default memory configuration, garbage collection configuration, etc.

Expand the systemd unit template with the current configuration:
```
# as $OMS_USER setup environment
. bin/set_env.sh

# expand systemd-unit template
expand_template.sh < $OMS_ETC/jboss-as.service.template > /tmp/jboss-as-$OMS_SERVER_TYPE.service
```
Install WildFly as a service.
```
# as root set server type
OMS_SERVER_TYPE=<server type of IOM>

# copy expanded template
cp /tmp/jboss-as-$OMS_SERVER_TYPE.service /etc/systemd/system

# enable service
systemctl enable jboss-as-$OMS_SERVER_TYPE

# start service
systemctl start jboss-as-$OMS_SERVER_TYPE
```
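After starting the service, you can check its state via systemd; the exact log file name below is an assumption and may differ in your installation:

```
# as root: check the service state
systemctl status jboss-as-$OMS_SERVER_TYPE

# follow the WildFly server log (file name depends on your logging configuration)
tail -f $OMS_LOG/server.log
```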
For the following steps, the WildFly application server needs to be running. WildFly will be configured for usage with IOM.
Make sure you have changed JBOSS_ADMIN_PASSWD in installation.properties to a non-default value.

Create the admin user and configure WildFly.
```
# as $OMS_USER in $OMS_HOME setup environment
. bin/set_env.sh

# create admin user for WildFly management
add-user.sh -u $JBOSS_ADMIN_USER -p $JBOSS_ADMIN_PASSWD

# load initial configuration of IOM
jboss-cli.sh -u=$JBOSS_ADMIN_USER -p=$JBOSS_ADMIN_PASSWD --controller=$JBOSS_BIND_ADDRESS:$JBOSS_MGMT_PORT -c --file="$OMS_ETC/initSystem.std.$OMS_SERVER_TYPE.cli"

# enhanced standard properties are set in the WildFly
cat $OMS_ETC/system.std.$OMS_SERVER_TYPE.properties | update_properties.sh --jboss-cli="jboss-cli.sh -u=$JBOSS_ADMIN_USER -p=$JBOSS_ADMIN_PASSWD -c --controller=$JBOSS_BIND_ADDRESS:$JBOSS_MGMT_PORT"

# load $OMS_ETC/cluster.properties
cat $OMS_ETC/cluster.properties | update_properties.sh --jboss-cli="jboss-cli.sh -u=$JBOSS_ADMIN_USER -p=$JBOSS_ADMIN_PASSWD -c --controller=$JBOSS_BIND_ADDRESS:$JBOSS_MGMT_PORT"
```
Restart the WildFly application server.
```
# as root restart application server
systemctl restart jboss-as-$OMS_SERVER_TYPE
```
For the following steps, the WildFly application server and the PostgreSQL database need to be running. All deployment artifacts listed in $OMS_ETC/deployment.$OMS_SERVER_TYPE.properties will be deployed into the WildFly application server.
Deploy all artifacts defined by deployment.$OMS_SERVER_TYPE.properties.
```
# as $OMS_USER in $OMS_HOME setup environment
. bin/set_env.sh

# deploy all artifacts defined by deployment.properties
deploy.sh --jboss-cli="jboss-cli.sh -u=$JBOSS_ADMIN_USER -p=$JBOSS_ADMIN_PASSWD -c --controller=$JBOSS_BIND_ADDRESS:$JBOSS_MGMT_PORT"
```
Check whether all artifacts have been deployed successfully:
```
# as $OMS_USER in $OMS_HOME setup environment
. bin/set_env.sh

# get deployment status
jboss-cli.sh -u=$JBOSS_ADMIN_USER -p=$JBOSS_ADMIN_PASSWD -c --controller=$JBOSS_BIND_ADDRESS:$JBOSS_MGMT_PORT "deployment-info"
```
Example output for standalone server:
```
NAME                                               RUNTIME-NAME                                       PERSISTENT ENABLED STATUS
postgresql-jdbc4                                   postgresql-jdbc4                                   true       true    OK
bakery.base-app-2.12.0.0-SNAPSHOT.ear              bakery.base-app-2.12.0.0-SNAPSHOT.ear              true       true    OK
process-app-2.12.0.0-SNAPSHOT.ear                  process-app-2.12.0.0-SNAPSHOT.ear                  true       true    OK
bakery.control-app-2.12.0.0-SNAPSHOT.war           bakery.control-app-2.12.0.0-SNAPSHOT.war           true       true    OK
bakery.impex-app-2.12.0.0-SNAPSHOT.war             bakery.impex-app-2.12.0.0-SNAPSHOT.war             true       true    OK
bakery.communication-app-2.12.0.0-SNAPSHOT.ear     bakery.communication-app-2.12.0.0-SNAPSHOT.ear     true       true    OK
bakery.omt-app-2.12.0.0-SNAPSHOT.war               bakery.omt-app-2.12.0.0-SNAPSHOT.war               true       true    OK
oms.rest.communication-app-2.12.0.0-SNAPSHOT.war   oms.rest.communication-app-2.12.0.0-SNAPSHOT.war   true       true    OK
oms.monitoring-app-2.12.0.0-SNAPSHOT.war           oms.monitoring-app-2.12.0.0-SNAPSHOT.war           true       true    OK
gdpr-app-2.12.0.0-SNAPSHOT.war                     gdpr-app-2.12.0.0-SNAPSHOT.war                     true       true    OK
rma-app-2.12.0.0-SNAPSHOT.war                      rma-app-2.12.0.0-SNAPSHOT.war                      true       true    OK
```
If you operate the IOM as a highly available system, the WildFly application server must not run directly as a system service. Instead, the IOM Watchdog has to run as a system service, starting and stopping the WildFly application server depending on health checks made on the application server, see Guide - Intershop Order Management - Technical Overview.
Since different IOM application servers may run on a single machine, the service name for every server has to be unique.
The IOM release provides a systemd-unit template to install the IOM Watchdog as a service. The expand_template.sh script uses the environment, hence the information stored in installation.properties, to fill the template. Before expanding the template, make sure you have updated installation.properties. At least the variable OMS_HOME needs to be up to date.
OMS_HOME has to be up to date.

Expand the systemd unit template with the current configuration:
```
# as $OMS_USER setup environment
. bin/set_env.sh

# expand systemd-unit template
expand_template.sh < $OMS_ETC/oms-watchdog.service.template > /tmp/jboss-as-$OMS_SERVER_TYPE.service
```
Install IOM Watchdog as a service.
```
# fill the variable according to the settings in installation.properties
OMS_SERVER_TYPE=<server type of IOM>

# as root copy expanded template
cp /tmp/jboss-as-$OMS_SERVER_TYPE.service /etc/systemd/system

# enable service
systemctl enable jboss-as-$OMS_SERVER_TYPE

# start service
systemctl start jboss-as-$OMS_SERVER_TYPE
```
The undertow subsystem is configured to write access logs to $OMS_LOG. It is only able to provide a daily rotation of logs. If you want to provide access logs to ICI (Intershop Commerce Insight) for performance analysis, you need an hourly rotation of logs. To overcome this limitation of the undertow rotation feature, OMS provides a simple shell script to rotate the logs: bin/logrotate.sh.
If you use the default configuration of watchdog.properties and want watchdog.log rotated in the same way as access_log.log, add watchdog.log to the logrotate.sh call too.
This script has to be executed at the beginning of every hour by adding the following line to the crontab of $OMS_USER:
0 * * * * . $OMS_HOME/bin/set_env.sh && logrotate.sh $OMS_LOG/access_log.log $OMS_LOG/watchdog.log