The present guide is addressed to administrators who want to install IOM 2.2 in their Linux-based infrastructure. It enables them to understand which artifacts are contained in the IOM 2.2 delivery and how they can be installed, configured, and deployed.
The document describes how to install IOM 2.2, regardless of whether the host is a front-end server, a back-end server, or a single-server installation.
For a technical overview of typical installation scenarios please see references.
Wording | Description |
---|---|
CLI | Command Line Interface, the tooling for WildFly management |
FTP | File Transfer Protocol |
HA | High availability |
ICI | The abbreviation for Intershop Commerce Insight, Intershop's reporting and analytics solution. |
ICM | The abbreviation for Intershop Commerce Management |
IOM | The abbreviation for Intershop Order Management |
JBoss | Synonym for WildFly (former name of the WildFly application server) |
JDBC | Java Database Connectivity |
JDK | Java Development Kit |
OLTP | Online transaction processing |
OMS | The abbreviation for Order Management System, the technical name of IOM |
OS | Operating System |
URL | Uniform Resource Locator |
WildFly | The application server that IOM runs on |
The WildFly application server hosting and running IOM requires an installed Java development kit (JDK) of at least version 8.
The JAVA_HOME global environment variable has to point to the installation directory of the JDK.

Note

JAVA_HOME will be covered in installation.properties. The PATH will be set automatically by set_env.sh.
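To verify this prerequisite, you can run a quick check like the following (the JDK path is only an example and depends on your distribution):

    # example only: the JDK installation path depends on your distribution
    export JAVA_HOME=/usr/lib/jvm/java-1.8.0
    $JAVA_HOME/bin/java -version   # must report Java 8 (1.8) or later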
The IOM requires an existing mail server that processes internal and external e-mails sent from IOM via the SMTP protocol. The server host and port also need to be known for the later configuration.
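A simple reachability check can be performed, e.g., with netcat; host and port are placeholders for the values of your mail server:

    # check that the mail server accepts connections on the SMTP port
    nc -vz <smtp host> <smtp port>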
The IOM requires Pure-FTPd, as it supports virtual users. The user credentials used for accessing the FTP-server differ from the credentials used for accessing the file system.
The IOM requires a PostgreSQL database hosted by a PostgreSQL database server, which can reside on its own host.
To make the database server fit for IOM, certain configuration steps on top of a standard installation are necessary. For setup and initialization steps please refer to section Database Setup and Initialization.
Intershop does not offer PostgreSQL support beyond general recommendations for its use as the relational database for IOM.
A list of companies offering professional Postgres support can be found at https://www.postgresql.org/support/professional_support/.
The PostgreSQL community also runs some excellent mailing lists, pgsql-general being the most active: http://www.postgresql.org/list/.
The IOM only supports PostgreSQL servers hosted on a Linux based OS.
We recommend PostgreSQL 9.6.x. IOM version 2.2.0.0 is also compatible with PostgreSQL 9.5.x.
The IOM uses three major property files which are explained below.
The local environment of IOM is defined in $OMS_ETC/installation.properties.
installation.properties defines shell variables, which are read by $OMS_HOME/bin/set_env.sh to provide the environment for all scripts and programs belonging to the IOM system.
$OMS_HOME/bin/set_env.sh provides the content of installation.properties as simple shell variables; additionally, it adds some exported variables (e.g., PATH) and some variables required by 3rd-party programs (e.g., the content of JBOSS_JAVA_OPTS is exported as JAVA_OPTS to be available for standalone.sh).
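For illustration, after sourcing set_env.sh the configured environment is available in the current shell:

    # as $OMS_USER at $OMS_HOME
    # source the environment derived from installation.properties
    . bin/set_env.sh

    # inspect some of the resulting variables
    echo $OMS_SERVER_TYPE   # e.g., standalone
    echo $OMS_ETC           # directory containing installation.properties
    echo $JAVA_OPTS         # includes the content of JBOSS_JAVA_OPTS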
Variable name | Description | Default/ Exemplary Value |
---|---|---|
OMS_USER | The OS user that installs and runs IOM | oms |
OMS_SERVER_TYPE | One of the values "standalone", "frontend" or "backend". Controls which deployment.*.properties are read by (re)deploy.sh. Variable will also be used, to address other server-type specific configuration files (e.g., system.std.<server-type>.properties). OMS_SERVER_TYPE is exported by set_env.sh. OMS_SERVER_TYPE is passed to Wildfly and can be accessed there as ${installation.OMS_SERVER_TYPE}. | standalone |
OMS_HOME | The base location of the extracted IOM release package. The default value makes it easy to run a frontend and backend server. set_env.sh adds $OMS_HOME/bin to PATH. OMS_HOME is exported by set_env.sh. OMS_HOME is passed to WildFly and can be accessed there as ${installation.OMS_HOME}. | /opt/$OMS_USER.$OMS_SERVER_TYPE |
OMS_ETC | OMS_ETC is set by set_env.sh implicitly to the directory where the installation.properties file is located. OMS_ETC is exported by set_env.sh and is not listed within the installation.properties. | - |
OMS_VAR | The location of operational data and log files for IOM. OMS_VAR is passed to WildFly and can be accessed there as ${installation.OMS_VAR}. OMS_VAR is exported by set_env.sh. | /var/opt/$OMS_USER.$OMS_SERVER_TYPE |
OMS_LOG | The location of logs written by WildFly, IOM, and scripts. OMS_LOG is passed to WildFly and can be accessed there as ${installation.OMS_LOG}. OMS_LOG is exported by set_env.sh. | $OMS_VAR/log |
OMS_APP | The location of IOM artifacts deployable into the application server. List of directories can be passed here, entries have to be separated by colon ":". | $OMS_HOME/application:$OMS_VAR/customization |
SERVER_ID | Identifier of the current IOM application server. Must not be empty and has to be unique for every application server of the IOM cluster. If left empty, set_env.sh raises an error. SERVER_ID is exported by set_env.sh. SERVER_ID is passed to WildFly and can be accessed there as ${installation.SERVER_ID}. Additionally, it is used to initialize ${jboss.node.name}. | $(hostname)_$OMS_SERVER_TYPE |
JAVA_HOME | The location of the JDK that WildFly uses to run. set_env.sh adds $JAVA_HOME/bin to PATH. JAVA_HOME is exported by set_env.sh. | $OMS_HOME/java |
JBOSS_HOME | The installation location of the WildFly application server that IOM uses to run. Every instance of IOM requires its own WildFly installation. Intershop recommends following the naming pattern of OMS_HOME for WildFly in order to easily run frontend and backend servers in parallel on a single machine. set_env.sh adds $JBOSS_HOME/bin to PATH. JBOSS_HOME is exported by set_env.sh. | /opt/wildfly.$OMS_SERVER_TYPE |
JBOSS_BIND_ADDRESS | Bind address to be used for the management and public interfaces. Note: Change the IP if you do not want to bind JBoss to all interfaces. | 0.0.0.0 |
JBOSS_PORT_OFFSET | When running more than one server on the same machine and the same bind address, the listening ports of both servers have to differ. To do so, JBOSS_PORT_OFFSET has to be set on one server to increase all port numbers by the defined offset (for recommended value see: Guide - Intershop Order Management - Technical Overview). set_env.sh provides the variable JBOSS_MGMT_PORT (not exported), which is set depending on the value of JBOSS_PORT_OFFSET. | |
JBOSS_JAVA_OPTS | These Java options are used when the WildFly application server is started, and they are used by jboss-cli.sh, too. set_env.sh appends $JBOSS_JAVA_OPTS to the predefined JAVA_OPTS. JAVA_OPTS is exported by set_env.sh. | -Xms512M -Xmx2048M |
JBOSS_ADMIN_USER | This is the name of the IOM WildFly user that will be created to manage the application server. Used to configure WildFly for IOM and for deployments of IOM artifacts. | omsadmin |
JBOSS_ADMIN_PASSWD | This is the password for the IOM WildFly user that is used to manage the application server. Please change the value. | not_yet_a_secret |
WATCHDOG_JAVA_OPTS | These Java options are applied to the Java-based watchdog program. WATCHDOG_JAVA_OPTS is not exported by set_env.sh. | |
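Putting the table together, a minimal installation.properties for a standalone server could look like the following sketch (values are the defaults/exemplary values from the table above; adapt them to your environment):

    # installation.properties (sketch for a standalone server)
    OMS_USER=oms
    OMS_SERVER_TYPE=standalone
    OMS_HOME=/opt/oms.standalone
    OMS_VAR=/var/opt/oms.standalone
    OMS_LOG=$OMS_VAR/log
    OMS_APP=$OMS_HOME/application:$OMS_VAR/customization
    SERVER_ID=$(hostname)_$OMS_SERVER_TYPE
    JAVA_HOME=/opt/oms.standalone/java
    JBOSS_HOME=/opt/wildfly.standalone
    JBOSS_BIND_ADDRESS=0.0.0.0
    JBOSS_JAVA_OPTS="-Xms512M -Xmx2048M"
    JBOSS_ADMIN_USER=omsadmin
    JBOSS_ADMIN_PASSWD=not_yet_a_secret   # change this value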
Cluster properties are WildFly system properties, which define the machine-independent configuration of an IOM cluster.
These properties are located in $OMS_ETC/cluster.properties.
PostgreSQL related properties are read by set_env.sh and exported as environment variables.
Adjust cluster.properties to the real values used by your OMS cluster. For example, you have to enter the access information for the database in order to enable the IOM application server to access the database.
Property | Description | Exemplary Value |
---|---|---|
is.oms.db.hostlist | Comma-separated list of database servers. Each server entry consists of hostname and port, separated by a colon. Setting the port is optional; if not set, the standard port 5432 will be used (see Guide - Intershop Order Management - Technical Overview). The first hostname in the list is exported by set_env.sh as PGHOST. The port of the first entry in the list is exported by set_env.sh as PGPORT. | localhost:5432 |
is.oms.db.name | Database name to connect to at the PostgreSQL server. Exported by set_env.sh as PGDATABASE. | oms_db |
is.oms.db.user | PostgreSQL user name to connect as. Exported by set_env.sh as PGUSER. | oms_user |
is.oms.db.pass | Password to be used when connecting to the PostgreSQL server. Exported by set_env.sh as PGPASSWORD. | OmsDB |
is.oms.db.cache | Enable/disable the database cache. Only the values enabled and disabled are allowed. A production system should always enable the use of the DB cache. | enabled |
is.oms.xmlbinder.cache | Use caching for the JAXB context while un/marshalling or validating XML files. Only the values enabled and disabled are allowed. A production system should always enable the use of the JAXB context cache. | enabled |
is.oms.media.host | The host value for the FTP server mediahost | localhost |
is.oms.media.user | The user name to access FTP server mediahost | mediahost |
is.oms.media.pass | The password to access FTP server mediahost | mediahost |
is.oms.pdf.host | The host value for the FTP server pdfhost | localhost |
is.oms.pdf.user | The user name to access FTP server pdfhost | pdfhost |
is.oms.pdf.pass | The password to access FTP server pdfhost | pdfhost |
is.oms.jms.hostlist | Comma separated list of IOM backend servers. Each server entry consists of hostname and port, separated by colon. The list is only required in distributed IOM installations (see Guide - Intershop Order Management - Technical Overview) | localhost:8080 |
is.oms.smtp.host | The host of the mail server IOM uses to send mail | localhost |
is.oms.smtp.port | The port of the mail server IOM uses to send mail | 25 |
is.oms.mail.external.from | The sender address for external mails (e.g., mails sent to the shop customers) | noreply@youraddress.com |
is.oms.mail.internal.from | The sender address for internal mails (e.g., to report errors via mail) | noreply@youraddress.com |
is.oms.mail.internal.to | The recipient for internal mails | operations@youraddress.com |
is.oms.mail.internal.cc | The carbon copy for internal mails | |
is.oms.mail.internal.bcc | The blind carbon copy for internal mails | |
is.oms.mail.businessoperations.to | The recipient for business operations mails. Note: Since version 2.2.8 | businessoperations@youraddress.com |
is.oms.dir.var | The base path of the file system where IOM reads and writes its operational data. The default value references the value defined at installation.properties. | ${installation.OMS_VAR} |
is.oms.jboss.base.url | The publicly accessible base URL of IOM, which could be the DNS name of the load balancer, etc. For ICM it is used at the IOM connector, e.g., for the return label service. | http://localhost:8080/ |
is.oms.validation.pattern.phone | Validation pattern for phone numbers. If not set, the default value will be used. | (^$)|(^[+]?[0-9. ()/-]{8,25}$) |
is.oms.validation.pattern.email | Validation pattern for email addresses. If not set, the default value will be used. Note The character '\' in the regular expression requires an escaping (\ => \\). Otherwise the property would not be set correctly! | Desired expression ^[A-Za-z0-9._%+-]+@[A-Za-z0-9][A-Za-z0-9.-]*\.[A-Za-z]{2,9}$ requires following escaped expression ^[A-Za-z0-9._%+-]+@[A-Za-z0-9][A-Za-z0-9.-]*\\.[A-Za-z]{2,9}$ |
is.oms.validation.pattern.password | Validation pattern for passwords. If not set, the default value will be used. Note The character '\' in the regular expression requires an escaping (\ => \\). Otherwise the property would not be set correctly! | Desired expression ^(?=[^\s]*[a-zA-Z])(?=[^\s]*[\d])[^\s]{8,}$ requires following escaped expression ^(?=[^\\s]*[a-zA-Z])(?=[^\\s]*[\\d])[^\\s]{8,}$ |
is.oms.validation.pattern.password.hint | The displayed note, where you can explain the password rules for OMT users, can be customized. If not set, the default value will be used. | The password must include a letter, a number and must contain at least 8 characters. |
is.oms.default.max.return.quantity | Configuration value used during return creation in OMT. Switches the displayed default quantity for items to return. True sets the default return quantity to maximum item quantity. False sets the default return quantity to zero. If not set, zero (false) is used. | false |
is.oms.media.healthcheck | Enable/disable health check for ftp media host. Use URL /monitoring/services/health/status to get all health status information which can be used, e.g., by a load balancer. Only values enabled and disabled are allowed. A production system should always enable the use of the health checks. Also see Guide - Intershop Order Management - Technical Overview. | enabled |
is.oms.pdf.healthcheck | Enable/disable health check for ftp pdf host. Use URL /monitoring/services/health/status to get all health status information which can be used, e.g., by a load balancer. Only values enabled and disabled are allowed. A production system should always enable the use of the health checks. Also see Guide - Intershop Order Management - Technical Overview. | enabled |
is_oms_healthcheck_cachelivetime | Note Since version 2.2.7 (optional) Seconds to cache healthcheck requests. No caching will take place when not defined or set to 0. When using the watchdog, this value should be less than the property watchdog.cycle | 2 #(default value) |
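As an illustration, a cluster.properties for a single-host installation using the exemplary values above could start like this (a sketch, not a complete file):

    # cluster.properties (sketch with the exemplary values from the table)
    is.oms.db.hostlist=localhost:5432
    is.oms.db.name=oms_db
    is.oms.db.user=oms_user
    is.oms.db.pass=OmsDB
    is.oms.db.cache=enabled
    is.oms.xmlbinder.cache=enabled
    is.oms.media.host=localhost
    is.oms.media.user=mediahost
    is.oms.media.pass=mediahost
    is.oms.pdf.host=localhost
    is.oms.pdf.user=pdfhost
    is.oms.pdf.pass=pdfhost
    is.oms.smtp.host=localhost
    is.oms.smtp.port=25
    is.oms.mail.external.from=noreply@youraddress.com
    is.oms.mail.internal.from=noreply@youraddress.com
    is.oms.mail.internal.to=operations@youraddress.com
    is.oms.jboss.base.url=http://localhost:8080/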
Deployment properties define which artifacts of the IOM should be deployed to the WildFly application server.
The properties are located in $OMS_ETC/deployment.$OMS_SERVER_TYPE.properties. There are three different files, one for every server type. Depending on the server type, a different set of applications has to be deployed. The order of entries within the property files is important; it reflects the order of deployments.
The table below shows the entries for all supported types of server:
Standalone server | Backend server | Frontend server | Notes |
---|---|---|---|
ArticleQueues-jms.xml | ArticleQueues-jms.xml | - | |
CustomerQueues-jms.xml | CustomerQueues-jms.xml | - | |
OrderQueues-jms.xml | OrderQueues-jms.xml | - | |
bakery.base-app-2.2.0.0.ear | bakery.base-app-2.2.0.0.ear | bakery.base-app-2.2.0.0.ear | |
bakery.control-app-2.2.0.0.ear | bakery.control-app-2.2.0.0.ear | - | |
bakery.process-app-2.2.0.0.ear | bakery.process-app-2.2.0.0.ear | - | |
bakery.impex-app-2.2.0.0.ear | bakery.impex-app-2.2.0.0.ear | - | |
bakery.communication-app-2.2.0.0.ear | - | bakery.communication-app-2.2.0.0.ear | |
bakery.omt-app-2.2.0.0.war | - | bakery.omt-app-2.2.0.0.war | |
gdpr-app-2.2.0.0.war | - | gdpr-app-2.2.0.0.war | Note Since version 2.2.7 |
oms.monitoring-app-2.2.0.0.war | oms.monitoring-app-2.2.0.0.war | oms.monitoring-app-2.2.0.0.war |
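For illustration, assuming the deployment files simply list one artifact per line (an assumption; check the files shipped with your delivery), deployment.standalone.properties would contain the standalone column in the order shown above:

    # deployment.standalone.properties (sketch; order matters)
    ArticleQueues-jms.xml
    CustomerQueues-jms.xml
    OrderQueues-jms.xml
    bakery.base-app-2.2.0.0.ear
    bakery.control-app-2.2.0.0.ear
    bakery.process-app-2.2.0.0.ear
    bakery.impex-app-2.2.0.0.ear
    bakery.communication-app-2.2.0.0.ear
    bakery.omt-app-2.2.0.0.war
    gdpr-app-2.2.0.0.war
    oms.monitoring-app-2.2.0.0.war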
Only for new installations
Define a dedicated OS user for the PostgreSQL service (referred to as the "postgres OS user" in the following descriptions).
The PostgreSQL data directory contains all of the data files for the database. The variable PGDATA is used to reference this directory. It must be prepared prior to the initialization and belongs exclusively to the postgres OS user.
You should not use this directory for private data, nor add symbolic links into it. You will probably want an extra file system for it, built on RAID and battery-backed. Just make sure not to use the file system's root folder for the data directory and to keep the Postgres major version within the path.
This will facilitate maintenance and Postgres major upgrades, e.g., /iomdata/pg_9.6/data.
This directory must belong to the postgres OS user.
    # as root
    mkdir -p /iomdata/pg_9.6/data
    chown <postgres OS user>:<group> /iomdata/pg_9.6/data

    # as the postgres OS user
    # add PGDATA=/iomdata/pg_9.6/data to the user environment
The initdb step of the standard installation process needs special consideration in order to work for one of the IOM's databases. initdb will create a new PostgreSQL database cluster and its superuser (see https://www.postgresql.org/docs/9.6/static/app-initdb.html).
It must be called as the postgres OS user.
There are a few options to choose during a Postgres initialization for IOM:
Make sure to use a UTF8 encoding. Depending on your operating system, you may need to replace the string "UTF8" with "UTF-8" (in all places).
Note
No change of encoding
This parameter cannot be changed after the cluster initialization.
The command to perform initdb may change according to the OS, the Postgres version, and the way you installed it.
For YUM installations refer to YUM_Installation.
Examples:
    # as the postgres OS user
    # Postgres 9.6, YUM installation on Red Hat 7
    # for more info type /usr/pgsql-9.6/bin/postgresql96-setup --help
    export PGSETUP_INITDB_OPTIONS="--encoding=UTF8 --locale=en_US.UTF-8 --data-checksums -U postgres -W"
    /usr/pgsql-9.6/bin/postgresql96-setup initdb postgresql-9.6

    # without YUM:
    ...../pgsql-9.6/bin/initdb --encoding UTF8 --locale=en_US.UTF-8 --data-checksums -D /iomdata/pg_9.6/data -U postgres -W
The access permissions must be defined in $PGDATA/pg_hba.conf.
Use md5 as auth-method to prevent passwords from being sent in clear text across the connection.
Also, you cannot use ident for TCP/IP connections; otherwise the JDBC driver connection from IOM to the database will not work. See https://www.postgresql.org/docs/9.6/static/auth-pg-hba-conf.html for details.
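For example, an entry allowing the IOM database user to connect from an application server subnet with md5 authentication might look like this (database, user, and address are placeholders to be replaced with your values):

    # TYPE  DATABASE  USER      ADDRESS          METHOD
    host    oms_db    oms_user  192.168.1.0/24   md5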
The ideal configuration depends mainly on the server resources and on the activity; hence we can only give some general guidelines. The configuration ranges indicated below may not be applicable in all cases, especially on small systems. These values are intended for a mid-size system with about 32 GB RAM and 24 cores.
To achieve best performance, almost all of the data (tables and indexes) required for the ongoing workload should be able to reside within the file system cache. Monitoring the I/O activity will help to identify insufficient memory resources.
The IOM is built on Hibernate as the API between the application logic and the database. This results mainly in strong OLTP activity with a large number of tiny SQL statements. Larger statements occur during import/export jobs and for some OMT search requests.
The following main parameters in $PGDATA/postgresql.conf should be adapted. See https://www.postgresql.org/docs/9.6/static/runtime-config-resource.html.
You can consider http://www.pgconfig.org/ as a guideline (using the OLTP model).
Some aspects of data reliability are discussed at https://www.postgresql.org/docs/9.6/static/wal.html. Understanding vacuum is also essential when configuring/monitoring Postgres: https://www.postgresql.org/docs/9.6/static/routine-vacuuming.html.
Parameter | Description |
---|---|
max_connections | The number of concurrent connections from the application is controlled by the xa-datasource configuration in WildFly. |
max_prepared_transactions | Required for IOM installations. Set its value to about 150% of max_connections. |
shared_buffers | Between 1/4 and 1/3 of the total RAM, but not more than about 8 GB, otherwise the cache management will use too many resources. The remaining RAM is more valuable as file system cache. |
work_mem | Higher work_mem can increase performance significantly. The default is way too low. Consider using 100-400 MB. |
maintenance_work_mem | Increase the default similarly to work_mem to favor quicker vacuums. With IOM this parameter will be used almost exclusively for this task (unless you also set autovacuum_work_mem). Consider something like 2% of your total RAM per autovacuum_max_workers, e.g., 32 GB RAM * 2% * 3 workers = 2 GB. |
vacuum_cost_* | The feature can stay disabled at the beginning. You should keep an eye on the vacuum activity under high load. |
wal_level | Depends on your backup, recovery and fail over strategy, should be at least archive |
wal_sync_method | Depends on your platform, check https://www.postgresql.org/docs/9.6/static/runtime-config-wal.html#GUC-WAL-SYNC-METHOD |
checkpoint_segments | 8 (small system) - 64 (large system) |
checkpoint_completion_target | Use 0.8 or 0.9 |
archive_* and REPLICATION | Depends on your backup & fail over strategy |
random_page_cost | The default (4) is usually too high. Better choose 2.5 or 3. |
effective_cache_size | Indicates the expected size of the file system cache. On a dedicated server: should be about total_RAM - shared_buffers - 1GB. |
log_min_duration_statement | Set it between 1 and 5 seconds to help track long running queries. |
log_filename | Better use an explicit name to help when communicating. E.g.: pg-IOM_host_port-%Y%m%d_%H%M.log |
log_rotation_age | 60 min or less |
log_line_prefix | Better use a more verbose format than the default. E.g.: %m|%a|%c|%p|%u|%h| |
log_lock_waits | Activate it (=on) |
stats_temp_directory | Better redirect it to a RAM disk |
log_autovacuum_min_duration | Set it to a few seconds to monitor the vacuum activity. |
idle_in_transaction_session_timeout | (Postgres 9.6 only) Set it to a large value, e.g., 9 hours, to clean up possible left-over sessions. An equivalent parameter exists for the WildFly connection pool, where it is set to 3 hours by default. |
timezone | Must match the timezone of the application servers, e.g., Europe/Berlin. |
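As a starting point for the mid-size system described above (about 32 GB RAM, 24 cores), the recommendations from the table might translate into postgresql.conf settings like the following sketch (max_connections = 100 is an assumption; align it with the WildFly xa-datasource pool and tune everything against your actual workload):

    # postgresql.conf (sketch for ~32 GB RAM, derived from the guidelines above)
    max_connections = 100                      # assumption; align with the xa-datasource configuration
    max_prepared_transactions = 150            # ~150% of max_connections
    shared_buffers = 8GB                       # 1/4 to 1/3 of RAM, capped at ~8 GB
    work_mem = 200MB
    maintenance_work_mem = 2GB                 # ~2% of RAM per autovacuum worker
    wal_level = archive
    checkpoint_completion_target = 0.9
    random_page_cost = 2.5
    effective_cache_size = 23GB                # ~ total RAM - shared_buffers - 1 GB
    log_min_duration_statement = 2s
    log_rotation_age = 60min
    log_lock_waits = on
    log_autovacuum_min_duration = 5s
    idle_in_transaction_session_timeout = 9h   # Postgres 9.6 only
    timezone = 'Europe/Berlin'                 # must match the application servers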
The following steps describe the setup of a new database.
In order to perform the next steps, you need to be able to use the psql command and to access the database server as its superuser (usually postgres).
    su - <postgres OS user>

    # set variables
    IS_OMS_DB_HOST=<first host from is.oms.db.hostlist>
    IS_OMS_DB_PORT=<port of first host from is.oms.db.hostlist>

    # connect to the database server as the super user
    psql -U postgres -h $IS_OMS_DB_HOST -p $IS_OMS_DB_PORT -d postgres
Note
IOM connects with its own dedicated database user.
Create user
    -- create user
    CREATE USER "<value of your is.oms.db.user>" PASSWORD '<value of your is.oms.db.pass>';
The database initialization dump does not expect a given tablespace; all objects will be placed in the user's default tablespace. When your data directory is located on an adequate file system, you can keep the Postgres default tablespace, which is located in $PGDATA/base. If you want to define a dedicated tablespace (i.e., on a dedicated file system), you should:
Set it as default for the user and for the database prior to using the provided initialization dump (provided it has been created):
    -- set default table space
    ALTER USER "<value of your is.oms.db.user>" SET default_tablespace = 'tblspc_iom';
Also see Postgres table spaces: http://www.postgresql.org/docs/9.6/static/sql-createtablespace.html
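If you decide on a dedicated tablespace, it has to exist before it can be set as a default. A sketch (the name tblspc_iom is taken from the example above; the directory is a placeholder and must belong to the postgres OS user):

    -- create the dedicated tablespace (directory must already exist)
    CREATE TABLESPACE tblspc_iom LOCATION '/iomdata/pg_9.6/tblspc_iom';

    -- once the database exists (see below), set the default there as well
    ALTER DATABASE "<value of your is.oms.db.name>" SET default_tablespace = 'tblspc_iom';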
IOM uses its own dedicated database.
Create the database.
    -- create database
    CREATE DATABASE "<value of your is.oms.db.name>"
      WITH OWNER = "<value of your is.oms.db.user>"
      ENCODING = 'UTF8'
      TABLESPACE = pg_default -- or your dedicated tablespace
      LC_COLLATE = 'en_US.UTF-8'
      LC_CTYPE = 'en_US.UTF-8'
      CONNECTION LIMIT = -1;
Set the database search_path.
    -- set search_path
    ALTER DATABASE "<value of your is.oms.db.name>"
      SET search_path = customer, oms, omt, product, system, admin, public, "$user";
Exit the psql console.
    -- exit console
    postgres=# \q
Intershop strongly recommends creating a dedicated group and user for the IOM installation on the host system. Since different servers (frontend and backend) might run on the same machine under the same user, it is recommended to place the user's home directory outside the IOM installation. The following script uses the same value for the user name and the group name.
    # as root
    # set variables
    # default: oms
    # use "oms" if you do not want to customize the installation layout
    OMS_USER=<name of user to own and run IOM>
    # use "/home/$OMS_USER" as home directory
    OMS_USER_HOME=<home directory of OMS user>

    # add group
    groupadd $OMS_USER

    # add user; the user's home is created at the given location
    useradd -g $OMS_USER -d $OMS_USER_HOME -m $OMS_USER

    # set password
    passwd $OMS_USER
Log in as root.
Create directory $OMS_HOME, set owner and group.
    # as root
    # use "/opt/$OMS_USER.$OMS_SERVER_TYPE" if you do not want to customize the installation layout
    OMS_HOME=<installation directory of IOM>

    mkdir -p $OMS_HOME
    chown $OMS_USER:$OMS_USER $OMS_HOME
Place and extract the IOM release package in $OMS_HOME.
    # as $OMS_USER at directory $OMS_HOME
    # extract the IOM release package
    tar -xvzf IOM-2.2.0.0.tgz
This creates the main directories: etc, var, bin, lib, application etc.
For the following tasks the local environment of IOM has to be defined in $OMS_HOME/etc/installation.properties. See installation.properties for more information.
According to the Linux File System Hierarchy Standard, the OMS installation will be distributed over different file systems. The variable $OMS_VAR and the command line switch --etc of integrate.sh define the directories where the according parts of OMS are placed. There is no need to change the default values unless you want to adapt the installation layout to your own needs.
Prepare the directories $OMS_VAR and $ETC_TARGET before distributing OMS to these locations. Set the placeholders to the values defined in installation.properties.
    # as root at $OMS_HOME
    # read installation.properties
    . etc/installation.properties

    # use "/etc/opt/$OMS_USER.$OMS_SERVER_TYPE" if you do not want to customize the installation layout
    ETC_TARGET=<directory, where to place etc-data>

    # prepare the directories
    mkdir -p $ETC_TARGET
    chown $OMS_USER:$OMS_USER $ETC_TARGET
    mkdir -p $OMS_VAR
    chown $OMS_USER:$OMS_USER $OMS_VAR
Distribute the release package parts by running the integrate.sh script.
    # as $OMS_USER at $OMS_HOME
    # set variable
    # use "/etc/opt/$OMS_USER.$OMS_SERVER_TYPE" if you do not want to customize the installation layout
    ETC_TARGET=<directory, where to place etc-data>

    # copy etc- and var-data to configured locations
    bin/integrate.sh --etc="$ETC_TARGET"
This will copy the relevant parts to the target directories, defined by the --etc=... command line parameter and by $OMS_VAR as defined in installation.properties. It also creates symlinks from $OMS_HOME/etc -> $ETC_TARGET as well as from $OMS_HOME/var -> $OMS_VAR.
Only for new installations
Import the initial dump for IOM, which contains the basic necessary database configuration. This dump can be found in the IOM delivery package at $OMS_HOME/postgres/dumps.
Edit etc/cluster.properties as $OMS_USER if you do not want to use the default PostgreSQL configuration. You do not need to make any changes at the moment if you want to use a PostgreSQL database at localhost which was created with the default configuration.
The import is done using the following command.
    # as $OMS_USER at $OMS_HOME
    # setup environment
    . bin/set_env.sh

    # unzip and install initial data dump
    gunzip -c postgres/dumps/OmsDB.initial.2.2.0.0.sql.gz | psql -U $PGUSER -h $PGHOST -p $PGPORT -d $PGDATABASE
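A quick sanity check after the import is to list the schemas created by the dump; the expected schema names follow from the search_path configured earlier (customer, oms, omt, product, system, admin):

    # as $OMS_USER at $OMS_HOME, with the environment from set_env.sh
    # list the schemas of the freshly imported database
    psql -U $PGUSER -h $PGHOST -p $PGPORT -d $PGDATABASE -c '\dn'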
Use the Guide - IOM Database Migration (2.0 - 2.17) if you have to migrate the database while migrating from an older version of IOM.
Create directory $JBOSS_HOME, set owner and group.
    # as root
    # set variables according to the settings in installation.properties
    JBOSS_HOME=<installation location of WildFly>
    OMS_USER=<name of user to own and run IOM>

    mkdir -p $JBOSS_HOME
    chown $OMS_USER:$OMS_USER $JBOSS_HOME
As $OMS_USER unpack the downloaded archive into $JBOSS_HOME:
    # as $OMS_USER at $OMS_HOME
    # setup environment
    . bin/set_env.sh

    # set variable
    WILDFLY_VERSION=9.0.2.Final

    # extract wildfly package to $JBOSS_HOME
    cd $JBOSS_HOME
    tar -xzf /tmp/wildfly-$WILDFLY_VERSION.tar.gz
    ( cd wildfly-$WILDFLY_VERSION; mv * .[a-zA-Z0-9]* .. )
    rmdir wildfly-$WILDFLY_VERSION
If you install WildFly exactly this way, no further changes in installation.properties are required. Otherwise you have to adapt JBOSS_HOME.
Configure installation.properties according to the intended server type.

Standalone server:

- Set JBOSS_BIND_ADDRESS in installation.properties to an IP address which is reachable by the browser.
- Set OMS_SERVER_TYPE in installation.properties to "standalone".

Backend server:

- Set JBOSS_BIND_ADDRESS in installation.properties to an IP address which is reachable by the frontend application servers.
- Set OMS_SERVER_TYPE in installation.properties to "backend".

Frontend server:

- Set JBOSS_BIND_ADDRESS in installation.properties to an IP address which is reachable by the browser (or the load balancer, if you plan to use more than one frontend server).
- Set JBOSS_PORT_OFFSET in installation.properties. E.g., when setting the variable to 100, all port numbers are increased by 100.
- Set OMS_SERVER_TYPE in installation.properties to "frontend".
- Set /system-property=is.oms.jms.hostlist to list all IP/port combinations of all backend servers.

The IOM release provides a systemd-unit template to install WildFly as a service. The expand_template.sh script uses the environment, hence the information stored in installation.properties, to fill the template. Before expanding the template, make sure you have updated installation.properties. At least the variable JAVA_HOME needs to be adapted.
Since different IOM application servers might run on a single machine, the service name for every server has to be unique. To fulfill the requirements of an HA-node installation (see Guide - Intershop Order Management - Technical Overview), the services should be named after the server types.
Expand the systemd unit template with the current configuration.
    # as $OMS_USER
    # setup environment
    . bin/set_env.sh

    # expand systemd-unit template
    expand_template.sh < $OMS_ETC/jboss-as.service.template > /tmp/jboss-as-$OMS_SERVER_TYPE.service
Install WildFly as a service.
    # as root
    # set server type
    OMS_SERVER_TYPE=<server type of IOM>

    # copy expanded template
    cp /tmp/jboss-as-$OMS_SERVER_TYPE.service /etc/systemd/system

    # enable service
    systemctl enable jboss-as-$OMS_SERVER_TYPE

    # start service
    systemctl start jboss-as-$OMS_SERVER_TYPE
For the following steps, the WildFly application server needs to be running. WildFly will be configured for usage with IOM.
Create the admin user and configure WildFly.
    # as $OMS_USER in $OMS_HOME
    # setup environment
    . bin/set_env.sh

    # create admin user for WildFly management
    add-user.sh -u $JBOSS_ADMIN_USER -p $JBOSS_ADMIN_PASSWD

    # load initial configuration of IOM
    jboss-cli.sh -u=$JBOSS_ADMIN_USER -p=$JBOSS_ADMIN_PASSWD --controller=$JBOSS_BIND_ADDRESS:$JBOSS_MGMT_PORT -c --file="$OMS_ETC/initSystem.std.$OMS_SERVER_TYPE.cli"

    # set enhanced standard properties in WildFly
    cat $OMS_ETC/system.std.$OMS_SERVER_TYPE.properties | update_properties.sh --jboss-cli="jboss-cli.sh -u=$JBOSS_ADMIN_USER -p=$JBOSS_ADMIN_PASSWD -c --controller=$JBOSS_BIND_ADDRESS:$JBOSS_MGMT_PORT"

    # load $OMS_ETC/cluster.properties
    cat $OMS_ETC/cluster.properties | update_properties.sh --jboss-cli="jboss-cli.sh -u=$JBOSS_ADMIN_USER -p=$JBOSS_ADMIN_PASSWD -c --controller=$JBOSS_BIND_ADDRESS:$JBOSS_MGMT_PORT"

    # configure JMS load balancing (frontend servers only)
    if [ $OMS_SERVER_TYPE = "frontend" ]; then
        configure_jms_load_balancing.sh --jboss-cli="jboss-cli.sh -u=$JBOSS_ADMIN_USER -p=$JBOSS_ADMIN_PASSWD -c --controller=$JBOSS_BIND_ADDRESS:$JBOSS_MGMT_PORT"
    fi
Restart WildFly application server.
    # as root
    # restart application server
    systemctl restart jboss-as-$OMS_SERVER_TYPE
For the following steps, the WildFly application server and the PostgreSQL database need to be running. All deployment artifacts listed in $OMS_ETC/deployment.$OMS_SERVER_TYPE.properties will be deployed into the WildFly application server.
Deploy all artifacts defined by deployment.$OMS_SERVER_TYPE.properties.
    # as $OMS_USER in $OMS_HOME
    # setup environment
    . bin/set_env.sh

    # deploy all artifacts defined by deployment.properties
    deploy.sh --jboss-cli="jboss-cli.sh -u=$JBOSS_ADMIN_USER -p=$JBOSS_ADMIN_PASSWD -c --controller=$JBOSS_BIND_ADDRESS:$JBOSS_MGMT_PORT"
Check whether all artifacts have been deployed successfully:
    # as $OMS_USER in $OMS_HOME
    # setup environment
    . bin/set_env.sh

    # get deployment status
    jboss-cli.sh -u=$JBOSS_ADMIN_USER -p=$JBOSS_ADMIN_PASSWD -c --controller=$JBOSS_BIND_ADDRESS:$JBOSS_MGMT_PORT "deployment-info"
Example output for standalone server:
    NAME                                  RUNTIME-NAME                          PERSISTENT ENABLED STATUS
    ArticleQueues-jms.xml                 ArticleQueues-jms.xml                 true       true    OK
    CustomerQueues-jms.xml                CustomerQueues-jms.xml                true       true    OK
    OrderQueues-jms.xml                   OrderQueues-jms.xml                   true       true    OK
    bakery.base-app-2.2.0.0.ear           bakery.base-app-2.2.0.0.ear           true       true    OK
    bakery.communication-app-2.2.0.0.ear  bakery.communication-app-2.2.0.0.ear  true       true    OK
    bakery.control-app-2.2.0.0.ear        bakery.control-app-2.2.0.0.ear        true       true    OK
    bakery.impex-app-2.2.0.0.ear          bakery.impex-app-2.2.0.0.ear          true       true    OK
    bakery.omt-app-2.2.0.0.war            bakery.omt-app-2.2.0.0.war            true       true    OK
    bakery.process-app-2.2.0.0.ear        bakery.process-app-2.2.0.0.ear        true       true    OK
    oms.monitoring-app-2.2.0.0.war        oms.monitoring-app-2.2.0.0.war        true       true    OK
    postgresql-jdbc4                      postgresql-jdbc4                      true       true    OK
If you operate IOM as a highly available system, the WildFly application server must not run directly as a system service. Instead, the IOM watchdog has to run as a system service, starting and stopping the WildFly application server depending on health checks made on the application server (see Guide - Intershop Order Management - Technical Overview).
Since different IOM application servers may run on a single machine, the service name for every server has to be unique. To fulfill the requirements of an HA-node installation (see Guide - Intershop Order Management - Technical Overview), the services should be named after the server types.
Depending on the server type the watchdog has to control (backend or frontend server), $OMS_ETC/watchdog.properties requires different settings. For backend servers failover has to be enabled; for frontend servers this feature must not be enabled. Set the property watchdog.failover.enabled to true for backend servers and to false for frontend servers. For more information see Guide - Intershop Order Management - Technical Overview.
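A minimal sketch of the relevant part of $OMS_ETC/watchdog.properties for a backend server (only the properties mentioned in this guide are shown; further properties may exist in the delivered file):

    # watchdog.properties (sketch for a backend server)
    watchdog.failover.enabled=true                 # must be false on frontend servers
    watchdog.cycle=<seconds between health checks> # keep larger than is_oms_healthcheck_cachelivetime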
The IOM release provides a systemd-unit template to install the IOM watchdog as a service. The expand_template.sh script uses the environment, hence the information stored in installation.properties, to fill the template. Before expanding the template, make sure you have updated installation.properties. At least the variable OMS_HOME needs to be up to date.
Expand the systemd unit template with the current configuration.
    # as $OMS_USER
    # setup environment
    . bin/set_env.sh

    # expand systemd-unit template
    expand_template.sh < $OMS_ETC/oms-watchdog.service.template > /tmp/jboss-as-$OMS_SERVER_TYPE.service
Install IOM watchdog as a service.
    # as root
    # fill the variable according to the settings in installation.properties
    OMS_SERVER_TYPE=<server type of IOM>

    # copy expanded template
    cp /tmp/jboss-as-$OMS_SERVER_TYPE.service /etc/systemd/system

    # enable service
    systemctl enable jboss-as-$OMS_SERVER_TYPE

    # start service
    systemctl start jboss-as-$OMS_SERVER_TYPE
The subsystem undertow is configured to write access logs to $OMS_VAR/log. It is only able to provide a daily rotation of logs. If you want to provide access logs to ICI (Intershop Commerce Insight) in order to get a performance analysis, you need to use hourly rotation of logs. To overcome this limitation of the undertow rotation feature, OMS provides a simple shell script to rotate the logs: bin/logrotate.sh.
If you use the default configuration of watchdog.properties and you want a logfile rotation for watchdog.log analogous to access_log.log, you should add watchdog.log to the logrotate.sh call, too.
This script has to be executed at the beginning of every hour by adding the following line to the crontab of $OMS_USER:
    0 * * * * . $OMS_HOME/bin/set_env.sh && logrotate.sh $OMS_LOG/access_log.log $OMS_LOG/watchdog.log
The following steps illustrate the setup of the Pure-FTPd server on a RedHat based Linux system.
    # as root
    # install pure-ftpd
    yum install pure-ftpd
In the file /etc/pure-ftpd/pure-ftpd.conf, uncomment the following line, in case it is commented out by default:
    PureDB /etc/pure-ftpd/pureftpd.pdb
    # as root
    # install pure-ftpd as a service
    cp /usr/lib/systemd/system/pure-ftpd.service /etc/systemd/system/
    systemctl enable pure-ftpd
    systemctl start pure-ftpd
The FTP accounts can be set up on the same host as IOM is installed on, or on separate hosts. If you use separate hosts, you have to create the operating system user and the directory structure first.
The following code snippets show the preparation of directories. This applies if the FTP server is running on the same machine as the IOM application server.
    # as $OMS_USER at $OMS_HOME
    # set environment
    . bin/set_env.sh

    # create home folder for virtual user
    mkdir $OMS_VAR/pdfhost

    # as root
    # set variables
    # use "/var/opt/$OMS_USER.$OMS_SERVER_TYPE" for OMS_VAR if you do not want to customize the installation layout
    IS_OMS_PDF_USER=<value of your is.oms.pdf.user>
    OMS_USER=<value of your OMS_USER>
    OMS_VAR=<value of your OMS_VAR>

    # create virtual user for the pdfhost
    pure-pw useradd $IS_OMS_PDF_USER -u $OMS_USER -d $OMS_VAR/pdfhost
    pure-pw mkdb /etc/pure-ftpd/pureftpd.pdb
    # as $OMS_USER at $OMS_HOME
    # set environment
    . bin/set_env.sh

    # create home folder for virtual user
    mkdir $OMS_VAR/mediahost

    # as root
    # set variables
    # use "/var/opt/$OMS_USER.$OMS_SERVER_TYPE" for OMS_VAR if you do not want to customize the installation layout
    IS_OMS_MEDIA_USER=<value of your is.oms.media.user>
    OMS_USER=<value of your OMS_USER>
    OMS_VAR=<value of your OMS_VAR>

    # create virtual user for the mediahost
    pure-pw useradd $IS_OMS_MEDIA_USER -u $OMS_USER -d $OMS_VAR/mediahost
    pure-pw mkdb /etc/pure-ftpd/pureftpd.pdb
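You can verify that the virtual users have been created correctly with pure-pw:

    # as root
    # show the settings of the virtual FTP users
    pure-pw show <value of your is.oms.pdf.user>
    pure-pw show <value of your is.oms.media.user>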