This guide is addressed to administrators who want to install IOM 2.1.x in their Linux-based infrastructure. It enables them to understand which artifacts are contained in the IOM 2.1 delivery and how they are installed, configured, and deployed.
The document describes how to install IOM 2.1.x. It makes no difference whether the host is a front-end server, a back-end server, or a single-server installation.
For a technical overview of typical installation scenarios, please see the references.
Wording | Description |
---|---|
CLI | Command Line Interface. A tool for WildFly management. |
FTP | File Transfer Protocol |
HA | High availability |
ICI | The abbreviation for Intershop Commerce Insight, Intershop's reporting and analytics solution. |
ICM | The abbreviation for Intershop Commerce Management. |
IOM | The abbreviation for Intershop Order Management. |
JBoss | Synonym for WildFly (former name of the WildFly application server) |
JDBC | Java Database Connectivity |
JDK | Java Development Kit |
OLTP | Online transaction processing |
OMS | The abbreviation for Order Management System, the technical name of IOM. |
OS | Operating System |
URL | Uniform Resource Locator |
WildFly | The application server that IOM runs on. Formerly known as JBoss. |
The WildFly application server hosting and running IOM requires an installed Java development kit (JDK) of at least version 8.
The JAVA_HOME global environment variable has to point to the installation directory of the JDK.
Note
JAVA_HOME will be covered in installation.properties. The PATH will be set automatically by set_env.sh.
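To verify the prerequisite quickly, the following commands print the installed JDK version and the current value of JAVA_HOME (this assumes the JDK's bin directory is already on the PATH):

```
# verify the installed JDK (must be version 8 or later)
java -version
# verify that JAVA_HOME points to the JDK installation directory
echo "$JAVA_HOME"
```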
The IOM requires an existing mail server that processes internal and external emails sent from IOM via the SMTP protocol.
The server host and port need to be known for the later configuration.
The IOM requires Pure-FTPd, as it supports virtual users. The user credentials used for accessing the FTP-server differ from the credentials used for accessing the file system.
The IOM requires a PostgreSQL database hosted by a PostgreSQL database server, which can reside on its own host.
To make the database server fit for IOM, certain configuration steps in a standard installation are necessary. Setup and initialization are described in section Database Setup and Initialization.
Intershop does not offer PostgreSQL support beyond general recommendations for its use as the relational database for IOM.
A list of companies offering professional PostgreSQL support can be found at www.postgresql.org/support/professional_support/.
The PostgreSQL community also has some excellent mailing lists, pgsql-general being the most active: http://www.postgresql.org/list/.
The IOM only supports PostgreSQL servers hosted on a Linux-based OS.
PostgreSQL 9.5.x or later is required.
The list of possible incompatibilities with 9.5.x is rather short and should not affect the IOM: http://www.postgresql.org/docs/9.5/static/release-9-5.html.
The IOM uses three major property files which are explained below.
The local environment of IOM is defined in $OMS_ETC/installation.properties.
Note
The file installation.properties defines shell variables, which are read by $OMS_HOME/bin/set_env.sh to provide the environment for all scripts and programs belonging to the IOM system.
$OMS_HOME/bin/set_env.sh provides the content of installation.properties as simple shell variables. Additionally, it adds some exported variables, e.g., PATH, and some variables required by 3rd party programs (e.g., the content of JBOSS_JAVA_OPTS will be exported as JAVA_OPTS to be available for standalone.sh).
Variable name | Description | Default/Exemplary Value |
---|---|---|
OMS_USER | The OS user that installs and runs IOM | oms |
OMS_HOME | The base location of the extracted IOM release package. set_env.sh adds $OMS_HOME/bin to PATH. OMS_HOME is exported by set_env.sh. OMS_HOME is passed to WildFly and can be accessed there as ${installation.OMS_HOME}. | /opt/$OMS_USER |
OMS_ETC | Set by set_env.sh implicitly to the directory where installation.properties is located. OMS_ETC is exported by set_env.sh. | |
OMS_VAR | The location of operational data and log files for IOM. OMS_VAR is passed to WildFly and can be accessed there as ${installation.OMS_VAR}. Note for version >= 2.1.1.0: OMS_VAR is exported by set_env.sh. | /var/opt/$OMS_USER |
OMS_LOG | Note for version >= 2.1.1.0: The location of logs written by WildFly, IOM, and scripts. OMS_LOG is passed to WildFly and can be accessed there as ${installation.OMS_LOG}. | $OMS_VAR/log |
OMS_APP | The location of IOM artifacts deployable into the application server. A list of directories can be passed here; entries have to be separated by ":". | $OMS_HOME/application:$OMS_VAR/customization |
HOSTNAME | Value is added to log-entries. If left empty, set_env.sh falls back to $(hostname). HOSTNAME is exported by set_env.sh. HOSTNAME is passed to WildFly and can be accessed there as ${installation.HOSTNAME}. | $(hostname) |
JAVA_HOME | The location of the JDK that WildFly uses to run. set_env.sh adds $JAVA_HOME/bin to PATH. JAVA_HOME is exported by set_env.sh. | $OMS_HOME/java |
JBOSS_HOME | The installation location of the WildFly application server that IOM uses to run. set_env.sh adds $JBOSS_HOME/bin to PATH. JBOSS_HOME is exported by set_env.sh. | $OMS_HOME/wildfly |
JBOSS_BIND_ADDRESS | Bind address to be used for management- and public-interface. Note Change the IP if you do not want to bind JBoss on all interfaces. | 0.0.0.0 |
JBOSS_JAVA_OPTS | These Java options are used when the WildFly application server is started, and they are used by jboss-cli.sh too. set_env.sh appends $JBOSS_JAVA_OPTS to the predefined JAVA_OPTS. JAVA_OPTS is exported by set_env.sh. | -Xms512M -Xmx2048M |
JBOSS_ADMIN_USER | This is the name of the IOM WildFly user that will be created to manage the application server. Used to configure WildFly for IOM and for deployments of IOM artifacts. | omsadmin |
JBOSS_ADMIN_PASSWD | This is the password for the IOM WildFly user that is used to manage the application server. Please change the value. | not_yet_a_secret |
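For orientation, a minimal installation.properties sketch assembled from the default and exemplary values in the table above; it is illustrative only, adapt every value to your environment:

```
# $OMS_ETC/installation.properties (illustrative, values from the table above)
OMS_USER=oms
OMS_HOME=/opt/$OMS_USER
OMS_VAR=/var/opt/$OMS_USER
OMS_APP=$OMS_HOME/application:$OMS_VAR/customization
HOSTNAME=
JAVA_HOME=$OMS_HOME/java
JBOSS_HOME=$OMS_HOME/wildfly
JBOSS_BIND_ADDRESS=0.0.0.0
JBOSS_JAVA_OPTS="-Xms512M -Xmx2048M"
JBOSS_ADMIN_USER=omsadmin
JBOSS_ADMIN_PASSWD=not_yet_a_secret   # change this value
```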
Cluster properties are WildFly system properties which define the machine-independent configuration of an IOM cluster.
These properties are located in $OMS_ETC/cluster.properties.
PostgreSQL related properties are read by set_env.sh and exported as environment variables.
Adjust cluster.properties to the real values used by your OMS cluster. For example, you have to enter the database access information in order to enable the IOM application server to access the database.
Property | Description | Exemplary Value |
---|---|---|
is.oms.db.host | Name of the PostgreSQL host to connect to. Exported by set_env.sh as PGHOST. | localhost |
is.oms.db.port | Port number to connect to at the PostgreSQL host. Exported by set_env.sh as PGPORT. | 5432 |
is.oms.db.name | Database name to connect to at the PostgreSQL server. Exported by set_env.sh as PGDATABASE. | oms_db |
is.oms.db.user | PostgreSQL user name to connect as. Exported by set_env.sh as PGUSER. | oms_user |
is.oms.db.pass | Password to be used when connecting to the PostgreSQL server. Exported by set_env.sh as PGPASSWORD. | OmsDB |
is.oms.db.cache | Enable/disable database cache Only values enabled and disabled are allowed. A production system should always enable the use of the DB cache. | enabled |
is.oms.xmlbinder.cache | Use caching for JAXB-context while un/marshalling or validating XML files. Only values enabled and disabled are allowed. A production system should always enable the use of the JAXB context cache. | enabled |
is.oms.media.host | The host value for the FTP server mediahost | localhost |
is.oms.media.user | The user name to access FTP server mediahost | mediahost |
is.oms.media.pass | The password to access FTP server mediahost | mediahost |
is.oms.pdf.host | The host value for the FTP server pdfhost | localhost |
is.oms.pdf.user | The user name to access FTP server pdfhost | pdfhost |
is.oms.pdf.pass | The password to access FTP server pdfhost | pdfhost |
is.oms.jms.host | The host of the IOM back-end server. 127.0.0.1 can be used for a single machine installation. Otherwise the IP of the backend server, which is accessed by frontend servers, has to be used here. | 127.0.0.1 |
is.oms.jms.port | The port of the IOM back-end server, usually the HTTP port | 8080 |
is.oms.smtp.host | The host of the mail server IOM uses to send mail | localhost |
is.oms.smtp.port | The port of the mail server IOM uses to send mail | 25 |
is.oms.mail.external.from | The sender address for external mails (e.g., mails sent to the shop customers) | noreply@youraddress.com |
is.oms.mail.internal.from | The sender address for internal mails (e.g., to report errors via mail) | noreply@youraddress.com |
is.oms.mail.internal.to | The recipient for internal mails | operations@youraddress.com |
is.oms.mail.internal.cc | The carbon copy for internal mails | |
is.oms.mail.internal.bcc | The blind carbon copy for internal mails | |
is.oms.dir.var | The base path of the file system where IOM reads and writes its operational data. The default value references the value defined at installation.properties. | ${installation.OMS_VAR} |
is.oms.jboss.base.url | The publicly accessible base URL of IOM, which could be a DNS name of the load balancer etc. For ICM it is used at the IOM connector, e.g., for the return label service. | http://localhost:8080/ |
is.oms.validation.pattern.phone | Validation pattern for phone numbers. | (^$)|(^[+]?[0-9. ()/-]{8,25}$) |
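Analogously, a minimal cluster.properties sketch based on the exemplary values in the table above; illustrative only, adapt it to your cluster:

```
# $OMS_ETC/cluster.properties (illustrative, values from the table above)
is.oms.db.host=localhost
is.oms.db.port=5432
is.oms.db.name=oms_db
is.oms.db.user=oms_user
is.oms.db.pass=OmsDB
is.oms.db.cache=enabled
is.oms.xmlbinder.cache=enabled
is.oms.media.host=localhost
is.oms.media.user=mediahost
is.oms.media.pass=mediahost
is.oms.pdf.host=localhost
is.oms.pdf.user=pdfhost
is.oms.pdf.pass=pdfhost
is.oms.jms.host=127.0.0.1
is.oms.jms.port=8080
is.oms.smtp.host=localhost
is.oms.smtp.port=25
is.oms.mail.external.from=noreply@youraddress.com
is.oms.mail.internal.from=noreply@youraddress.com
is.oms.mail.internal.to=operations@youraddress.com
is.oms.dir.var=${installation.OMS_VAR}
is.oms.jboss.base.url=http://localhost:8080/
```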
Deployment properties define which artifacts of the IOM should be deployed to the WildFly application server.
The properties are located in $OMS_ETC/deployment.properties.
Adjust the file to only include the artifacts that need to be deployed on the application server. Please make sure to keep the order.
The table below shows the entries for all supported types of server:
Single server | Backend server | Frontend server |
---|---|---|
CacheTopic-jms.xml | CacheTopic-jms.xml | - |
ArticleQueues-jms.xml | ArticleQueues-jms.xml | - |
CustomerQueues-jms.xml | CustomerQueues-jms.xml | - |
OrderQueues-jms.xml | OrderQueues-jms.xml | - |
bakery.base-app-2.1.x.x.ear | bakery.base-app-2.1.x.x.ear | bakery.base-app-2.1.x.x.ear |
bakery.control-app-2.1.x.x.ear | bakery.control-app-2.1.x.x.ear | - |
bakery.process-app-2.1.x.x.ear | bakery.process-app-2.1.x.x.ear | - |
bakery.impex-app-2.1.x.x.ear | bakery.impex-app-2.1.x.x.ear | - |
bakery.communication-app-2.1.x.x.ear | - | bakery.communication-app-2.1.x.x.ear |
bakery.omt-app-2.1.x.x.war | - | bakery.omt-app-2.1.x.x.war |
Define a dedicated OS user for the PostgreSQL service (referred to as the "postgres OS user" in the following descriptions).
The PostgreSQL data directory contains all of the data files for the database. The variable PGDATA is used to reference this directory. It must be prepared prior to the initialization and belongs exclusively to the postgres OS user.
Do not use this directory for private data, nor add symbolic links into it. You will probably want a dedicated file system for it, built on RAID and battery-backed. Just make sure not to use its root folder for the data directory, and have the major version within the path.
This will facilitate maintenance and Postgres major upgrades, e.g., /iomdata/pg_9.5/data.
This directory must belong to the postgres OS user.
```
# as root
mkdir -p /iomdata/pg_9.5/data
chown <postgres OS user>:<group> /iomdata/pg_9.5/data

# as the postgres OS user
# add PGDATA=/iomdata/pg_9.5/data to the user environment
```
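The last comment above asks you to add PGDATA to the user environment; a minimal sketch, assuming the postgres OS user has a bash login shell that reads ~/.bash_profile:

```
# as the postgres OS user (assuming bash reads ~/.bash_profile on login)
echo 'export PGDATA=/iomdata/pg_9.5/data' >> ~/.bash_profile
```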
The initdb step of the standard installation process needs special consideration in order to work for an IOM database. initdb will create a new PostgreSQL database cluster and its superuser (see https://www.postgresql.org/docs/9.5/static/app-initdb.html).
It must be called as the postgres OS user.
There are a few options to choose during a Postgres initialization for IOM:
Make sure to use a UTF8 encoding. Depending on your operating system, you may need to replace the string "UTF8" with "UTF-8" (in all places).
Note
The encoding cannot be changed after the cluster initialization.
The command to perform initdb may change according to the OS, the Postgres version, and the way you installed it. For YUM installations refer to YUM_Installation.
Examples:
```
# as the postgres OS user
# Postgres 9.5, YUM installation on Red Hat 7
# for more info type /usr/pgsql-9.5/bin/postgresql95-setup --help
/usr/pgsql-9.5/bin/postgresql95-setup initdb postgresql-9.5 --encoding UTF8 --locale=en_US.UTF8 --data-checksums -D /iomdata/pg_9.5/data -U postgres -W

# without YUM:
...../pgsql-9.5/bin/initdb --encoding UTF8 --locale=en_US.UTF8 --data-checksums -D /iomdata/pg_9.5/data -U postgres -W
```
The access permissions must be defined in $PGDATA/pg_hba.conf.
Use md5 as auth-method to prevent passwords from being sent in clear text across the connection.
Also, you cannot use ident for TCP/IP connections, otherwise the JDBC driver connection from IOM to the database will not work. See https://www.postgresql.org/docs/9.5/static/auth-pg-hba-conf.html for details.
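For illustration, a pg_hba.conf entry matching the exemplary database name and user from cluster.properties; the address range is an assumption, replace it with the subnet your IOM application servers connect from:

```
# TYPE  DATABASE  USER      ADDRESS         METHOD
host    oms_db    oms_user  192.168.1.0/24  md5
```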
The ideal configuration depends mainly on the server resources and on the activity, hence we can only give some general guidelines. The configuration ranges indicated below may not be applicable in all cases, especially on small systems. These values are intended for a mid-size system with about 32 GB RAM and 24 cores.
To achieve the best performance, almost all of the data (tables and indexes) required for the ongoing work load should be able to reside within the file system cache. Monitoring the I/O activity will help to identify insufficient memory resources.
The IOM is built on Hibernate as the API between the application logic and the database. This results mainly in a strong OLTP activity with a large number of tiny SQL statements. Larger statements occur during import/export jobs and for some OMT search requests.
The following main parameters in $PGDATA/postgresql.conf should be adapted. See https://www.postgresql.org/docs/9.5/static/runtime-config-resource.html.
You can consider http://www.pgconfig.org/ as a guideline (using the OLTP Model).
Some aspects of data reliability are discussed at https://www.postgresql.org/docs/9.5/static/wal.html. Understanding vacuum is also essential when configuring/monitoring Postgres, see https://www.postgresql.org/docs/9.5/static/routine-vacuuming.html.
Parameter | Description |
---|---|
max_connections | The number of concurrent connections from the application is controlled by the xa-datasource configuration in WildFly |
max_prepared_transactions | Required for IOM installations. Set its value to about 150% of max_connections. |
shared_buffers | Between 1/4 and 1/3 of the total RAM, but not more than about 8 GB, otherwise the cache management will use too many resources. The remaining RAM is more valuable as file system cache. |
work_mem | Higher work_mem can increase performance significantly. The default is way too low. Consider using 100-400 MB. |
maintenance_work_mem | Increase the default similarly to work_mem to favor quicker vacuums. With IOM this parameter will be used almost exclusively for this task (unless you also set autovacuum_work_mem). Consider something like 2% of your total RAM per autovacuum_max_workers, e.g., 32 GB RAM * 2% * 3 workers = 2 GB. |
vacuum_cost_* | The feature can stay disabled at the beginning. You should keep an eye on the vacuum activity under high load. |
wal_level | Depends on your backup, recovery and fail over strategy, should be at least archive |
wal_sync_method | Depends on your platform, check https://www.postgresql.org/docs/9.5/static/runtime-config-wal.html#GUC-WAL-SYNC-METHOD |
checkpoint_segments | 8 (small system) - 64 (large system) |
checkpoint_completion_target | Use 0.8 or 0.9 |
archive_* and REPLICATION | Depends on your backup & fail over strategy |
random_page_cost | The default (4) is usually too high. Better choose 2.5 or 3. |
effective_cache_size | Indicates the expected size of the file system cache. On a dedicated server: should be about total_RAM - shared_buffers - 1GB. |
log_min_duration_statement | Set it between 1 and 5 seconds to help track long running queries. |
log_filename | Better use an explicit name to help when communicating. E.g.: pg-IOM_host_port-%Y%m%d_%H%M.log |
log_rotation_age | 60 min or less |
log_line_prefix | Better use a more verbose format than the default. E.g.: %m|%a|%c|%p|%u|%h| |
log_lock_waits | Activate it (=on) |
stats_temp_directory | Better redirect it to a RAM disk |
log_autovacuum_min_duration | Set it to a few seconds to monitor the vacuum activity. |
timezone | Must match the timezone of the application servers, e.g., Europe/Berlin |
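To tie the recommendations together, an illustrative postgresql.conf excerpt for the mid-size example above (about 32 GB RAM, 24 cores); the max_connections value is an assumption and must be aligned with your WildFly xa-datasource configuration:

```
max_connections = 100                 # assumption; align with the WildFly xa-datasource
max_prepared_transactions = 150       # about 150% of max_connections
shared_buffers = 8GB                  # 1/4 to 1/3 of RAM, capped at ~8 GB
work_mem = 200MB                      # within the recommended 100-400 MB
maintenance_work_mem = 2GB            # ~2% of total RAM * 3 autovacuum workers
wal_level = archive                   # at least archive
checkpoint_completion_target = 0.9
random_page_cost = 2.5
effective_cache_size = 23GB           # ~ total RAM - shared_buffers - 1 GB
log_min_duration_statement = 2s       # within the recommended 1-5 seconds
log_rotation_age = 60min
log_lock_waits = on
log_autovacuum_min_duration = 5s
timezone = 'Europe/Berlin'            # must match the application servers
```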
The following steps describe the setup of a new database.
In order to perform the next steps, you need to be able to use the psql command and be able to access the database server via its superuser (usually postgres).
```
su - <postgres OS user>

# set variables
IS_OMS_DB_HOST=<value of your is.oms.db.host>
IS_OMS_DB_PORT=<value of your is.oms.db.port>

# connect to the database server as the super user
psql -U postgres -h $IS_OMS_DB_HOST -p $IS_OMS_DB_PORT -d postgres
```
Note
IOM connects with its own dedicated database user.
Create user
```
-- create user
CREATE USER "<value of your is.oms.db.user>" PASSWORD '<value of your is.oms.db.pass>';
```
The database initialization dump does not expect a given tablespace; all objects will be placed in the user's default tablespace. When your data directory is located on an adequate file system, you can keep the Postgres default tablespace, which is located in $PGDATA/base. If you want to define a dedicated tablespace (i.e., on a dedicated file system), you should:
Set it as default for the user and for the database prior to using the provided initialization dump (provided the tablespace has been created):
```
-- set default table space
ALTER USER "<value of your is.oms.db.user>" SET default_tablespace = 'tblspc_iom';
```
Also see Postgres table spaces: http://www.postgresql.org/docs/9.5/static/sql-createtablespace.html
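A sketch of the tablespace creation that has to precede the ALTER USER above; the location path is an assumption, and the directory must already exist and belong to the postgres OS user:

```
-- create the dedicated tablespace (the location path is an assumption)
CREATE TABLESPACE tblspc_iom LOCATION '/iomdata/tblspc_iom';
```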
IOM uses its own dedicated database.
Create the database.
```
-- create database
CREATE DATABASE "<value of your is.oms.db.name>"
  WITH OWNER = "<value of your is.oms.db.user>"
  ENCODING = 'UTF8'          -- or 'UTF-8'
  TABLESPACE = pg_default    -- or your dedicated tablespace
  LC_COLLATE = 'en_US.UTF8'  -- or 'en_US.UTF-8'
  LC_CTYPE = 'en_US.UTF8'    -- or 'en_US.UTF-8'
  CONNECTION LIMIT = -1;
```
Set the database search_path.
```
-- set search_path
ALTER DATABASE "<value of your is.oms.db.name>" SET search_path = "$user", customer, oms, omt, product, public;
```
Exit the psql console.
```
-- exit console
postgres=# \q
```
The following steps illustrate the setup of the Pure-FTPd server on a RedHat based Linux system.
```
# as root: install pure-ftpd
yum install pure-ftpd
```
In the file /etc/pure-ftpd/pure-ftpd.conf, uncomment the following line, in case it is commented out by default:
PureDB /etc/pure-ftpd/pureftpd.pdb
```
# as root: install pure-ftpd as a service
cp /usr/lib/systemd/system/pure-ftpd.service /etc/systemd/system/
systemctl enable pure-ftpd
systemctl start pure-ftpd
```
The FTP accounts can be set up on the same host as IOM is installed on or on separate hosts. If you use separate hosts, you have to create the operating system user and the directory structure first.
The following code snippets show the preparation of the directories if the FTP server is running on the same machine as the IOM application server.
```
# as root: set variables
IS_OMS_PDF_USER=<value of your is.oms.pdf.user>
OMS_USER=<value of your OMS_USER>
OMS_VAR=<value of your OMS_VAR>

# as $OMS_USER: create home folder for virtual user
mkdir $OMS_VAR/pdfhost

# as root: create virtual user for the pdfhost
pure-pw useradd $IS_OMS_PDF_USER -u $OMS_USER -d $OMS_VAR/pdfhost
pure-pw mkdb
```
```
# as root: set variables
IS_OMS_MEDIA_USER=<value of your is.oms.media.user>
OMS_USER=<value of your OMS_USER>
OMS_VAR=<value of your OMS_VAR>

# as $OMS_USER: create home folder for virtual user
mkdir $OMS_VAR/mediahost

# as root: create virtual user for the mediahost
pure-pw useradd $IS_OMS_MEDIA_USER -u $OMS_USER -d $OMS_VAR/mediahost
pure-pw mkdb
```
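As an optional check, pure-pw can list the virtual users that were just created:

```
# as root: verify the virtual FTP users
pure-pw list
pure-pw show $IS_OMS_MEDIA_USER
```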
Intershop strongly recommends creating a dedicated group and user for the IOM installation on the host system. The following script uses the same value for the user name and the group name.
Create user credentials.
```
# as root: set variables
# default: oms
# use "oms" if you don't want to customize the installation layout
OMS_USER=<name of user to own and run IOM>
# default: /opt/$OMS_USER
# use "/opt/$OMS_USER" if you don't want to customize the installation layout
OMS_HOME=<directory where to install IOM>

# add group
groupadd $OMS_USER
# add user
useradd -g $OMS_USER -d $OMS_HOME -m $OMS_USER
# set password
passwd $OMS_USER
```
Place and extract the IOM release package in $OMS_HOME.
```
# as $OMS_USER at directory $OMS_HOME
# extract the IOM release package
tar -xvzf IOM-2.1.x.x.tgz
```
This creates the main directories: etc, var, bin, lib, application etc.
For the following tasks the local environment of IOM has to be defined in ~/etc/installation.properties. See installation.properties for more information.
You should add the following line to ~/.profile of $OMS_USER in order to have a readily prepared environment just after logging in:
. ~/bin/set_env.sh
According to the Linux File System Hierarchy Standard, the OMS installation will be distributed over different file systems. The variable $OMS_VAR and the command line switch --etc of integrate.sh define the directories where the according parts of OMS are placed. There is no need to change the default values unless you want to adapt the installation layout to your own needs.
Prepare the directories $OMS_VAR and $ETC_TARGET before distributing OMS to these locations. Set the placeholders to the values defined in installation.properties.
```
# as root: set variables
# default: oms
# use "oms" if you don't want to customize the installation layout
OMS_USER=<name of user to own and run IOM>
# default: /etc/opt/$OMS_USER
# use "/etc/opt/$OMS_USER" if you don't want to customize the installation layout
ETC_TARGET=<directory, where to place etc-data>
# default: /var/opt/$OMS_USER
# use "/var/opt/$OMS_USER" if you don't want to customize the installation layout
OMS_VAR=<directory, where to place var-data>

# prepare the directories
mkdir -p $ETC_TARGET
chown $OMS_USER:$OMS_USER $ETC_TARGET
mkdir -p $OMS_VAR
chown $OMS_USER:$OMS_USER $OMS_VAR
```
Distribute the release package parts by running the integrate.sh script as $OMS_USER.
```
# as $OMS_USER at $OMS_HOME
# read $OMS_USER defined at installation.properties
. etc/installation.properties

# set variable
# default: /etc/opt/$OMS_USER
# use "/etc/opt/$OMS_USER" if you don't want to customize the installation layout
ETC_TARGET=<directory, where to place etc-data>

# copy etc- and var-data to configured locations
bin/integrate.sh --etc="$ETC_TARGET"
```
This will copy the relevant parts to the target directory defined by the --etc=... command line parameter and to $OMS_VAR as defined in installation.properties. It also creates symlinks from $OMS_HOME/etc -> $ETC_TARGET as well as from $OMS_HOME/var -> $OMS_VAR.
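An optional check that the distribution worked as described (the expected symlink targets are $ETC_TARGET and $OMS_VAR):

```
# as $OMS_USER: verify the symlinks created by integrate.sh
ls -ld $OMS_HOME/etc $OMS_HOME/var
```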
Import the initial dump for IOM that will contain the basic necessary database configuration. This dump can be found in the IOM delivery package at $OMS_HOME/postgres/dumps.
Edit etc/cluster.properties as $OMS_USER if you don't want to use the default PostgreSQL configuration. You don't need to make any changes at the moment, if you want to use a PostgreSQL database at localhost, which was created with default configuration.
The import is done using the following command.
```
# as $OMS_USER at $OMS_HOME
# setup environment
. bin/set_env.sh

# unzip and install initial data dump
gunzip -c postgres/dumps/OmsDB.initial.2.1.x.x.sql.gz | psql -U $PGUSER -h $PGHOST -p $PGPORT -d $PGDATABASE
```
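As an optional check, list the schemas of the freshly initialized database; given the search_path set earlier, the schemas it references (customer, oms, omt, product) should now exist:

```
# as $OMS_USER: list the schemas of the initialized database
psql -U $PGUSER -h $PGHOST -p $PGPORT -d $PGDATABASE -c '\dn'
```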
Use the Guide - IOM Database Migration 2.1 if you need to migrate to version 2.1.x.
Get the latest version of WildFly 9 from http://wildfly.org/downloads/ (currently 9.0.2) as a tgz archive and place it in $OMS_HOME.
As $OMS_USER unpack the downloaded archive into $OMS_HOME and create a symlink to the directory wildfly:
```
# as $OMS_USER at $OMS_HOME
# set variable
WILDFLY_VERSION=9.0.2.Final

# extract archive and create symbolic link
tar -xf wildfly-$WILDFLY_VERSION.tar.gz && ln -s wildfly-$WILDFLY_VERSION wildfly && rm wildfly-$WILDFLY_VERSION.tar.gz
```
Note
If you install WildFly exactly this way, no further changes in installation.properties are required. Otherwise you have to adapt JBOSS_HOME.
The IOM release provides a systemd-unit template to install WildFly as a service. The expand_template.sh script uses the environment, hence the information stored in installation.properties, to fill the template. Before expanding the template, make sure you have updated installation.properties. At least the variable JAVA_HOME needs to be adapted.
Expand the systemd-unit template with the current configuration.
```
# as $OMS_USER
# setup environment
. bin/set_env.sh

# expand systemd-unit template
expand_template.sh < $OMS_ETC/jboss-as.service.template > /tmp/jboss-as.service
```
Install WildFly as a service.
```
# as root: copy expanded template
cp /tmp/jboss-as.service /etc/systemd/system
# enable service
systemctl enable jboss-as
# start service
systemctl start jboss-as
```
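An optional check that the service came up:

```
# as root: check the service state and recent log output
systemctl status jboss-as
```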
For the following steps, the WildFly application server needs to be running. WildFly will be configured for usage with IOM.
Create the admin user, the ejbuser and configure WildFly.
```
# as $OMS_USER
# setup environment
. bin/set_env.sh

# create admin user for WildFly management
add-user.sh -u $JBOSS_ADMIN_USER -p $JBOSS_ADMIN_PASSWD
# create ejbuser
add-user.sh -a -u ejbuser -p Evalolaku416

# load initial configuration of IOM
jboss-cli.sh -u=$JBOSS_ADMIN_USER -p=$JBOSS_ADMIN_PASSWD --controller=$JBOSS_BIND_ADDRESS -c < $OMS_ETC/initSystem.std.cli

# enhanced standard properties are set in the WildFly
update_properties.sh < $OMS_ETC/system.std.properties | jboss-cli.sh -u=$JBOSS_ADMIN_USER -p=$JBOSS_ADMIN_PASSWD -c --controller=$JBOSS_BIND_ADDRESS

# load $OMS_ETC/cluster.properties
update_properties.sh < $OMS_ETC/cluster.properties | jboss-cli.sh -u=$JBOSS_ADMIN_USER -p=$JBOSS_ADMIN_PASSWD -c --controller=$JBOSS_BIND_ADDRESS
```
If your release ships system.std.standalone.properties instead of system.std.properties, use that file in the corresponding step:

```
# enhanced standard properties are set in the WildFly
update_properties.sh < $OMS_ETC/system.std.standalone.properties | jboss-cli.sh -u=$JBOSS_ADMIN_USER -p=$JBOSS_ADMIN_PASSWD -c --controller=$JBOSS_BIND_ADDRESS
```
Restart WildFly application server.
```
# as root: restart application server
systemctl restart jboss-as
```
At this point, the deployment.properties file needs to be defined.
For the following steps, the WildFly application server and the PostgreSQL database need to be running. All deployment artifacts which are listed in $OMS_ETC/deployment.properties will be deployed into the WildFly application server.
Deploy all artifacts defined by deployment.properties.
```
# as $OMS_USER
# setup environment
. bin/set_env.sh

# deploy all artifacts defined by deployment.properties
deploy.sh --jboss-cli="jboss-cli.sh -u=$JBOSS_ADMIN_USER -p=$JBOSS_ADMIN_PASSWD -c --controller=$JBOSS_BIND_ADDRESS"
```
Check whether all artifacts have been deployed successfully:
```
# as $OMS_USER
# setup environment
. bin/set_env.sh

# get deployment status
jboss-cli.sh -u=$JBOSS_ADMIN_USER -p=$JBOSS_ADMIN_PASSWD -c --controller=$JBOSS_BIND_ADDRESS "deployment-info"
```
Example output for version 2.1.0.0:
```
NAME                                  RUNTIME-NAME                          PERSISTENT ENABLED STATUS
ArticleQueues-jms.xml                 ArticleQueues-jms.xml                 true       true    OK
CacheTopic-jms.xml                    CacheTopic-jms.xml                    true       true    OK
CustomerQueues-jms.xml                CustomerQueues-jms.xml                true       true    OK
OrderQueues-jms.xml                   OrderQueues-jms.xml                   true       true    OK
bakery.base-app-2.1.0.0.ear           bakery.base-app-2.1.0.0.ear           true       true    OK
bakery.communication-app-2.1.0.0.ear  bakery.communication-app-2.1.0.0.ear  true       true    OK
bakery.control-app-2.1.0.0.ear        bakery.control-app-2.1.0.0.ear        true       true    OK
bakery.impex-app-2.1.0.0.ear          bakery.impex-app-2.1.0.0.ear          true       true    OK
bakery.omt-app-2.1.0.0.war            bakery.omt-app-2.1.0.0.war            true       true    OK
bakery.process-app-2.1.0.0.ear        bakery.process-app-2.1.0.0.ear        true       true    OK
postgresql-jdbc4                      postgresql-jdbc4                      true       true    OK
```
The undertow subsystem is configured to write access logs to $OMS_VAR/log. It is only able to provide a daily rotation of logs. If you want to provide access logs to ICI (Intershop Commerce Insight) in order to get a performance analysis, you need an hourly rotation of logs. To overcome this limitation of undertow's rotation feature, OMS provides a simple shell script to rotate the logs: bin/logrotate.sh.
This script has to be executed at the beginning of every hour, by adding the following line to the crontab of $OMS_USER:
```
0 * * * * . $HOME/bin/set_env.sh && logrotate.sh $OMS_VAR/log/access_log.log
```

For version >= 2.1.1.0, which provides OMS_LOG, use:

```
0 * * * * . $HOME/bin/set_env.sh && logrotate.sh $OMS_LOG/access_log.log
```
The IOM application server has to be installed and configured according to the instructions in section Prepare WildFly to act as single IOM Application Server. Before starting the following configuration steps, OMS has to be undeployed.
```
# as $OMS_USER
# undeploy all artifacts
undeploy.sh --jboss-cli="jboss-cli.sh -u=$JBOSS_ADMIN_USER -p=$JBOSS_ADMIN_PASSWD -c --controller=$JBOSS_BIND_ADDRESS"
```
Edit the content of deployment.properties to contain the following entries only.
Please take care to use the order of entries exactly as listed here:
```
CacheTopic-jms.xml
ArticleQueues-jms.xml
CustomerQueues-jms.xml
OrderQueues-jms.xml
bakery.base-app-2.1.x.x.ear
bakery.control-app-2.1.x.x.ear
bakery.process-app-2.1.x.x.ear
bakery.impex-app-2.1.x.x.ear
```
Set JBOSS_BIND_ADDRESS in installation.properties to an IP address which is reachable by the frontend application servers.
Set is.oms.jms.host in cluster.properties to the IP address of the backend application server, which is used by frontend application servers to connect to the backend application server.
If you do not set JBOSS_BIND_ADDRESS (in installation.properties) to "any address" (0.0.0.0), is.oms.jms.host and JBOSS_BIND_ADDRESS have to be identical.
If you have changed the setting of JBOSS_BIND_ADDRESS in installation.properties, you have to:
Recreate the environment, reconfigure the application server, and restart it.
```
# as $OMS_USER
# recreate environment
. $OMS_HOME/bin/set_env.sh

# recreate configuration of WildFly system service
expand_template.sh < $OMS_ETC/jboss-as.service.template > /tmp/jboss-as.service
```
```
# as root: update systemd-unit file
cp /tmp/jboss-as.service /etc/systemd/system

# restart WildFly
systemctl restart jboss-as
```
After restarting WildFly, changes of cluster.properties and deployment.properties have to be applied.
```
# apply changes of cluster.properties
update_properties.sh < $OMS_ETC/cluster.properties | jboss-cli.sh -u=$JBOSS_ADMIN_USER -p=$JBOSS_ADMIN_PASSWD -c --controller=$JBOSS_BIND_ADDRESS

# redeploy OMS according to the changes made to deployment.properties
deploy.sh --jboss-cli="jboss-cli.sh -u=$JBOSS_ADMIN_USER -p=$JBOSS_ADMIN_PASSWD -c --controller=$JBOSS_BIND_ADDRESS"
```
If your release also provides system.std.backend.properties, use the following sequence instead:

```
# apply system properties for backend server
update_properties.sh < $OMS_ETC/system.std.backend.properties | jboss-cli.sh -u=$JBOSS_ADMIN_USER -p=$JBOSS_ADMIN_PASSWD -c --controller=$JBOSS_BIND_ADDRESS

# apply changes of cluster.properties
update_properties.sh < $OMS_ETC/cluster.properties | jboss-cli.sh -u=$JBOSS_ADMIN_USER -p=$JBOSS_ADMIN_PASSWD -c --controller=$JBOSS_BIND_ADDRESS

# redeploy OMS according to the changes made to deployment.properties
deploy.sh --jboss-cli="jboss-cli.sh -u=$JBOSS_ADMIN_USER -p=$JBOSS_ADMIN_PASSWD -c --controller=$JBOSS_BIND_ADDRESS"
```
The IOM application server has to be installed and configured according to the instructions in section Prepare WildFly to act as single IOM Application Server. Before starting the following configuration steps:
Undeploy the IOM.
```
# as $OMS_USER
# undeploy all artifacts
undeploy.sh --jboss-cli="jboss-cli.sh -u=$JBOSS_ADMIN_USER -p=$JBOSS_ADMIN_PASSWD -c --controller=$JBOSS_BIND_ADDRESS"
```
Change the content of deployment.properties to contain the following entries only.
Please take care to use the order of entries exactly as listed here:
```
bakery.base-app-2.1.x.x.ear
bakery.communication-app-2.1.x.x.ear
bakery.omt-app-2.1.x.x.war
```
Set is.oms.jms.host in cluster.properties to the IP address of the backend application server, which is used by frontend application servers to connect to the backend application server.
Apply the changes of cluster.properties and deployment.properties.
```
# apply changes of cluster.properties
update_properties.sh < $OMS_ETC/cluster.properties | jboss-cli.sh -u=$JBOSS_ADMIN_USER -p=$JBOSS_ADMIN_PASSWD -c --controller=$JBOSS_BIND_ADDRESS
```
If your release also provides system.std.frontend.properties, use the following sequence instead:

```
# apply system properties for frontend server
update_properties.sh < $OMS_ETC/system.std.frontend.properties | jboss-cli.sh -u=$JBOSS_ADMIN_USER -p=$JBOSS_ADMIN_PASSWD -c --controller=$JBOSS_BIND_ADDRESS

# apply changes of cluster.properties
update_properties.sh < $OMS_ETC/cluster.properties | jboss-cli.sh -u=$JBOSS_ADMIN_USER -p=$JBOSS_ADMIN_PASSWD -c --controller=$JBOSS_BIND_ADDRESS
```
Restart the WildFly application server.
```
# as root: restart application server
systemctl restart jboss-as
```
Deploy the IOM.
```
# as $OMS_USER
# deploy all IOM artifacts according to the changes made to deployment.properties
deploy.sh --jboss-cli="jboss-cli.sh -u=$JBOSS_ADMIN_USER -p=$JBOSS_ADMIN_PASSWD -c --controller=$JBOSS_BIND_ADDRESS"
```