Guide - Intershop Order Management - Technical Overview (2.2 - 2.9)

1 Introduction

The Intershop Order Management System (IOM) is an e-commerce middleware that consolidates the order processes of all sales channels. It takes incoming orders from all available channels and connects them with selected order fulfillment processes. Depending on the configuration, these processes are managed individually for each combination of channels. In addition, it provides customers with greater transparency on product availability, order status, and returns. Thus, it supports call center employees, warehouse employees, and business managers in their respective fields of work.

This guide gives a technical overview of the Intershop Order Management System, the applied technical concepts, and the latest technology updates.

The main target group of this document is system administrators.

1.1 Glossary

Term | Description
GDPR | General Data Protection Regulation
Gluster-FS | A scale-out network-attached storage file system
HA | High availability
IOM | The abbreviation for Intershop Order Management
JMS | Java Message Service
OMS | The abbreviation for Order Management System, the technical name of the IOM
OMT | The abbreviation for Order Management Tool, the graphical management tool of the IOM
REST | Representational State Transfer
RMA | Return Merchandize Authorization
SMTP | Simple Mail Transfer Protocol
SOAP | Simple Object Access Protocol
Spring MVC | A web framework implementing the Model-View-Controller pattern
Watchdog | An IOM tool to monitor and manage the availability of IOM application servers

1.2 References

2 Architecture Overview

The Intershop Order Management System basically consists of applications running on one or more application servers. If preferred, applications can be distributed over frontend and backend servers.

Additional components are:

  • One database to store all data of the application
  • FTP hosts to manage import/export, media files (e.g., product images), and document files (e.g., customer invoice documents)
  • An SMTP mail server to send e-mails

The following image shows an exemplary architecture of the Intershop Order Management System excluding external applications.

The cloud as external application represents shops and suppliers as well as payment service providers, accounting systems, and more.

[Figure: OMS system]

2.1 Further Notes

  • XSLT files are required by the backend application server to generate documents on-the-fly
  • Frontend servers need access to the PdfHost to show PDF documents within the OMT (e.g., invoices)
  • The backend server needs access to the PdfHost to store PDF documents on it (e.g., invoices)
  • The backend server needs access to the PdfHost to push PDF documents to shops/suppliers via SFTP

3 Application Server

The open source project WildFly is used as the application server.

For further information, please see the WildFly 9 Documentation.

3.1 Directory Structure

The following table gives an overview of the directory structure of the IOM.

Major Path | Directory | Description
/opt/oms/ | application/ | XML, EAR, and WAR files of the IOM standard product
 | bin/ | Programs and scripts to run and operate IOM
 | doc/ | Documentation
 | lib/ | Additional libraries required to run IOM (currently the JDBC driver only)
 | data/ | Initial data
 | wildfly -> wildfly-<version> | Symlink to WildFly. The symlink is used in installation.properties to define the WildFly location.
 | wildfly-<version>/ | WildFly installation. Since no specific version is required (only the major version is fixed), the WildFly version may differ between installations.
 | etc -> /etc/opt/oms | Symlink to the configuration
 | var -> /var/opt/oms | Symlink to the variable data
/etc/opt/oms/ | | Configurations
/var/opt/oms/ | log/ | Location for log files
 | xslt/ | XSL templates to generate documents and customer mails on-the-fly (backend server)
 | importarticle/ | Import/export of all kinds of data (products, stocks, dispatches, returns)
 | communication/ | Exchange of data, e.g., for orders.xml
 | mediahost/ | Media data, e.g., product images
 | pdfhost/ | PDF documents created by the backend server, e.g., invoices, credit notes, delivery notes
 | jobs/ | Reserved for projects: working files and archived files for scheduled jobs
 | customization/ | XML, EAR, and WAR files of the current project

4 Applications

There are several applications running on the application server. Tasks handled by the applications include, among others, basic functionality, processing of defined business processes, communication with external applications, and the graphical user interaction. All applications are implemented in Java.

The following sections give an overview of all applications of the IOM.

4.1 Messaging of Applications

Messaging ensures loose coupling of components. It allows the communication between different components of a distributed application to be loosely coupled, reliable, and asynchronous.

For this purpose, JMS messaging is used across various applications; it allows the IOM to be deployed on several distributed application servers, as mentioned in Architecture Overview.

4.2 Application Base

The application Base contains the essential functionality of the IOM and provides several functionalities used by the other applications.

It must be deployed on every application server where any of the other applications is deployed.

4.3 Application Monitoring

The application Monitoring supports the monitoring of application servers.

The deployment of the application is optional for each installation, but it is recommended for HA environments.

Please see section High Availability for more detailed information.

4.4 Frontend Applications

4.4.1 Order Management Tool (OMT)

The application OMT is the standard graphical user interface of the Intershop Order Management System.

It can be used to manage the IOM in a more comfortable way using a common internet browser. This tool provides functionalities to manage customers, items, and orders. Due to the sensitive data, a login is needed. For this purpose, the OMT comes with a user and role management.

It will be deployed on the frontend application server.

For frontend functionality, the application uses several frameworks, e.g., Bootstrap, jQuery, and more.

The backend of the OMT is based on frameworks such as Spring, Spring MVC, and Hibernate.

The OMT exclusively communicates with the application Base, which must be running on the same application server.

4.4.2 Communication

The application Communication is responsible for handling communication with external applications. Intended external applications are mostly shops and suppliers. Offered services include general order handling, return management, stock reservation, and more. The services are offered via SOAP and REST.


4.4.3 General Data Protection Regulation (GDPR)

The application GDPR offers functionalities, including REST interfaces, to support the General Data Protection Regulation within the IOM as well as in other external systems that can be connected.

Also see Overview - IOM REST API.

4.4.4 Return Merchandize Authorization (RMA)

Since IOM 2.9.0.0

The application RMA offers functionalities, including REST interfaces, to support the Return Merchandize Authorization process of the IOM as well as of other external systems that can be connected.


4.4.5 REST Communication

Since IOM 2.9.0.0

The application REST Communication is responsible for handling communication with external applications. Intended external applications are mostly shops and suppliers. The offered services replace the SOAP interfaces offered with the application Communication. This includes the creation of dispatch messages, response messages, and return messages.


4.5 Backend Applications

4.5.1 Process

The application Process contains message-driven beans and is the starting point of the business processes of the IOM.

Typical business processes are the announcement of an order, routing of ordered articles to one or more suppliers, creation of invoice documents, or creation of payment notifications.

4.5.2 Control

The application Control is responsible for all processes that should be triggered periodically (scheduled).

Scheduled processes are for example:

  • Continue processing of business objects in abnormal state
  • Import and export

4.5.3 Impex

The application Impex is responsible for the import and export of selected business objects.

Impex can be used to exchange data with the connected actors as required. Possible business objects are, for example, orders, customers, or products.

5 Database

The IOM requires one database which stores all data of the application.

5.1 PostgreSQL

The open source project PostgreSQL is used as the database management system for the IOM.

For further information, please refer to the PostgreSQL 9.6.12 Documentation.

5.2 Data

The database holds all data of the application, spread across several database schemas according to the purpose of the data.

It contains data for:

  • Business objects such as orders, customers, products, and channels
  • Configurations of business processes
  • Graphical user interface
  • Testing

6 Mail Server

An SMTP server needs to be configured on each application server to send the different kinds of mail.

General mail properties can be configured in cluster.properties.

6.1 Internal Mails

Internal mails are sent, e.g., in case of exceptions within the business processing that cannot be handled.

6.2 External Mails

External mails are, for example, mails to customers regarding orders, delivery or return confirmation.

7 FTP Server

7.1 PDF Host

The PDF Host is used to transfer documents like delivery notes, return labels, invoices, credit notes, and so on.

7.2 Media Host

The Media Host is used to transfer article media data like images. It will also be used to import and export article data in general.

8 High Availability

High availability can be defined as follows: the system is designed to meet the highest requirements in terms of performance and reliability, and several platform capabilities allow easy scaling without downtime.

As of version 2.2, Intershop Order Management is ready for high availability.

The following sections describe a tested and working approach to enabling IOM to be HA.

8.1 Symmetric HA Nodes

High availability can be provided by using symmetric high availability nodes. In this setup, every server node consists of:

  • A Wildfly application server acting as frontend server
  • A Wildfly application server acting as backend server
  • An FTP-Server

The configuration of all high availability nodes is identical, so every server node is able to replace any other. As a result, the entire system can answer requests as long as at least one node exists.

The following diagram shows an example of high availability nodes. Gluster-FS is used to replicate data between the FTP servers and the backend servers. The IOM Watchdog, which manages the availability of the application servers, is not shown here.

[Figure: Symmetric HA nodes]

8.2 Load Balancer

If two or more frontend servers are used, a load balancer has to be placed in front of them. This load balancer has to support:

  • Session stickiness
  • Session failover for the Undertow subsystem
  • Health check requests

8.2.1 Session Stickiness

In general, all session data is persisted within the database. However, some transient data is stored in the session and used in AJAX requests only. The called page prepares some information stored in the session, and AJAX calls embedded into the response of the page access it. This mechanism only works if page and AJAX requests are both handled by the same frontend application server.

The WildFly application server supports sticky sessions by appending route information to the session cookie. The route is simply a unique identifier of the server that should handle the session. As long as this server is available, the load balancer has to send all requests of this session to the same server. To guarantee this behavior, the load balancer has to support this kind of session stickiness.

Both the frontend application server and the load balancer have to use identical routes. On the application server side, the route is configured by the following property, located in $OMS_ETC/system.std.frontend.properties:

system.std.frontend.properties
# Make sure the "route" property at the load balancer uses the same ID as set here for "instance-id".
# The value assigned to instance-id will be appended to the session-ID. With the help of this information
# the load balancer (e.g., mod_proxy_balancer) realizes session stickiness.
/subsystem=undertow: instance-id="${installation.SERVER_ID}"

This example shows the default configuration of IOM. instance-id is set to the value ${installation.SERVER_ID}, which itself is defined as SERVER_ID in $OMS_ETC/installation.properties.

8.2.2 Session Failover

If one of the frontend servers disappears, e.g., for maintenance, due to a hardware failure, etc., the load balancer has to be able to send all incoming requests dedicated to this frontend server to the remaining frontend servers.

8.2.3 Health Check Requests

The load balancer should support health check requests. These requests help the load balancer decide which frontend application servers to use and which not. E.g., if a frontend server is currently being redeployed, is unable to connect to the database, or is unable to connect to the FTP server, the load balancer should not use this frontend server any longer.

8.2.4 Load Balancer - Using Apache HTTP Server Configuration

You can use any load balancer you want, as long as it is able to fulfill the requirements described in the sections above.

If you choose the Apache HTTP server as load balancer, the combination of mod_proxy_balancer and mod_proxy_hcheck is able to fulfill the requirements. The example below shows a configuration snippet of mod_proxy_balancer and mod_proxy_hcheck for two frontend application servers. The according machines have the IPs 10.10.10.1 and 10.10.10.2, with the OMT application listening on port 8180. The value of route uses the default setting of SERVER_ID, which is set in $OMS_ETC/installation.properties to $(hostname)_$(OMS_SERVER_TYPE). The health check URL is set for both servers to /monitoring/services/health/status. The interval for checking the health status is set to 10 seconds.

Apache HTTP Server Configuration
# load modules required for reverse proxy
LoadModule proxy_module modules/mod_proxy.so
LoadModule proxy_http_module modules/mod_proxy_http.so
LoadModule proxy_balancer_module modules/mod_proxy_balancer.so
LoadModule proxy_hcheck_module modules/mod_proxy_hcheck.so
LoadModule watchdog_module modules/mod_watchdog.so
LoadModule lbmethod_byrequests_module modules/mod_lbmethod_byrequests.so
LoadModule lbmethod_bytraffic_module modules/mod_lbmethod_bytraffic.so
LoadModule lbmethod_bybusyness_module modules/mod_lbmethod_bybusyness.so
LoadModule slotmem_shm_module modules/mod_slotmem_shm.so

# make sure not to act as a forward proxy
ProxyRequests Off

# do not rewrite Host-Header when forwarding request
ProxyPreserveHost on

# define the proxy with balancer members and settings
# make sure the route is identical to the settings made at the frontend application servers
<Proxy balancer://omscluster>
    BalancerMember http://10.10.10.1:8180 max=5 timeout=40 retry=40 acquire=30000 route=fe1.iom.com_frontend hcmethod=GET hcinterval=10 hcuri=/monitoring/services/health/status
    BalancerMember http://10.10.10.2:8180 max=5 timeout=40 retry=40 acquire=30000 route=fe2.iom.com_frontend hcmethod=GET hcinterval=10 hcuri=/monitoring/services/health/status
</Proxy>

# enable balancer-manager, avoid proxying
<Location /balancer-manager>
  SetHandler balancer-manager
  ProxyPass !
</Location>
# enable server-status, avoid proxying
ExtendedStatus on
<Location /server-status>
  SetHandler server-status
  ProxyPass !
</Location>

# proxying everything else
ProxyPass / balancer://omscluster/ stickysession=JSESSIONID
ProxyPassReverse / balancer://omscluster/ stickysession=JSESSIONID

8.3 Frontend Application Server as Part of HA Node

Frontend servers do not hold any state information within the file system. Therefore, you can start, stop, create, or destroy them whenever you like.

8.3.1 IOM Watchdog

In addition to the health checks made on the frontend servers, which control the behavior of the load balancer, it is recommended to run the frontend server under the control of a server monitoring tool, i.e., the IOM Watchdog. The Watchdog provides the ability to restart the server whenever it becomes unhealthy or the process dies unexpectedly. Please see Guide - IOM Watchdog 2.2 - 2.11 and Health Check Requests and Cluster Status for more information.

Since traffic to the frontend servers is already controlled by the health check requests of the load balancer, which can react quite fast (it can enable/disable a frontend server within a few seconds), the Watchdog's reaction to unhealthy frontend servers should be more relaxed, e.g., reacting only after 5 minutes in an unhealthy state. The according configuration is made in $OMS_ETC/watchdog.properties of the frontend server.

watchdog.healthcheck.timeout
# number of seconds before failed health-checks lead to restart of the
# watched process
watchdog.healthcheck.timeout = 300

Frontend servers work in parallel. The failover feature of the Watchdog has to be disabled for this type of server. The according configuration is made in $OMS_ETC/watchdog.properties of the frontend server.

watchdog.failover.enabled
# switch failover functionality on/off. Has to be set to true when the
# watchdog controls a backend server, and to false when a frontend server
# is controlled.
watchdog.failover.enabled = false

8.3.2 Port Offset

Since a frontend and a backend server combined in one HA node would compete for the same ports, the port configuration or the address binding of one of the servers must be changed. The easiest and recommended solution is setting JBOSS_PORT_OFFSET in $OMS_ETC/installation.properties of the frontend server to a value of 100. Setting this property adds 100 to all port numbers of the frontend server: e.g., the port of the Undertow subsystem (embedded web server) changes from 8080 (default) to 8180, and the WildFly administration port changes from 9990 (default) to 10090.
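
A minimal sketch of the according entry, following the snippet style used above (only the property name and value are taken from this section; the comments are illustrative):

installation.properties
# Run the frontend server with a port offset of 100 to avoid port clashes
# with the backend server on the same HA node (8080 -> 8180, 9990 -> 10090).
JBOSS_PORT_OFFSET=100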

When managing the frontend server via jboss-cli.sh, it is important to always pass the correct address and port to the program. IOM's set_env.sh helps by providing the according variables JBOSS_BIND_ADDRESS and JBOSS_MGMT_PORT. When using jboss-cli.sh, you have to pass them as --controller=$JBOSS_BIND_ADDRESS:$JBOSS_MGMT_PORT.
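
The following sketch shows such a call, assuming set_env.sh resides in the IOM bin directory; the WildFly CLI operation :read-attribute serves merely as an example command:

# source the IOM environment to get JBOSS_BIND_ADDRESS and JBOSS_MGMT_PORT
# (the location of set_env.sh is an assumption)
. /opt/oms/bin/set_env.sh
# query the state of the frontend server via its offset management port
jboss-cli.sh -c --controller=$JBOSS_BIND_ADDRESS:$JBOSS_MGMT_PORT --command=":read-attribute(name=server-state)"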

8.3.3 List of JMS Hosts

Frontend application servers communicate with backend application servers using JMS only. To do so, every frontend application server has to know the listening addresses and ports of all backend application servers. This list is stored in the property is.oms.jms.hostlist ($OMS_ETC/cluster.properties). The property only holds the list of servers; a second step is necessary to apply the configuration:

configure_jms_load_balancing.sh --jboss-cli="jboss-cli.sh -u=$JBOSS_ADMIN_USER -p=$JBOSS_ADMIN_PASSWD -c --controller=$JBOSS_BIND_ADDRESS:$JBOSS_MGMT_PORT"

In fact, whenever a new HA node is added to the system or removed permanently, is.oms.jms.hostlist has to be updated on every frontend server, and the change has to be applied by executing configure_jms_load_balancing.sh on every frontend server, too. This is only necessary when adding or removing HA nodes permanently. If an HA node or a backend server is unreachable due to a system failure, no configuration changes are necessary.
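
The rollout could look like the following sketch, assuming two frontend servers reachable via SSH as fe1.iom.com and fe2.iom.com (host names and the host list value are illustrative assumptions):

# 1. update is.oms.jms.hostlist in $OMS_ETC/cluster.properties on every
#    frontend server, e.g. (the exact value format is an assumption):
#    is.oms.jms.hostlist=10.10.10.1:5445,10.10.10.2:5445
# 2. apply the new list on every frontend server; assumes the JBOSS_*
#    variables are available in the remote environment
for FE in fe1.iom.com fe2.iom.com; do
    ssh "$FE" 'configure_jms_load_balancing.sh --jboss-cli="jboss-cli.sh -u=$JBOSS_ADMIN_USER -p=$JBOSS_ADMIN_PASSWD -c --controller=$JBOSS_BIND_ADDRESS:$JBOSS_MGMT_PORT"'
done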

8.4 Backend Application Server as Part of HA Node

Backend servers are stateful. There are some measures to deal with this statefulness:

  • A JMS message sent by a frontend server must be handled by a single backend server only. This is guaranteed by the JMS load-balancing algorithm used by the frontend servers.
  • Identical jobs must not run at the same time on different backend servers.
  • Identical message handlers must not run at the same time on different backend servers.
  • Directories containing stateful data (e.g., import or export files) are synchronized to all backend servers.

8.4.1 IOM Watchdog

In contrast to the frontend servers, the usage of the IOM Watchdog is not optional for backend servers. The most important purposes of the Watchdog for backend servers are:

  • Make sure that an unhealthy server does not receive JMS messages and does not process job-tasks any longer.
  • Make sure that only one backend server is active.

The Watchdog starts the backend server only if no other backend server is already active. The IOM Watchdog stops the backend server process in case it becomes unhealthy or loses its active status. Hence, the Watchdog for backend servers has the same purpose as the health checks of the load balancer for frontend servers: it guarantees that an unhealthy backend server is taken out of the system. Just like with the health checks made by the load balancer, an unhealthy backend server has to be deactivated within a few seconds. Therefore it is recommended to use a much shorter health check timeout for backend servers. The according configuration is made in $OMS_ETC/watchdog.properties of the backend server.

watchdog.healthcheck.timeout
# number of seconds before failed health-checks lead to restart of the
# watched process
watchdog.healthcheck.timeout = 20

The Watchdog uses a centralized repository (the database) where all Watchdogs negotiate the currently active backend server. All Watchdogs constantly check this database. If the formerly active backend server has lost its active state, e.g., due to becoming unhealthy, the next Watchdog takes over: it claims the active state and starts another backend server on another node. The failover feature has to be enabled for backend servers in $OMS_ETC/watchdog.properties.

watchdog.failover.enabled
# switch failover functionality on/off. Has to be set to true when the
# watchdog controls a backend server, and to false when a frontend server
# is controlled.
watchdog.failover.enabled = true

For details please see Guide - IOM Watchdog 2.2 - 2.11.

8.4.2 Job Scheduling

Clustered jobs only exist on the backend server. They require no special attention with regard to HA with the current approach of using only one active backend server. Please see Guide - Intershop Order Management - Job Scheduling | Clustered Jobs for more information.

8.4.3 Data Synchronization

Some directories of the backend server contain stateful data. These directories have to be synchronized across all backend servers. The following directories of the backend server have to be synchronized:

  • $OMS_VAR/communication/messages
  • $OMS_VAR/importarticle
  • $OMS_VAR/jobs

It is recommended to use a clustered file system for synchronization (e.g., Gluster-FS).
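
A minimal sketch with Gluster-FS, assuming replicated volumes already exist and are served by a host gluster1 (volume and host names are assumptions; every backend server would mount the same volumes):

# mount the replicated volumes over the stateful directories of the backend server
mount -t glusterfs gluster1:/oms_messages      $OMS_VAR/communication/messages
mount -t glusterfs gluster1:/oms_importarticle $OMS_VAR/importarticle
mount -t glusterfs gluster1:/oms_jobs          $OMS_VAR/jobs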

8.5 FTP Server as Part of HA Node

IOM uses two FTP servers for the internal communication between frontend and backend servers. The HA node combines both FTP servers into a single server with two accounts. The FTP server as part of an HA node is used by the local frontend and local backend server only. All IOM application servers have to use the following configuration in $OMS_ETC/cluster.properties:

/system-property=is.oms.media.host: "localhost"
/system-property=is.oms.pdf.host: "localhost"

The directories used by the FTP server accounts are synchronized between all HA nodes. A clustered file system should be used for synchronization, since all nodes are equal in this case; no single node has a master role.

The following directories have to be synchronized between all frontend servers:

  • $OMS_VAR/pdfhost
  • $OMS_VAR/mediahost

Checking the functionality of the FTP server is part of the health check, see Concept - IOM Server Health Check.

Depending on your infrastructure, you can choose different solutions to provide a highly available FTP service for IOM:

  • If you already have a highly available FTP server, you can use it. Configure all IOM application servers to point to this FTP server.
  • If you already have a highly available network file system, you can use multiple FTP servers, all accessing the network file system. Every IOM application server should access a locally installed FTP server which uses the HA network file system to share/synchronize the data.

8.6 Health Check Requests and Cluster Status

8.6.1 Health Check Requests

All application servers provide a health check that can be requested via the URL /monitoring/services/health/status. It responds with HTTP status code 200 if the application server is healthy; otherwise it responds with 5XX.

To ease error analysis, the content delivered by the health check URL contains further information about the processed checks. This information is provided in a machine-readable format (JSON) that is also easily understood by humans.
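
A quick manual check from the command line, assuming a frontend server reachable as fe1.iom.com with the port offset of 100 described above (host and port are assumptions):

# expect HTTP status 200 and a JSON body describing the processed checks
curl -i http://fe1.iom.com:8180/monitoring/services/health/status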

Health checks require the deployment of oms.monitoring-app-2.2.x.x.war.

Health checks of the FTP servers can be enabled or disabled. The according settings are made by the following two properties:

/system-property=is.oms.media.healthcheck: "enabled"|"disabled"
/system-property=is.oms.pdf.healthcheck: "enabled"|"disabled"

See Concept - IOM Server Health Check for more information.

8.6.2 Cluster Status

For an overview of all application servers within a cluster, each health check request updates the current status of the application server within the database. The cluster status can be retrieved via the URL /monitoring/services/health/clusterstatus. Basic authentication is required, and the user needs the permission Basic application management assigned at the root organization.
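
A sketch of such a request, using hypothetical credentials of a user holding the required permission:

# basic authentication is required; user, password, and host are placeholders
curl -u omsadmin:secret http://fe1.iom.com:8180/monitoring/services/health/clusterstatus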

See Concept - IOM Server Health Check for more information.

8.7 High Availability Database

Frontend and backend servers all connect to the PostgreSQL database.

For high availability, the database and the connection to it have to support HA as well:

  • Database reconnect
  • PostgreSQL HA cluster

8.7.1 DB Reconnect

To provide HA, the application servers are able to reconnect to the database without the need for a restart.

To work properly, invalid connections must be recognized and evicted from the pool. The xa-datasource configuration defines how this happens. We recommend using the background validation checker rather than the validate-on-match method to reduce the check overhead. Moreover, the timeout configuration parameters may influence the reconnect behavior (old connections might not get evicted as long as the timeouts are not reached).
Please see https://access.redhat.com/documentation/en-US/JBoss_Enterprise_Web_Platform/5/html/Administration_And_Configuration_Guide/ch12s02.html for more information about the datasource configuration.

Exemplary xa-datasource configuration
# The pool size depends on the number of application servers and the database server resources
min-pool-size="30"
max-pool-size="80"
pool-prefill="true"

# timeouts
set-tx-query-timeout="true"
query-timeout="3600"
blocking-timeout-wait-millis="3000"
idle-timeout-minutes="60"

#connection validation
validate-on-match="false"
background-validation="true"
background-validation-millis="20000"
exception-sorter-class-name="org.jboss.jca.adapters.jdbc.extensions.postgres.PostgreSQLExceptionSorter"
valid-connection-checker-class-name="org.jboss.jca.adapters.jdbc.extensions.postgres.PostgreSQLValidConnectionChecker"

# resources that are unused or not supported by IOM
interleaving="false"
pad-xid="false"
wrap-xa-resource="false"
same-rm-override="false"
share-prepared-statements="false"

8.7.2 PostgreSQL HA Cluster

IOM supports access to PostgreSQL HA clusters but must always connect to the master database.

A PostgreSQL HA cluster usually consists of one master server and one or more hot-standby servers. The master server is the only one allowed to change data. An additional witness server is needed by the failover process when the total number of servers (master + standbys) is even.

During a failover, the IOM application must be redirected to the new master. One solution is to add a proxy layer between the IOM application servers and the PostgreSQL HA cluster. This proxy layer can be realized with PgBouncer. PgBouncer has to be reconfigured on the fly (without restart) whenever the current master changes. PgBouncer, being a connection pooler, can also be used to limit the number of connections to PostgreSQL. More than one instance of PgBouncer should be deployed to avoid single points of failure.
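
The on-the-fly reconfiguration could look like the following sketch, assuming PgBouncer listens on its default port 6432, serves a database alias oms, and has pgbouncer configured as an admin user (alias, addresses, and user are assumptions):

# 1. point the alias at the new master in pgbouncer.ini, e.g.:
#      [databases]
#      oms = host=10.10.10.21 port=5432
# 2. reload the configuration via the PgBouncer admin console
#    without interrupting the listener
psql -h 127.0.0.1 -p 6432 -U pgbouncer pgbouncer -c "RELOAD;"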

The IOM database connection address is defined in $OMS_ETC/cluster.properties by the property is.oms.db.hostlist, which accepts one or more database host addresses. For more information, please see Guide - Setup Intershop Order Management 2.2.
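
An illustrative entry, following the property snippets above (the host names and the exact value format are assumptions):

cluster.properties
# two PgBouncer instances in front of the PostgreSQL HA cluster
is.oms.db.hostlist=pgbouncer1.iom.com:6432,pgbouncer2.iom.com:6432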

For more information about PostgreSQL HA clusters, see http://repmgr.org and https://pgbouncer.github.io.

8.8 Clock Synchronization

All hosts of the cluster should use a clock synchronization service (daemon).
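
On systemd-based hosts, for example, NTP synchronization can be enabled as follows (chronyd or ntpd are equally valid alternatives):

# enable and verify NTP-based clock synchronization (systemd-timesyncd)
timedatectl set-ntp true
timedatectl status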

