The Intershop Order Management System (IOM) is a middleware for e-commerce that combines the order processes used across all channels. It takes incoming orders from all available channels and connects them with the selected order fulfillment processes. Depending on the configuration, these processes can be managed individually for each combination of channels. In addition, it provides customers with greater transparency on product availability, order status, and returns. Thus, it supports call center employees, warehouse employees, and business managers in their respective fields of work.
This guide gives a technical overview of the Intershop Order Management System as well as the applied technical concepts and the latest technology updates.
The main target group of this document is system administrators.
Term | Description |
---|---|
GDPR | General Data Protection Regulation |
Gluster-FS | A scale-out network-attached storage file system |
HA | High availability |
IOM | The abbreviation for Intershop Order Management |
JMS | Java Message Service |
OMS | The abbreviation for Order Management System, the technical name of the IOM |
OMT | The abbreviation for Order Management Tool, the graphical management tool of the IOM |
REST | Representational State Transfer |
RMA | Return Merchandize Authorization |
SMTP | Simple Mail Transfer Protocol |
SOAP | Simple Object Access Protocol |
Spring MVC | A Web Framework implementing the Model-View-Controller Pattern |
Watchdog | An IOM tool to monitor and manage the availability of IOM application servers |
The Intershop Order Management System basically consists of applications running on one or more application servers. If preferred, applications can be distributed over frontend and backend servers.
Additional components are:
The following image shows an exemplary architecture of the Intershop Order Management System, excluding external applications.
The open source project Wildfly is used as the application server.
For further information, please see https://docs.jboss.org/author/display/WFLY9/Documentation.
The following table gives an overview of the directory structure of the IOM.
| Major Path | Directory | Description |
|---|---|---|
| opt/oms/ | application/ | XML, ear, war files of the IOM standard product |
| | bin/ | Programs and scripts to run and operate IOM |
| | doc/ | Documentation |
| | lib/ | Additional libraries required to run IOM (currently the JDBC driver only) |
| | data/ | Initial data |
| | wildfly -> wildfly-&lt;version&gt; | Symlink to Wildfly. The symlink is used in installation.properties to define the Wildfly location. |
| | wildfly-&lt;version&gt;/ | Wildfly installation. Since no specific version is required (only the major version is fixed), the Wildfly version may differ between installations. |
| | etc -> /etc/opt/oms | Symlink to the configuration |
| | var -> /var/opt/oms | Symlink to variable data |
| etc/opt/oms/ | | Configurations |
| var/opt/oms/ | log/ | Location for log files |
| | xslt/ | XSL templates to generate documents and customer mails on the fly (backend server) |
| | importarticle/ | Import/export of all kinds of data (products, stocks, dispatches, returns) |
| | communication/ | Exchange of data, e.g., orders.xml |
| | mediahost/ | Media data, e.g., product images |
| | pdfhost/ | PDF documents created by the backend server, e.g., invoices, credit notes, delivery notes |
| | jobs/ | Reserved for projects: working files and archived files for scheduled jobs of projects |
| | customization/ | XML, ear, war files of the current project |
There are several applications running on the application server. Tasks handled by the applications include, among others, basic functionality, the processing of defined business processes, communication with external applications, and the graphical user interaction. All applications are implemented in Java.
The following list gives an overview of all applications of the IOM.
Messaging ensures loose coupling of components. It allows the communication between different components of a distributed application to be loosely coupled, reliable, and asynchronous.
For this purpose, JMS messaging is used across various applications. JMS messaging is used within the application server only; messages are not sent from one application server to another.
The application Base contains the essential functionality of the IOM and provides several functionalities used by the other applications.
It has to be deployed on every application server where one of the other applications is deployed.
The application Monitoring supports the monitoring of application servers.
The deployment of the application is optional for each installation, but is recommended for HA environments.
Please see section High Availability for more detailed information.
The application Process contains message-driven beans and it is the starting point of the business processes of the IOM.
Typical business processes are the announcement of an order, routing of ordered articles to one or more suppliers, creation of invoice documents, or creation of payment notifications.
On frontend servers the processes are triggered only by messages of the Communication application and by other locally running processes. Messages are not received from other application servers.
The application OMT is the standard graphical user interface of the Intershop Order Management System.
It can be used to manage the IOM in a more comfortable way using a common internet browser. This tool provides functionalities to manage customers, items, and orders. Due to the sensitive data, a login is required. For this purpose, the OMT comes with a user and role management.
It will be deployed on the frontend application server.
For frontend functionality, the application uses several frameworks, e.g., Bootstrap, jQuery, and more.
The backend of the OMT is based on frameworks such as Spring, Spring MVC, and Hibernate.
OMT exclusively communicates with the application Base which must be running in the same application server.
The application Communication is responsible for handling communication with external applications. Intended external applications are mostly shops and suppliers. Offered services include general order handling, return management, stock reservation, and more. Services are offered via SOAP and REST.
For further information see:
The application GDPR offers functionality, including REST interfaces, to support the General Data Protection Regulation in the IOM as well as in connected external systems.
Also see Overview - IOM REST API.
The application RMA offers functionality, including REST interfaces, to support the process of Return Merchandize Authorization in the IOM as well as in connected external systems.
For further information see:
The application REST Communication is responsible for handling communication with external applications. Intended external applications are mostly shops and suppliers. The offered services are replacements for the SOAP interfaces offered with the application Communication. This includes the creation of dispatch messages, response messages, and return messages.
For further information see:
The application Process contains message-driven beans and it is the starting point of the business processes of the IOM.
Typical business processes are the announcement of an order, routing of ordered articles to one or more suppliers, creation of invoice documents, or creation of payment notifications.
On backend servers all processes are triggered only by the Control application and by messages sent by locally running processes. Messages are not received from other application servers.
The application Control is responsible for all processes that should be triggered periodically (scheduled).
Scheduled processes are for example:
The application Impex is responsible for the import and export of selected business objects.
Impex can be used to exchange data with the connected actors as required. Possible business objects can be orders, customers or products, for example.
The IOM requires a single database which stores all data of the application.
The open source project PostgreSQL is used as the database management system for the IOM.
For further information, please refer to the PostgreSQL 9.6.12 Documentation.
This includes all data of the application spread across several database schemas, according to the purpose of data.
It contains data for:
An SMTP server needs to be configured on each application server to send different types of mails.
General mail properties can be configured in cluster.properties.
Internal mails are sent, e.g., in case of exceptions within the business processing that cannot be handled.
External mails are, for example, mails to customers regarding orders, delivery or return confirmation.
The PDF Host is used to transfer documents like delivery notes, return labels, invoices, credit notes, and so on.
The Media Host is used to transfer article media data like images. It will also be used to import and export article data in general.
High availability (HA) can be defined as follows: The system is designed for the highest requirements in terms of performance and reliability. Several platform capabilities allow easy scaling without downtime.
As of version 2.2, Intershop Order Management is ready for high availability.
The following sections describe a tested and working approach to enable IOM to be HA.
High availability can be provided by using symmetric high availability nodes. Therefore every server node consists of:
The configuration of all high availability nodes is identical, so every server node is able to replace any other. As a result, the entire system can answer requests as long as at least one node exists.
The following diagram shows an example of high availability nodes. GlusterFS can be used to replicate data between FTP servers and backend servers. IOM Watchdog for managing the availability of application servers is not mentioned here.
If two or more frontend servers are used, a load balancer has to be placed in front of them. This load balancer has to support:
In general, all session data is persisted in the database. However, some transient data is stored in the session and used only by AJAX requests. The called page prepares some information stored in the session, and AJAX calls embedded in the response of the page access it. This mechanism only works if page and AJAX requests are both handled by the same frontend application server.
The Wildfly application server supports sticky sessions by appending route information to the session cookie. The route is simply a unique identifier of the server that should handle the session. As long as this server is available, the load balancer has to send all requests of this session to the same server. To guarantee this behavior, the load balancer has to support this kind of session stickiness.
Both the frontend application server and the load balancer have to use identical routes. On the application server side, the route is configured by the following property, located in $OMS_ETC/system.std.frontend.properties:
```
# Make sure the "route" property at the load balancer uses the same ID as set here for "instance-id".
# The value assigned to instance-id will be appended to the session ID. With the help of this information
# the load balancer (e.g., mod_proxy_balancer) realizes session stickiness.
/subsystem=undertow: instance-id="${installation.SERVER_ID}"
```
This example shows the default configuration of IOM. instance-id is set to the value ${installation.SERVER_ID}, which itself is defined as SERVER_ID in $OMS_ETC/installation.properties.
If one of the frontend servers disappears, e.g., for maintenance, due to a hardware failure etc., the load balancer has to be able to send all incoming requests dedicated for this frontend server to the remaining frontend servers.
The load balancer should support health check requests. Health check requests help the load balancer decide which frontend application servers to use and which not. For example, if a frontend server is currently being redeployed, is unable to connect to the database, or is unable to connect to the FTP server, the load balancer should no longer use this frontend server.
You can use any load balancer you want, as long as it is able to fulfill the requirements described in the sections above.
If you choose the Apache HTTP server as load balancer, the combination of mod_proxy_balancer and mod_proxy_hcheck is able to fulfill the requirements. The example below shows a configuration snippet of mod_proxy_balancer and mod_proxy_hcheck for two frontend application servers. The according machines have the IPs 10.10.10.1 and 10.10.10.2, with the OMT application listening on port 8180. The value of route uses the default setting of SERVER_ID, which is set in $OMS_ETC/installation.properties to $(hostname)_$(OMS_SERVER_TYPE). The health check URL is set to /monitoring/services/health/status for both servers. The interval for checking the health status is set to 10 seconds.
```
# load modules required for reverse proxy
LoadModule proxy_module modules/mod_proxy.so
LoadModule proxy_http_module modules/mod_proxy_http.so
LoadModule proxy_balancer_module modules/mod_proxy_balancer.so
LoadModule proxy_hcheck_module modules/mod_proxy_hcheck.so
LoadModule watchdog_module modules/mod_watchdog.so
LoadModule lbmethod_byrequests_module modules/mod_lbmethod_byrequests.so
LoadModule lbmethod_bytraffic_module modules/mod_lbmethod_bytraffic.so
LoadModule lbmethod_bybusyness_module modules/mod_lbmethod_bybusyness.so
LoadModule slotmem_shm_module modules/mod_slotmem_shm.so

# make sure not to act as forward proxy
ProxyRequests Off

# do not rewrite Host-Header when forwarding request
ProxyPreserveHost on

# define proxy with balancer-members and settings
# make sure route is identical to the settings made at the frontend-application-servers
<Proxy balancer://omscluster>
    BalancerMember http://10.10.10.1:8180 max=5 timeout=40 retry=40 acquire=30000 route=fe1.iom.com_frontend hcmethod=GET hcinterval=10 hcuri=/monitoring/services/health/status
    BalancerMember http://10.10.10.2:8180 max=5 timeout=40 retry=40 acquire=30000 route=fe2.iom.com_frontend hcmethod=GET hcinterval=10 hcuri=/monitoring/services/health/status
</Proxy>

# enable balancer-manager, avoid proxying
<Location /balancer-manager>
    SetHandler balancer-manager
    ProxyPass !
</Location>

# enable server-status, avoid proxying
ExtendedStatus on
<Location /server-status>
    SetHandler server-status
    ProxyPass !
</Location>

# proxying everything else
ProxyPass / balancer://omscluster/ stickysession=JSESSIONID
ProxyPassReverse / balancer://omscluster/ stickysession=JSESSIONID
```
Frontend servers do not hold any state information within the file system and the business processes running in frontend servers can be interrupted at any time. Therefore you can start, stop, create or destroy them whenever you like.
In addition to the health checks made on the frontend servers, which control the behavior of the load balancer, it is recommended to run the frontend server under control of the server monitoring tool, i.e., the IOM Watchdog. The Watchdog provides the ability to restart the server whenever it becomes unhealthy or the process dies unexpectedly. Please see Guide - IOM Watchdog 2.2 - 2.11 and Health Check Requests and Cluster Status for more information.
Since traffic to frontend servers is already controlled by the health check requests made by the load balancer, which can react quite fast (it can enable/disable a frontend server within a few seconds), the Watchdog should react more lazily on unhealthy frontend servers, e.g., only after 5 minutes in an unhealthy state. The according configuration is made in $OMS_ETC/watchdog.properties of the frontend server.
```
# number of seconds before failed health-checks lead to restart of the
# watched process
watchdog.healthcheck.timeout = 300
```
Frontend servers are working in parallel. The failover feature of watchdog has to be disabled for this type of server. The according configuration is made in $OMS_ETC/watchdog.properties of the frontend server.
```
# switch on/off failover functionality. Has to be set to true, when watchdog
# controls a backend-server. Has to be set to false, when a frontend-server
# is controlled.
watchdog.failover.enabled = false
```
Since a frontend and a backend server combined in one HA node would compete for the same ports, the port configuration or the address binding of one of the servers must be changed. The easiest and recommended solution is setting JBOSS_PORT_OFFSET in $OMS_ETC/installation.properties of the frontend server to a value of 100. This adds 100 to all port numbers of the frontend server: e.g., the port of the undertow subsystem (embedded web server) changes from 8080 (the default value) to 8180, and the Wildfly administration port changes from 9990 (default) to 10090.
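As an illustration, the according entry in the frontend server's $OMS_ETC/installation.properties could look like the following sketch (the remaining properties of that file are omitted here):
```
# installation.properties of the frontend server (excerpt)
# shift all ports of this server by 100, e.g., 8080 -> 8180, 9990 -> 10090
JBOSS_PORT_OFFSET=100
```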
When managing the frontend server via jboss-cli.sh, it is important to always pass the correct address and port to the program. IOM's set_env.sh helps by providing the according variables JBOSS_BIND_ADDRESS and JBOSS_MGMT_PORT. When using jboss-cli.sh, you have to pass them as --controller=$JBOSS_BIND_ADDRESS:$JBOSS_MGMT_PORT.
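As a minimal sketch, assuming set_env.sh resides in the bin/ directory listed in the directory structure above and can be sourced from a shell, a call of jboss-cli.sh against the frontend server could look like this (the paths are assumptions, not fixed IOM defaults):
```
# provide JBOSS_BIND_ADDRESS and JBOSS_MGMT_PORT in the current shell
. /opt/oms/bin/set_env.sh

# connect jboss-cli.sh to the management interface of the frontend server
/opt/oms/wildfly/bin/jboss-cli.sh --connect --controller=$JBOSS_BIND_ADDRESS:$JBOSS_MGMT_PORT
```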
Backend servers are stateful. There are some measures to deal with this statefulness:
In contrast to the frontend servers, the usage of the IOM Watchdog for backend servers is not optional. The most important purposes of the watchdog for backend servers are:
The Watchdog starts the backend server only if no other backend server is already active, and it stops the backend server process in case it becomes unhealthy or loses its active status. Hence, the Watchdog for backend servers has the same purpose as the health checks of the load balancer for frontend servers: it guarantees that an unhealthy backend server is taken out of the system. Just as with the health checks made by the load balancer, an unhealthy backend server has to be deactivated within a few seconds. Therefore it is recommended to use a much shorter health check timeout for backend servers. The according configuration is made in $OMS_ETC/watchdog.properties of the backend server.
```
# number of seconds before failed health-checks lead to restart of the
# watched process
watchdog.healthcheck.timeout = 20
```
The Watchdog uses a centralized repository (database) in which all Watchdogs negotiate the currently active backend server. All Watchdogs constantly check this database. If the formerly active backend server has lost its active state, e.g., because it became unhealthy, the next Watchdog takes over: it claims the active state and starts another backend server on another node. The failover feature has to be enabled for backend servers in $OMS_ETC/watchdog.properties.
```
# switch on/off failover functionality. Has to be set to true, when watchdog
# controls a backend-server. Has to be set to false, when a frontend-server
# is controlled.
watchdog.failover.enabled = true
```
For details please see Guide - IOM Watchdog 2.2 - 2.11.
Cluster jobs only exist on the backend server. They require no special attention in regard to HA with the current approach using only one active backend server. Please see Guide - Intershop Order Management - Job Scheduling | Clustered Jobs for more information.
Some directories of the backend server contain stateful data. These directories have to be synchronized across all backend servers. The following directories of backend servers have to be synchronized:
It is recommended to use a clustered file system (e.g., GlusterFS) for synchronization.
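For illustration only, a replicated GlusterFS volume for one of these directories could be created and mounted roughly as follows; the volume name, brick paths, host names, and mount point are placeholders and not IOM defaults:
```
# on one node: create and start a volume replicated across two backend nodes
gluster volume create oms_shared replica 2 be1.iom.com:/data/glusterfs/oms_shared be2.iom.com:/data/glusterfs/oms_shared
gluster volume start oms_shared

# on every backend server: mount the volume at the directory to be synchronized
mount -t glusterfs be1.iom.com:/oms_shared /var/opt/oms/jobs
```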
IOM uses two FTP servers for internal communication between frontend and backend servers. The HA node combines both FTP servers into a single server with two accounts. The FTP server that is part of an HA node is used by the local frontend and the local backend server only. All IOM application servers have to use the following configuration in $OMS_ETC/cluster.properties:
```
/system-property=is.oms.media.host: "localhost"
/system-property=is.oms.pdf.host: "localhost"
```
The directories used by the FTP server accounts are synchronized between all HA nodes. A clustered file system should be used for synchronization, since all nodes are equal in this case; there is no single node with a master role.
The following directories have to be synchronized between all frontend servers:
Checking the functionality of the FTP server is part of the health check, see Concept - IOM Server Health Check.
Depending on your infrastructure, you can choose different solutions to provide a highly available FTP service for IOM:
All application servers provide a health check that can be requested using the URL /monitoring/services/health/status. It responds with HTTP status code 200 if the application server is healthy, otherwise it responds with 5XX.
To ease error analysis, the content delivered by the health check URL contains further information about the processed checks. This information is provided in a machine-readable format (JSON) that can also be easily understood by humans.
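For a quick manual check, e.g., against the frontend server 10.10.10.1 with port 8180 from the load balancer example above, the health check can simply be requested via HTTP:
```
# prints the JSON body with the check details and the HTTP status code (200 = healthy, 5XX = unhealthy)
curl -s -w "\nHTTP status: %{http_code}\n" http://10.10.10.1:8180/monitoring/services/health/status
```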
Health checks require the deployment of oms.monitoring-app-2.x.x.x.war.
Health checks of FTP servers can be enabled or disabled. The according settings are realized by the two properties:
```
/system-property=is.oms.media.healthcheck: "enabled"|"disabled"
/system-property=is.oms.pdf.healthcheck: "enabled"|"disabled"
```
See Concept - IOM Server Health Check for more information.
For an overview of all application servers within a cluster, each health check request updates the current status of the application server within the database. The cluster status can be retrieved using the URL /monitoring/services/health/clusterstatus. Basic authentication is required; the user also needs the permission Basic application management assigned at the root organization.
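The cluster status can be requested the same way, e.g., with curl and basic authentication; the user name and password below are placeholders for an IOM user holding the required permission:
```
# requires an IOM user with the permission "Basic application management" at the root organization
curl -s -u admin:CHANGEME http://10.10.10.1:8180/monitoring/services/health/clusterstatus
```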
See Concept - IOM Server Health Check for more information.
Frontend servers and backend servers all connect to the PostgreSQL database.
For high availability the database and the connection to it also have to support HA:
To provide HA, the application servers are able to reconnect to the database without the need of a restart.
To work properly, invalid connections must be recognized and evicted from the pool. The xa-datasource configuration defines how this happens. We recommend using the background validation checker rather than the validate-on-match method to reduce the checking overhead. Moreover, the timeout configuration parameters may influence the reconnect behavior (old connections might not get evicted as long as the timeouts are not reached).
Please see Red Hat's Administration and Configuration Guide for more information about the datasource configuration.
xa-datasource configuration example:
```
# The pool size depends on the number of application servers and the database server resources
min-pool-size="30"
max-pool-size="80"
pool-prefill="true"

# timeouts
set-tx-query-timeout="true"
query-timeout="3600"
blocking-timeout-wait-millis="3000"
idle-timeout-minutes="60"

# connection validation
validate-on-match="false"
background-validation="true"
background-validation-millis="20000"
exception-sorter-class-name="org.jboss.jca.adapters.jdbc.extensions.postgres.PostgreSQLExceptionSorter"
valid-connection-checker-class-name="org.jboss.jca.adapters.jdbc.extensions.postgres.PostgreSQLValidConnectionChecker"

# resources that are unused or not supported by IOM
interleaving="false"
pad-xid="false"
wrap-xa-resource="false"
same-rm-override="false"
share-prepared-statements="false"
```
IOM supports access to PostgreSQL HA clusters, but always has to be connected to the master database.
A PostgreSQL HA cluster usually consists of one master server and one or more hot-standby servers. The master server is the only one that is allowed to change data. An additional witness server may be needed by the failover process to ensure an odd total number of voting servers.
During a failover, the IOM application must be redirected to the new master. One solution is to add a proxy layer between the IOM application servers and the PostgreSQL HA cluster. This proxy layer can be realized by PgBouncer. PgBouncer has to be reconfigured on the fly (without restart) whenever the current master changes. Being a connection pooler, PgBouncer can also be used to limit the number of connections to PostgreSQL. More than one instance of PgBouncer should be deployed to avoid single points of failure.
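As a rough sketch of such a proxy layer (not a configuration shipped with IOM), a PgBouncer instance could map a logical database name to the current PostgreSQL master; on a failover, only the host entry in this file is rewritten and PgBouncer is reloaded. Host, port, database name, and pool sizes are placeholders:
```
; pgbouncer.ini (excerpt)
[databases]
; points to the current master, rewritten on failover
oms = host=10.10.10.5 port=5432 dbname=oms

[pgbouncer]
listen_addr = 0.0.0.0
listen_port = 6432
auth_type = md5
auth_file = /etc/pgbouncer/userlist.txt
; limit the number of connections opened towards PostgreSQL
max_client_conn = 200
default_pool_size = 50
```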
The IOM database connection address is defined in $OMS_ETC/cluster.properties by the property is.oms.db.hostlist, which supports a list of one or more database host addresses. For more information, please see Guide - Setup Intershop Order Management 2.10.
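A hedged sketch of the according entry in $OMS_ETC/cluster.properties is shown below; the host names are placeholders and the exact syntax of the list is described in the setup guide referenced above:
```
# one or more database host addresses, e.g., the PgBouncer instances in front of the PostgreSQL HA cluster
is.oms.db.hostlist=pgbouncer1.iom.com,pgbouncer2.iom.com
```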
For more information about PostgreSQL HA clusters see http://repmgr.org and https://pgbouncer.github.io
All hosts of the cluster should use a clock synchronization service (daemon).
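On RHEL-compatible hosts this can, for example, be achieved with chrony; the commands below are an illustration and may differ depending on the distribution in use:
```
# install and enable the chrony time synchronization daemon
yum install -y chrony
systemctl enable --now chronyd
# verify that the clock is synchronized
timedatectl status
```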
The information provided in the Knowledge Base may not be applicable to all systems and situations. Intershop Communications will not be liable to any party for any direct or indirect damages resulting from the use of the Customer Support section of the Intershop Corporate Web site, including, without limitation, any lost profits, business interruption, loss of programs or other data on your information handling system.