The Intershop Order Management System (IOM) is an e-commerce middleware that combines the order processes used across all channels. It takes incoming orders from all available channels and connects them with selected order fulfillment processes. Depending on the configuration, these processes are managed individually for each combination of channels. In addition, it provides customers with greater transparency on product availability, order status, and returns. Thus, it supports call center employees, warehouse employees, and business managers in their respective fields of work.
This guide gives a technical overview of the Intershop Order Management System as well as of the applied technical concepts and latest technology updates.
The main target group of this document is system administrators.
Term | Description |
---|---|
GDPR | General Data Protection Regulation |
GlusterFS | A scale-out network-attached storage file system |
HA | High availability |
IOM | The abbreviation for Intershop Order Management |
JMS | Java Message Service |
OMS | The abbreviation for Order Management System, the technical name of the IOM |
OMT | The abbreviation for Order Management Tool, the graphical management tool of the IOM |
REST | Representational State Transfer |
RMA | Return Merchandize Authorization |
SMTP | Simple Mail Transfer Protocol |
SOAP | Simple Object Access Protocol |
Spring MVC | A Web Framework implementing the Model-View-Controller Pattern |
Watchdog | An IOM tool to monitor and manage the availability of IOM application servers |
The Intershop Order Management System basically consists of applications running on one or more application servers. If high availability is required, some backend components are running in a hot standby state.
Additional components are:
The following image shows the general architecture of the Intershop Order Management System excluding external applications. This image concentrates on the IOM applications themselves and the communication between them. A more infrastructure-centric view can be found in the section High Availability.
The open source project WildFly is used as the application server.
For further information please see https://docs.wildfly.org/.
The following table gives an overview of the directory structure of the IOM.
| Configuration variable | Default setting | Sub-Directory | Description |
|---|---|---|---|
| JBOSS_HOME | /opt/wildfly.$OMS_SERVER_TYPE | . | WildFly installation. Since only the major version is fixed, the installed WildFly version may differ between installations. |
| OMS_HOME | /opt/$OMS_USER.$OMS_SERVER_TYPE | application/ | XML, EAR, and WAR files of the IOM standard product |
| | | bin/ | Programs and scripts to run and operate IOM |
| | | doc/ | Documentation |
| | | lib/ | Additional libraries required to run IOM (currently the JDBC driver only) |
| | | data/ | Initial data |
| | | etc | Symlink to $OMS_ETC (not strictly necessary; provided for convenience when navigating manually) |
| | | var | Symlink to $OMS_VAR (not strictly necessary; provided for convenience when navigating manually) |
| OMS_ETC | /etc/opt/$OMS_USER.$SERVER_TYPE | . | Configuration files, e.g., for Quartz |
| OMS_VAR | /var/opt/$OMS_USER.$SERVER_TYPE | . | Var file system root |
| | | templates/ | Velocity templates for customer e-mails |
| | | customization/ | XML, EAR, and WAR files of the current project |
| | | xslt/ | XSL transformation templates for invoices, return slips, etc. |
| OMS_LOG | /var/opt/$OMS_USER.log | . | Location for log files |
| OMS_SHARE | /var/opt/$OMS_USER.share | . | Shared file system root |
| | | archive/ | Since OMS 2.17: used to archive old data, in particular sensitive data, before it is deleted from the database |
| | | importarticle/ | Import/export of all kinds of data (products, stocks, dispatches, returns) |
| | | communication-messages/ | Exchange of data, e.g., orders.xml |
| | | media/ | Media data, e.g., product images |
| | | pdf/ | PDF documents created by the backend server, e.g., invoices, credit notes, delivery notes |
| | | jobs/ | Reserved for projects: working files and archived files for scheduled project jobs |
There are several applications running on the application server. Tasks handled by these applications include, among others, basic functionality, processing of defined business processes, communication with external applications, and graphical user interaction. All applications are implemented in Java.
The following list gives an overview of all applications of the IOM.
Messaging ensures loose coupling of components. It allows the communication between different components of a distributed application to be loosely coupled, reliable, and asynchronous.
For this purpose, JMS messaging is used across various applications. JMS messaging is used within the application server only; messages are not sent from one application server to another.
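As an illustration of this decoupling, the following minimal Java sketch uses a plain `BlockingQueue` instead of the actual JMS API: a producer hands a message to a queue, and a consumer thread processes it asynchronously without either side knowing the other. The class and message names are hypothetical.

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

// Illustration only (java.util.concurrent, not the JMS API): producer and
// consumer are decoupled by a queue, analogous to how JMS messaging decouples
// IOM components within one application server.
public class MessagingSketch {
    /** Sends a message and lets a separate consumer thread handle it. */
    public static String roundTrip(String message) {
        BlockingQueue<String> queue = new LinkedBlockingQueue<>();
        StringBuilder result = new StringBuilder();

        // consumer: waits for the next message, no knowledge of the producer
        Thread consumer = new Thread(() -> {
            try {
                result.append("handled ").append(queue.take());
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        consumer.start();

        // producer: just puts the message on the queue and is done
        queue.add(message);
        try {
            consumer.join();
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return result.toString();
    }

    public static void main(String[] args) {
        System.out.println(roundTrip("ORDER_CREATED"));
    }
}
```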
The application Base contains the essential functionality of the IOM and provides several functionalities used by the other applications.
The application Monitoring supports the monitoring of application servers.
The deployment of the application is optional for each installation, but is recommended for HA environments.
Please see section High Availability for more detailed information.
The application OMT is the standard graphical user interface of the Intershop Order Management System.
It can be used to manage the IOM conveniently using a common Internet browser. This tool provides functionality to manage customers, items, and orders. Due to the sensitive data, a login is required. For this purpose, the OMT comes with a user and role management.
For frontend functionality, the application uses several frameworks, e.g., Bootstrap and jQuery.
The backend of the OMT is based on frameworks such as Spring, Spring MVC, and Hibernate.
The OMT communicates exclusively with the application Base, which must be running in the same application server.
The application Communication is responsible for the communication with external applications. Intended external applications are mostly shops and suppliers. Offered services include general order handling, return management, stock reservation, and more. Services are offered via SOAP and REST.
For further information see:
The application Process contains message-driven beans and is the starting point of the business processes of the IOM.
Typical business processes are the announcement of an order, routing of ordered articles to one or more suppliers, creation of invoice documents, or creation of payment notifications.
Processes are triggered by the Control application and by messages sent from locally running processes. Messages are not received from other application servers.
The application Control is responsible for all processes that should be triggered periodically (scheduled).
Scheduled processes are for example:
The application Impex is responsible for the import and export of selected business objects.
Impex can be used to exchange data with the connected actors as required. Possible business objects can be orders, customers, or products, for example.
The application GDPR offers functionality, including REST interfaces, to support the General Data Protection Regulation within the IOM as well as in external systems that can be connected.
Also see Overview - IOM REST API.
The application RMA offers functionality, including REST interfaces, to support the process of Return Merchandize Authorization within the IOM as well as in external systems that can be connected.
For further information see:
The application REST Communication is responsible for communication with external applications. Intended external applications are mostly shops and suppliers. The supported services are replacements of the SOAP interfaces offered with the application Communication. This includes, e.g., the creation of dispatch, response, and return messages.
For further information see Overview - IOM REST API.
Note: Available since IOM 2.13.
The application Transmission offers functionality including REST interfaces to support the message transmission handling of the IOM.
For further information see Overview - IOM REST API.
Note: Available since IOM 2.15.
The application Order State is a replacement of the SOAP OrderState service and offers a REST interface to get the status of one or more orders for given search criteria.
For further information see Overview - IOM REST API.
The IOM requires one database which stores all data of the application.
The open source project PostgreSQL is used as the database management system for the IOM.
For further information please refer to the PostgreSQL 11.2 Documentation.
All application data is spread across several database schemas, according to the purpose of the data.
This contains data for:
An SMTP server has to be configured for the cluster to send different types of mail.
General mail properties can be configured in cluster.properties.
Internal mails are sent, e.g., in case of exceptions that cannot be handled within the business processing.
External mails are, for example, mails to customers regarding order, delivery, or return confirmations.
High availability can be defined as follows:
The system is designed to meet the highest requirements in terms of performance and reliability. Several platform capabilities allow easy scaling without downtime.
Since version 2.2, Intershop Order Management is ready for high availability.
The following sections describe a tested and working approach to enabling IOM to be HA.
High availability can be provided by using symmetric cluster nodes. To this end, every server node consists of:
The configuration of all cluster nodes is identical, so every server node is able to replace any other. As a result, the entire system can answer requests as long as at least one node exists.
The following diagram shows two clustered nodes with most cluster specific infrastructure and communication. GlusterFS can be used to replicate data between IOM servers. IOM Watchdog for managing the availability of application servers is not shown here.
Most IOM applications (e.g., base, customization, process) running in the WildFly application server are scalable. They can be executed on many application servers in parallel and therefore require no special attention.
There are two IOM applications running in WildFly application server that must never be executed in parallel:
In order to guarantee that these applications are running on one node only, the singleton subsystem of WildFly application server is used. In short, this subsystem does the following:
In addition to the health checks performed on the WildFly application servers to control the behavior of the load balancer, it is required to run the WildFly application server under the control of a server monitoring tool, i.e., the IOM Watchdog. The Watchdog provides the ability to restart the server whenever it becomes unhealthy or the process dies unexpectedly. Please see Guide - IOM Watchdog 2.2 - 2.11 and IOM Technical Overview - High Availability (2.12 - 2.17)#Health Check Requests and Cluster Status for more information.
Since traffic to IOM servers is already controlled by the health check requests of the load balancer, which can react quite fast (it can enable/disable a WildFly application server within a few seconds), the Watchdog's reaction to unhealthy IOM application servers can be lazier, e.g., reacting after 5 minutes in an unhealthy state. The according configuration is made in $OMS_ETC/watchdog.properties.
```
# number of seconds before failed health-checks lead to restart of the
# watched process
watchdog.healthcheck.timeout = 300
```
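The restart rule expressed by this property can be sketched as follows: the process is restarted once health checks have been failing continuously for longer than the configured timeout. This is an assumed simplification for illustration, not the actual Watchdog implementation; the class name `RestartPolicy` is hypothetical.

```java
// Assumed, simplified sketch of the Watchdog restart rule: restart the watched
// process once health checks have been failing for timeoutSeconds or longer.
public class RestartPolicy {
    private final long timeoutSeconds;
    private long firstFailure = -1; // epoch seconds of first consecutive failure

    public RestartPolicy(long timeoutSeconds) {
        this.timeoutSeconds = timeoutSeconds;
    }

    /** Records one health check result; returns true if a restart is due. */
    public boolean onHealthCheck(boolean healthy, long nowSeconds) {
        if (healthy) {
            firstFailure = -1; // healthy again: reset the failure window
            return false;
        }
        if (firstFailure < 0) {
            firstFailure = nowSeconds; // start of an unhealthy period
        }
        return nowSeconds - firstFailure >= timeoutSeconds;
    }
}
```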
Some directories of IOM servers contain stateful runtime data that has to be shared by all IOM servers. These directories are placed as sub-directories within $OMS_SHARE.
It is recommended to use a clustered file system for synchronization (e.g., GlusterFS). Clustered file systems are highly available: the failure of a single node has an impact on this node only. A simple shared file system server, in contrast, does not provide a highly available solution: if the file server fails, all IOM servers are affected too.
If two or more IOM servers are used, a load balancer has to be placed in front of them. This load balancer has to support:
In general, all session data is persisted in the database, but some transient data is stored in the session and used in AJAX requests only. The called page prepares some information stored in the session, and AJAX calls embedded into the response of the page access it. This mechanism works only if page and AJAX requests are both handled by the same frontend application server.
The WildFly application server supports sticky sessions by appending route information to the session cookie. The route is simply a unique identifier of the server that should handle the session. As long as this server is available, the load balancer has to send all requests of this session to the same server. To guarantee this behavior, the load balancer has to support this kind of session stickiness.
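For illustration, the cookie format can be sketched as follows: assuming the route follows the first dot of the session ID value (`JSESSIONID=<sessionId>.<route>`), a load balancer could extract the target server like this. The class name `StickyRoute` is hypothetical.

```java
// Sketch of how a load balancer derives the target server from a session
// cookie value of the assumed form <sessionId>.<route>.
public class StickyRoute {
    /** Returns the route part of a session ID, or null if none is present. */
    public static String routeOf(String sessionId) {
        int dot = sessionId.indexOf('.'); // route starts after the first dot
        return dot < 0 ? null : sessionId.substring(dot + 1);
    }
}
```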
Both the frontend application server and the load balancer have to use identical routes. On the application server side, the route is configured by the following property, located in $OMS_ETC/system.std.frontend.properties:
```
# Make sure the "route" property at the load balancer uses the same ID as set here for "instance-id".
# The value assigned to instance-id will be appended to the session-ID. With the help of this information
# the load balancer (e.g., mod_proxy_balancer) realizes session stickiness.
/subsystem=undertow: instance-id="${installation.SERVER_ID}"
```
This example shows the default configuration of IOM. `instance-id` is set to the value `${installation.SERVER_ID}`, which itself is defined as `SERVER_ID` in $OMS_ETC/installation.properties.
If one of the WildFly servers disappears, e.g., for maintenance or due to a hardware failure, the load balancer has to be able to send all incoming requests dedicated to this WildFly server to the remaining WildFly servers.
The load balancer should support health check requests. These requests help the load balancer decide which WildFly application servers to use. For example, if a WildFly server is currently being redeployed, cannot connect to the database, or cannot access the shared file system, the load balancer should no longer use that server.
You can use any load balancer, as long as it is able to fulfill the requirements described in the sections above.
If you choose the Apache HTTP server as load balancer, the combination of `mod_proxy_balancer` and `mod_proxy_hcheck` is able to fulfill the requirements. The example below shows a configuration snippet of `mod_proxy_balancer` and `mod_proxy_hcheck` for two WildFly application servers. The according machines have the IPs 10.10.10.1 and 10.10.10.2, with the OMT application listening on port 8180. The value of `route` uses the default setting of `SERVER_ID`, which is set in $OMS_ETC/installation.properties to `$(hostname)_$(OMS_SERVER_TYPE)`. The health check URL is set to `/monitoring/services/health/status` for both servers. The interval for checking the health status is set to 10 seconds.
```
# load modules required for reverse proxy
LoadModule proxy_module modules/mod_proxy.so
LoadModule proxy_http_module modules/mod_proxy_http.so
LoadModule proxy_balancer_module modules/mod_proxy_balancer.so
LoadModule proxy_hcheck_module modules/mod_proxy_hcheck.so
LoadModule watchdog_module modules/mod_watchdog.so
LoadModule lbmethod_byrequests_module modules/mod_lbmethod_byrequests.so
LoadModule lbmethod_bytraffic_module modules/mod_lbmethod_bytraffic.so
LoadModule lbmethod_bybusyness_module modules/mod_lbmethod_bybusyness.so
LoadModule slotmem_shm_module modules/mod_slotmem_shm.so

# make sure not to act as a forward proxy
ProxyRequests Off

# do not rewrite Host header when forwarding requests
ProxyPreserveHost on

# define proxy with balancer members and settings
# make sure route is identical to the settings made at the WildFly application servers
<Proxy balancer://omscluster>
    BalancerMember http://10.10.10.1:8180 max=5 timeout=40 retry=40 acquire=30000 route=fe1.iom.com_cluster hcmethod=GET hcinterval=10 hcuri=/monitoring/services/health/status
    BalancerMember http://10.10.10.2:8180 max=5 timeout=40 retry=40 acquire=30000 route=fe2.iom.com_cluster hcmethod=GET hcinterval=10 hcuri=/monitoring/services/health/status
</Proxy>

# enable balancer-manager, avoid proxying
<Location /balancer-manager>
    SetHandler balancer-manager
    ProxyPass !
</Location>

# enable server-status, avoid proxying
ExtendedStatus on
<Location /server-status>
    SetHandler server-status
    ProxyPass !
</Location>

# proxy everything else
ProxyPass / balancer://omscluster/ stickysession=JSESSIONID
ProxyPassReverse / balancer://omscluster/ stickysession=JSESSIONID
```
All application servers provide a health check that can be requested using the URL /monitoring/services/health/status. It responds with HTTP status code 200 if the application server is healthy, otherwise it responds with 5XX.
To ease error analysis, the content delivered by the health check URL contains further information about processed checks. This information is provided in a machine readable format (JSON), which can also easily be understood by humans.
Health checks require the deployment of oms.monitoring-app-2.x.x.x.war.
Health checks of the shared file system can be enabled or disabled. If the IOM installation consists of a single node only (e.g., for tests or demos), a shared file system is not required. The according setting is provided by the property `is.oms.sharedfs.healthcheck`:

```
/system-property=is.oms.sharedfs.healthcheck: "enabled"|"disabled"
```
See Concept - IOM Server Health Check for more information.
For an overview of all application servers within a cluster, each health check request updates the current status of the application server within the database. The cluster status can be seen using the URL /monitoring/services/health/clusterstatus. Basic authentication is required. The user also requires the permission Basic application management, assigned at the root organization.
See Concept - IOM Server Health Check for more information.
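For illustration, the Basic authentication header required for such a request can be built as follows; user name and password are placeholders, and the class name `ClusterStatusAuth` is hypothetical.

```java
import java.nio.charset.StandardCharsets;
import java.util.Base64;

// Builds the standard HTTP Basic Authentication header value
// (base64 of "user:password"), as required by the clusterstatus URL.
public class ClusterStatusAuth {
    public static String basicAuth(String user, String password) {
        String token = Base64.getEncoder()
                .encodeToString((user + ":" + password).getBytes(StandardCharsets.UTF_8));
        return "Basic " + token;
    }
}
```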
Frontend and backend servers all connect to the PostgreSQL database.
For high availability, the database and the connection to it also have to support HA:
To provide HA, the application servers are able to reconnect to the database without a restart.
To work properly, invalid connections must be recognized and evicted from the pool. The `xa-datasource` configuration defines how this happens.
Intershop recommends using the background validation checker rather than the validate-on-match method to reduce the validation overhead. Moreover, the timeout configuration parameters may influence the reconnect behavior (old connections might not be evicted as long as the timeouts are not reached).
For more information about the datasource configuration, see Datasource Parameters in the Red Hat Customer Portal.
xa-datasource configuration example:

```
# The pool size depends on the number of application servers and the database server resources
min-pool-size="30"
max-pool-size="80"
pool-prefill="true"

# timeouts
set-tx-query-timeout="true"
query-timeout="3600"
blocking-timeout-wait-millis="3000"
idle-timeout-minutes="60"

# connection validation
validate-on-match="false"
background-validation="true"
background-validation-millis="20000"
exception-sorter-class-name="org.jboss.jca.adapters.jdbc.extensions.postgres.PostgreSQLExceptionSorter"
valid-connection-checker-class-name="org.jboss.jca.adapters.jdbc.extensions.postgres.PostgreSQLValidConnectionChecker"

# resources that are unused or not supported by IOM
interleaving="false"
pad-xid="false"
wrap-xa-resource="false"
same-rm-override="false"
share-prepared-statements="false"
```
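The idea behind background validation can be sketched as follows: a periodic task validates idle connections and evicts broken ones, so borrowing a connection does not require a validation query on every match. This is an assumed simplification for illustration, not WildFly's actual pool implementation; the class name `PoolSketch` is hypothetical.

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.Iterator;
import java.util.function.Predicate;

// Assumed, simplified sketch of background validation: a periodic pass over
// the idle connections evicts every connection the validity check rejects.
public class PoolSketch<C> {
    private final Deque<C> idle = new ArrayDeque<>();
    private final Predicate<C> isValid;

    public PoolSketch(Predicate<C> isValid) {
        this.isValid = isValid;
    }

    /** Returns a connection to the idle pool. */
    public void release(C connection) {
        idle.add(connection);
    }

    /** Runs one background-validation pass; returns the number of evicted connections. */
    public int backgroundValidation() {
        int evicted = 0;
        for (Iterator<C> it = idle.iterator(); it.hasNext(); ) {
            if (!isValid.test(it.next())) {
                it.remove();
                evicted++;
            }
        }
        return evicted;
    }

    public int size() {
        return idle.size();
    }
}
```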
IOM supports access to PostgreSQL HA clusters but must always connect to the master database.
A PostgreSQL HA cluster usually consists of one master server and one or more hot-standby servers. The master server is the only one that is allowed to change data. An additional witness server is needed by the failover process when the total number of servers (master + standbys) is even, so that a majority decision is always possible.
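The majority requirement behind the witness server can be sketched as a simple quorum check: a failover decision should only be taken when more than half of the voting nodes are reachable. The class name `Quorum` is hypothetical and the logic is an illustration, not the actual failover implementation.

```java
// Sketch of a quorum check used to avoid split-brain during failover:
// a new master may only be promoted if a strict majority of voting
// nodes (master + standbys + optional witness) is reachable.
public class Quorum {
    public static boolean hasMajority(int reachable, int totalVoters) {
        return reachable * 2 > totalVoters;
    }
}
```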
During a failover, the IOM application must be redirected to the new master. One solution is to add a proxy layer between the IOM application servers and the PostgreSQL HA cluster. This proxy layer can be realized with PgBouncer, which has to be reconfigured on the fly (without restart) whenever the current master changes. Being a connection pooler, PgBouncer can also be used to limit the number of connections to PostgreSQL. More than one instance of PgBouncer should be deployed to avoid a single point of failure.
The IOM database connection address is defined in $OMS_ETC/cluster.properties by the property `is.oms.db.hostlist`, which supports one or more database host addresses. For more information see Guide - Setup Intershop Order Management 2.10.
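A sketch of how such a host list could be used to locate the current master: try the configured hosts in order and pick the first one that reports itself as master. This is an assumed simplification (the predicate stands in for a real check such as querying `pg_is_in_recovery()`), not IOM's actual implementation; the class name `MasterLookup` is hypothetical.

```java
import java.util.List;
import java.util.Optional;
import java.util.function.Predicate;

// Assumed, simplified sketch: iterate over the hosts from is.oms.db.hostlist
// and return the first host that reports itself as the master.
public class MasterLookup {
    public static Optional<String> findMaster(List<String> hostlist,
                                              Predicate<String> isMaster) {
        return hostlist.stream().filter(isMaster).findFirst();
    }
}
```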
For more information about PostgreSQL HA clusters see http://repmgr.org and https://pgbouncer.github.io
All hosts of the cluster should use a clock synchronization service (daemon).