The Intershop Order Management System (IOM) is a middleware for e-commerce that combines the order processes used across all channels. It takes incoming orders from all available channels and connects them with the selected order fulfillment processes. Depending on the configuration, these processes can be managed individually for each combination of channels. In addition, IOM provides customers with greater transparency on product availability, order status, and returns. Thus, it supports call center employees, warehouse employees, and business managers in their respective fields of work.
This guide gives a technical overview of the Intershop Order Management System as well as the applied technical concepts and latest technology updates.
The main target group of this document is system administrators.
Term | Description |
---|---|
GDPR | General Data Protection Regulation |
HA | High availability |
HTTP | Hypertext Transfer Protocol |
HTTPS | Hypertext Transfer Protocol Secure |
IOM | The abbreviation for Intershop Order Management |
JMS | Java Message Service |
JSON | JavaScript Object Notation |
OMS | The abbreviation for Order Management System, the technical name of the IOM |
OMT | The abbreviation for Order Management Tool, the graphical management tool of the IOM |
REST | Representational State Transfer |
RMA | Return Merchandise Authorization |
SMTP | Simple Mail Transfer Protocol |
SOAP | Simple Object Access Protocol |
Spring MVC | A Web Framework implementing the Model-View-Controller Pattern |
The following figure provides a high-level overview of the components of IOM, other components required by IOM, and their relations. Going from top to bottom, the following components can be found:
All incoming communication has to go through an HTTP proxy, which has the following purposes and requirements:
As can be seen, there are two different types of IOM application servers. Every IOM installation needs exactly one application server that runs both the IOM scalable applications and the IOM singleton applications. For high availability and scalability, many more IOM application servers may be part of the IOM cluster, but these must run the IOM scalable applications only. More information can be found in the section IOM Application Server.
The IOM application servers use two different kinds of persistent storage: a shared file system and a database. In fact, an IOM installation is defined by the data persisted in these two places. Therefore, the persistent data have to be managed very carefully, and both storages have to be backed up in order to be able to restore an IOM installation.
Finally, the IOM applications need to be able to access an SMTP server in order to send e-mails as part of implemented business processes.
Any HTTP proxy that meets the requirements listed in the Architecture Overview can be used. The section Ingress and Ingress Controller provides more information about the actual implementation.
IOM applications run inside the Wildfly Application Server, which provides many of the base technologies used by IOM, e.g., JMS, JPA, and EJB.
For further information please see https://docs.wildfly.org/.
The following table gives an overview of the directory structure of IOM's shared file system.
Sub-Directory | Description |
---|---|
archive/ | Folder used to archive old data, mainly sensitive data, before deleting it from the database. |
importarticle/ | Import/export of all kinds of data (products, stocks, dispatches, returns) |
communication-messages/ | Exchange of data, e.g., for orders.xml |
media/ | Media data, e.g., product images |
pdf/ | PDF documents created by IOM application server, e.g., invoices, credit notes, delivery notes |
jobs/ | Reserved for projects, working files, and archived files for scheduled jobs for projects |
IOM requires a PostgreSQL Database Server. For further information see https://www.postgresql.org. The section PostgreSQL Database Server will give more information on how to use the PostgreSQL Database Server along with IOM.
An SMTP server is required to send different kinds of e-mails. The section SMTP Server provides information on how to use the SMTP server along with IOM.
IOM is delivered in the form of Docker images that are designed to run in Kubernetes. Intershop also provides Helm Charts for IOM, which make it very easy to operate IOM.
The following figure shows a general view of the concept of operating IOM with the help of Helm charts. The project owners (e.g., implementation partners) have to define a set of values, which controls the behavior of the IOM installation. Using the Helm command-line tool along with these values and the IOM Helm Charts, they are able to execute all tasks that are required to run IOM in a Kubernetes environment.
For more information on how to use Helm Charts for IOM see Operate Intershop Order Management (GitHub).
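A minimal sketch of this workflow is shown below. It is illustrative only: the repository URL, release name my-iom, chart reference, and values file name are placeholders, not names taken from the IOM documentation.

```sh
# Minimal sketch of operating IOM with Helm (all names are placeholders).
helm repo add iom-repo <chart-repository-url>   # placeholder: actual chart repository URL
helm upgrade --install my-iom iom-repo/iom \
  --namespace iom --create-namespace \
  -f values.yaml                                # the project-specific set of values
```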
The main goal of the IOM Helm Charts is to provide a running IOM system. For this reason, the Helm Charts for IOM have to cover all components that are required to operate IOM, as described in the section above. These are:
These components have to be translated into the world of Kubernetes.
Within a Kubernetes environment, an Ingress object defines how HTTP access to the underlying service (IOM in this case) is handled. In fact, the Ingress object holds the configuration for load balancing, HTTPS termination, and sticky sessions. However, the Ingress object is only a configuration snippet. This configuration has to be applied to an Ingress controller, which is the actually running software. Several implementations of Ingress controllers exist; the most common one is the NGINX Ingress controller. Within professional Kubernetes clusters, usually only one global Ingress controller exists, which is used by all Ingress objects.
There are two problems the Helm Charts for IOM have to deal with:
To solve these problems, the Helm Charts for IOM provide an integrated NGINX Ingress controller, which can be used if necessary. This is the case if no Ingress controller exists at all, or if the existing global Ingress controller is not an NGINX implementation. In the latter case, the internal NGINX Ingress controller has to be looped in as a proxy between the global Ingress controller and IOM.
The following table shows use cases and the corresponding settings of values to properly control Helm Charts for IOM. See Operate Intershop Order Management (GitHub) for examples.
# | Use Case | nginx.enabled | nginx.proxy.enabled | ingress.className |
---|---|---|---|---|
1 | global NGINX Ingress controller available | false | - | - |
2 | global Ingress controller available, but it is not an NGINX | true | true | - |
3 | global Ingress controller not available at all | true | false | nginx-iom |
nginx.enabled and nginx.proxy.enabled in the header of the table above are the two parameters that control the integrated NGINX Ingress controller of the IOM Helm Charts. These parameters are explained in detail in Operate Intershop Order Management (GitHub). ingress.className in the header of the table is a parameter that has to be set on Ingress object definitions. The description of this parameter is also covered by Operate Intershop Order Management (GitHub).
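As an illustration, a hedged sketch of use case 2 from the table (a global, non-NGINX Ingress controller exists); the release and chart names are placeholders:

```sh
# Use case 2: run the integrated NGINX Ingress controller as a proxy
# behind the existing global (non-NGINX) Ingress controller.
helm upgrade --install my-iom iom-repo/iom \
  --set nginx.enabled=true \
  --set nginx.proxy.enabled=true
```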
As described in Overall Architecture above, two different types of IOM application servers exist: one server running the IOM singleton and scalable applications, which has to exist exactly once in each installation of IOM, and zero or more application servers running the IOM scalable applications only.
The IOM Helm Charts use a StatefulSet to realize the required behavior. Within a StatefulSet, the pods have fixed names, all ending with a number. The first pod (ending with 0) is the only one that runs the IOM singleton applications.
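For illustration, listing the pods of such a StatefulSet might look like the following sketch; the label selector is an assumption, and the pod names follow the examples used later in this document:

```sh
kubectl get pods -l app.kubernetes.io/name=iom   # label selector is an assumption
# NAME       READY   STATUS    RESTARTS   AGE
# ci-iom-0   1/1     Running   0          2d    <- runs singleton + scalable applications
# ci-iom-1   1/1     Running   0          2d    <- runs scalable applications only
```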
Configuration examples and a full list of configuration parameters can be found in Operate Intershop Order Management (GitHub).
Different types of IOM installations require different types of SMTP servers. A production environment of IOM requires access to a professional e-mail service to deliver the e-mails sent by IOM. Of course, such a professional e-mail service is not part of the Helm Charts for IOM.
The SMTP server that is part of the IOM Helm Charts is intended for test, demo, and pre-production installations only. MailHog is an SMTP server that was created especially for testing purposes, which is why it is part of the IOM Helm Charts.
MailHog is not only an SMTP server: it also provides a REST API and a web GUI to access the received e-mails. The Helm Charts for IOM provide the required Ingress object, which has to be configured in the same way as described in section Ingress and Ingress Controller.
Configuration examples and a full list of configuration parameters can be found in Operate Intershop Order Management (GitHub).
As with the SMTP server, different types of IOM installations have different requirements for installations of PostgreSQL Database Server. A production system of IOM requires a professionally managed PostgreSQL Database Server, providing HA (high availability) features, full backup and restore capabilities, etc.
The Helm Charts for IOM do not cover a PostgreSQL Database Server that is suitable for a production environment. For production environments, the PostgreSQL Database Server should either be consumed as a managed service providing all the required features, or be set up and operated manually.
The PostgreSQL Database Server that is part of the IOM Helm Charts does not provide any HA or backup/restore features. It should be used for test and uncritical demo installations only. The integrated PostgreSQL Database Server allows deciding whether to store the database data in memory or to persist them in the file system.
Configuration examples and a full list of configuration parameters can be found in Operate Intershop Order Management (GitHub).
The shared file system is as important for IOM as the database data. Therefore, production systems of IOM must use an external storage provider (e.g. Azure-Files) instead of the one built into Helm Charts for IOM. For the same reason, the external storage provider needs to have very good backup and restore capabilities.
Uncritical IOM installations, like test and demo installations, may use the storage provider which is defined by Helm Charts for IOM.
Configuration examples and a full list of configuration parameters can be found in Operate Intershop Order Management (GitHub).
An IOM installation without customization cannot do anything. It is only an empty shell that needs to be filled.
Customization of IOM basically consists of the following:
All these data have to be provided by implementation partners in a specific way, which is described in Guide - IOM Standard Project Structure 3.X. The easiest way to provide artifacts in this standard project structure is to use the IOM Project Archetype. Projects created this way also provide the ability to easily create the project-specific IOM Docker image. This Docker image can then be rolled out by the IOM Helm Charts.
IOM provides standard metrics in Prometheus format at http://<pod-name>:9990/metrics.
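For example, the metrics endpoint can be queried from within the cluster; replace <pod-name> with the name of an IOM application server pod:

```sh
# Scrape the Prometheus metrics of a single IOM application server pod.
curl -s http://<pod-name>:9990/metrics
```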
IOM writes all logging information to stdout. Each log entry is written as a single line in JSON format. There are different sources of log messages, each using a different format. To enable automated processing of these log lines, all log formats provide a common set of metadata fields that give more information about the content and structure of the actual payload.
Key | Description |
---|---|
tenant | The name of the tenant, e.g., Intershop. Note: Deprecated since IOM 3.4.0.0. Datadog will inject the corresponding information in the future, without the need to loop it through IOM. |
environment | The name of the environment, e.g., prod, pre-prod, etc. Note: Deprecated since IOM 3.4.0.0. Datadog will inject the corresponding information in the future, without the need to loop it through IOM. |
logHost | The hostname of the pod that has written the log-line |
logVersion | The log-type-specific version of log-format |
logType | One of access, message, script. access: access log of Wildfly's undertow sub-system. message: log messages written by the Wildfly Application Server, its sub-systems, and IOM. script: messages written by shell scripts, e.g., to start Wildfly, initialize the database, etc. |
appName | The name of container and customization if available, e.g. iom, iom-app, iom-config, iom-app+ci, iom+ci |
appVersion | The version of IOM and of customization if available, e.g. 3.0.0.0+1.2.0.0 |
configName | Name of project configuration that was selected, e.g. ci. |
Version 1.0 of access-log provides the following information:
Key | Description |
---|---|
eventSource | Default entry, made by undertow subsystem. The source of the event in the request. There is redundancy with meta-data, but the undertow sub-system should not define the content of IOM's meta-data. |
hostName | Default entry, made by undertow subsystem. The Wildfly host that processed the request. There is redundancy with meta-data, but the undertow sub-system should not define the content of IOM's meta-data. |
bytesSent | The number of bytes sent in the body of the response. |
dateTime | Date and time of request, using format-string: "yyyy-MM-dd'T'HH:mm:ss.SSSXXX" (see JavaDoc of SimpleDateFormat). |
remoteHost | IP/hostname of the host that sent the request. |
requestLine | The complete request-line of the HTTP-request. |
responseHeaderContent-Type | Content of response-header Content-Type. |
responseHeaderSet-Cookie | Content of response-header Set-Cookie. |
responseCode | HTTP response code. |
remoteUser | Name of the user sending the request. |
localIp | IP of Wildfly Application-Server that received the request. |
localPort | Port of Wildfly Application-Server that received the request. |
requestProtocol | The protocol of the request. |
responseTime | Response time in milliseconds. |
requestScheme | The URI scheme of the request. |
requestHeaderReferer | Content of request-header Referer. |
requestHeaderUser-Agent | Content of request-header User-Agent. |
requestHeaderHost | Content of request-header Host. |
requestHeaderCookie | Content of request-header Cookie. |
requestHeaderX-Forwarded-For | Content of request-header X-Forwarded-For. |
requestHeaderX-Real-IP | Content of request-header X-Real-IP. |
requestHeaderX-Forwarded-Host | Content of request-header X-Forwarded-Host. |
requestHeaderX-Forwarded-Proto | Content of request-header X-Forwarded-Proto. |
For better readability, the log line in the following example has been formatted. The IOM application server always writes such a log entry to a single line.
{ "eventSource": "web-access", "hostName": "default-host", "tenant": "Intershop", "environment": "aks-ci", "logHost": "ci-iom-0", "logVersion": "1.0", "appVersion": "3.0.0.0-SNAPSHOT@19137+1.2.0.0-SNAPSHOT", "appName": "iom-app+ci", "logType": "access", "configName": "ci", "bytesSent": 0, "dateTime": "2020-09-15T16:56:20.697Z", "localIp": "10.244.1.7", "localPort": 8080, "remoteHost": "10.244.0.148", "remoteUser": null, "requestHeaderReferer": "http://global-ingress-nginx-controller.iom-3-0-0-0-nginx.svc.cluster.local/omt/app/roleAssignment/userManagement", "requestHeaderUser-Agent": "Mozilla/5.0 (X11; Linux x86_64; rv:72.0) Gecko/20100101 Firefox/72.0", "requestHeaderHost": "global-ingress-nginx-controller.iom-3-0-0-0-nginx.svc.cluster.local", "requestHeaderCookie": "route=1600188975.597.41.791971; JSESSIONID=stHSCG0HI0eWUgLOThNGBm-pYEtfSohEh6js3jdZ.ci-iom-0; org.springframework.web.servlet.i18n.CookieLocaleResolver.LOCALE=en", "requestHeaderX-Forwarded-For": "10.244.0.148", "requestHeaderX-Real-IP": "10.244.0.148", "requestHeaderX-Forwarded-Host": "global-ingress-nginx-controller.iom-3-0-0-0-nginx.svc.cluster.local", "requestHeaderX-Forwarded-Proto": "http", "requestLine": "GET /omt/static/amCharts/js/amCharts.js?version=3.0.0.0-SNAPSHOT HTTP/1.1", "requestProtocol": "HTTP/1.1", "requestScheme": "http", "responseCode": 304, "responseHeaderContent-Type": null, "responseHeaderSet-Cookie": null, "responseTime": 0 }
Version 1.0 of message-log provides the following information:
Key | Description |
---|---|
timestamp | The timestamp of the log message. |
sequence | The sequence number of messages. |
loggerClassName | The class name of the logger. |
loggerName | The name of the logger. |
level | The level of the logged message. |
message | The simple unformatted message without stack trace. |
threadName | The name of the caller's thread. |
threadId | The ID of the caller's thread. |
mdc | The mapped diagnostic context entry. |
ndc | The nested diagnostic context entries. |
hostName | The hostname of the IOM application server that has written the message. |
processName | The name of the process. |
processId | The ID of the process. |
stackTrace | The exception stack trace (formatting characters are present, but quoted). |
sourceClassName | The class of the code calling the log method. |
sourceFileName | The source file of the code calling the log method. |
sourceMethodName | The caller's method name. |
sourceLineNumber | The line number of the caller. |
sourceModuleName | The name of the module the log message came from. |
sourceModuleVersion | The version of the module the log message came from. |
For better readability, the log line in the following example has been formatted; the IOM application server always writes such a log entry to a single line. Additionally, the stack trace was shortened and line breaks were added, which means the example does not show valid JSON.
{ "timestamp": "2020-09-14T09:38:14.852Z", "sequence": 1933085, "loggerClassName": "org.jboss.as.ejb3.logging.EjbLogger_$logger", "loggerName": "org.jboss.as.ejb3.invocation", "level": "ERROR", "message": "WFLYEJB0034: EJB Invocation failed on component CreateOrderTransmissionPTBean for method public abstract bakery.logic.valueobject.ProcessContainer bakery.logic.service.controller.Executable.execute(bakery.logic.valueobject.ProcessContainer) throws java.lang.Exception", "threadName": "Thread-20 (ActiveMQ-client-global-threads)", "threadId": 1406, "mdc": {}, "ndc": "", "hostName": "ci-iom-0", "processName": "jboss-modules.jar", "processId": 222, "stackTrace": ": javax.ejb.EJBTransactionRolledbackException: (orderPU) exception found for object 'class bakery.persistence.dataobject.configuration.connections.CommunicationPartnerDO'\n\tat org.jboss.as.ejb3@17.0.0.Final//org.jboss.as.ejb3.tx.CMTTxInterceptor.invokeInCallerTx(CMTTxInterceptor.java:203)\n\tat org.jboss.as.ejb3@17.0.0.Final//org.jboss.as.ejb3.tx.CMTTxInterceptor.required(CMTTxInterceptor.java:364)\n\tat ... org.hibernate@5.3.10.Final//org.hibernate.query.internal.AbstractProducedQuery.doList(AbstractProducedQuery.java:1537)\n\tat org.hibernate@5.3.10.Final//org.hibernate.query.internal.AbstractProducedQuery.list(AbstractProducedQuery.java:1505)\n\tat org.hibernate@5.3.10.Final//org.hibernate.query.Query.getResultList(Query.java:132)\n\tat deployment.bakery.base-app-3.0.0.0-SNAPSHOT.ear.bakery.persistence-core-3.0.0.0-SNAPSHOT.jar//bakery.persistence.bean.configuration.connections.CommunicationPartnerPersistenceBean.getCommunicationPartnerList(CommunicationPartnerPersistenceBean.java:1087)\n\t ... 321 more\n", "sourceClassName": "org.jboss.as.ejb3.component.interceptors.LoggingInterceptor", "sourceFileName": "LoggingInterceptor.java", "sourceMethodName": "processInvocation", "sourceLineNumber": 77, "sourceModuleName": "org.jboss.as.ejb3", "sourceModuleVersion": "17.0.0.Final", "tenant": "Intershop", "environment": "aks-ci", "logHost": "ci-iom-0", "logVersion": "1.0", "appVersion": "3.0.0.0-SNAPSHOT@19129+1.2.0.0-SNAPSHOT", "appName": "iom-app+ci", "logType": "message", "configName": "ci" }
Version 1.0 of script-log provides the following information:
Key | Description |
---|---|
timestamp | The timestamp of the log message, formatted by "date --iso-8601=seconds". |
level | The level of the logged message. |
processName | The name of the process that has written the message. |
message | The simple unformatted message. |
additionalInfo | Additional info, belonging to the message. |
For better readability, the log line in the following example has been formatted. The IOM scripts always write such a log entry to a single line.
{ "tenant": "Intershop", "environment": "aks-ci", "logHost": "ci-iom-0", "logVersion": "1.0", "appName": "iom-config+ci", "appVersion": "3.0.0.0-SNAPSHOT@19129+1.2.0.0-SNAPSHOT", "logType": "script", "timestamp": "2020-09-14T09:26:31+00:00", "level": "INFO", "processName": "apply_json_config.sh", "message": "success", "configName": "ci", "additionalInfo": "462 files were processed." }
Logging can be configured by a set of parameters, all having the prefix log. A detailed description of these parameters can be found in Operate Intershop Order Management (GitHub).
IOM is a mainly event-driven system. Events trigger business processes that run asynchronously within the IOM application servers.
These business processes may send further events, triggering other business processes, and so on. Initial sources of events may be HTTP requests coming from outside, or jobs and schedules triggered by timers.
Technically, events are realized by Java Message Service (JMS), and business processes are realized by message-driven beans.
The following list shows the deployment artifacts of IOM and the order in which they are loaded.
The application Base contains the essential functionality of the IOM and provides several functionalities used by the other applications.
Right after the Base application, the applications provided by project customizations are loaded.
The application Process contains message-driven beans and is the starting point of the business processes of the IOM. Typical business processes are the announcement of an order, routing of ordered articles to one or more suppliers, creation of invoice documents, or creation of payment notifications. Processes are triggered by the Control application, by other locally running business processes, and by incoming HTTP requests. Messages are sent and received locally only, not from other application servers.
The application Control is responsible for all processes that should be triggered periodically (scheduled). Scheduled processes are for example:
The application Control is a singleton application and must be deployed on one IOM application server only.
The application Impex is responsible for the import and export of selected business objects. Impex can be used to exchange data with the connected actors as required. Possible business objects can be orders, customers, or products, for example.
The application Impex is one of the singleton applications and must be deployed on one IOM application server only.
The application Communication is responsible for communication with external applications. Intended external applications are mostly shops and suppliers. Offered services include general order handling, return management, stock reservation, and more. Services are offered as SOAP and REST.
For further information see:
The application OMT is the standard graphical user interface of the Intershop Order Management System.
It can be used to manage the IOM in a comfortable way using a common web browser. This tool provides functionality to manage customers, items, and orders. Since sensitive data are involved, a login is required. For this purpose, the OMT comes with user and role management.
For frontend functionality, the application uses several frameworks e.g., Bootstrap, jQuery, and others. The backend of the OMT is based on frameworks such as Spring, Spring MVC, and Hibernate.
The application GDPR offers functionality, including REST interfaces, to support the General Data Protection Regulation for the IOM as well as for other external systems that can be connected.
Also, see Reference - IOM REST API.
The application RMA offers functionality, including REST interfaces, to support the Return Merchandise Authorization process of the IOM as well as of other external systems that can be connected.
For further information see Overview - IOM Return Merchandise Authorization and Overview - Intershop Order Management REST API.
The application Transmission offers functionality including REST interfaces to support the message transmission handling of the IOM.
For further information see Overview - Intershop Order Management REST API.
The application Schedule provides the functionality of customizable, time-based jobs.
For further information see Cookbook - IOM Job Framework.
The application Order State replaces the SOAP OrderState service and offers a REST interface to get the status of one or more orders for given search criteria.
For further information see Overview - Intershop Order Management REST API.
The application Order replaces the SOAP Order service and offers a REST interface to create orders.
For further information see Overview - Intershop Order Management REST API.
The application Monitoring supports the monitoring of application servers.
Please see section Health Check Requests for more information.
The message-driven beans defined by the application Control are triggered by Quartz jobs, which are defined in quartz-jobs-cluster.xml. Since the application Control is rolled out on one IOM application server only, the Quartz configuration becomes effective only on the IOM application server on which the application Control is deployed. However, the Quartz sub-system and the corresponding job configuration are active on all IOM application servers. Please keep this in mind when customizing Quartz jobs.
See also Reference - IOM Quartz Jobs 2.2.
There is a second concept that allows time-based jobs to be triggered: Schedules.
This concept is mainly intended to be used by projects to define custom jobs. The respective application Schedule is rolled out on all IOM application servers. Hence, the jobs triggered by Schedules run on all IOM application servers, too.
Also, see Cookbook - IOM Job Framework.
In contrast to traditional JMS-driven applications, Java Messages in IOM are sent and received only locally. There is no distribution of JMS events across different IOM application servers.
High availability (HA) can be defined as follows:
The system is designed for the highest requirements in terms of performance and reliability. Several platform capabilities allow easy scaling without downtimes.
The following sections describe a tested and working approach to enabling IOM to be highly available.
High availability can be provided by using multiple IOM application servers running in parallel. As shown in the section Overall Architecture, there are two different types of IOM application servers. Each IOM cluster needs to have exactly one IOM application server that runs the IOM singleton applications. However, these singleton applications do not process any requests coming from outside; instead, they only process jobs asynchronously.
From the standpoint of high availability, all IOM application servers are therefore identical in their ability to process requests synchronously. Hence, as long as at least one IOM application server is running, the application will be available.
For high availability, the Helm value replicaCount has to be set to 2 or higher, see Operate Intershop Order Management (GitHub).
During updates of IOM, the system cannot be available since database migration cannot be processed while application servers are still running. For more information see Operate Intershop Order Management (GitHub).
Some directories of the IOM servers contain stateful runtime data that have to be shared by all IOM servers. These directories are placed as sub-directories within the shared file system of IOM.
Intershop recommends using an HA-ready storage service for the shared file system. Otherwise, IOM cannot be highly available: if the shared file system is not available, all IOM servers are affected and will not be able to answer any requests.
For high availability, the Helm value persistence.storageClass has to be set to a highly available storage class. See Operate Intershop Order Management (GitHub).
According to section Ingress and Ingress Controller, the load balancer / Ingress controller can be provided globally within the Kubernetes cluster or by the IOM Helm Charts themselves. If the global Ingress controller is used, it has to be ensured that it is highly available. If the integrated NGINX Ingress controller is used, at least two instances of the controller must run. This is controlled by the parameter ingress-nginx.controller.replicaCount, see Operate Intershop Order Management (GitHub).
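Taken together, a hedged sketch of the HA-related values mentioned in the previous subsections might look as follows; the release name, chart reference, and storage class name are placeholders:

```sh
# HA-related values: >=2 application servers, an HA-ready storage class for the
# shared file system, and >=2 integrated NGINX Ingress controllers (if used).
helm upgrade --install my-iom iom-repo/iom \
  --set replicaCount=2 \
  --set persistence.storageClass=<ha-storage-class> \
  --set ingress-nginx.controller.replicaCount=2
```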
Session stickiness and session failover are realized by the Ingress controller. Depending on the capabilities of the Ingress controller, this is realized by the global or by the integrated NGINX Ingress controller, see section Ingress and Ingress Controller.
If one of the IOM application servers is not available, e.g., due to an upgrade or technical issues, the load balancer / Ingress controller has to be able to send all incoming requests destined for this IOM application server to one of the remaining IOM application servers. To support this, IOM provides a REST API for health check requests, which informs the Kubernetes cluster about the state of each IOM application server. The corresponding configuration is already part of the IOM Helm Charts. The health check requests support the load balancer / Ingress controller in deciding which IOM application servers to use and which not.
All IOM application servers provide a health check that can be requested using the URL /monitoring/services/health/status. It responds with HTTP status code 200 if the application server is healthy. Otherwise, it responds with 5XX.
To ease error analysis, the content delivered by the health check URL contains further information about processed checks. This information is provided in a machine-readable format (JSON), which can also easily be understood by humans.
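For example, the health check can be requested manually; the host name is a placeholder:

```sh
# A 200 response indicates a healthy application server; 5XX indicates a problem.
# The JSON body contains details about the processed checks.
curl -i http://<iom-host>/monitoring/services/health/status
```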
See Concept - IOM Server Health Check for more information.
All IOM application servers connect to the PostgreSQL database.
For high availability of the whole system, the database and the connection to it also have to support HA:
To provide HA, the application servers are able to reconnect to the database without a restart.
To work properly, invalid connections must be recognized and evicted from the pool. The xa-datasource configuration defines how this is done.
IOM uses the background validation checker rather than the validate-on-match method to reduce the checking overhead. Moreover, the timeout configuration parameters may influence the reconnect behavior (old connections might not get evicted as long as the timeouts are not reached). For more information about data source configuration, see Datasource Parameters in the Wildfly documentation.
The current configuration of IOM xa-datasource looks like this:
```
# The pool size depends on the number of application servers and the database server resources.
# It can be set at runtime by the corresponding environment variables.
/subsystem=datasources/xa-data-source=OmsDB: min-pool-size="${env.JBOSS_XA_POOLSIZE_MIN}"
/subsystem=datasources/xa-data-source=OmsDB: max-pool-size="${env.JBOSS_XA_POOLSIZE_MAX}"
/subsystem=datasources/xa-data-source=OmsDB: pool-prefill="true"

# timeouts
/subsystem=datasources/xa-data-source=OmsDB: set-tx-query-timeout="true"
/subsystem=datasources/xa-data-source=OmsDB: query-timeout="3600"
/subsystem=datasources/xa-data-source=OmsDB: blocking-timeout-wait-millis="3000"
/subsystem=datasources/xa-data-source=OmsDB: idle-timeout-minutes="60"

# connection validation
/subsystem=datasources/xa-data-source=OmsDB: validate-on-match="false"
/subsystem=datasources/xa-data-source=OmsDB: background-validation="true"
/subsystem=datasources/xa-data-source=OmsDB: background-validation-millis="20000"
/subsystem=datasources/xa-data-source=OmsDB: exception-sorter-class-name="org.jboss.jca.adapters.jdbc.extensions.postgres.PostgreSQLExceptionSorter"
/subsystem=datasources/xa-data-source=OmsDB: valid-connection-checker-class-name="org.jboss.jca.adapters.jdbc.extensions.postgres.PostgreSQLValidConnectionChecker"

# features that are not used or not supported by IOM
/subsystem=datasources/xa-data-source=OmsDB: interleaving="false"
/subsystem=datasources/xa-data-source=OmsDB: pad-xid="false"
/subsystem=datasources/xa-data-source=OmsDB: wrap-xa-resource="false"
/subsystem=datasources/xa-data-source=OmsDB: same-rm-override="false"
/subsystem=datasources/xa-data-source=OmsDB: share-prepared-statements="false"

# required by metrics
/subsystem=datasources/xa-data-source=OmsDB: statistics-enabled="true"
```
IOM supports access to PostgreSQL HA clusters, but always has to be connected to the master database.
A PostgreSQL HA cluster usually consists of one master server and one or more hot-standby servers. The master server is the only one that is allowed to change data. To avoid split-brain situations, the failover process requires an additional witness server if the total number of servers (master + standbys) is even.
During a failover, the IOM application must be redirected to the new master. One solution is to add a proxy layer between the IOM application servers and the PostgreSQL HA cluster. This proxy layer can be realized with PgBouncer, which has to be reconfigured on the fly (without restart) whenever the current master changes. Being a connection pooler, PgBouncer can also be used to limit the number of connections to PostgreSQL. More than one instance of PgBouncer should be run to avoid a single point of failure.
The IOM database connection address is defined by the Helm parameter oms.db.hostlist, which accepts one or more database host addresses. For more information see Operate Intershop Order Management (GitHub).
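A hedged sketch of such a setup, pointing IOM at two PgBouncer instances; the host names are placeholders, and the exact format of the host list is documented in Operate Intershop Order Management (GitHub):

```sh
# Two PgBouncer instances in front of the PostgreSQL HA cluster; commas in the
# list have to be backslash-escaped when passed via --set.
helm upgrade --install my-iom iom-repo/iom \
  --set oms.db.hostlist="pgbouncer-0.example.com\,pgbouncer-1.example.com"
```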
For more information about PostgreSQL HA clusters, see http://repmgr.org and https://pgbouncer.github.io.
All IOM application servers and the IOM database require synchronized clocks. Additionally, all IOM application servers and the IOM database have to use the same time zone. Currently, the time zone is fixed: it is set to Etc/UTC for the IOM application servers and the database.
Preconditions
Transregional installations require IOM v.3.5.0.0 or newer in combination with IOM Helm charts v.1.4.0 or newer.
A transregional installation of IOM spans over different Kubernetes clusters in different regions or at least in different locations. The goal of this type of setup is to guarantee continued availability, even if a whole location has failed.
This document covers IOM Helm releases only (the red boxes in the drawing below). All other infrastructure and processes that are required for a transregional installation of IOM are not in the scope of this document.
Additionally, it is important to know that IOM application servers do not communicate with each other. Hence, IOM itself does not require communication paths between Kubernetes clusters. Only the communication paths displayed in the drawing are required.
An IOM Helm release is an IOM instance running in a Kubernetes cluster, operated (installed/updated) by the IOM Helm charts. To make an IOM Helm release part of a transregional installation of IOM, some aspects have to be considered in more detail: