Guide - Operate Intershop Order Management 3.X

1 Introduction

The present guide is addressed to administrators and software developers who want to operate IOM 3. It enables them to understand the components of IOM, how to configure them, and how to run processes like installations and updates.

For a technical overview, please see References.

1.1 Glossary

Wording    | Description
Docker     | An operating system-level virtualization software. See also Kubernetes and Helm.
Helm       | A package manager for Kubernetes. See also Docker.
CLI        | Command Line Interface
IOM        | The abbreviation for Intershop Order Management
JBoss      | Synonym for WildFly (former name of the WildFly application server)
Kubernetes | An open-source system for automating deployment, scaling, and management of containerized applications. See also Docker and Helm.
OMS        | The abbreviation for Order Management System, the technical name of IOM
URL        | Uniform Resource Locator
WildFly    | The application server that IOM runs on

1.2 References

1.3 Additional References

2 Prerequisites

Production systems for Intershop Order Management (IOM) are usually provided as a service in the Azure Cloud. This service is part of the corresponding CaaS contracts with Intershop. Non-CaaS production environments require separate agreements with Intershop.
For the purpose of adapting the software to specific customer requirements and/or customer-specific environments, it is also possible to operate IOM (e.g., for corresponding CI environments, test systems, etc.) outside the Azure Cloud and independently of Azure Kubernetes Service (AKS). To support this, this document is intended for IOM administrators and software developers.

2.1 Kubernetes

The exact required version of Kubernetes can be found in the system requirements (see References).

IOM requires a Kubernetes runtime environment. Intershop cannot provide support on how to set up, maintain, or operate a Kubernetes runtime environment.

When using the Intershop CaaS environment, Kubernetes is included. In this case, Intershop is fully responsible for setting up, maintaining, and operating the Kubernetes cluster as part of the Intershop CaaS environment.

2.2 Helm

The exact required version of Helm can be found in the system requirements (see References).

IOM requires Helm to be operated in a Kubernetes environment. Intershop cannot provide support on how to set up and use Helm properly.

When using the Intershop CaaS environment, Helm is included. In this case, Intershop is fully responsible for setting up and using Helm as part of the Intershop CaaS environment.

2.3 Mail Server

The exact requirements of the mail server can be found in the system requirements (see References).

IOM requires an existing mail server that processes e-mails sent from IOM via the SMTP protocol. Intershop cannot provide support on how to set up, maintain, or operate a mail server. A mail server is not part of the Intershop CaaS environment.

2.4 PostgreSQL Server

The exact requirements of the PostgreSQL server can be found in the system requirements (see References).

IOM requires a PostgreSQL database hosted by a PostgreSQL database server. Intershop cannot provide support on how to set up and operate a PostgreSQL server. Some configuration hints are given in section PostgreSQL Server Configuration of this document.

When using the Intershop CaaS environment, a PostgreSQL database is included. In this case, Intershop is fully responsible for setting up and maintaining the database as well as setting up and operating the corresponding PostgreSQL server as part of the Intershop CaaS environment.

3 Tools & Concepts

In order to understand this document, some basic concepts and tools have to be known. It is not the goal of this document to teach you all these tools and concepts. However, it is intended to provide an insight into how these tools and concepts are used in the context of Intershop Order Management.

3.1 Kubernetes

Kubernetes is an open-source container-orchestration system for automating computer application deployment, scaling, and management. Many cloud services offer a Kubernetes-based platform or infrastructure as a service (PaaS or IaaS) on which Kubernetes can be deployed as a platform-providing service. Many vendors also provide their own branded Kubernetes distributions. (https://en.wikipedia.org/wiki/Kubernetes).

Since Kubernetes is a standard for cloud operations, using it for IOM promises the best compatibility with a wide range of cloud providers. Nevertheless, functionality is guaranteed for Microsoft Azure Kubernetes service as part of the Intershop CaaS environment only. You can use other environments at your own risk.

A full description of Kubernetes can be found at https://kubernetes.io/docs/home/.

3.2 Kubectl

Kubectl is a command-line interface to control Kubernetes clusters. It is part of Kubernetes, see https://kubernetes.io/docs/reference/kubectl/overview/.

Since it is a client that runs on the machine used to control the Kubernetes cluster, it has to be installed separately. For this reason, it is listed as a separate tool. Strictly speaking, it is not required to operate IOM, but it is used within section Examples of this document to view the status of Kubernetes objects.
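
If you want to verify that kubectl is installed and can reach your cluster, the following standard commands can be used (a minimal sketch, not specific to IOM):

check kubectl
# show the version of the kubectl client
kubectl version --client

# list the nodes of the currently configured Kubernetes cluster
kubectl get nodes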

3.3 Helm

Helm (https://helm.sh) sits on top of Kubernetes. Helm is a tool to manage the life cycle (install, upgrade, rollback, uninstall) of complex Kubernetes applications. To do so, it enables developers to create and provide so-called Helm charts, which are basically descriptions of Kubernetes objects, combined with a template and scripting language.
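
The following sketch shows these four life cycle operations for a hypothetical release my-release of a hypothetical chart repo/chart (illustrative only; the concrete IOM commands are shown in later sections):

Helm life cycle
# install a chart as release "my-release"
helm install my-release repo/chart --values=values.yaml

# roll out changed values or a new chart version
helm upgrade my-release repo/chart --values=values.yaml

# roll back to a previous revision (see Restrictions on Rollback below)
helm rollback my-release 1

# remove the release
helm uninstall my-release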

3.4 IOM Docker Images

IOM is provided in the form of Docker images. These images can be used directly, as shown in section Examples of this document, or can be the base for further customizations in the context of projects.

The images are available at:

  • docker.intershop.de/intershop/iom-dbaccount:1.1.0.0
  • docker.intershop.de/intershop/iom-config:3.0.0.0
  • docker.intershop.de/intershop/iom-app:3.0.0.0

Note

Adapt the tag (version number), if you use a newer version of IOM. For a full list of available versions see Overview - IOM Public Release Notes.

docker.intershop.de is a private Docker registry. Private Docker registries require authentication and sufficient rights to pull images from them. The corresponding authentication data can be passed in a Kubernetes secret object, which has to be set using the Helm parameter imagePullSecrets.

The document Pull an Image from a Private Registry from the Kubernetes documentation explains in general how to create Kubernetes secret objects suitable for authenticating at a private Docker registry. Pull images from an Azure container registry to a Kubernetes cluster from the Microsoft Azure documentation explains how to apply this concept to private Azure Container Registries.

The following box shows an example of how to create a Kubernetes secret to be used to access the private Docker registry docker.intershop.de. The name of the newly created secret is intershop-pull-secret, which has to be passed to the Helm parameter imagePullSecrets. The secret has to reside within the same Kubernetes namespace as the IOM cluster that uses it.

kubectl create secret docker-registry intershop-pull-secret \
    --docker-server=docker.intershop.de \
    --docker-username='<user name>' \
    --docker-password='<password>' \
    -n <kubernetes namespace>
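
The newly created secret is then referenced in the values file, as also shown in the examples later in this document (a minimal fragment):

values.yaml (fragment)
imagePullSecrets:
  - intershop-pull-secret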

3.5 IOM Helm-charts

IOM Helm-charts is a package containing the description of all Kubernetes-objects required to run IOM in Kubernetes. IOM Helm-charts are provided by Intershop at https://repository.intershop.de/helm. To use IOM Helm-charts, you have to execute the following commands (you may also have to pass credentials, which is not shown in the example).

# Add all Intershop charts
helm repo add intershop https://repository.intershop.de/helm

# Now the repo can be used to install IOM.
# The following command was taken from the examples section. Without the preconditions described there, it will not work.
# It is shown here only to demonstrate how to reference the IOM Helm-chart after adding the according repository.
helm install demo intershop/iom --values=values.yaml --namespace iom --timeout 20m0s --wait
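
To verify that the repository was added correctly and to see which chart versions are available, the standard Helm repository commands can be used (a quick check, no IOM specifics involved):

check Helm repository
# refresh the local cache of all configured chart repositories
helm repo update

# list all available versions of the IOM chart
helm search repo intershop/iom --versions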

The following illustration shows the most important components and personas when operating IOM with Helm. The project owner has to define a values file (available configuration parameters are explained in section Parameters), which can be used along with IOM Helm-charts to install, upgrade, rollback, and uninstall IOM within a Kubernetes runtime environment.

This is a very generalized view, which has some restrictions when used with IOM. The next section explains these restrictions in detail.

Helm overview

3.5.1 Restrictions on Rollback

IOM uses a database, which is constantly evolving along with new releases of IOM. For this reason, every version of IOM brings its own migration scripts, which lift the database to the new level. Old versions of the IOM database are in general not compatible with new versions of IOM application servers and vice versa. Also, projects change the database when rolling out new or changed project configurations.

Helm does not know anything about changes inside the database. When rolling back a release, only the changes in values and IOM Helm-packages are rolled back. To avoid inconsistencies and failures (e.g., a rollback to an old IOM application server version after updating the database structures to the new version), it is strongly recommended to avoid rollbacks in general.
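
If a rollback cannot be avoided in a particular situation, you should at least inspect the release history first and keep in mind that Helm does not touch the database (a sketch, using the release demo and namespace iom from the examples below):

inspect release history
# list all revisions of the release
helm history demo -n iom

# roll back values and Kubernetes objects to revision 1.
# Attention: the IOM database is NOT rolled back!
helm rollback demo 1 -n iom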

3.5.2 Restrictions on Upgrade

The same reasons that make the rollback process problematic also limit the upgrade process. 

When executing the upgrade process, the standard behavior of Helm is to keep the application online at all times. The different IOM application servers are updated one after another. In the case of incompatible database changes, this leads to problems, since one of the following situations is unavoidable: an old IOM application server tries to work with an already updated IOM database, or vice versa.

To overcome this problem, IOM Helm-charts provide the parameter downtime, which controls the behavior of the upgrade process. If downtime is set to true, the whole IOM cluster is stopped during the upgrade process. The IOM database is upgraded first, and after that, the IOM application servers are started again. This setting should always be used when upgrading to a new IOM version, unless otherwise noted.

Within the context of projects, many changes can be applied to the running IOM cluster without requiring downtime. In this case, the value of downtime has to be set to false before starting the upgrade process.

For safety reasons, the default value of downtime is true, which avoids any inconsistencies. Once you have understood the concept of the downtime parameter, you should set it to false to avoid downtimes as often as possible, and only set it to true if really required.
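
For example, when upgrading to a new IOM version, the parameter can also be overridden on the command line without touching the values file (a sketch, using the release demo and namespace iom from the first example below; --set has higher precedence than --values):

upgrade with downtime
helm upgrade demo intershop/iom --values=values.yaml --set downtime=true -n iom --timeout 20m0s --wait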

3.6 Intershop CaaS Environment

The previous section IOM Helm-charts gave you a general view of Helm, the IOM Helm-charts, and the corresponding processes. The Intershop CaaS environment modifies this concept a little bit, as shown in the following illustration.

The project owner is not able to trigger any processes directly. They can only manage a sub-set of values to be applied along with the IOM Helm-chart. The processes are triggered by a flux-controller, which observes the Git repository holding the values files. Depending on the type of IOM installation (INT, pre-PROD, PROD, etc.), processes might need to be triggered manually by Intershop Operations. Intershop Operations maintains a values file too, which has higher precedence than the file of the project owner. This way it is ensured that the project owner is not able to change certain critical settings. Which settings are affected depends on the type of IOM installation (INT, pre-PROD, PROD, etc.). E.g., a project owner should never be able to set the log level to DEBUG or TRACE on PROD environments.

Intershop CaaS


4 Examples

Despite the fact that Kubernetes and IOM Helm-charts make it very easy to set up and upgrade IOM installations, a reference to all the existing parameters that are available to control IOM Helm-charts is a very uncomfortable starting point. For this reason, three typical usage scenarios were chosen to provide an easy-to-understand entry point into IOM Helm-charts. All examples were designed in a way that the Intershop CaaS environment is not required. The following examples strictly follow the concept described in section IOM Helm-charts.

In order to understand the optional and required components defined in IOM Helm-charts, it is strongly recommended to read Guide - Intershop Order Management - Technical Overview first.

4.1 Local Demo System Running in Docker Desktop on Mac OS X 

4.1.1 Preconditions

  • Mac computer: Mac OS X >= v.10.15
  • Sufficient hardware resources: >= 16 GB main memory, multicore CPU
  • Installation of Docker Desktop: >= v.2.3
    • See: https://www.docker.com/products/docker-desktop 
    • >= 12 GB memory and >= 2 CPUs have to be assigned (Preferences | Resources | Advanced)
    • Enable Kubernetes (Preferences | Kubernetes)
    • Directories used to hold persistent data have to be shared with Docker Desktop (Preferences | Resources | File Sharing)
  • Installation of Helm: >= v3.2
  • Access to IOM docker images
  • Access to IOM Helm-charts

4.1.2 Requirements and Characteristics of IOM Installation

Requirements and characteristics are numbered. You will find these numbers in the values-file, which is listed below, in order to see the relation between requirement and current configuration.

  1. Usage of integrated PostgreSQL server.
  2. PostgreSQL data stored persistently to save main memory.
  3. No reset of PostgreSQL data during the installation process.
  4. Usage of the integrated SMTP server (Mailhog).
  5. Web access to the GUI of Mailhog.
  6. The shared file system of IOM stored persistently.
  7. Local access to the shared file system of IOM.
  8. Due to limited resources, only one IOM application server should run.
  9. Usage of the integrated NGINX controller for direct access to GUIs of IOM and Mailhog.
  10. Due to limited resources, only one instance of the integrated NGINX controller should run.
  11. No access from another computer required.

4.1.3 Values File

This values file cannot be copied as it is. Before it can be used, persistence.hostPath and postgres.persistence.hostPath have to be changed to existing paths, which are shared with Docker Desktop.

The values file contains minimal settings only, except dbaccount.resetData, which is listed explicitly even though it only contains the default value.

values.yaml
# use one IOM server only (requirement #8).
replicaCount: 1

imagePullSecrets:
  - intershop-pull-secret

image:
  repository: "docker.intershop.de/intershop/iom-app"
  tag: "3.0.0.0"

# configure ingress to forward requests for host "localhost" to IOM (requirements #9, #11).
# since integrated NGINX controller should be used, its class has to be set explicitly.
ingress:
  enabled: true
  annotations:
    kubernetes.io/ingress.class: "nginx-iom"
  hosts:
    - host: localhost
      paths: ["/"]

# IOM has to know its own public URL
oms:
  publicUrl: "https://localhost/"

# store data of shared-FS into local directory (requirement #6, #7)
persistence:
  hostPath: /Users/username/iom-share

config:
  image:
    repository: "docker.intershop.de/intershop/iom-config"
    tag: "3.0.0.0"

# create IOM database and corresponding database user before starting IOM.
# do not reset existing data during installation (requirement #3)
dbaccount:
  enabled: true
  resetData: false # optional, since false is default
  image:
    repository: "docker.intershop.de/intershop/iom-dbaccount"
    tag: "1.1.0.0"

# use integrated PostgreSQL server (requirement #1).
# store database data persistently into local directory (requirement #2).
postgres:
  enabled: true
  persistence:
    enabled: true
    hostPath: /Users/username/pgdata

# enable integrated NGINX ingress controller.
# this controller should not act as a proxy (requirement #9).
nginx:
  enabled: true
  proxy:
    enabled: false

# configure integrated NGINX ingress controller.
# one instance of NGINX is sufficient for demo scenario (requirement #10).
# set type to LoadBalancer to be accessible from public network (requirement #9).
ingress-nginx:
  controller:
    replicaCount: 1
    service:
      type: LoadBalancer

# enable integrated SMTP server (requirement #4).
# configure ingress to forward requests for any host to mailhog GUI (requirements #9).
# since ingress for IOM defined a more specific rule, mailhog GUI can be reached using any hostname except localhost.
# since integrated NGINX controller should be used, its class has to be set explicitly.
mailhog:
  enabled: true
  ingress:
    enabled: true
    annotations:
      kubernetes.io/ingress.class: "nginx-iom"
    hosts:
      - host:
        paths: ["/"]

Windows: IOM Share

The current example also works when using Docker Desktop on Windows. When working on Windows, you have to take care to use Unix-style path names. E.g., if the IOM share is located at C:\Users\username\iom-share, the according entry in values.yaml has to be noted as /c/Users/username/iom-share.

Windows: persistent PostgreSQL data

Setting postgres.persistence.hostPath to a local directory does not work on Windows, even if the directory is correctly shared with Docker Desktop. When starting, the PostgreSQL server tries to take ownership of the data directory, which fails in this case. There are two possibilities to overcome this problem:

  • Do not store PostgreSQL data persistently, by setting postgres.persistence.enabled to false.
  • Use a Docker volume for persistent storage of PostgreSQL data. The following box shows how to do this.
Windows: persistent PostgreSQL data
# create docker volume "iom-pgdata"
docker volume create -d local iom-pgdata

# get mount-point of newly created docker volume
# use mount-point as value for helm-parameter postgres.persistence.hostPath
docker volume inspect --format='{{.Mountpoint}}' iom-pgdata
/var/lib/docker/volumes/iom-pgdata/_data

# to remove docker volume, execute the following command
docker volume rm iom-pgdata

4.1.4 Installation of IOM

Create a file, values.yaml, and fill it with the content listed above. Adapt the settings of persistence.hostPath and postgres.persistence.hostPath to point to directories on your computer that are shared with Docker Desktop. After that, the installation process of IOM can be started.

install IOM
# create namespace "iom"
kubectl create namespace iom

# install IOM into namespace "iom"
helm install demo intershop/iom --values=values.yaml --namespace iom --timeout 20m0s --wait

This installation process will now take some minutes to finish. In the meantime, the progress of the installation process can be observed within a second terminal window. Using kubectl you can see the status of every Kubernetes object. For simplicity, the following example shows the status of pods only.

Just open a second terminal window and enter the following commands.

observe progress
# A few seconds after the start of IOM, only the integrated Postgres server has reached the "Init" phase. All other
# pods are in earlier phases.
kubectl get pods -n iom
NAME                                                  READY   STATUS              RESTARTS   AGE
demo-iom-0                                            0/1     Pending             0          2s
demo-mailhog-5dd4565b98-jphkm                         0/1     ContainerCreating   0          2s
demo-ingress-nginx-controller-f5bf56d64-cp9b5         0/1     ContainerCreating   0          2s
demo-postgres-7b796887fb-j4hdr                        0/1     Init:0/1            0          2s

# After some seconds, all pods except IOM are "Running" and READY (integrated PostgreSQL server, integrated
# SMTP server, integrated NGINX). IOM is in the Init phase, which means the init-containers are currently executed.
kubectl get pods -n iom
NAME                                                  READY   STATUS     RESTARTS   AGE
demo-iom-0                                            0/1     Init:1/3   0          38s
demo-mailhog-5dd4565b98-jphkm                         1/1     Running    0          38s
demo-ingress-nginx-controller-f5bf56d64-cp9b5         1/1     Running    0          38s
demo-postgres-7b796887fb-j4hdr                        1/1     Running    0          38s

# The first init-container executed in the iom-pod is dbaccount. Log messages can be seen
# by executing the following command. If everything works well, the last message will announce the
# successful execution of create_dbaccount.sh script.
kubectl logs demo-iom-0 -n iom -f -c dbaccount
...
{"tenant":"company-name","environment":"system-name","logHost":"demo-iom-0","logVersion":"1.0","appName":"iom-dbaccount","appVersion":"1.1.0.0-SNAPSHOT","logType":"script","timestamp":"2020-08-06T11:33:17+00:00","level":"INFO","processName":"create_dbaccount.sh","message":"success","configName":null}

# The second init-container executed by iom-pod is config, which fills the database and applies 
# migrations and configurations. The last message of config container will announce successful execution
# of load_dbmigrate.sh script.
kubectl logs demo-iom-0 -n iom -f -c config
...
{"tenant":"company-name","environment":"system-name","logHost":"demo-iom-0","logVersion":"1.0","appName":"iom-config","appVersion":"3.0.0.0-SNAPSHOT@12345","logType":"script","timestamp":"2020-08-06T11:35:51+00:00","level":"INFO","processName":"load_dbmigrate.sh","message":"success","configName":"env-name"}

# If the init-containers have finished successfully, the iom-pod is in "Running" state too. But it is not "READY"
# yet. Now the IOM applications and project customizations are deployed into the WildFly application server.
kubectl get pods -n iom
NAME                                                  READY   STATUS    RESTARTS   AGE
demo-iom-0                                            0/1     Running   0          3m50s
demo-mailhog-5dd4565b98-jphkm                         1/1     Running   0          3m50s
demo-ingress-nginx-controller-f5bf56d64-cp9b5         1/1     Running   0          3m50s
demo-postgres-7b796887fb-j4hdr                        1/1     Running   0          3m50s

# If all pods are "Running" and "READY" the installation process of IOM is finished.
kubectl get pods -n iom
NAME                                                  READY   STATUS    RESTARTS   AGE
demo-iom-0                                            1/1     Running   0          7m20s
demo-mailhog-5dd4565b98-jphkm                         1/1     Running   0          7m20s
demo-ingress-nginx-controller-f5bf56d64-cp9b5         1/1     Running   0          7m20s
demo-postgres-7b796887fb-j4hdr                        1/1     Running   0          7m20s

If all pods are Running and Ready, the installation process is finished. You should check the first terminal window, where the installation process was running.

Now we can access the web GUI of the new IOM installation. In fact, there are two web GUIs, one for IOM and one for Mailhog. According to our configuration, all requests dedicated to localhost will be forwarded to the IOM application server; any other requests are meant for the integrated SMTP server (Mailhog). Just open the URL "https://localhost/omt" in a web browser on your Mac. After accepting the self-signed certificate (the configuration did not include a valid certificate), you will see the login page of IOM. Log in as admin/!InterShop00! to go further.

Any other request that is not dedicated to localhost will be forwarded to Mailhog. To access the web GUI of Mailhog, just open the URL "https://127.0.0.1/" in your web browser. Once again you have to accept the self-signed certificate, and after that, you will see the Mailhog GUI.
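
Both web GUIs can also be checked from the command line (a quick sanity check; the option -k makes curl accept the self-signed certificate):

check web GUIs
# request the IOM GUI (host "localhost" is routed to IOM)
curl -k -I https://localhost/omt/

# request the Mailhog GUI (any other host, e.g. the plain IP, is routed to Mailhog)
curl -k -I https://127.0.0.1/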

4.1.5 Upgrade IOM

From a Helm perspective, the rollout of any change in values or charts is an upgrade process. The process is identical, no matter if only a simple value is changed or new Docker images of a new IOM release are rolled out. The example shown here demonstrates how to change the Java options used by the WildFly application server.

Before starting, think about the Restrictions on Upgrade. A change of Java options is an uncritical change that can be applied without downtime. But we have decided to use only a single IOM application server (see requirement #8). When using only a single IOM application server, a downtime during the upgrade process is inevitable. Hence, we do not have to think about the setting of the parameter downtime.

  1. Modify values.yaml by adding the following lines to the file:

    change values.yaml
    jboss:
      javaOpts: "-Xms512M -Xmx1024M"
    

    These changes are now rolled out by running Helm's upgrade process on the existing IOM installation.

  2. Start the upgrade process within a terminal window.

    upgrade IOM
    helm upgrade demo intershop/iom --values=values.yaml --namespace iom --timeout 20m0s --wait
    

    The upgrade process will take some minutes to finish.

  3. Enter the following commands in a second terminal window to watch the progress.
    As with the installation process before, this example is restricted to the status of pods only.

    observe progress
    # Only the Kubernetes objects of IOM have changed. Therefore Helm only upgrades IOM; the integrated SMTP server,
    # integrated PostgreSQL server, and integrated NGINX keep running unchanged. A few seconds after starting the
    # upgrade process, the only existing iom-pod is stopped.
    kubectl get pods -n iom
    NAME                                                  READY   STATUS        RESTARTS   AGE
    demo-iom-0                                            1/1     Terminating   0          40m
    demo-mailhog-5dd4565b98-jphkm                         1/1     Running       0          40m
    demo-ingress-nginx-controller-f5bf56d64-cp9b5         1/1     Running       0          40m
    demo-postgres-7b796887fb-j4hdr                        1/1     Running       0          40m
    
    # After the iom-pod is terminated, a new iom-pod is started with the new configuration. Init containers are
    # partially executed again.
    kubectl get pods -n iom
    NAME                                                  READY   STATUS     RESTARTS   AGE
    demo-iom-0                                            0/1     Init:2/3   0          6s
    demo-mailhog-5dd4565b98-jphkm                         1/1     Running    0          41m
    demo-ingress-nginx-controller-f5bf56d64-cp9b5         1/1     Running    0          41m
    demo-postgres-7b796887fb-j4hdr                        1/1     Running    0          41m
    
    # Finally, the pod is "Running" and "READY" again, which means IOM is up again.
    kubectl get pods -n iom
    NAME                                                  READY   STATUS    RESTARTS   AGE
    demo-iom-0                                            1/1     Running   0          5m4s
    demo-mailhog-5dd4565b98-jphkm                         1/1     Running   0          46m
    demo-ingress-nginx-controller-f5bf56d64-cp9b5         1/1     Running   0          46m
    demo-postgres-7b796887fb-j4hdr                        1/1     Running   0          46m

4.1.6 Uninstall IOM

The last process demonstrates how to uninstall IOM.

uninstall IOM
helm uninstall demo -n iom
release "demo" uninstalled

kubectl delete namespace iom
namespace "iom" deleted

Since the database data and the shared file system of IOM were stored in local directories of the current host, they still exist after uninstalling IOM. In fact, this data represents the complete state of IOM. If we installed IOM again, using the same directories for the shared file system and database data, the old IOM installation would be restored.
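
To really start from scratch, these directories have to be deleted as well (a sketch, assuming the example paths used in the values file above):

delete persistent data
# remove database data and the shared file system of IOM
rm -rf /Users/username/pgdata /Users/username/iom-share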

4.2 CI System Running in Minikube on Virtualized Linux

4.2.1 Preconditions

Docker, Minikube, kubectl, and Helm are external tools. This document does not cover how to install and use these tools.

If there are any traps or pitfalls you need to know, they will be explained here.

4.2.2 Minikube Traps, Pitfalls, Restrictions

Minikube provides an easy way to set up a local Kubernetes cluster. Local means that the whole cluster is running on a single machine only. Therefore it is a good playground for developers or for the setup of demo and CI installations, but of course not for any kind of serious service.

Running Minikube in an already virtualized environment raises the bar a bit higher. For this type of environment, originally the none driver was recommended (Minikube Documentation | Drivers | none), which was, at the time of writing, replaced by the docker driver (Minikube Documentation | Drivers | docker). Therefore the docker driver was chosen for the following examples.

4.2.2.1 Access from Public Network

Minikube supports access to applications through services of type LoadBalancer. However, a service of this type cannot be directly accessed from outside the Minikube cluster. To do so, the execution of an additional command is required (minikube tunnel), see Minikube Documentation | Handbook | Accessing apps | LoadBalancer access. As long as minikube tunnel is not running, the external IP of the LoadBalancer service remains in the state pending. This has an impact on the installation process of IOM, which has to be started with the command line argument --wait (see parameter downtime). But when using --wait, the installation will not finish until the external IP is available, which means the installation process will run into a timeout.

For this reason, another way was chosen in this example to make the IOM and Mailhog GUIs accessible from the public network. Instead of using a service of type LoadBalancer in combination with minikube tunnel, the service type ClusterIP is used, and after IOM has started, kubectl port-forward enables access from outside.

Finally, to get access to the port providing the web GUIs of IOM and Mailhog, you have to configure firewalld.
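
A minimal firewalld configuration could look as follows (a sketch, assuming port 8443 is used for the port-forwarding, as in the installation example below):

configure firewalld
# permanently open port 8443 and reload the firewall rules
sudo firewall-cmd --add-port=8443/tcp --permanent
sudo firewall-cmd --reload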

4.2.2.2 Persistent Data

Minikube supports persistent data in general but has restrictions for some drivers, e.g. for the docker driver, which is used in our example: Minikube Documentation | Handbook | Persistent Volumes. When using the docker driver, we could use the mount command to get access to a local directory within Minikube. This requires the '9P' file system kernel extension, which is not available on all systems and is missing on our test systems too.

In summary, this means: for our example, we can use persistent data, but we cannot access the corresponding directories from the host directly. Instead, these persistent volumes are hidden somewhere within the internal Docker/Minikube data structures. According to our requirements, this is fully sufficient.
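
If you ever need to inspect this data, you can do so from within the Minikube node (a sketch, assuming the hostPath settings /mnt/pgdata and /mnt/share used in the values file below):

inspect persistent data
# open a shell inside the Minikube node
minikube ssh

# inside the node: list the persistent data
ls -la /mnt/pgdata /mnt/share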

4.2.3 Requirements and Characteristics of IOM Installation

Requirements and characteristics are numbered again. You will find these numbers in the values-file, which is listed below, in order to see the relation between requirement and current configuration.

  1. At least two IOM application servers must run in parallel to test a distributed system.
  2. Usage of integrated PostgreSQL server.
  3. PostgreSQL data are stored persistently to save main memory.
  4. No reset of PostgreSQL data during the installation process. Due to the limitations of persistent data, a reset of PostgreSQL data during the installation process would work, but would not be sufficient, since the shared file system of IOM cannot be reset as easily. To delete all persistent data between CI runs, the whole Minikube cluster is deleted instead.
  5. Usage of an integrated SMTP server (Mailhog).
  6. The shared file-system of IOM must be stored persistently. Required if more than one IOM application server is running.
  7. Usage of integrated NGINX controller for direct access to GUIs of IOM and Mailhog.
  8. At least two instances of integrated NGINX controller should run.
  9. Access from other computers to IOM is required.
  10. The system should be able to be upgraded without downtime.

4.2.4 Execution of Tests

The requirements listed above do not show any traces of test executions. That is contradictory to the goals of CI systems in general. IOM Helm-charts support the execution of tests, but Intershop does not deliver any ready-to-run test images. It is up to you to develop images for testing your specific IOM project.

In order to enable you to reproduce the installation of the CI system, the execution of tests was skipped in the example. Nevertheless, this section gives you a short overview of what the integration of tests could look like.

For the execution of tests, the following things and information are required:

  • A Docker image that executes the tests and that collects and provides the results. The easiest way to access the results would be to provide them via HTTP.
  • The docker image used for the test may need the following information for test execution:
    • Base URL of IOM to be able to send HTTP requests.
    • Database connection information to prepare test data for customizations.
    • Base URL of Mailhog to check the results of mail-tests.

Since test images have to be provided in the project context, the integration into IOM Helm-charts is very generic. IOM Helm-charts simply provide a default deployment created by Helm.

Please be aware that the following cutout of a values file is purely fictitious. The names of Docker images and environment variables fully depend on the project-specific Docker images. The resource usage is realistic; it is based on our own tests, using Geb tests in combination with Firefox.

values.yaml test integration
iom-tests:
  enabled: true
  imagePullSecrets:
    - name: intershop-pull-secret
  image:
    repository: mycompany/iom-tests
    tag: 1.0.0.0
    pullPolicy: Always
  env:
    # name of service of integrated NGINX controller
    - name: IOM_HOST
      value: ci-ingress-iom-ingress-nginx-controller
    # name of service of integrated PostgreSQL server
    - name: DB_HOST
      value: ci-postgres
    - name: DB_PORT
      value: '5432'
    - name: OMS_DB_NAME
      value: oms_db
    - name: OMS_DB_USER
      value: oms_user
    - name: OMS_DB_PASSWD
      value: OmsDB
    # name of service of integrated mailhog server
    - name: MAILHOG_HOST
      value: ci-mailhog
    - name: MAILHOG_PORT
      value: '1025'
  containerPort: 8080
  livenessProbe:
    httpGet:
      path: /
      port: http
  readinessProbe:
    httpGet:
      path: /
      port: http
  resources:
    limits:
      cpu: 2
      memory: 11000Mi
    requests:
      cpu: 2
      memory: 11000Mi
  ingress:
    enabled: true
    annotations:
      nginx.ingress.kubernetes.io/ssl-redirect: "false"
    hosts:
      # hostname to be used to get access to test results
      - host: tests.iomci.com
        paths: ["/"]
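
Once the test pod is running and ready, the test results could be fetched via the configured ingress host (purely illustrative, matching the fictitious values above; since ssl-redirect is disabled, plain HTTP works):

fetch test results
curl http://tests.iomci.com/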

4.2.5 Values File

This values file cannot be copied as it is. Before it can be used, all occurrences of jdevoms11.rnd.j.intershop.de have to be replaced with the hostname of your CI system.

values file
# start 2 IOM application servers (requirement #1)
replicaCount: 2

# run upgrade processes without downtime (requirement #10)
downtime: false

imagePullSecrets:
  - intershop-pull-secret

image:
  repository: "docker.intershop.de/intershop/iom-app"
  tag: "3.0.0.0"

# configure ingress to forward requests to IOM, which are sent to jdevoms11.rnd.j.intershop.de (requirement #9).
# since the integrated NGINX controller should be used, its class has to be specified (requirement #7)
ingress:
  enabled: true
  annotations:
    kubernetes.io/ingress.class: "nginx-iom"
  hosts:
    - host: jdevoms11.rnd.j.intershop.de
      paths: ["/"]

# IOM has to know its own public URL
oms:
  publicUrl: "https://jdevoms11.rnd.j.intershop.de:8443/"

# store data of shared file system into local directory (requirement #6)
persistence:
  hostPath: /mnt/share

config:
  image:
    repository: "docker.intershop.de/intershop/iom-config"
    tag: "3.0.0.0"

# create IOM database and corresponding user before starting IOM
dbaccount:
  enabled: true
  image:
    repository: docker.intershop.de/intershop/iom-dbaccount
    tag: "1.1.0.0"

# use integrated PostgreSQL server
# store database data persistently into local directory (requirements #2, #3)
postgres:
  enabled: true
  persistence:
    enabled: true
    hostPath: /mnt/pgdata

# enable integrated NGINX ingress controller
# this controller should not act as a proxy (requirement #7)
nginx:
  enabled: true
  proxy:
    enabled: false

# enable integrated SMTP server
# allow access to Web-GUI of mailhog. all requests should be sent to Web-GUI of mailhog,
# unless a more specific rule exists. (requirement #5)
# since the integrated NGINX controller should be used, its class has to be specified (requirement #7).
mailhog:
  enabled: true
  ingress:
    enabled: true
    annotations:
      kubernetes.io/ingress.class: "nginx-iom"
    hosts:
      - host:
        paths: ["/"]

4.2.6 Installation of IOM

According to requirement #4, the Minikube cluster has to be deleted after each test run. For this reason, the creation of the Minikube cluster is covered in this section too.

create Minikube cluster
# Minikube, using vm-driver "docker", must run in user space.
# Hence, a corresponding user has to be created.
sudo useradd -m -U oms

# Get permission to access docker daemon socket
sudo usermod -aG docker oms && newgrp docker
 
# Change user.
# All commands, also from following sections, have to be executed by user oms!
su - oms
 
# Start minikube as oms user.
minikube start --vm-driver=docker

Create a file values.yaml and fill it with the content listed above. Adapt the hostname in values.yaml.

Once the Minikube cluster is running and the values file is prepared, the installation of IOM can be started.

Install IOM
# create namespace iom
kubectl create namespace iom

# install IOM into namespace iom
helm install ci intershop/iom --values=values.yaml --namespace iom --timeout 20m0s --wait

# make port 443 of the integrated NGINX controller available on all interfaces at port 8443
kubectl port-forward service/ci-ingress-nginx-controller 8443:443 -n iom --address 0.0.0.0

This installation process will now take some minutes to finish. In the meantime, the progress of the installation process can be observed within a second terminal window. Using kubectl you can see the status of every Kubernetes object. But for simplicity, the following example shows the status of pods only.

Just open a second terminal window and enter the following commands:

Observe progress
# A few seconds after starting IOM, only the integrated PostgreSQL server has reached the "Init" phase. All other
# pods are in earlier phases.
kubectl get pods -n iom
NAME                                                READY   STATUS              RESTARTS   AGE
ci-iom-0                                            0/1     Pending             0          3s
ci-mailhog-5744886bb8-x78l7                         0/1     ContainerCreating   0          3s
ci-ingress-nginx-controller-749c5f8f4-7fk6j         0/1     ContainerCreating   0          3s
ci-ingress-nginx-controller-749c5f8f4-d4htz         0/1     ContainerCreating   0          3s
ci-postgres-575976b886-ph6s4                        0/1     Init:0/1            0          3s

# A few seconds later, the IOM pod is in "Init" phase as well, which means the init-containers are
# currently executed.
kubectl get pods -n iom
NAME                                                READY   STATUS              RESTARTS   AGE
ci-iom-0                                            0/1     Init:0/3            0          16s
ci-mailhog-5744886bb8-x78l7                         1/1     Running             0          16s
ci-ingress-nginx-controller-749c5f8f4-7fk6j         0/1     ContainerCreating   0          16s
ci-ingress-nginx-controller-749c5f8f4-d4htz         0/1     ContainerCreating   0          16s
ci-postgres-575976b886-ph6s4                        0/1     PodInitializing     0          16s

# The first init-container executed in iom-pod is dbaccount. Log messages of this init-container can be seen
# by executing the following command. If everything works well, the last message will announce the successful execution
# of create_dbaccount.sh script.
kubectl logs ci-iom-0 -c dbaccount -f -n iom
...
{"tenant":"company-name","environment":"system-name","logHost":"ci-iom-0","logVersion":"1.0","appName":"iom-dbaccount","appVersion":"1.1.0.0-SNAPSHOT","logType":"script","timestamp":"2020-08-07T10:41:15+00:00","level":"INFO","processName":"create_dbaccount.sh","message":"success","configName":null}

# The second init-container executed by iom pod is config, which is filling the database, applying
# migrations and configurations. The last message of config container will announce successful execution
# of load_dbmigrate.sh script.
kubectl logs ci-iom-0 -c config -f -n iom
...
{"tenant":"company-name","environment":"system-name","logHost":"ci-iom-0","logVersion":"1.0","appName":"iom-config","appVersion":"3.0.0.0-SNAPSHOT@18918","logType":"script","timestamp":"2020-08-07T10:42:03+00:00","level":"INFO","processName":"load_dbmigrate.sh","message":"success","configName":"env-name"}

# If the init-containers have finished successfully, the iom-pod is in "Running" state too. But it is not "READY"
# yet. Now the IOM applications and project customizations are deployed into the WildFly application server.
kubectl get pods -n iom
NAME                                                READY   STATUS    RESTARTS   AGE
ci-iom-0                                            0/1     Running   0          2m26s
ci-mailhog-5744886bb8-x78l7                         1/1     Running   0          2m26s
ci-ingress-nginx-controller-749c5f8f4-7fk6j         1/1     Running   0          2m26s
ci-ingress-nginx-controller-749c5f8f4-d4htz         1/1     Running   0          2m26s
ci-postgres-575976b886-ph6s4                        1/1     Running   0          2m26s

# If the first iom-pod is "Running" and "READY", the second iom-pod will be started. From
# now on, IOM is accessible from outside.
kubectl get pods -n iom
NAME                                                READY   STATUS     RESTARTS   AGE
ci-iom-0                                            1/1     Running    0          4m52s
ci-iom-1                                            0/1     Init:0/3   0          2s
ci-mailhog-5744886bb8-x78l7                         1/1     Running    0          4m52s
ci-ingress-nginx-controller-749c5f8f4-7fk6j         1/1     Running    0          4m52s
ci-ingress-nginx-controller-749c5f8f4-d4htz         1/1     Running    0          4m52s
ci-postgres-575976b886-ph6s4                        1/1     Running    0          4m52s

# If all pods are "Running" and "READY", the installation process of IOM has finished.
kubectl get pods -n iom
NAME                                                READY   STATUS    RESTARTS   AGE
ci-iom-0                                            1/1     Running   0          6m15s
ci-iom-1                                            1/1     Running   0          85s
ci-mailhog-5744886bb8-x78l7                         1/1     Running   0          6m15s
ci-ingress-nginx-controller-749c5f8f4-7fk6j         1/1     Running   0          6m15s
ci-ingress-nginx-controller-749c5f8f4-d4htz         1/1     Running   0          6m15s
ci-postgres-575976b886-ph6s4                        1/1     Running   0          6m15s

If all pods are Running and Ready, the installation process has finished. You should check the first terminal window, where the installation process was running. As the last step of the IOM installation, the port-forwarding of the NGINX service has to be started.

Now we can access the web GUI of the new IOM installation. As already shown in the first example, there are two web GUIs, one for IOM and one for Mailhog. According to the configuration, all requests dedicated to jdevoms11.rnd.j.intershop.de (please replace it with the hostname of your CI system) will be forwarded to the IOM application server. Any other requests are meant for the integrated SMTP server (Mailhog). Just open the URL https://jdevoms11.rnd.j.intershop.de:8443/omt in a web browser. After accepting the self-signed certificate (the configuration did not include a valid certificate), you will see the login page of IOM. Log in as admin/!InterShop00! to go further.

Any other request that is not dedicated to jdevoms11.rnd.j.intershop.de will be forwarded to Mailhog. To access the web GUI of Mailhog, just use the IP instead of the hostname. Open the URL "https://10.0.29.69:8443/" (please replace it with the IP of your CI system) in your web browser. Once again you have to accept the self-signed certificate, and after that, you will see the Mailhog GUI.

4.2.7 Upgrade of IOM

Now we repeat the upgrade process which was already shown in the previous example. This simple example was chosen since from a Helm perspective, the rollout of any change in values or charts is an upgrade process. The process is identical, no matter if only a simple value is changed or new docker images of a new IOM release are rolled out. 

This example includes setting the downtime parameter (see Restrictions on Upgrade). A change of Java options is an uncritical change that can be applied without downtime. Since there is more than one IOM application server now, the upgrade process can be executed without downtime.

For upgrading IOM, values.yaml has to be changed. Just add the following lines to the file:

change values.yaml
jboss:
  javaOpts: "-Xms512M -Xmx1024M"

These changes are now rolled out by running Helm's upgrade process on the existing IOM installation. Start the process within a terminal window.

upgrade IOM
helm upgrade ci intershop/iom --values=values.yaml --namespace iom --timeout 20m0s --wait

The upgrade process will take some minutes to finish. In the meantime, we can watch the progress. As with the installation process before, this example is restricted to the status of pods only. Just enter the following commands in a second terminal window.

observe progress
# Right after starting the upgrade process, the first iom-pod is terminated.
kubectl get pods -n iom
NAME                                                READY   STATUS        RESTARTS   AGE
ci-iom-0                                            1/1     Running       0          12m
ci-iom-1                                            1/1     Terminating   0          7m44s
ci-mailhog-5744886bb8-x78l7                         1/1     Running       0          12m
ci-ingress-nginx-controller-749c5f8f4-7fk6j         1/1     Running       0          12m
ci-ingress-nginx-controller-749c5f8f4-d4htz         1/1     Running       0          12m
ci-postgres-575976b886-ph6s4                        1/1     Running       0          12m

# After this pod has terminated, it will be started again, now using the new configuration.
# Initialization is mostly identical to the install process.
kubectl get pods -n iom
NAME                                                READY   STATUS     RESTARTS   AGE
ci-iom-0                                            1/1     Running    0          13m
ci-iom-1                                            0/1     Init:0/3   0          2s
ci-mailhog-5744886bb8-x78l7                         1/1     Running    0          13m
ci-ingress-nginx-controller-749c5f8f4-7fk6j         1/1     Running    0          13m
ci-ingress-nginx-controller-749c5f8f4-d4htz         1/1     Running    0          13m
ci-postgres-575976b886-ph6s4                        1/1     Running    0          13m

# If initialization has finished, deployment of IOM- and customization-apps is executed.
# During this time, the pod is "Running" but not "READY".
kubectl get pods -n iom
NAME                                                READY   STATUS    RESTARTS   AGE
ci-iom-0                                            1/1     Running   0          13m
ci-iom-1                                            0/1     Running   0          15s
ci-mailhog-5744886bb8-x78l7                         1/1     Running   0          13m
ci-ingress-nginx-controller-749c5f8f4-7fk6j         1/1     Running   0          13m
ci-ingress-nginx-controller-749c5f8f4-d4htz         1/1     Running   0          13m
ci-postgres-575976b886-ph6s4                        1/1     Running   0          13m

# If the pod is "Running" and "READY", it is able to handle incoming requests. The other
# iom-pod can now be terminated.
kubectl get pods -n iom
NAME                                                READY   STATUS        RESTARTS   AGE
ci-iom-0                                            1/1     Terminating   0          14m
ci-iom-1                                            1/1     Running       0          88s
ci-mailhog-5744886bb8-x78l7                         1/1     Running       0          14m
ci-ingress-nginx-controller-749c5f8f4-7fk6j         1/1     Running       0          14m
ci-ingress-nginx-controller-749c5f8f4-d4htz         1/1     Running       0          14m
ci-postgres-575976b886-ph6s4                        1/1     Running       0          14m

# After termination, it is started again with new configuration. Init containers are
# mostly executed as during installation process.
kubectl get pods -n iom
NAME                                                READY   STATUS     RESTARTS   AGE
ci-iom-0                                            0/1     Init:0/3   0          0s
ci-iom-1                                            1/1     Running    0          119s
ci-mailhog-5744886bb8-x78l7                         1/1     Running    0          15m
ci-ingress-nginx-controller-749c5f8f4-7fk6j         1/1     Running    0          15m
ci-ingress-nginx-controller-749c5f8f4-d4htz         1/1     Running    0          15m
ci-postgres-575976b886-ph6s4                        1/1     Running    0          15m

# IOM- and customization-apps are deployed.
kubectl get pods -n iom
NAME                                                READY   STATUS    RESTARTS   AGE
ci-iom-0                                            0/1     Running   0          13s
ci-iom-1                                            1/1     Running   0          2m12s
ci-mailhog-5744886bb8-x78l7                         1/1     Running   0          15m
ci-ingress-nginx-controller-749c5f8f4-7fk6j         1/1     Running   0          15m
ci-ingress-nginx-controller-749c5f8f4-d4htz         1/1     Running   0          15m
ci-postgres-575976b886-ph6s4                        1/1     Running   0          15m

# Both IOM pods are "Running" and "READY" again, the upgrade process is finished now.
kubectl get pods -n iom
NAME                                                READY   STATUS    RESTARTS   AGE
ci-iom-0                                            1/1     Running   0          2m38s
ci-iom-1                                            1/1     Running   0          4m37s
ci-mailhog-5744886bb8-x78l7                         1/1     Running   0          17m
ci-ingress-nginx-controller-749c5f8f4-7fk6j         1/1     Running   0          17m
ci-ingress-nginx-controller-749c5f8f4-d4htz         1/1     Running   0          17m
ci-postgres-575976b886-ph6s4                        1/1     Running   0          17m

4.2.8 Uninstall of IOM

The last process demonstrates how to uninstall IOM. In order to get rid of the persistent data too (see requirement #4), the whole Minikube cluster has to be deleted. In fact, it would be sufficient to delete only the Minikube cluster, but for completeness, the other commands are listed as well.

uninstall IOM
# uninstall IOM release
helm uninstall ci -n iom
release "ci" uninstalled
 
# delete Kubernetes namespace used for IOM
kubectl delete namespace iom
namespace "iom" deleted

# delete whole Minikube cluster
minikube delete

4.3 Production System running in Azure Cloud

4.3.1 Preconditions

Please keep in mind that these preconditions reflect the use case described in section IOM Helm-charts. When using the Intershop CaaS environment, these preconditions are all covered by Intershop.

4.3.2 Requirements and Characteristics of IOM Installation

Requirements and characteristics are numbered again. You will find these numbers in the values-file listed below in order to see the relation between requirement and current configuration.

  1. Two IOM application servers must run in parallel.
  2. Usage of an external PostgreSQL server (Azure Database for PostgreSQL server).
  3. No reset of PostgreSQL data during the installation process. 
  4. Usage of an external SMTP server.
  5. Shared file system of IOM located on externally provided resources.
  6. Usage of an external Ingress controller, which is not based on NGINX. In this case, the integrated NGINX ingress controller has to act as a proxy, which is providing load-balancing and sticky sessions.
  7. Two instances of integrated NGINX controller must run.
  8. The system should be able to be upgraded without downtime.

4.3.3 Values File

The values file shown below reflects the requirements of the straight Helm approach, as described in section IOM Helm-charts. When running in an Intershop CaaS environment, most settings are made by Intershop Operations within the second values file. On the customer's side, only a few parameters remain that have to be set: downtime, image, config, and oms.smtp, and maybe some other parameters, depending on the type of installation.

Of course, this values file cannot be copied as it is. It references external resources and external services that do not exist in your environment. Additionally, the hostname iom.mycompany.com does not match your requirements either.

values file
# start 2 IOM application servers (requirement #1)
replicaCount: 2

# run upgrade processes without downtime (requirement #8)
downtime: false

imagePullSecrets:
  - intershop-pull-secret

image:
  repository: "docker.intershop.de/intershop/iom-app"
  tag: "3.0.0.0"

# configure ingress to forward requests to IOM, which are sent
# to host iom.mycompany.com (requirement #6)
ingress:
  enabled: true
  hosts:
    - host: iom.mycompany.com
      paths: ["/"]
  tls:
    - secretName: mycompany-com-tls
      hosts:
        - iom.mycompany.com

# information about external postgresql service (requirement #2)
pg:
  host: postgres-prod.postgres.database.azure.com
  port: 5432
  userConnectionSuffix: "@postgres-prod"

  # root-database and superuser information. The very first installation initializes
  # the database of IOM. After that, this information should be removed from the values
  # file completely (and dbaccount should be disabled/removed too)
  user: postgres
  passwdSecretKeyRef:
    name: mycompany-prod-secrets
    key: pgpasswd
  db: postgres

# IOM has to know its own public URL
oms:
  publicUrl: "https://iom.mycompany.com/"
  db:
    name: oms_db
    user: oms_user
    passwdSecretKeyRef:
      name: mycompany-prod-secrets
      key: dbpasswd
  # configuration of external smtp server (requirement #4)
  smtp:
    host: smtp.external-provider.com
    port: 25
    user: my-company-prod
    passwdSecretKeyRef:
      name: mycompany-prod-secrets
      key: smtppasswd

log:
  metadata:
    tenant: mycompany
    environment: prod

caas:
  envName: prod

resources:
  limits:
    cpu: 1000m
    memory: 3000Mi
  requests:
    cpu: 1000m
    memory: 3000Mi

# store data of shared file system at azurefile service (requirement #5)
persistence:
  storageClass: azurefile
  storageSize: 60G

config:
  image:
    repository: "docker.intershop.de/intershop/iom-config"
    tag: "3.0.0.0"

# Create IOM database and corresponding user before starting IOM. Creates the IOM
# database while running the install process. After that, dbaccount should be completely
# removed from the values file. If not set explicitly, data is not reset during start
# (requirement #3).
dbaccount:
  enabled: true
  image:
    repository: docker.intershop.de/intershop/iom-dbaccount
    tag: "1.1.0.0"

# enable integrated NGINX ingress controller. Without any further configuration, 
# it acts as a proxy (requirement #6).
nginx:
  enabled: true
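
The values file above references a Kubernetes secret named mycompany-prod-secrets, which holds the PostgreSQL superuser, IOM database, and SMTP passwords. Such a secret could be created as follows (a sketch; the secret name and keys match the values file above, the passwords are placeholders):

create secret
kubectl create secret generic mycompany-prod-secrets \
    --from-literal=pgpasswd='<postgres superuser password>' \
    --from-literal=dbpasswd='<IOM database password>' \
    --from-literal=smtppasswd='<SMTP password>' \
    -n mycompany-iom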

4.3.4 Installation of IOM

Create a file values.yaml and fill it with the content listed above. Adapt the file to the requirements of your environment. After that, the installation process can be started.

Install IOM
# create namespace mycompany-iom
kubectl create namespace mycompany-iom
 
# install IOM into namespace mycompany-iom
helm install prod intershop/iom --values=values.yaml --namespace mycompany-iom --timeout 20m0s --wait

This installation process will now take some minutes to finish. In the meantime, the progress of the installation process can be observed within a second terminal window. Using kubectl, you can see the status of every Kubernetes object. For simplicity, the following example shows the status of pods only.

Just open a second terminal window and enter the following commands.

# One second after start, all pods are in very early phases.
kubectl get pods -n mycompany-iom
NAME                                                 READY   STATUS              RESTARTS   AGE
prod-iom-0                                           0/1     Pending             0          1s
prod-ingress-nginx-controller-76db7cfc6d-2h4w9       0/1     ContainerCreating   0          1s
prod-ingress-nginx-controller-76db7cfc6d-tzzsl       0/1     ContainerCreating   0          1s

# A little bit later, the integrated NGINX ingress is "Running" and "READY". IOM is in its initialization phase,
# which means the init-containers are currently executed.
kubectl get pods -n mycompany-iom
NAME                                                 READY   STATUS     RESTARTS   AGE
prod-iom-0                                           0/1     Init:0/2   0          24s
prod-ingress-nginx-controller-76db7cfc6d-2h4w9       1/1     Running    0          24s
prod-ingress-nginx-controller-76db7cfc6d-tzzsl       1/1     Running    0          24s

# After a few minutes IOM is "Running", but not "READY" yet. The init-containers are finished
# now and the IOM- and project-applications are currently deployed into the WildFly application server.
kubectl get pods -n mycompany-iom
NAME                                                 READY   STATUS    RESTARTS   AGE
prod-iom-0                                           0/1     Running   0          4m43s
prod-ingress-nginx-controller-76db7cfc6d-2h4w9       1/1     Running   0          4m43s
prod-ingress-nginx-controller-76db7cfc6d-tzzsl       1/1     Running   0          4m43s

# The first iom-pod is "Running" and "READY", which means the IOM System is usable now.
# The second iom-pod has just started and is currently initialized.
kubectl get pods -n mycompany-iom
NAME                                                 READY   STATUS     RESTARTS   AGE
prod-iom-0                                           1/1     Running    0          9m35s
prod-iom-1                                           0/1     Init:0/2   0          2s
prod-ingress-nginx-controller-76db7cfc6d-2h4w9       1/1     Running    0          9m35s
prod-ingress-nginx-controller-76db7cfc6d-tzzsl       1/1     Running    0          9m35s

# Both iom-pods are "Running" and "READY". Installation of IOM is finished.
kubectl get pods -n mycompany-iom
NAME                                                 READY   STATUS    RESTARTS   AGE
prod-iom-0                                           1/1     Running   0          15m
prod-iom-1                                           1/1     Running   0          5m49s
prod-ingress-nginx-controller-76db7cfc6d-2h4w9       1/1     Running   0          15m
prod-ingress-nginx-controller-76db7cfc6d-tzzsl       1/1     Running   0          15m

If all pods are Running and Ready, the installation process is finished. You should check the first terminal window, where the installation process was running.
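
Besides watching the pods, the status of the release as seen by Helm can be requested at any time.

check release status
helm status prod -n mycompany-iom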

4.3.5 Upgrade of IOM

We now repeat the upgrade process that was already shown in the previous example. This simple example was chosen because, from a Helm perspective, the rollout of any change in values or charts is an upgrade process. The process is identical, no matter whether only a simple value is changed or the Docker images of a new IOM release are rolled out.

The downtime parameter (see: Restrictions on upgrade) also has to be taken into account. A change of Java options is an uncritical change that can be applied without downtime. Since we have more than one IOM application server, the upgrade process can now be executed without downtime.

Add the following lines to the values.yaml:

change values.yaml
jboss:
  javaOpts: "-Xms512M -Xmx2048M"

These changes are now rolled out by running the Helm upgrade process on the existing IOM installation. Start the process within a terminal window.

Upgrade IOM
helm upgrade prod intershop/iom --values=values.yaml --namespace mycompany-iom --timeout 20m0s --wait

The upgrade process will take some minutes to finish.

In the previous section you might have noticed that the behavior of pods during the installation process is identical, no matter which Kubernetes environment was used (Docker Desktop, Minikube). The same applies to the upgrade process. For this reason, the box "Observe progress" is skipped in the current section.
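
To verify that the upgrade has created a new revision of the release, the history of the release can be requested from Helm.

check release history
helm history prod -n mycompany-iom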

4.3.6 Uninstall of IOM

The last process demonstrates how to uninstall IOM. Please keep in mind that the uninstall process only covers the objects defined in IOM Helm-charts. The current production example references many external resources and external services. These resources and services remain untouched by the uninstall process of IOM.

Uninstall IOM
# uninstall IOM release
helm uninstall prod -n mycompany-iom
release "prod" uninstalled
  
# delete Kubernetes namespace used for IOM
kubectl delete namespace mycompany-iom
namespace "mycompany-iom" deleted

5 Parameters

Each example shown in the Examples section used a values file to define the specific setup of IOM (see: demo, ci, prod). The current section describes in detail each parameter you have already used before, along with many more parameters that were not used in the examples.

In the Examples section, you have already learned that IOM Helm-charts also provide optional components: integrated PostgreSQL server, integrated SMTP server, integrated NGINX ingress controller, and support for the execution of tests. These optional components are covered by separate sections.

5.1 IOM

Parameter

Description

Default Value

replicaCount

The number of IOM application server instances to run in parallel.

2

downtime

The downtime parameter is a very critical one. Its goal and behavior are already described in section Restrictions on upgrade.

Additional information:

  • If downtime is set to false, the DBmigrate process, which is part of the process executed by the config init-container, is skipped. This has no impact on the project configuration.
  • For the downtime parameter to work correctly, the --wait command line parameter must always be set when running helm.
true
image.repository

Repository of the IOM app product/project image. 

docker.intershop.de/intershop/iom-app
image.pullPolicy

Pull policy, to be applied when getting the IOM product/project Docker image. For more information, see official Kubernetes documentation.

IfNotPresent
image.tag

The tag of IOM app product/project image.

3.0.0.0
dbaccount

Parameters, bundled by dbaccount, are used to control the dbaccount init-container, which creates the IOM database-user and the IOM database itself. To enable the dbaccount init-container to do this, it needs superuser access to the PostgreSQL server and the according information about the IOM database. This information is not contained in dbaccount parameters. Instead, the general connection and superuser information is retrieved from pg or postgres.pg parameters (depending on postgres.enabled). All information about the IOM database user and database is provided by oms.db parameters.

Once the IOM database is created, the dbaccount init-container is not needed any longer. Hence, all IOM installations, except really non-critical demo and CI setups, should enable the dbaccount init-container only temporarily to initialize the database account.


dbaccount.enabled

Controls if the dbaccount init-container should be executed or not. If enabled, dbaccount will only be executed when installing IOM, not on upgrade operations.

false
dbaccount.image.repository

Repository of the dbaccount image. 

docker.intershop.de/intershop/iom-dbaccount
dbaccount.image.pullPolicy

Pull policy, to be applied when getting the dbaccount Docker image. For more information, see official Kubernetes documentation.

IfNotPresent
dbaccount.image.tag

The tag of dbaccount image.

1.1.0.0
dbaccount.resetData

Controls if the dbaccount init-container should reset an already existing IOM database during the installation process of IOM. If set to true, existing data is deleted without backup or further warning.

false

dbaccount.options

When creating the IOM database, more options in addition to OWNER are required. Depending on the configuration of the PostgreSQL server, these options may differ. The default values can be used as they are for the integrated PostgreSQL server, for the Azure Database for PostgreSQL service, and for most other servers, too.

See Options and Requirements of IOM database for details.

"ENCODING='UTF8' LC_COLLATE='en_US.utf8' LC_CTYPE='en_US.utf8' CONNECTION LIMIT=-1 TEMPLATE=template0"

dbaccount.searchPath

In some circumstances, the search path for database objects has to be extended. This is the case if custom schemas are used for customizations or tests. To add more schemas to the search path, set the current parameter to a string containing all additional schemas, separated by commas, e.g., "tests, customschema". The additional entries are inserted at the beginning of the search path, hence objects with the same name as standard objects of IOM are found first.

dbaccount.tablespace

Use the passed tablespace as default for the IOM database user and the IOM database. The tablespace has to exist; it will not be created.

Section Options and Requirements of IOM database will give you some more information.

  • Ignored if postgres.enabled is true, since the integrated PostgreSQL server can never create a custom tablespace prior to the initialization of the IOM database user and IOM database.

dbaccount.resources

Resource requests & limits.

{}
config

Parameters, bundled by config, are used to control the config init-container, which fills the IOM database, applies database migrations, and rolls out project configurations into the IOM database. To enable the config init-container to do this, it requires access to the IOM database. This information is not contained in config parameters. Instead, the general connection information is retrieved from pg or postgres.pg parameters. All information about the IOM database user and database is provided by oms.db parameters.


config.image.repository

Repository of the IOM config product/project image. 

docker.intershop.de/intershop/iom-config
config.image.pullPolicy

Pull policy, to be applied when getting the IOM config product/project Docker image. For more information, see official Kubernetes documentation.

IfNotPresent
config.image.tag

The tag of IOM config product/project image.

3.0.0.0
config.resources

Resource requests & limits.

{}

pg

This group of parameters bundles the information required to connect to the PostgreSQL server: information about the superuser and the default database (management database, not the IOM database).

Not all clients need all information:

The dbaccount init-container is the only client that needs access to the PostgreSQL server as a superuser. Hence, if you do not enable dbaccount, the parameters pg.user(SecretKeyRef), pg.passwd(SecretKeyRef), and pg.db should not be set at all.

If the integrated PostgreSQL server is enabled (postgres.enabled set to true), all parameters defined by pg are ignored completely. In this case, parameters defined by postgres.pg are used instead.


pg.user

Name of the superuser.

  • Required only if dbaccount.enabled is set to true.
  • Ignored if postgres.enabled is set to true.
  • Ignored if pg.userSecretKeyRef is set.
postgres
pg.userSecretKeyRef

Instead of storing the name of the user as plain text in the values file, a reference to a key within a secret can be used. For more information see section References to entries of Kubernetes secrets.

  • Required only if dbaccount.enabled is set to true and pg.user is not set.
  • Ignored if postgres.enabled is set to true.

pg.passwd

The password of the superuser. 

  • Required only if dbaccount.enabled is set to true.
  • Ignored if postgres.enabled is set to true.
  • Ignored if pg.passwdSecretKeyRef is set.
postgres
pg.passwdSecretKeyRef

Instead of storing the password as plain text in values file, a reference to a key within a secret can be used. For more information see section References to entries of Kubernetes secrets.

  • Required only if dbaccount.enabled is set to true and pg.passwd is not set.
  • Ignored if postgres.enabled is set to true.

pg.db

Name of the default (management) database.

  • Required only if dbaccount.enabled is set to true.
  • Ignored if postgres.enabled is set to true.
postgres

pg.host

The hostname of the PostgreSQL server.

postgres-service
pg.port

Port of the PostgreSQL server.

"5432"
pg.userConnectionSuffix

When using the Azure Database for PostgreSQL service, user names have to be extended by a suffix, beginning with '@'. For more information, refer to the official Azure Database for PostgreSQL documentation.

This suffix is not a part of the user name. It has to be used only when connecting to the database. For this reason, the parameter pg.userConnectionSuffix was separated from pg.user and oms.db.user.

Example: "@mydemoserver"


pg.sslMode

pg.sslMode has to contain one of the following values: disable, allow, prefer, require, verify-ca, verify-full. For a detailed description of settings, please see official PostgreSQL documentation.

prefer
pg.sslCompression

If set to "1", data sent over SSL connections will be compressed. If set to "0", compression will be disabled. For a detailed description, please see official PostgreSQL documentation.

"0"

pg.sslRootCert

Azure Database for PostgreSQL service might require verification of the server certificate, see official Azure Database for PostgreSQL documentation. To handle this case, it is possible to pass the SSL root certificate in pg.sslRootCert. The certificate has to be provided as a string with newlines quoted as '\\n'.

Set to the content of BaltimoreCyberTrustRoot.crt.pem.

See  official Azure Database for PostgreSQL documentation.

oms

Parameters of group oms are all related to the configuration of IOM.
oms.publicUrl

The publicly accessible base URL of IOM, e.g., the DNS name of the load balancer. It is used internally for link generation.

https://localhost
oms.mailResourcesBaseUrl

The base path for e-mail resources that are loaded from the e-mail client, e.g., images or stylesheets. Also see Concept - IOM Customer E-mails.

https://localhost/mailimages/customers
oms.jwtSecret

The shared secret for JSON Web Token (JWT) creation/validation. JWTs will be generated with the HMAC algorithm (HS256).

Intershop strongly recommends changing the default shared secret used for JSON Web Token creation/validation.

To secure the JWT, a key of the same size as the hash output or larger must be used with the JWS HMAC SHA-2 algorithms (i.e., 256 bits for "HS256"), see JSON Web Algorithms (JWA) | 3.2. HMAC with SHA-2 Functions.

  • Ignored if oms.jwtSecretKeyRef is set.
length_must_be_at_least_32_chars
oms.jwtSecretKeyRef

Instead of storing the jwt-secret as plain text in values file, a reference to a key within a secret can be used. For more information see section References to entries of Kubernetes secrets.

  • Only required if oms.jwtSecret is empty.

oms.archiveOrderMessageLogMinAge

Number of days after which the entries in table "OrderMessageLogDO" should be exported and the columns "request" and "response" set to 'archived' in order to reduce the table size. Min. accepted value: 10

Exported data are stored under share/archive

  • Available since IOM Helm charts 1.1.0, requires IOM 3.1.0.0 or newer
  • Value has to match ^[1-9]([0-9]+)?
"90"
oms.deleteOrderMessageLogMinAge

Number of days after which the entries in table "OrderMessageLogDO" will definitely be deleted in order to reduce the table size. Must be greater than oms.archiveOrderMessageLogMinAge.

  • Available since IOM Helm charts 1.1.0, requires IOM 3.1.0.0 or newer
  • Value has to match ^[1-9]([0-9]+)?
"180"
oms.archiveShopCustomerMailMinAge

Number of days after which the entries in table "ShopCustomerMailTransmissionDO" should be exported (Quartz job "ShopCustomerMailTransmissionArchive") and the column "message" set to ''deleted'' in order to reduce the table size. Default is 1826 for 5 years. However, the export will not take place if this property and oms.archiveShopCustomerMailMaxCount are not set. Min. accepted value: 10

Exported data are stored under share/archive

  • Available since IOM Helm charts 1.1.0, requires IOM 3.1.0.0 or newer
  • Value has to match ^[1-9]([0-9]+)$
"1826"
oms.archiveShopCustomerMailMaxCount

Maximum number of entries in table "ShopCustomerMailTransmissionDO" to be exported per run of the Quartz job "ShopCustomerMailTransmissionArchive". Default is 10000; however, the export will not take place if this property and oms.archiveShopCustomerMailMinAge are not set. Min. accepted value: 10

  • Available since IOM Helm charts 1.1.0, requires IOM 3.1.0.0 or newer
  • Value has to match ^[1-9]([0-9]+)$
"10000"
oms.deleteShopCustomerMailMinAge

The number of days after which the entries in table "ShopCustomerMailTransmissionDO" will definitely be deleted in order to reduce the table size (Quartz job "ShopCustomerMailTransmissionArchive"). Default is 2190 for 6 years. However, the deletion will not take place if this property is not set.

  • Available since IOM Helm charts 1.1.0, requires IOM 3.1.0.0 or newer
  • Value has to match ^[1-9]([0-9]+)$
"2190"
oms.secureCookiesEnabled

If set to true, cookies will be sent with the secure flag. In this case, OMT requires fully encrypted HTTP traffic in order to work properly.

  • Available since IOM Helm charts 1.2.0, requires IOM 3.2.0.0 or newer
false

oms.db

Group oms.db bundles all parameters required to access the IOM database. General information required to connect to the PostgreSQL server is stored in group pg.
oms.db.name

The name of the IOM database.

oms_db
oms.db.user

The IOM database user.

  • Ignored if oms.db.userSecretKeyRef is set.
oms_user
oms.db.userSecretKeyRef

Instead of storing the name of the user as plain text in the values file, a reference to a key within a secret can be used. For more information see section References to entries of Kubernetes secrets.

  • Only required if oms.db.user is not set.

oms.db.passwd

The password of the IOM database user.

OmsDB
oms.db.passwdSecretKeyRef

Instead of storing the password as plain text in values file, a reference to a key within a secret can be used. For more information see section References to entries of Kubernetes secrets.

  • Only required if oms.db.passwd is not set.

oms.db.hostlist

A comma-separated list of database servers. Each server entry consists of a hostname and port, separated by a colon. Setting the port is optional. If not set, standard port 5432 will be used.

  • Only required if a high-availability cluster of PostgreSQL servers is used, in order to list all possible connection endpoints of this cluster (see the example after this table).
  • Affects IOM application servers only. All other database clients (config and dbaccount) use connection information from the pg parameter group only. The same is true for the IOM application server if oms.db.hostlist is empty.

oms.smtp

Parameters in oms.smtp bundle the information required to connect to the SMTP server.

If the integrated SMTP server is enabled (mailhog.enabled set to true), all parameters defined by oms.smtp are ignored completely. In this case, IOM will automatically be configured to use the integrated SMTP server.


oms.smtp.host

The hostname of the mail server IOM uses to send e-mails.

  • Ignored if mailhog.enabled is set to true.
mail-service
oms.smtp.port

The port of the mail server IOM uses to send e-mails.

  • Ignored if mailhog.enabled is set to true.
"1025"
oms.smtp.user

The user name for mail server authentication.

  • Only required if the SMTP server requires authentication.
  • Ignored if mailhog.enabled is set to true.

oms.smtp.userSecretKeyRef

Instead of storing the user name as plain text in the values file, a reference to a key within a secret can be used. For more information see section References to entries of Kubernetes secrets.

  • Only required if oms.smtp.user is not set and the SMTP server requires authentication.
  • Ignored if mailhog.enabled is set to true.

oms.smtp.passwd

The password for mail server authentication.

  • Only required if the SMTP server requires authentication.
  • Ignored if mailhog.enabled is set to true.

oms.smtp.passwdSecretKeyRef

Instead of storing the password as plain text in values file, a reference to a key within a secret can be used. For more information see section References to entries of Kubernetes secrets.

  • Only required if oms.smtp.passwd is not set and the SMTP server requires authentication.
  • Ignored if mailhog.enabled is set to true.

livenessProbe

Group of parameters to fine-tune the liveness probe of Kubernetes. The basic kind of probe is fixed and cannot be changed.
livenessProbe.periodSeconds

How often (in seconds) to perform the probe. Minimum value is 1.

  • Available since IOM Helm charts 1.1.1
10
livenessProbe.initialDelaySeconds

Number of seconds after the container has started before liveness probes are initiated. Minimum value is 0.

  • Available since IOM Helm charts 1.1.1
60
livenessProbe.timeoutSeconds

Number of seconds after which the probe times out. Defaults to 1 second. Minimum value is 1.

  • Available since IOM Helm charts 1.1.1
5
livenessProbe.failureThreshold

When a probe fails, Kubernetes will try failureThreshold times before giving up. Giving up in case of liveness probe means restarting the container. Minimum value is 1.

  • Available since IOM Helm charts 1.1.1
3
readinessProbe

Group of parameters to fine-tune the readiness probe of Kubernetes. The basic kind of probe is fixed and cannot be changed.
readinessProbe.periodSeconds

How often (in seconds) to perform the probe. Minimum value is 1.

  • Available since IOM Helm charts 1.1.1
10
readinessProbe.initialDelaySeconds

Number of seconds after the container has started before readiness probes are initiated. Minimum value is 0.

  • Available since IOM Helm charts 1.1.1
60
readinessProbe.timeoutSeconds

Number of seconds after which the probe times out. Defaults to 1 second. Minimum value is 1.

8
readinessProbe.failureThreshold

When a probe fails, Kubernetes will try failureThreshold times before giving up. Giving up in case of the readiness probe means that the pod will be marked as Unready. Minimum value is 1.

  • Available since IOM Helm charts 1.1.1
1
readinessProbe.successThreshold

Minimum consecutive successes for the probe to be considered successful after having failed. Minimum value is 1.

  • Available since IOM Helm charts 1.1.1
1
jboss

Parameters of group jboss are all related to the configuration of WildFly/JBoss.
jboss.javaOpts

The value of jboss.javaOpts is passed to the Java options of the WildFly application server.

"-Xms512M -Xmx2048M"
jboss.opts

Additional command-line arguments to be used, when starting the WildFly application server.

Example: "--debug *:8787"


jboss.xaPoolsizeMin

The minimum value of the pool-size of XA-datasources.

"50"
jboss.xaPoolsizeMax

The maximum value of the pool-size of XA-datasources.

"125"
log

Parameters of group log are all related to the configuration of the logging of IOM.
log.access.enabled

Controls creation of access log messages.

Allowed values are: true, false

  • Available since IOM Helm charts 1.2.0
  • Requires IOM 3.2.0.0 or newer
true
log.level.scripts

Controls log-level of all shell-scripts running in one of the IOM related containers (as defined in image, dbaccount.image and config.image)

Allowed values are: ERROR, WARN, INFO, DEBUG

INFO
log.level.iom

Controls log-level of IOM log-handler, which covers all Java-packages beginning with bakery, com.intershop.oms, com.theberlinbakery, org.jboss.ejb3.invocation.

Allowed values are: FATAL, ERROR, WARN, INFO, DEBUG, TRACE, ALL

WARN
log.level.hibernate

Controls log-level of HIBERNATE log-handler, which covers all Java-packages beginning with org.hibernate.

Allowed values are: FATAL, ERROR, WARN, INFO, DEBUG, TRACE, ALL

WARN
log.level.quartz

Controls log-level of QUARTZ log-handler, which covers all Java-packages beginning with org.quartz.

Allowed values are: FATAL, ERROR, WARN, INFO, DEBUG, TRACE, ALL

WARN
log.level.activeMQ

Controls log-level of ACTIVEMQ log-handler, which covers all Java-packages beginning with org.apache.activemq.

Allowed values are: FATAL, ERROR, WARN, INFO, DEBUG, TRACE, ALL

WARN
log.level.console

The CONSOLE handler has no explicit assignments of Java packages. It is assigned to the root-logger, which does not need assignments; this way, the handler also covers all Java packages that are not assigned to any other handler.

Allowed values are: FATAL, ERROR, WARN, INFO, DEBUG, TRACE, ALL

WARN
log.level.customization

Another handler without package assignments is CUSTOMIZATION. In contrast to CONSOLE, this handler will not log any messages as long as no Java packages are assigned to it. The assignment of Java packages has to be done in the project configuration and is described in section Logging in Guide - IOM Standard Project Structure.

Allowed values are: FATAL, ERROR, WARN, INFO, DEBUG, TRACE, ALL

WARN
log.metadata

log.metadata bundles parameters required to configure additional information to appear in log-messages.

Note

Deprecated since IOM Helm Charts 1.3.0. Datadog will inject the according information in the future, without the need to loop it through IOM.


log.metadata.tenant

Name of the tenant is added to every log-message.

Example: Intershop

Note

Deprecated since IOM Helm Charts 1.3.0. Datadog will inject the according information in the future, without the need to loop it through IOM.

company-name
log.metadata.environment

Name of the environment is added to every log-message.

Example: production

Note

Deprecated since IOM Helm Charts 1.3.0. Datadog will inject the according information in the future, without the need to loop it through IOM.

system-name
datadogApm

datadogApm bundles parameters required to configure datadog Application Performance Monitoring (APM).

  • Available since IOM Helm Charts 1.3.0
  • Requires IOM 3.4.0.0 or newer

datadogApm.enabled

This parameter is mapped to environment variable DD_APM_ENABLED. For more information, please consult official datadog documentation.

If set to true, IOM will be started with the -javaagent parameter, loading the datadog javaagent library. This is not the case when set to false.

  • Available since IOM Helm Charts 1.3.0
  • Requires IOM 3.4.0.0 or newer
false
datadogApm.backendOnly

If set to true and datadog APM is enabled, tracing will be executed only on the one IOM application server that is running the backend applications (singleton applications). If set to false and datadog APM is enabled, tracing will be executed on all IOM application servers.

  • Available since IOM Helm Charts 1.3.0
  • Requires IOM 3.4.0.0 or newer
true
datadogApm.traceAgentHost

This parameter is mapped to environment variable DD_AGENT_HOST. For more information, please consult official datadog documentation.

Normally this environment variable is injected with the right value by the locally installed datadog daemon-set.

  • Available since IOM Helm Charts 1.3.0
  • Requires IOM 3.4.0.0 or newer

datadogApm.traceAgentPort

This parameter is mapped to environment variable DD_TRACE_AGENT_PORT. For more information, please consult official datadog documentation.

Normally this environment variable is injected with the right value by the locally installed datadog daemon-set.

  • Available since IOM Helm Charts 1.3.0
  • Requires IOM 3.4.0.0 or newer

datadogApm.traceAgentTimeout

This parameter is mapped to environment variable DD_TRACE_AGENT_TIMEOUT. For more information, please consult official datadog documentation. 

  • Available since IOM Helm Charts 1.3.0
  • Requires IOM 3.4.0.0 or newer

datadogApm.logsInjection

This parameter is mapped to environment variable DD_LOGS_INJECTION. For more information, please consult official datadog documentation.

  • Available since IOM Helm Charts 1.3.0
  • Requires IOM 3.4.0.0 or newer
false
datadogApm.debug

This parameter is mapped to environment variable DD_TRACE_DEBUG. For more information, please consult official datadog documentation.

  • Available since IOM Helm Charts 1.3.0
  • Requires IOM 3.4.0.0 or newer
false
datadogApm.startupLogs

This parameter is mapped to environment variable DD_TRACE_STARTUP_LOGS. For more information, please consult official datadog documentation. 

  • Available since IOM Helm Charts 1.3.0
  • Requires IOM 3.4.0.0 or newer
true
datadogApm.tags

This parameter is mapped to environment variable DD_TAGS. For more information, please consult official datadog documentation. 

  • Available since IOM Helm Charts 1.3.0
  • Requires IOM 3.4.0.0 or newer

datadogApm.serviceMapping

This parameter is mapped to environment variable DD_SERVICE_MAPPING. For more information, please consult official datadog documentation.

  • Available since IOM Helm Charts 1.3.0
  • Requires IOM 3.4.0.0 or newer

datadogApm.writerType

This parameter is mapped to environment variable DD_WRITER_TYPE. For more information, please consult official datadog documentation.

  • Available since IOM Helm Charts 1.3.0
  • Requires IOM 3.4.0.0 or newer

datadogApm.partialFlushMinSpan

This parameter is mapped to environment variable DD_TRACE_PARTIAL_FLUSH_MIN_SPANS. For more information, please consult official datadog documentation.

  • Available since IOM Helm Charts 1.3.0
  • Requires IOM 3.4.0.0 or newer

datadogApm.dbClientSplitByInstance

This parameter is mapped to environment variable DD_TRACE_DB_CLIENT_SPLIT_BY_INSTANCE. For more information, please consult official datadog documentation. 

  • Available since IOM Helm Charts 1.3.0
  • Requires IOM 3.4.0.0 or newer

datadogApm.healthMetricsEnabled

This parameter is mapped to environment variable DD_TRACE_HEALTH_METRICS_ENABLED. For more information, please consult official datadog documentation.

  • Available since IOM Helm Charts 1.3.0
  • Requires IOM 3.4.0.0 or newer
false
datadogApm.servletAsyncTimeoutError

This parameter is mapped to environment variable DD_TRACE_SERVLET_ASYNC_TIMEOUT_ERROR. For more information, please consult official datadog documentation. 

  • Available since IOM Helm Charts 1.3.0
  • Requires IOM 3.4.0.0 or newer
true
datadogApm.sampleRate

This parameter is mapped to environment variable DD_TRACE_SAMPLE_RATE. For more information, please consult official datadog documentation. 

  • Available since IOM Helm Charts 1.3.0
  • Requires IOM 3.4.0.0 or newer
'1.0'
datadogApm.jmsFetchEnabled

This parameter is mapped to environment variable DD_JMXFETCH_ENABLED. For more information, please consult official datadog documentation.

  • Available since IOM Helm Charts 1.3.0
  • Requires IOM 3.4.0.0 or newer
true
caas

Within the caas group of parameters, the configuration of CaaS projects can be controlled.

caas.envName

CaaS projects support different settings for different environments. caas.envName defines which one has to be used. See section SQL Configuration in Guide - IOM Standard Project Structure for more information.

env-name
caas.importTestData

Controls the import of test data, which is part of the project. See section Test Data in Guide - IOM Standard Project Structure for more information. If enabled, test data is only imported during the installation process, not when executing an upgrade process.

false
caas.importTestDataTimeout

Timeout in seconds for the import of test data. If the import has not finished within this amount of seconds, the container will end with an error. This parameter replaces the deprecated file import.properties, which resides in the directory containing the test data. The content of import.properties takes precedence over the Helm parameter caas.importTestDataTimeout. You have to delete import.properties to make sure that the value set at the Helm parameter takes effect.

  • Available since IOM Helm charts 1.2.0
  • Requires IOM 3.2.0.0 or newer
"300"
persistence

Parameters of group persistence control how IOM's shared data is persisted.
persistence.storageClass

Name of existing storage class to be used for IOM's shared data.

  • Ignored if persistence.hostPath is set.
azurefile
persistence.storageSize

Requested storage size. For more information, see official Kubernetes documentation.

1Gi
persistence.hostPath

For very simple installations, persistent data can be directly stored at a local disk. In this case, the path on the local host has to be stored at this parameter.

ingress

Group ingress bundles the configuration of IOM's ingress, which is required to get access to IOM from outside of Kubernetes.
ingress.enabled

Enables ingress for IOM. If not enabled, IOM cannot be accessed from outside of Kubernetes.

true
ingress.annotations

Annotations for the ingress.

There is one important annotation: kubernetes.io/ingress.class. This annotation controls on which ingress controller the ingress should be created. When using the standard ingress controller which is running in a Kubernetes cluster to serve all incoming requests for all services, setting kubernetes.io/ingress.class in ingress.annotations is not required.

If the integrated NGINX controller should be used to serve incoming requests, the annotation kubernetes.io/ingress.class: nginx-iom has to be added. An example can be seen in values.yaml of the second example in this document.

{}
ingress.hosts

A list of ingress hosts.

The default value grants access to IOM.

{ host: iom.example.local,
   paths: ["/"] }

ingress.tls

A list of IngressTLS items.

[]
resources

Resource requests & limits.

{}
imagePullSecrets

Name of the secret to get credentials from.

[]
nameOverride

Overwrites chart name.

fullnameOverride

Overwrites complete name, constructed from release and chart name.

serviceAccount.create

If true, create a backend service account. Only useful if you need a pod security policy to run the backend.

true
serviceAccount.annotations

Annotations for the service account. Only used if create is true.

{}
serviceAccount.name

The name of the backend service account to use. If not set and create is true, a name is generated using the fullname template. Only useful if you need a pod security policy to run the backend.

podAnnotations

Annotations to be added to pods.

{}
podSecurityContext

Security context policies to add to the IOM pod.

{}
securityContext

List of required privileges.

{}
service.type

Type of service to create.

ClusterIP
service.port

Port to be exposed by service.

80
nodeSelector

Node labels for pod assignment.

{}
tolerations

Node taints to tolerate (requires Kubernetes >=1.6).

[]
affinity

Node/pod affinities (requires Kubernetes >=1.6).

{}
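
To illustrate how some of the parameter groups described above fit together, the following values file snippet combines a few of them. All values, especially the PostgreSQL host names, are examples only, not recommendations.

values file snippet (example)
# connect the IOM application servers to a high-availability PostgreSQL cluster
oms:
  db:
    hostlist: "pg-node1.example.com:5432,pg-node2.example.com:5433"

# memory settings of the WildFly application server
jboss:
  javaOpts: "-Xms512M -Xmx2048M"

# raise the log-level of the IOM and CUSTOMIZATION log-handlers
log:
  level:
    iom: INFO
    customization: DEBUG

# give slowly starting containers more time before probes begin
livenessProbe:
  initialDelaySeconds: 120
readinessProbe:
  initialDelaySeconds: 90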

5.2 Integrated SMTP Server

A complete list of parameters can be found here: https://github.com/codecentric/helm-charts/tree/master/charts/mailhog

The table below only lists parameters that have to be changed for different operation options of IOM.

Parameter

Description

Default Value

mailhog.enabled

Controls whether an integrated SMTP server should be used or not. This SMTP server is not intended to be used for any kind of serious IOM installation. It should only be used for demo, CI, or similar types of setups.

false
mailhog.resources

Resource requests & limits.

{}
mailhog.ingress.hosts

A list of ingress hosts.

{ host: mailhog.example.com,
paths: ["/"] }

mailhog.ingress.tls

A list of IngressTLS items.

[]
mailhog.ingress.annotations

Annotations for the ingress.

There is one important annotation: kubernetes.io/ingress.class. This annotation controls on which ingress controller the ingress should be created. When using the standard ingress controller, which runs in a Kubernetes cluster to serve all incoming requests for all services, setting kubernetes.io/ingress.class in mailhog.ingress.annotations is not required.

If the integrated NGINX controller should be used to serve incoming requests, the annotation kubernetes.io/ingress.class: nginx-iom has to be added. An example can be seen in values.yaml of the second example in this document.

{}
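
For demo setups, enabling the integrated SMTP server only requires a few lines in the values file. A minimal sketch, using an example hostname:

values file snippet (example)
mailhog:
  enabled: true
  ingress:
    hosts:
      - host: mailhog.mycompany.com
        paths: ["/"]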

5.3 Integrated NGINX Ingress Controller

A complete list of parameters can be found here: https://github.com/kubernetes/ingress-nginx/tree/master/charts/ingress-nginx

The table below only lists parameters that have to be changed for different operation options of IOM, and also those that must not be changed at all.

Parameter

Description

Default Value

nginx.enabled

Controls whether an integrated NGINX ingress controller should be installed or not. This ingress controller can serve two purposes: it can act as a proxy between the cluster-wide ingress controller and IOM, or it can be used as an ingress controller in place of the cluster-wide one (see nginx.proxy.enabled).

false
nginx.proxy.enabled

Controls if the integrated NGINX ingress controller should act as a proxy between the cluster-wide ingress controller and IOM, or as an ingress controller used instead of the cluster-wide one.

true
nginx.proxy.annotations

Annotations for the ingress.

  • Ignored, if nginx.proxy.enabled is set to false.

{}

ingress-nginx.controller.replicaCount

Desired number of controller pods.

2
ingress-nginx.controller.service.type

Type of controller service to create.

When using the integrated NGINX controller as a proxy, ClusterIP is the right choice, since the proxy must not be publicly accessible. If it should be used instead of the cluster-wide ingress controller, it has to be publicly accessible. In this case ingress-nginx.controller.service.type has to be set to LoadBalancer. See examples Local Demo System running in Docker-Desktop on Mac OS X and CI System running in Minikube on virtualized Linux.

ClusterIP
ingress-nginx.controller.extraArgs

Additional controller container arguments.

Example to increase verbosity: { v: 3 }


ingress-nginx.controller.config

Adds custom configuration options to NGINX, see https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/configmap/.

{ use-forwarded-headers: "true",
proxy-add-original-uri-header: "true" }
ingress-nginx.rbac.create

If true, create & use RBAC resources.

  • Must not be changed.
true
ingress-nginx.rbac.scope

If true, do not create & use clusterrole and -binding. Set to true in combination with controller.scope.enabled=true to disable load-balancer status updates and scope the ingress entirely.

  • Must not be changed.
true
ingress-nginx.controller.ingressClass

Name of the ingress class to route through this controller.

  • Must not be changed.
nginx-iom
ingress-nginx.controller.scope.enabled

Limit the scope of the ingress controller. If set to true, only the release namespace is watched for ingress.

  • Must not be changed.
true
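
The following sketch shows how the integrated NGINX ingress controller is typically enabled in proxy mode, together with the ingress annotation described above.

values file snippet (example)
# enable the integrated NGINX ingress controller, acting as a proxy
nginx:
  enabled: true

# route IOM's ingress through the integrated controller
ingress:
  annotations:
    kubernetes.io/ingress.class: nginx-iom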

5.4 Integrated PostgreSQL Server

Parameter

Description

Default Value

postgres.enabled

Controls whether an integrated PostgreSQL server should be used or not. This PostgreSQL server is not intended to be used for any kind of serious IOM installation. It should only be used for demo, CI, or similar types of setups.

false
postgres.args

An array containing command-line arguments, which are passed to the PostgreSQL server at start. For more information, see official PostgreSQL 11 documentation.

["-N", "200", "-c", "max_prepared_transactions=100"]
postgres.image.repository

Repository of the PostgreSQL image. For more information, see official Docker hub.

postgres
postgres.image.tag

Tag of the PostgreSQL image. For more information, see official Docker hub.

"11"
postgres.image.pullPolicy

Pull policy, to be applied when getting PostgreSQL Docker images. For more information, see official Kubernetes documentation.

IfNotPresent

postgres.pg

This group of parameters bundles the information about the superuser and default database (management database, not the IOM database).

This information is used to configure the Postgres server on start, but is also used by clients, which require superuser access to the Postgres server. The only client that needs this kind of access is the dbaccount init-image that creates/updates the IOM database.


postgres.pg.user

Name of the superuser. The superuser will be created when starting the PostgreSQL server.

  • Ignored, if postgres.pg.userSecretKeyRef is defined.
postgres
postgres.pg.userSecretKeyRef

Instead of storing the name of the user as plain text in the values file, a reference to a key within a secret can be used. For more information see section References to entries of Kubernetes secrets.


postgres.pg.passwd

The password of the superuser. The password will be set when starting the PostgreSQL server.

  • Ignored, if postgres.pg.passwdSecretKeyRef is defined.
postgres
postgres.pg.passwdSecretKeyRef

Instead of storing the password as plain text in the values file, a reference to a key within a secret can be used. For more information see section References to entries of Kubernetes secrets.
postgres.pg.db

Name of default (management) database, which will be created when starting the Postgres server.

postgres
postgres.persistence

Parameters of group postgres.persistence control if and how the database data is persisted.
postgres.persistence.enabled

If set to false, data of the PostgreSQL server is not persisted at all. It is only written to memory and gets lost if the PostgreSQL pod ends.

false
postgres.persistence.accessMode

The default value allows binding the persistent volume in read/write mode to a single pod only, which is exactly what should be done for the PostgreSQL server. For more information, see official Kubernetes documentation.

  • Ignored if postgres.persistence.hostPath is set.
ReadWriteOnce
postgres.persistence.storageClass

Name of existing storage class to be used by the PostgreSQL server.

  • Ignored if postgres.persistence.hostPath is set.

postgres.persistence.annotations

Annotations to be added to the according PersistentVolumeClaim. For more information, see official Kubernetes documentation.

  • Ignored if postgres.persistence.hostPath is set.
{}
postgres.persistence.storageSize

Requested storage size. For more information, see official Kubernetes documentation.

20Gi
postgres.persistence.hostPath

For very simple installations, persistent data can be directly stored at a local disk. In this case, the path on the local host has to be stored at this parameter.

postgres.resources

Resource requests & limits.

{}
postgres.imagePullSecrets

The name of the secret to get credentials from.

[]
postgres.nameOverride

Overwrites chart name.

postgres.fullnameOverride

Overwrites complete name, constructed from release and chart name.

postgres.nodeSelector

Node labels for pod assignment.

{}
postgres.tolerations

Node taints to tolerate (requires Kubernetes >=1.6).

[]
postgres.affinity

Node/pod affinities (requires Kubernetes >=1.6).

{}
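
A minimal sketch for enabling the integrated PostgreSQL server in a demo setup, with persistence switched on:

values file snippet (example)
postgres:
  enabled: true
  persistence:
    enabled: true
    storageSize: 20Gi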

5.5 IOM Tests

The iom-tests sub-chart provides a very generic way to run tests on an IOM installation. The sub-chart and the according parameters are simply the pure skeleton, resulting from a helm create call. The section Execution of tests, which is part of the example CI System running in Minikube on virtualized Linux demonstrates how this could be used.

Parameter

Description

Default Value

iom-tests.enabled

Enables rollout of the iom-tests sub-chart.

false
iom-tests.env

List of environment variables required by the tests pod.

iom-tests.replicaCount

Desired number of iom-tests pods.

1
iom-tests.image.repository

Docker image repository.

iom-tests
iom-tests.image.pullPolicy

Docker image pull policy.

IfNotPresent
iom-tests.image.tag

Docker image tag.

iom-tests.imagePullSecrets

Name of the secret to get credentials from.

[]

iom-tests.nameOverride

Overwrites chart name.

iom-tests.fullnameOverride

Overwrites complete name, constructed from release and chart name.

iom-tests.serviceAccount.create

If true, create a backend service account. Only useful if you need a pod security policy to run the backend.

true
iom-tests.serviceAccount.annotations

Annotations for the service account. Only used if create is true.

{}
iom-tests.serviceAccount.name

The name of the backend service account to use. If not set and create is true, a name is generated using the fullname template. Only useful if you need a pod security policy to run the backend.

iom-tests.podAnnotations

Annotations to be added to pods.

{}
iom-tests.podSecurityContext

Security context policies to add to the iom-tests pod.

{}
iom-tests.securityContext

List of required privileges.

{}
iom-tests.service.type

Type of service to create.

ClusterIP
iom-tests.service.port

Port to be exposed by service.

80
iom-tests.ingress.enabled

Enables ingress for iom-tests. It is suggested to get access to test results this way.

true
iom-tests.ingress.annotations

Annotations for the ingress.

There is one important annotation: kubernetes.io/ingress.class. This annotation controls on which ingress controller the ingress should be created. When using the standard ingress controller, which runs in a Kubernetes cluster to serve all incoming requests for all services, setting kubernetes.io/ingress.class in iom-tests.ingress.annotations is not required.

If the integrated NGINX controller should be used to serve incoming requests, the annotation kubernetes.io/ingress.class: nginx-iom has to be added. An example can be seen in section Execution of tests, which is part of the example CI System running in Minikube on virtualized Linux.

{}
iom-tests.ingress.hosts

A list of ingress hosts.

{ host: chart-example.local,
paths: [] }

iom-tests.ingress.tls

A list of IngressTLS items.

[]
iom-tests.containerPort

Port used by the container to provide its service.

8080
iom-tests.resources

Resource requests & limits.

{}
iom-tests.autoscaling.enabled

If true, creates a Horizontal Pod Autoscaler.

false
iom-tests.autoscaling.minReplicas

If autoscaling is enabled, this field sets the minimum replica count.

1
iom-tests.autoscaling.maxReplicas

If autoscaling is enabled, this field sets the maximum replica count.

100
iom-tests.autoscaling.targetCPUUtilizationPercentage

Target CPU utilization percentage to scale.

80
iom-tests.nodeSelector

Node labels for pod assignment.

{}
iom-tests.tolerations

Node taints to tolerate (requires Kubernetes >=1.6).

[]
iom-tests.affinity

Node/pod affinities (requires Kubernetes >=1.6).

{}
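
A minimal sketch for enabling the iom-tests sub-chart. The image tag is a placeholder that has to be replaced by the tag of your own tests image.

values file snippet (example)
iom-tests:
  enabled: true
  image:
    repository: iom-tests
    tag: "<your-tests-image-tag>"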

6 Selected Configuration Aspects in Depth

6.1 References to Kubernetes Secrets

All parameters ending with SecretKeyRef serve as an alternative way to provide secret information. Instead of storing entries as plain text in the values file, these parameters allow referencing entries within Kubernetes secrets. For more information about secrets, see the public Kubernetes documentation.

SecretKeyRef parameters require a hash structure, consisting of two entries with the following hash-keys:

  • name: is the name of the Kubernetes secret, containing the referenced key
  • key: is the name of the entry within the secret

The following two boxes show an example, which consists of two parts:

  • The definition of the Kubernetes secret, which contains entries for different secret values, and
  • The values file, which references these values.
Example: Kubernetes secret
apiVersion: v1
kind: Secret
metadata:
  name: pgsecrets
type: Opaque
data:
  pguser:   cG9zdGdyZXM=
  pgpasswd: ZGJ1c2VycGFzc3dk
Example: values file
...
# general postgres settings, required to connect to postgres server
# and root db.
pg:
  userSecretKeyRef:
    name: pgsecrets
    key:  pguser
  passwdSecretKeyRef:
    name: pgsecrets
    key:  pgpasswd
  db:                 postgres
  sslMode:            prefer
  sslCompression:     "1"
  sslRootCert:
...
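
The secret shown above can also be created without writing any YAML. The following kubectl command produces an equivalent secret; the base64 values above decode to postgres and dbuserpasswd.

Example: create secret with kubectl
kubectl create secret generic pgsecrets \
  --from-literal=pguser=postgres \
  --from-literal=pgpasswd=dbuserpasswd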

6.2 PostgreSQL Server Configuration

The ideal configuration mainly depends on the server resources and on the activity. Therefore we can only provide a general guideline. The configuration ranges indicated below may not be applicable in all cases, especially on small systems. These values are intended for a mid-size system with about 32 GB RAM and 24 cores.

If PostgreSQL is used as a service (e.g. Azure Database for PostgreSQL servers), not all PostgreSQL server parameters can be set. When using a service, the method of how to change PostgreSQL server parameters might be different too.

To achieve the best performance, almost all the required data (tables and indexes) for the ongoing workload should be able to reside within the file system cache. Monitoring the I/O activity will help to identify insufficient memory resources.

The IOM is built with Hibernate as an API between the application logic and the database. This mainly results in a strong OLTP activity, with a large number of tiny SQL statements. Larger statements occur during import/export jobs and for some OMT search requests.

The following main parameters in $PGDATA/postgresql.conf should be adapted, see PostgreSQL 12 | Chapter 19. Server Configuration.

You can consider PGConfig 2.0 as a guideline (using the OLTP Model).

Some aspects of data reliability are discussed here PostgreSQL 12 | Chapter 29. Reliability and the Write-Ahead Log. Understanding VACUUM is also essential when configuring/monitoring Postgres, see PostgreSQL 12 | Chapter 24. Routine Database Maintenance Tasks.

Parameter

Description
max_connections

The number of concurrent connections from the application is controlled by the parameters jboss.xaPoolsizeMin and jboss.xaPoolsizeMax.
Some connections will take place beside this pool, mainly for job tasks like import/export. Make sure that max_connections is set higher than the according IOM parameters.

Info

Highly concurrent connections have a negative impact on performance. It is more efficient to queue the requests than to process them all in parallel.

max_prepared_transactions

Required for IOM installations. Set its value to about 150% of max_connections.
shared_buffers

Between 1/4 and 1/3 of the total RAM, but not more than about 8 GB. Otherwise, the cache management will use too many resources. The remaining RAM is more valuable as a file system cache.
work_mem

Higher work_mem can increase performance significantly. The default is way too low. Consider using 100-400 MB.
maintenance_work_mem

Increase the default similar to work_mem to favor quicker vacuums. With IOM, this parameter will be used almost exclusively for this task (unless you also set autovacuum_work_mem). Consider something like 2% of your total RAM per autovacuum_max_workers, e.g., 32 GB RAM * 2% * 3 workers = 2 GB.
vacuum_cost_*

The feature can stay disabled at the beginning. You should keep an eye on the vacuum activity under high load.
wal_level

Depends on your backup, recovery, and failover strategy. Should be at least archive.
wal_sync_method

Depends on your platform, check PostgreSQL 12 | 19.5. Write Ahead Log | wal_sync_method (enum).

max_wal_size

8 (small system) - 128 (large system)
max_parallel_workers
(since Postgres 9.6)
0
checkpoint_completion_target

Use 0.8 or 0.9.
archive_* and REPLICATION

Depends on your backup & failover strategy.
random_page_cost

The default (4) is usually too high. Better choose 2.5 or 3.
effective_cache_size

Indicates the expected size of the file system cache. On a dedicated server, it should be about total_RAM - shared_buffers - 1 GB.
log_min_duration_statement

Set it between 1 and 5 seconds to help track long-running queries.

log_filename

Better use an explicit name to help when communicating, e.g., pg-IOM_host_port-%Y%m%d_%H%M.log.

Not applicable if the PostgreSQL server is running in Kubernetes, since all messages are written to stdout in this case.

log_rotation_age

Set it to 60 min or less.

Not applicable if the PostgreSQL server is running in Kubernetes, since all messages are written to stdout in this case.

log_line_prefix

Better use a more verbose format than the default, e.g., %m|%a|%c|%p|%u|%h|.
log_lock_waits

Activate it (=on).

stats_temp_directory

Better redirect it to a RAM disk.
log_autovacuum_min_duration

Set it to a few seconds to monitor the vacuum activity.
idle_in_transaction_session_timeout
(since Postgres 9.6)

An equivalent parameter exists for the WildFly connection pool (query-timeout), where it is set to 1 hour per default. Set idle_in_transaction_session_timeout to a larger value, e.g., 9 hours, to clean up possible leftover sessions.
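
Putting the guidelines above together, a postgresql.conf excerpt for the mentioned mid-size system (about 32 GB RAM, 24 cores) could look as follows. All numbers are illustrative starting points derived from the table, not binding recommendations.

postgresql.conf (example)
max_connections = 200                       # higher than the WildFly XA pool sizes
max_prepared_transactions = 300             # about 150% of max_connections
shared_buffers = 8GB                        # upper bound mentioned above
work_mem = 200MB
maintenance_work_mem = 2GB                  # ~2% of total RAM per autovacuum worker
wal_level = replica                         # "at least archive"; called replica since PostgreSQL 9.6
checkpoint_completion_target = 0.9
random_page_cost = 2.5
effective_cache_size = 23GB                 # total RAM - shared_buffers - 1 GB
log_min_duration_statement = 3s
log_lock_waits = on
log_autovacuum_min_duration = 5s
idle_in_transaction_session_timeout = '9h'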

6.3 Options and Requirements of IOM Database

6.3.1 Tablespace

The database initialization made by the dbaccount image creates a user and database, which use the system-wide default tablespace pg_default. If you want to use a custom tablespace, you have to create it prior to the database initialization, see PostgreSQL: Documentation: 12: CREATE TABLESPACE.

To make the database initialization process aware of this newly created tablespace, the parameter dbaccount.tablespace has to be set to its name. If this is done, this tablespace will be set as default tablespace for the IOM database user and for the IOM database during the initialization process.
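
The following sketch shows how such a tablespace could be created before the database initialization. The host name, tablespace name, and directory are examples; the directory must already exist on the database server and be owned by the PostgreSQL OS user.

create custom tablespace (example)
# connect as superuser and create the tablespace, before dbaccount runs
psql -h my-postgres-host -U postgres -d postgres \
  -c "CREATE TABLESPACE iom_space LOCATION '/var/lib/postgresql/tablespaces/iom'"

Afterwards, dbaccount.tablespace has to be set to iom_space in the values file.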

6.3.2 Timezone

All database clients and the IOM database have to use the same timezone. For this reason, all IOM Docker images are configured on OS-level to use timezone Etc/UTC. The process, executed by dbaccount init-image, sets this timezone for the IOM database user as well.

6.3.3 Locale/Encoding

The locale of database clients and the locale of the IOM database have to be identical. For this reason, all IOM Docker images are setting environment variable LANG to en_US.utf8.

The according setting on the database is made by the dbaccount init-image. Using parameter dbaccount.options, it is possible to configure this process.

When creating the IOM database by the dbaccount init-image, using the wrong Encoding, Collate, or Ctype is the most common reason for a failed initialization of the IOM database. The according values have to be exactly identical to the values used by the template databases. Hence, if there are any problems with Encoding, Collate, or Ctype when creating the IOM database, the existing databases should be listed to get the right values. To do so, just use the psql database client with parameter -l to list them.

The following box shows how to do this after an initialization error, if IOM is running on Docker-Desktop.

# get name of PostgreSQL pod
kubectl get pods -n iom
NAME                                             READY   STATUS       RESTARTS   AGE
demo-ingress-nginx-controller-6c6f5b88cc-6wsfh   1/1     Running      0          67s
demo-iom-0                                       0/1     Init:Error   3          67s
demo-mailhog-5d7677c7c5-zl8gl                    1/1     Running      0          67s
demo-postgres-96676f4b-mt8nl                     1/1     Running      0          67s

# execute psql -U postgres -l within PostgreSQL pod
kubectl exec demo-postgres-96676f4b-mt8nl -n iom -t -- psql -U postgres -l
                                 List of databases
   Name    |  Owner   | Encoding |  Collate   |   Ctype    |   Access privileges
-----------+----------+----------+------------+------------+-----------------------
 postgres  | postgres | UTF8     | en_US.utf8 | en_US.utf8 |
 template0 | postgres | UTF8     | en_US.utf8 | en_US.utf8 | =c/postgres          +
           |          |          |            |            | postgres=CTc/postgres
 template1 | postgres | UTF8     | en_US.utf8 | en_US.utf8 | =c/postgres          +
           |          |          |            |            | postgres=CTc/postgres
(3 rows)

6.3.4 Search-Path

In some circumstances, the search path for database objects has to be extended. The search path is set by the dbaccount init-image. This process can be configured by parameter dbaccount.searchPath.
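
Given the example from the parameter description, a values file extending the search path could contain the following snippet:

values file snippet (example)
dbaccount:
  searchPath: "tests, customschema"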

