This guide is addressed to administrators and software developers who want to operate IOM 3. It explains what the components of IOM are, how to configure them, and how to run processes like installations and updates.
For a technical overview please see References.
Wording | Description |
---|---|
Docker | An operating system-level virtualization software. See also Kubernetes and Helm. |
Helm | A package manager for Kubernetes. See also Docker. |
CLI | Command Line Interface |
IOM | The abbreviation for Intershop Order Management |
JBoss | Synonym for WildFly (former name of the WildFly application server) |
Kubernetes | An open-source system for automating deployment, scaling, and management of containerized applications. See also Docker and Helm. |
OMS | The abbreviation for Order Management System, the technical name of IOM |
URL | Uniform Resource Locator |
WildFly | The application server that IOM runs on |
Production systems for Intershop Order Management (IOM) are usually provided as a service in the Azure Cloud. This service is part of the corresponding Intershop Commerce Platform. Non-production environments require separate agreements with Intershop.
For the purpose of adapting the software to specific customer requirements and/or customer-specific environments, it is also possible to operate IOM (e.g., for corresponding CI environments, test systems, etc.) outside the Azure Cloud and independently of Azure Kubernetes Service (AKS). In support of this, this document is intended for IOM administrators and software developers.
The exact required version of Kubernetes can be found in the system requirements (see References).
IOM requires a Kubernetes runtime environment. Intershop cannot provide support on how to set up, maintain or operate a Kubernetes runtime environment.
When using the Intershop Commerce Platform, Kubernetes is included. In this case, Intershop is fully responsible for setting up, maintaining, and operating the Kubernetes cluster as part of the Intershop Commerce Platform.
The exact required version of Helm can be found in the system requirements (see References).
IOM requires Helm to be operated in a Kubernetes environment. Intershop cannot provide support on how to set up and use Helm properly.
When using the Intershop Commerce Platform, Helm is included. In this case, Intershop is fully responsible for setting up and using Helm as part of the Intershop Commerce Platform.
The exact requirements of the mail server can be found in the system requirements (see References).
IOM requires an existing mail server that processes e-mails sent from IOM via the SMTP protocol. Intershop cannot provide support on how to set up, maintain, or operate a mail server. A mail server is not part of the Intershop Commerce Platform.
The exact requirements of the PostgreSQL server can be found in the system requirements (see References).
IOM requires a PostgreSQL database hosted by a PostgreSQL database server. Intershop cannot provide support on how to set up and operate a PostgreSQL server. Some configuration hints are given in section PostgreSQL Server Configuration of this document.
When using the Intershop Commerce Platform, a PostgreSQL database is included. In this case, Intershop is fully responsible for setting up and maintaining the database as well as for setting up and operating the corresponding PostgreSQL server as part of the Intershop Commerce Platform.
In order to understand this document, it is essential to know some basic concepts and tools. It is not the goal of this document to teach you all these tools and concepts. However, it is intended to provide an insight into how these tools and concepts are used in the context of Intershop Order Management.
Kubernetes is an open-source container-orchestration system for automating computer application deployment, scaling, and management. Many cloud services offer a Kubernetes-based platform or infrastructure as a service (PaaS or IaaS) on which Kubernetes can be deployed as a platform-providing service. Many vendors also provide their own branded Kubernetes distributions. (https://en.wikipedia.org/wiki/Kubernetes).
Since Kubernetes is a standard for cloud operations, using it for IOM promises the best compatibility with a wide range of cloud providers. Nevertheless, functionality is guaranteed only for the Microsoft Azure Kubernetes Service (AKS) as part of the Intershop Commerce Platform. You can use other environments at your own risk.
A full description of Kubernetes can be found at https://kubernetes.io/docs/home/.
Kubectl is a command-line interface to control Kubernetes clusters. It is part of Kubernetes, see https://kubernetes.io/docs/reference/kubectl/overview/.
Since it is a client that runs on the machine used to control the Kubernetes cluster, it has to be installed separately; for this reason, it is listed as a separate tool. Strictly speaking, it is not required to operate IOM, but it is used in the section Examples of this document to view the status of Kubernetes objects.
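For example, the following commands can be used to verify cluster access and to view the pod status. This is a minimal sketch; the namespace iom is the one used in the examples below.

# verify that kubectl can reach the cluster
kubectl get nodes

# view the status of all pods within namespace "iom"
kubectl get pods -n iom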
Helm (https://helm.sh) sits on top of Kubernetes. Helm is a tool to manage the life cycle (install, upgrade, rollback, uninstall) of complex Kubernetes applications. To do so, it enables the development and provision of so-called Helm charts, which are basically descriptions of Kubernetes objects, combined by a template and scripting language.
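The life cycle operations named above map directly to Helm CLI commands. The following minimal sketch uses the release name demo and the namespace iom from the examples below; rollback is shown for completeness only, see the restrictions described later in this document.

helm install demo intershop/iom --values=values.yaml --namespace iom
helm upgrade demo intershop/iom --values=values.yaml --namespace iom
helm rollback demo 1 --namespace iom
helm uninstall demo --namespace iom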
IOM is provided in the form of Docker images. These images can be used directly, as shown in section Examples of this document, or can serve as the base for further customization in the context of projects.
The images are available at:

- docker.intershop.de/intershophub/iom-app:3.6.0.0
- docker.intershop.de/intershophub/iom-config:3.6.0.0
- docker.intershop.de/intershophub/iom-dbaccount:1.3.0.0
Note
Adapt the tag (version number) if you use a newer version of IOM. For a full list of available versions see Overview - IOM Public Release Notes.
docker.intershop.de is a private Docker registry. Private Docker registries require authentication and sufficient rights to pull images from them. The corresponding authentication data can be passed in a Kubernetes secret object, which has to be set using the Helm parameter imagePullSecrets.
The document Pull an Image from a Private Registry from the Kubernetes documentation explains in general how to create Kubernetes secret objects suitable for authenticating against a private Docker registry. Pull images from an Azure container registry to a Kubernetes cluster from the Microsoft Azure documentation explains how to apply this concept to private Azure Container Registries.
The following box shows an example of how to create a Kubernetes secret to be used to access the private Docker registry docker.intershop.de. The name of the newly created secret is intershop-pull-secret, which has to be passed to the Helm parameter imagePullSecrets. It has to reside within the same Kubernetes namespace as the IOM cluster that uses the secret.
kubectl create secret docker-registry intershop-pull-secret \
    --docker-server=docker.intershop.de \
    --docker-username='<user name>' \
    --docker-password='<password>' \
    -n <kubernetes namespace>
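The newly created secret is then referenced by its name in the values file. A minimal sketch, matching the values files shown in the Examples section:

imagePullSecrets:
  - intershop-pull-secret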
IOM Helm-charts are a package containing the description of all Kubernetes objects required to run IOM in Kubernetes. IOM Helm-charts are provided by Intershop at https://repository.intershop.de/helm. To use IOM Helm-charts, you have to execute the following commands (you may also have to pass credentials, which is not shown in the example).
# Add all Intershop charts
helm repo add intershop https://repository.intershop.de/helm

# Now the repo can be used to install IOM.
# The following command was taken from the examples section. Without the preconditions described there, it will not work.
# It is shown here only to demonstrate how to reference the IOM Helm-chart after adding the according repository.
helm install demo intershop/iom --values=values.yaml --namespace iom --timeout 20m0s --wait
The following illustration shows the most important components and personas when operating IOM with Helm. The project owner has to define a values file (available configuration parameters are explained in section Parameters), which can be used along with IOM Helm-charts to install, upgrade, rollback, and uninstall IOM within a Kubernetes runtime environment.
This is a very generalized view, which has some restrictions when used with IOM. The next section explains these restrictions in detail.
IOM uses a database that is constantly evolving along with new releases of IOM. For this reason, every version of IOM brings its own migration scripts, which lift the database to the new level. In general, old versions of the IOM database are not compatible with new versions of IOM application servers and vice versa. Also, projects change the database when rolling out new or changed project configurations.
Helm does not know anything about changes inside the database. When rolling back a release, only the changes in values and IOM Helm-packages are rolled back. To avoid inconsistencies and failures (e.g. rollback to an old IOM application server version after updating the database structures to the new version), it is strongly recommended to avoid rollback in general.
The same reasons that make the rollback process problematic also limit the upgrade process.
When executing the upgrade process, the standard behavior of Helm is to keep the application always online. The different IOM application servers are updated one after another. In case of incompatible database changes, this would lead to problems, since one of the following cases is unavoidable: an old IOM application server tries to work with an already updated IOM database or vice versa.
To overcome this problem, IOM Helm-charts provide the parameter downtime, which controls the behavior of the upgrade process. If downtime is set to true, the whole IOM cluster will be stopped during the upgrade process. The IOM database will be upgraded first, and after that, the IOM application servers are started again. This setting should always be used when upgrading to a new IOM version, unless stated otherwise.
Within the context of projects, many changes can be applied to the running IOM cluster without requiring a downtime. In this case, the value of downtime has to be set to false before starting the upgrade process.
For security reasons, the default value of downtime is true to avoid any inconsistencies. Once you have understood the concept of the downtime parameter, you should set it to false to avoid downtimes as often as possible, and only set it to true when really required.
The previous section IOM Helm charts gave you a general view on Helm, the IOM Helm-charts, and the according processes. The Intershop Commerce Platform environment modifies this concept a little bit, as shown in the following illustration.
Project owners are not able to trigger any processes directly. They can only manage a sub-set of values to be applied along with the IOM Helm-chart. The processes are triggered by a flux-controller that observes the Git repository holding the values files. Depending on the type of IOM installation (INT, Pre-PROD, PROD, etc.), processes might need to be triggered manually by Intershop Operations. Intershop Operations maintains a values file too, which has higher precedence than the file of the project owner. This way it is ensured that the project owner is not able to change any critical settings. Which ones are affected depends on the type of IOM installation (INT, Pre-PROD, PROD, etc.). For example, a project owner should never be able to set log-level to DEBUG or TRACE on PROD environments.
Despite the fact that Kubernetes and IOM Helm-charts make it very easy to set up and upgrade IOM installations, a reference to all the existing parameters that are available to control IOM Helm-charts is a very uncomfortable starting point. For this reason, three typical usage scenarios were chosen to provide an easy-to-understand entry point into IOM Helm-charts. All examples were designed in a way that Intershop Commerce Platform is not required. The following examples strictly follow the concept described in section IOM Helm-Charts.
In order to understand the optional and required components defined in IOM Helm-charts, it is strongly recommended to read Guide - Intershop Order Management - Technical Overview first.
Requirements and characteristics are numbered. You will find these numbers also in the values file listed below in order to see the relation between requirement and current configuration.
This values file cannot be copied as it is. Before it can be used, persistence.hostPath and postgres.persistence.hostPath have to be changed to existing paths, which are shared with Docker Desktop.
The values file contains minimal settings only, except dbaccount.resetData, which was listed explicitly even though it only contains the default value.
# use one IOM server only (requirement #8).
replicaCount: 1

imagePullSecrets:
  - intershop-pull-secret

image:
  repository: "docker.intershop.de/intershophub/iom-app"
  tag: "3.6.0.0"

# configure ingress to forward requests for host "localhost" to IOM (requirements #9, #11).
# since integrated NGINX controller should be used, its class has to be set explicitly.
ingress:
  enabled: true
  className: nginx-iom
  hosts:
    - host: localhost
      paths:
        - path: "/"
          pathType: Prefix

# IOM has to know its own public URL
oms:
  publicUrl: "https://localhost/"

# store data of shared-FS into local directory (requirements #6, #7)
persistence:
  hostPath: /Users/username/iom-share

config:
  image:
    repository: "docker.intershop.de/intershophub/iom-config"
    tag: "3.6.0.0"

# create IOM database and according database user before starting IOM.
# do not reset existing data during installation (requirement #3)
dbaccount:
  enabled: true
  resetData: false # optional, since false is default
  image:
    repository: "docker.intershop.de/intershophub/iom-dbaccount"
    tag: "1.3.0.0"

# use integrated PostgreSQL server (requirement #1).
# store database data persistently into local directory (requirement #2).
postgres:
  enabled: true
  persistence:
    enabled: true
    hostPath: /Users/username/pgdata

# enable integrated NGINX ingress controller.
# this controller should not act as proxy (requirement #9).
nginx:
  enabled: true
  proxy:
    enabled: false

# configure integrated NGINX ingress controller.
# one instance of NGINX is sufficient for demo scenario (requirement #10).
# set type to LoadBalancer to be accessible from public network (requirement #9).
ingress-nginx:
  controller:
    replicaCount: 1
    service:
      type: LoadBalancer

# enable integrated SMTP server (requirement #4).
# configure ingress to forward requests for any host to mailhog GUI (requirement #9).
# since ingress for IOM defined a more specific rule, mailhog GUI can be reached using any hostname except localhost.
# since integrated NGINX controller should be used, its class has to be set explicitly.
mailhog:
  enabled: true
  ingress:
    enabled: true
    className: nginx-iom
    hosts:
      - host:
        paths:
          - path: "/"
            pathType: Prefix
Windows: IOM Share
The current example also works when using Docker Desktop on Windows. However, when working on Windows, you have to take care to use Unix-style path names, e.g., if the IOM share is located at C:\Users\username\iom-share, the according entry in values.yaml has to be noted as /c/Users/username/iom-share.
Windows: persistent PostgreSQL data
Setting postgres.persistence.hostPath to a local directory does not work on Windows, even if the directory is correctly shared with Docker Desktop. When starting the PostgreSQL server, it tries to take ownership of the data directory, which does not work in this case. There are two possibilities to overcome this problem:

- Set postgres.persistence.enabled to false.
- Use a Docker volume instead of a local directory, as shown in the commands below.
# create docker volume "iom-pgdata"
docker volume create --name=iom-pgdata -d local

# get mount-point of newly created docker volume
# use mount-point as value for helm-parameter postgres.persistence.hostPath
docker volume inspect --format='{{.Mountpoint}}' iom-pgdata
/var/lib/docker/volumes/iom-pgdata/_data

# to remove docker volume, execute the following command
docker volume rm iom-pgdata
Create a file values.yaml and fill it with the content listed above. Adapt the settings of persistence.hostPath and postgres.persistence.hostPath to point to directories on your computer, which are shared with Docker Desktop. After that, the installation process of IOM can be started.
# create namespace "iom"
kubectl create namespace iom

# install IOM into namespace "iom"
helm install demo intershop/iom --values=values.yaml --namespace iom --timeout 20m0s --wait
This installation process will now take some minutes to finish. In the meantime, the progress of the installation process can be observed within a second terminal window. Using kubectl, you can see the status of every Kubernetes object. For simplicity, the following example shows the status of pods only.
Just open a second terminal window and enter the following commands.
# A few seconds after start of IOM, only the integrated Postgres server is in "Init" phase. All other
# pods are in earlier phases.
kubectl get pods -n iom
NAME                                            READY   STATUS              RESTARTS   AGE
demo-iom-0                                      0/1     Pending             0          2s
demo-mailhog-5dd4565b98-jphkm                   0/1     ContainerCreating   0          2s
demo-ingress-nginx-controller-f5bf56d64-cp9b5   0/1     ContainerCreating   0          2s
demo-postgres-7b796887fb-j4hdr                  0/1     Init:0/1            0          2s

# After some seconds all pods except IOM are "Running" and READY (integrated PostgreSQL server, integrated
# SMTP server, integrated NGINX). IOM is in Init-phase, which means the init-containers are currently executed.
kubectl get pods -n iom
NAME                                            READY   STATUS     RESTARTS   AGE
demo-iom-0                                      0/1     Init:1/3   0          38s
demo-mailhog-5dd4565b98-jphkm                   1/1     Running    0          38s
demo-ingress-nginx-controller-f5bf56d64-cp9b5   1/1     Running    0          38s
demo-postgres-7b796887fb-j4hdr                  1/1     Running    0          38s

# The first init-container executed in the iom-pod is dbaccount. Log messages can be seen
# by executing the following command. If everything works well, the last message will announce the
# successful execution of the create_dbaccount.sh script.
kubectl logs demo-iom-0 -n iom -f -c dbaccount
...
{"tenant":"company-name","environment":"system-name","logHost":"demo-iom-0","logVersion":"1.0","appName":"iom-dbaccount","appVersion":"1.3.0.0","logType":"script","timestamp":"2020-08-06T11:33:17+00:00","level":"INFO","processName":"create_dbaccount.sh","message":"success","configName":null}

# The second init-container executed by the iom-pod is config, which fills the database and applies
# migrations and configurations. The last message of the config container will announce the successful execution
# of the load_dbmigrate.sh script.
kubectl logs demo-iom-0 -n iom -f -c config
...
{"tenant":"company-name","environment":"system-name","logHost":"demo-iom-0","logVersion":"1.0","appName":"iom-config","appVersion":"3.6.0.0","logType":"script","timestamp":"2020-08-06T11:35:51+00:00","level":"INFO","processName":"load_dbmigrate.sh","message":"success","configName":"env-name"}

# If the init-containers have finished successfully, the iom-pod is in "Running" state, too. But it is not "READY"
# yet. Now the IOM applications and project customizations are deployed into the WildFly application server.
kubectl get pods -n iom
NAME                                            READY   STATUS    RESTARTS   AGE
demo-iom-0                                      0/1     Running   0          3m50s
demo-mailhog-5dd4565b98-jphkm                   1/1     Running   0          3m50s
demo-ingress-nginx-controller-f5bf56d64-cp9b5   1/1     Running   0          3m50s
demo-postgres-7b796887fb-j4hdr                  1/1     Running   0          3m50s

# If all pods are "Running" and "READY", the installation process of IOM is finished.
kubectl get pods -n iom
NAME                                            READY   STATUS    RESTARTS   AGE
demo-iom-0                                      1/1     Running   0          7m20s
demo-mailhog-5dd4565b98-jphkm                   1/1     Running   0          7m20s
demo-ingress-nginx-controller-f5bf56d64-cp9b5   1/1     Running   0          7m20s
demo-postgres-7b796887fb-j4hdr                  1/1     Running   0          7m20s
If all pods are Running and Ready, the installation process is finished. You should check the first terminal window, where the installation process was running.
Now we can access the web GUI of the new IOM installation. In fact, there are two web GUIs, one for IOM and one for Mailhog. According to our configuration, all requests dedicated to localhost will be forwarded to the IOM application server; any other requests are meant for the integrated SMTP server (Mailhog). Just open the URL https://localhost/omt in a web browser on your Mac. After accepting the self-signed certificate (the configuration did not include a valid certificate), you will see the login page of IOM. Log in as admin/!InterShop00! to proceed.
Any other request that is not dedicated to localhost will be forwarded to Mailhog. To access the web GUI of Mailhog, just open the URL https://127.0.0.1/ in your web browser. Once again you have to accept the self-signed certificate, and after that, you will see the Mailhog GUI.
From a Helm perspective, the rollout of any change in values or charts is an upgrade process. The process is identical, no matter if only a simple value is changed or new docker images of a new IOM release are rolled out. The example shown here will demonstrate how to change the log-level of the Quartz subsystem, running in the WildFly application server.
Before the start, keep the restrictions on upgrade in mind. A change of a log-level is an uncritical change that can be applied without downtime. But we have decided to use a single IOM application server only (see Requirement #8). When using a single IOM application server only, an upgrade process with downtime is inevitable. Hence, we do not have to think about the setting of parameter downtime.
Modify values.yaml by adding the following lines to the file:
log:
  level:
    quartz: INFO
These changes are now rolled out by running Helm's upgrade process to the existing IOM installation.
Start the upgrade process within a terminal window.
helm upgrade demo intershop/iom --values=values.yaml --namespace iom --timeout 20m0s --wait
The upgrade process will take some minutes before it is finished.
Enter the following commands in a second terminal window to watch the progress.
As in the installation process before, this example is restricted to the status of pods only.
# Only the Kubernetes object of IOM has changed. Therefore Helm only upgrades IOM; the integrated SMTP server,
# integrated PostgreSQL server and integrated NGINX are running unchanged. A few seconds after starting the
# upgrade process, the only existing iom-pod is stopped.
kubectl get pods -n iom
NAME                                            READY   STATUS        RESTARTS   AGE
demo-iom-0                                      1/1     Terminating   0          40m
demo-mailhog-5dd4565b98-jphkm                   1/1     Running       0          40m
demo-ingress-nginx-controller-f5bf56d64-cp9b5   1/1     Running       0          40m
demo-postgres-7b796887fb-j4hdr                  1/1     Running       0          40m

# After the iom-pod is terminated, a new iom-pod is started with the new configuration. Init-containers are
# partially executed again.
kubectl get pods -n iom
NAME                                            READY   STATUS     RESTARTS   AGE
demo-iom-0                                      0/1     Init:2/3   0          6s
demo-mailhog-5dd4565b98-jphkm                   1/1     Running    0          41m
demo-ingress-nginx-controller-f5bf56d64-cp9b5   1/1     Running    0          41m
demo-postgres-7b796887fb-j4hdr                  1/1     Running    0          41m

# Finally the pod is "Running" and "READY" again, which means IOM is up again.
kubectl get pods -n iom
NAME                                            READY   STATUS    RESTARTS   AGE
demo-iom-0                                      1/1     Running   0          5m4s
demo-mailhog-5dd4565b98-jphkm                   1/1     Running   0          46m
demo-ingress-nginx-controller-f5bf56d64-cp9b5   1/1     Running   0          46m
demo-postgres-7b796887fb-j4hdr                  1/1     Running   0          46m
The last process demonstrates how to uninstall IOM.
helm uninstall demo -n iom
release "demo" uninstalled

kubectl delete namespace iom
namespace "iom" deleted
Since the database data and the shared file system of IOM were stored in local directories of the current host, they still exist after uninstalling IOM. In fact, this data represents the complete state of IOM. If we installed IOM again, using the same directories for the shared file system and database data, the old IOM installation would be brought back to life.
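For example, simply re-running the installation commands from above with the unchanged values.yaml would pick up the existing directories again:

# re-create the namespace and install IOM again, reusing the old data
kubectl create namespace iom
helm install demo intershop/iom --values=values.yaml --namespace iom --timeout 20m0s --wait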
Docker, Minikube, kubectl, and Helm are external tools. This document does not cover how to install and use these tools.
Any traps or pitfalls you need to know about are explained here.
Minikube provides an easy way to set up a local Kubernetes cluster. Local means that the whole cluster is running on a single machine only. Therefore, it is a good playground for developers or for the setup of demo and CI installations, but of course not for any kind of serious service.
Running Minikube in an already virtualized environment raises the bar a bit higher. For this type of environment, the none driver was originally recommended (Minikube Documentation | Drivers | none), but at the time of writing it had been replaced by the docker driver (Minikube Documentation | Drivers | docker). Therefore, the docker driver was chosen for the following examples.
Minikube supports access to applications through services of type LoadBalancer. However, a service of this type cannot be directly accessed from outside the Minikube cluster. To do so, the execution of an additional command is required (minikube tunnel), see Minikube Documentation | Handbook | Accessing apps | LoadBalancer access. As long as minikube tunnel is not running, the external IP of the LoadBalancer service remains in the state pending. This has an impact on the installation process of IOM: the installation process has to be started with the command line argument --wait (see parameter downtime). But when using --wait, the IOM installation will not finish until the external IP is available too, which means that the installation process will run into a timeout.
Thus, for this example, another way was chosen to make the IOM and Mailhog GUIs accessible from the public network. Instead of using the combination of a service of type LoadBalancer and minikube tunnel to provide access to services, the service type ClusterIP will be used, and after IOM has started, kubectl port-forward will enable access from outside.
Finally, to get access to the port providing the web GUIs of IOM and Mailhog, you have to configure firewalld.
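A hedged sketch of such a firewalld configuration, assuming port 8443 is used as in the example below:

# open port 8443 permanently and reload the firewall configuration
sudo firewall-cmd --permanent --add-port=8443/tcp
sudo firewall-cmd --reload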
Minikube supports persistent data in general but has restrictions for some drivers, e.g., for the docker driver, which is used in our example: Minikube Documentation | Handbook | Persistent Volumes. When using the docker driver, we could use the mount command to get access to a local directory within Minikube. This requires the '9P' file system kernel extension, which is not available on all systems and is missing on our test systems, too.
In summary, this means: for our example, we can use persistent data, but we cannot access the according directories from the host directly. Instead, these persistent volumes are hidden somewhere within the internal Docker/Minikube data structures. According to our requirements, this is fully sufficient.
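If you nevertheless want to inspect the data, you can do so from within the Minikube node. A sketch, assuming the paths used in the values file below:

# open a shell inside the Minikube node and list the PostgreSQL data directory
minikube ssh -- ls -la /mnt/pgdata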
Requirements and characteristics are numbered again. You will find these numbers in the values-file, which is listed below, in order to see the relation between requirement and current configuration.
The requirements listed above do not show any traces of test executions. That is contradictory to the goals of CI systems in general. IOM Helm charts support the execution of tests, but Intershop does not deliver any ready-to-run test images. It is up to you to develop images for testing your specific IOM project.
In order to enable you to reproduce the installation of the CI system, the execution of tests was skipped in the example. Nevertheless, this section will give you a short overview of how the integration of tests could look.
For the execution of tests, the following things and information are required:
Since test-images have to be provided in the project context, the integration into IOM Helm charts is very generic. IOM Helm charts simply provide a default deployment created by Helm.
Please be aware that the following extract of a values file is purely fictitious. The names of Docker images and the names of environment variables fully depend on the project-specific Docker images. Resource usage is realistic; it is oriented on our own tests, using Geb tests in combination with Firefox.
iom-tests:
  enabled: true
  imagePullSecrets:
    - name: intershop-pull-secret
  image:
    repository: mycompany/iom-tests
    tag: 1.0.0.0
    pullPolicy: Always
  env:
    # name of service of integrated NGINX controller
    - name: IOM_HOST
      value: ci-ingress-iom-ingress-nginx-controller
    # name of service of integrated PostgreSQL server
    - name: DB_HOST
      value: ci-postgres
    - name: DB_PORT
      value: '5432'
    - name: OMS_DB_NAME
      value: oms_db
    - name: OMS_DB_USER
      value: oms_user
    - name: OMS_DB_PASSWD
      value: OmsDB
    # name of service of integrated mailhog server
    - name: MAILHOG_HOST
      value: ci-mailhog
    - name: MAILHOG_PORT
      value: '1025'
  containerPort: 8080
  livenessProbe:
    httpGet:
      path: /
      port: http
  readinessProbe:
    httpGet:
      path: /
      port: http
  resources:
    limits:
      cpu: 2
      memory: 11000Mi
    requests:
      cpu: 2
      memory: 11000Mi
  ingress:
    enabled: true
    annotations:
      nginx.ingress.kubernetes.io/ssl-redirect: "false"
    hosts:
      # hostname to be used to get access to test results
      - host: tests.iomci.com
        paths:
          - path: /
            pathType: Prefix
This values file cannot be copied as it is. Before it can be used, all occurrences of jdevoms11.rnd.j.intershop.de have to be replaced with the hostname of your CI system.
# start 2 IOM application servers (requirement #1)
replicaCount: 2

# run upgrade processes without downtime (requirement #10)
downtime: false

imagePullSecrets:
  - intershop-pull-secret

image:
  repository: "docker.intershop.de/intershophub/iom-app"
  tag: "3.6.0.0"

# configure ingress to forward requests to IOM, which were sent to jdevoms11.rnd.j.intershop.de (requirement #9).
# since integrated NGINX controller should be used, its class has to be specified (requirement #7)
ingress:
  enabled: true
  className: nginx-iom
  hosts:
    - host: jdevoms11.rnd.j.intershop.de
      paths:
        - path: /
          pathType: Prefix

# IOM has to know its own public URL
oms:
  publicUrl: "https://jdevoms11.rnd.j.intershop.de:8443/"

# store data of shared file system into local directory (requirement #6)
persistence:
  hostPath: /mnt/share

config:
  image:
    repository: "docker.intershop.de/intershophub/iom-config"
    tag: "3.6.0.0"

# create IOM database and according user before starting IOM
dbaccount:
  enabled: true
  image:
    repository: docker.intershop.de/intershophub/iom-dbaccount
    tag: "1.3.0.0"

# use integrated PostgreSQL server
# store database data persistently into local directory (requirements #2, #3)
postgres:
  enabled: true
  persistence:
    enabled: true
    hostPath: /mnt/pgdata

# enable integrated NGINX ingress controller
# this controller should not act as proxy (requirement #7)
nginx:
  enabled: true
  proxy:
    enabled: false

# enable integrated SMTP server
# allow access to Web-GUI of mailhog. all requests should be sent to Web-GUI of mailhog,
# unless a more specific rule exists. (requirement #5)
# since integrated NGINX controller should be used, its class has to be specified (requirement #7).
mailhog:
  enabled: true
  ingress:
    enabled: true
    className: nginx-iom
    hosts:
      - host:
        paths:
          - path: /
            pathType: Prefix
According to requirement #4, the Minikube cluster has to be deleted after each test run. For this reason, the creation of the Minikube cluster was added to the current section, too.
# Minikube, using vm-driver "docker", must run in user space.
# Hence, a corresponding user has to be created.
sudo useradd -m -U oms

# Get permission to access docker daemon socket
sudo usermod -aG docker oms && newgrp docker

# Change user.
# All commands, also from following sections, have to be executed by user oms!
su - oms

# Start minikube as oms user.
minikube start --vm-driver=docker
Create a file values.yaml and fill it with the content listed above. Adapt the hostname in values.yaml.
Once the Minikube cluster is running and the values file is prepared, the installation of IOM can be started.
# create namespace iom
kubectl create namespace iom

# install IOM into namespace iom
helm install ci intershop/iom --values=values.yaml --namespace iom --timeout 20m0s --wait

# make port 443 of integrated NGINX controller available on all interfaces at port 8443
kubectl port-forward service/ci-ingress-nginx-controller 8443:443 -n iom --address 0.0.0.0
This installation process will now take some minutes to finish. In the meantime, the progress of the installation process can be observed within a second terminal window. Using kubectl you can see the status of every Kubernetes object. But for simplicity, the following example shows the status of pods only.
Just open a second terminal window and enter the following commands:
# A few seconds after starting IOM, only the integrated postgresql is in "Init" phase. All other
# pods are in earlier phases.
kubectl get pods -n iom
NAME                                          READY   STATUS              RESTARTS   AGE
ci-iom-0                                      0/1     Pending             0          3s
ci-mailhog-5744886bb8-x78l7                   0/1     ContainerCreating   0          3s
ci-ingress-nginx-controller-749c5f8f4-7fk6j   0/1     ContainerCreating   0          3s
ci-ingress-nginx-controller-749c5f8f4-d4htz   0/1     ContainerCreating   0          3s
ci-postgres-575976b886-ph6s4                  0/1     Init:0/1            0          3s

# A few seconds later, the IOM pod is in "Init" phase as well, which means the init-containers are
# currently executed.
kubectl get pods -n iom
NAME                                          READY   STATUS              RESTARTS   AGE
ci-iom-0                                      0/1     Init:0/3            0          16s
ci-mailhog-5744886bb8-x78l7                   1/1     Running             0          16s
ci-ingress-nginx-controller-749c5f8f4-7fk6j   0/1     ContainerCreating   0          16s
ci-ingress-nginx-controller-749c5f8f4-d4htz   0/1     ContainerCreating   0          16s
ci-postgres-575976b886-ph6s4                  0/1     PodInitializing     0          16s

# The first init-container executed in the iom-pod is dbaccount. Log messages of this init-container can be seen
# by executing the following command. If everything works well, the last message will announce the successful execution
# of the create_dbaccount.sh script.
kubectl logs ci-iom-0 -c dbaccount -f -n iom
...
{"tenant":"company-name","environment":"system-name","logHost":"ci-iom-0","logVersion":"1.0","appName":"iom-dbaccount","appVersion":"1.3.0.0","logType":"script","timestamp":"2020-08-07T10:41:15+00:00","level":"INFO","processName":"create_dbaccount.sh","message":"success","configName":null}

# The second init-container executed by the iom-pod is config, which is filling the database, applying
# migrations and configurations. The last message of the config container will announce the successful execution
# of the load_dbmigrate.sh script.
kubectl logs ci-iom-0 -c config -f -n iom
...
{"tenant":"company-name","environment":"system-name","logHost":"ci-iom-0","logVersion":"1.0","appName":"iom-config","appVersion":"3.6.0.0","logType":"script","timestamp":"2020-08-07T10:42:03+00:00","level":"INFO","processName":"load_dbmigrate.sh","message":"success","configName":"env-name"}

# When the init-containers have finished successfully, the iom-pod is in "Running" state too. But it is not "READY"
# yet. Now the IOM applications and project customizations are deployed into the WildFly application server.
kubectl get pods -n iom
NAME                                          READY   STATUS    RESTARTS   AGE
ci-iom-0                                      0/1     Running   0          2m26s
ci-mailhog-5744886bb8-x78l7                   1/1     Running   0          2m26s
ci-ingress-nginx-controller-749c5f8f4-7fk6j   1/1     Running   0          2m26s
ci-ingress-nginx-controller-749c5f8f4-d4htz   1/1     Running   0          2m26s
ci-postgres-575976b886-ph6s4                  1/1     Running   0          2m26s

# When the first iom-pod is "Running" and "READY", the second IOM pod will be started. From
# now on, IOM is accessible from outside.
kubectl get pods -n iom
NAME                                          READY   STATUS     RESTARTS   AGE
ci-iom-0                                      1/1     Running    0          4m52s
ci-iom-1                                      0/1     Init:0/3   0          2s
ci-mailhog-5744886bb8-x78l7                   1/1     Running    0          4m52s
ci-ingress-nginx-controller-749c5f8f4-7fk6j   1/1     Running    0          4m52s
ci-ingress-nginx-controller-749c5f8f4-d4htz   1/1     Running    0          4m52s
ci-postgres-575976b886-ph6s4                  1/1     Running    0          4m52s

# When all pods are "Running" and "READY", the installation process of IOM has finished.
kubectl get pods -n iom
NAME                                          READY   STATUS    RESTARTS   AGE
ci-iom-0                                      1/1     Running   0          6m15s
ci-iom-1                                      1/1     Running   0          85s
ci-mailhog-5744886bb8-x78l7                   1/1     Running   0          6m15s
ci-ingress-nginx-controller-749c5f8f4-7fk6j   1/1     Running   0          6m15s
ci-ingress-nginx-controller-749c5f8f4-d4htz   1/1     Running   0          6m15s
ci-postgres-575976b886-ph6s4                  1/1     Running   0          6m15s
When all pods are Running and Ready, the installation process has finished. You should check the first terminal window, where the installation process was running. As the last step of the IOM installation, the port-forwarding of the NGINX service has to be started.
Now we can access the web GUI of the new IOM installation. As already shown in the first example, there are two web GUIs, one for IOM and one for Mailhog. According to the configuration, all requests dedicated to jdevoms11.rnd.j.intershop.de (please replace it with the hostname of your CI system) will be forwarded to the IOM application server. Any other requests are meant for the integrated SMTP server (Mailhog). Just open the URL https://jdevoms11.rnd.j.intershop.de:8443/omt in a web browser. After accepting the self-signed certificate (the configuration did not include a valid certificate), you will see the login page of IOM. Log in as admin/!InterShop00! to proceed.
Any other request that is not dedicated to jdevoms11.rnd.j.intershop.de, will be forwarded to Mailhog. To access the web-GUI of Mailhog, just use the IP instead of the hostname. Open URL https://10.0.29.69:8443/ (please replace it with the IP of your CI system) in your web browser. Once again you have to accept the self-signed certificate and after that, you will see the Mailhog-GUI.
Now we repeat the upgrade process which was already shown in the previous example. This simple example was chosen since from a Helm perspective, the rollout of any change in values or charts is an upgrade process. The process is identical, no matter if only a simple value is changed or new docker images of a new IOM release are rolled out.
This example includes setting the downtime parameter (see: Restrictions on upgrade). A change of log-level options is an uncritical change that can be applied without downtime. Since there is more than one IOM application server now, the upgrade process can be executed without downtime.
For upgrading IOM, values.yaml has to be changed. Just add the following lines to the file:
log:
  level:
    quartz: INFO
These changes are now being rolled out by running Helm's upgrade process to the existing IOM installation. Start the process within a terminal window.
helm upgrade ci intershop/iom --values=values.yaml --namespace iom --timeout 20m0s --wait
The upgrade process will take some minutes before it is finished. In the meantime, we can watch the progress. As in the installation process before, this example is restricted to the status of pods only. Just enter the following commands in a second terminal window.
# Right away after starting the upgrade process, the first iom-pod is terminated.
kubectl get pods -n iom
NAME                                          READY   STATUS        RESTARTS   AGE
ci-iom-0                                      1/1     Running       0          12m
ci-iom-1                                      1/1     Terminating   0          7m44s
ci-mailhog-5744886bb8-x78l7                   1/1     Running       0          12m
ci-ingress-nginx-controller-749c5f8f4-7fk6j   1/1     Running       0          12m
ci-ingress-nginx-controller-749c5f8f4-d4htz   1/1     Running       0          12m
ci-postgres-575976b886-ph6s4                  1/1     Running       0          12m

# When this pod has finished, it will be started again, now using the new configuration.
# Initialization is mostly identical to the install process.
kubectl get pods -n iom
NAME                                          READY   STATUS     RESTARTS   AGE
ci-iom-0                                      1/1     Running    0          13m
ci-iom-1                                      0/1     Init:0/3   0          2s
ci-mailhog-5744886bb8-x78l7                   1/1     Running    0          13m
ci-ingress-nginx-controller-749c5f8f4-7fk6j   1/1     Running    0          13m
ci-ingress-nginx-controller-749c5f8f4-d4htz   1/1     Running    0          13m
ci-postgres-575976b886-ph6s4                  1/1     Running    0          13m

# When initialization has finished, deployment of IOM- and customization-apps is executed.
# During this time, the pod is "Running" but not "READY".
kubectl get pods -n iom
NAME                                          READY   STATUS    RESTARTS   AGE
ci-iom-0                                      1/1     Running   0          13m
ci-iom-1                                      0/1     Running   0          15s
ci-mailhog-5744886bb8-x78l7                   1/1     Running   0          13m
ci-ingress-nginx-controller-749c5f8f4-7fk6j   1/1     Running   0          13m
ci-ingress-nginx-controller-749c5f8f4-d4htz   1/1     Running   0          13m
ci-postgres-575976b886-ph6s4                  1/1     Running   0          13m

# When the pod is "Running" and "READY", it is able to handle incoming requests. The other
# iom-pod can now be terminated.
kubectl get pods -n iom
NAME                                          READY   STATUS        RESTARTS   AGE
ci-iom-0                                      1/1     Terminating   0          14m
ci-iom-1                                      1/1     Running       0          88s
ci-mailhog-5744886bb8-x78l7                   1/1     Running       0          14m
ci-ingress-nginx-controller-749c5f8f4-7fk6j   1/1     Running       0          14m
ci-ingress-nginx-controller-749c5f8f4-d4htz   1/1     Running       0          14m
ci-postgres-575976b886-ph6s4                  1/1     Running       0          14m

# After termination, it is started again with the new configuration. Init-containers are
# mostly executed as during the installation process.
kubectl get pods -n iom
NAME                                          READY   STATUS     RESTARTS   AGE
ci-iom-0                                      0/1     Init:0/3   0          0s
ci-iom-1                                      1/1     Running    0          119s
ci-mailhog-5744886bb8-x78l7                   1/1     Running    0          15m
ci-ingress-nginx-controller-749c5f8f4-7fk6j   1/1     Running    0          15m
ci-ingress-nginx-controller-749c5f8f4-d4htz   1/1     Running    0          15m
ci-postgres-575976b886-ph6s4                  1/1     Running    0          15m

# IOM- and customization-apps are deployed.
kubectl get pods -n iom
NAME                                          READY   STATUS    RESTARTS   AGE
ci-iom-0                                      0/1     Running   0          13s
ci-iom-1                                      1/1     Running   0          2m12s
ci-mailhog-5744886bb8-x78l7                   1/1     Running   0          15m
ci-ingress-nginx-controller-749c5f8f4-7fk6j   1/1     Running   0          15m
ci-ingress-nginx-controller-749c5f8f4-d4htz   1/1     Running   0          15m
ci-postgres-575976b886-ph6s4                  1/1     Running   0          15m

# Both IOM pods are "Running" and "READY" again; the upgrade process is finished now.
kubectl get pods -n iom
NAME                                          READY   STATUS    RESTARTS   AGE
ci-iom-0                                      1/1     Running   0          2m38s
ci-iom-1                                      1/1     Running   0          4m37s
ci-mailhog-5744886bb8-x78l7                   1/1     Running   0          17m
ci-ingress-nginx-controller-749c5f8f4-7fk6j   1/1     Running   0          17m
ci-ingress-nginx-controller-749c5f8f4-d4htz   1/1     Running   0          17m
ci-postgres-575976b886-ph6s4                  1/1     Running   0          17m
The last process demonstrates how to uninstall IOM. In order to get rid of persistent data, too (see requirement #4), the whole Minikube cluster has to be deleted. In fact, it would be sufficient to only delete the Minikube cluster, but for completeness, the other commands are listed as well.
# uninstall IOM release
helm uninstall ci -n iom
release "ci" uninstalled

# delete Kubernetes namespace used for IOM
kubectl delete namespace iom
namespace "iom" deleted

# delete whole Minikube cluster
minikube delete
Please keep in mind that these preconditions reflect the use case described in section IOM Helm-charts. When using Intershop Commerce Platform, these preconditions are all covered by Intershop.
Requirements and characteristics are numbered again. You will find these numbers in the values-file listed below in order to see the relation between requirement and current configuration.
The values file shown below reflects the requirements of the straight Helm approach, as described in section IOM Helm-charts. When running on the Intershop Commerce Platform, most settings are made by Intershop Operations within the second values file. On the customer's side, only a few parameters remain that have to be set. These are downtime, image, config, and oms.smtp, and maybe some other parameters, depending on the type of installation.
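A hedged sketch of how such a reduced customer values file could look (hostnames and versions are examples only):

downtime: true

image:
  tag: "3.6.0.0"

config:
  image:
    tag: "3.6.0.0"

oms:
  smtp:
    host: smtp.example.com # hypothetical external mail server
    port: 25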
Of course, the following values file cannot be copied as it is. It references external resources and external services that do not exist in your environment; additionally, the hostname iom.mycompany.com does not match your requirements.
# start 2 IOM application servers (requirement #1)
replicaCount: 2

# run upgrade processes without downtime (requirement #8)
downtime: false

imagePullSecrets:
  - intershop-pull-secret

image:
  repository: "docker.intershop.de/intershophub/iom-app"
  tag: "3.6.0.0"

# configure ingress to forward requests to IOM, which are sent
# to host iom.mycompany.com (requirement #6)
ingress:
  enabled: true
  hosts:
    - host: iom.mycompany.com
      paths:
        - path: /
          pathType: Prefix
  tls:
    - secretName: mycompany-com-tls
      hosts:
        - iom.mycompany.com

# information about external postgresql service (requirement #2)
pg:
  host: postgres-prod.postgres.database.azure.com
  port: 5432
  userConnectionSuffix: "@postgres-prod"
  # root-database and superuser information. The very first installation initializes
  # the database of IOM. After that, this information should be removed from the values
  # file completely (and dbaccount should be disabled/removed too).
  user: postgres
  passwdSecretKeyRef:
    name: mycompany-prod-secrets
    key: pgpasswd
  db: postgres

# IOM has to know its own public URL
oms:
  publicUrl: "https://iom.mycompany.com/"
  db:
    name: oms_db
    user: oms_user
    passwdSecretKeyRef:
      name: mycompany-prod-secrets
      key: dbpasswd
  # configuration of external smtp server (requirement #4)
  smtp:
    host: smtp.external-provider.com
    port: 25
    user: my-company-prod
    passwdSecretKeyRef:
      name: mycompany-prod-secrets
      key: smtppasswd

log:
  metadata:
    tenant: mycompany
    environment: prod

caas:
  envName: prod

resources:
  limits:
    cpu: 1000m
    memory: 3000Mi
  requests:
    cpu: 1000m
    memory: 3000Mi

# store data of shared file system at azurefile service (requirement #5)
persistence:
  storageClass: azurefile
  storageSize: 60G

config:
  image:
    repository: "docker.intershop.de/intershophub/iom-config"
    tag: "3.6.0.0"

# Create IOM database and according user before starting IOM. Creates IOM database
# while running the install process. After that, dbaccount should be completely removed
# from the values file. If not set explicitly, data is not reset during start
# (requirement #3).
dbaccount:
  enabled: true
  image:
    repository: docker.intershop.de/intershophub/iom-dbaccount
    tag: "1.3.0.0"

# enable integrated NGINX ingress controller. Without any further configuration,
# it acts as a proxy (requirement #6).
nginx:
  enabled: true
Create a file values.yaml and fill it with the content listed above. Adapt all the changes to the file that are required by your environment. After that, the installation process can be started.
# create namespace mycompany-iom
kubectl create namespace mycompany-iom

# install IOM into namespace mycompany-iom
helm install prod intershop/iom --values=values.yaml --namespace mycompany-iom --timeout 20m0s --wait
This installation process will now take some minutes to finish. In the meantime, the progress of the installation process can be observed within a second terminal window. Using kubectl, you can see the status of every Kubernetes object. For simplicity, the following example shows the status of pods only.
Just open a second terminal window and enter the following commands.
# One second after start, all pods are in very early phases.
kubectl get pods -n mycompany-iom
NAME                                             READY   STATUS              RESTARTS   AGE
prod-iom-0                                       0/1     Pending             0          1s
prod-ingress-nginx-controller-76db7cfc6d-2h4w9   0/1     ContainerCreating   0          1s
prod-ingress-nginx-controller-76db7cfc6d-tzzsl   0/1     ContainerCreating   0          1s

# A little bit later, the integrated NGINX ingress is "Running" and "READY". IOM is in the initialization phase,
# which means the init-containers are currently executed.
kubectl get pods -n mycompany-iom
NAME                                             READY   STATUS     RESTARTS   AGE
prod-iom-0                                       0/1     Init:0/2   0          24s
prod-ingress-nginx-controller-76db7cfc6d-2h4w9   1/1     Running    0          24s
prod-ingress-nginx-controller-76db7cfc6d-tzzsl   1/1     Running    0          24s

# After a few minutes IOM is "Running", but not "READY" yet. The init-containers are finished
# now and the IOM- and project-applications are currently deployed into the WildFly application server.
kubectl get pods -n mycompany-iom
NAME                                             READY   STATUS    RESTARTS   AGE
prod-iom-0                                       0/1     Running   0          4m43s
prod-ingress-nginx-controller-76db7cfc6d-2h4w9   1/1     Running   0          4m43s
prod-ingress-nginx-controller-76db7cfc6d-tzzsl   1/1     Running   0          4m43s

# The first iom-pod is "Running" and "READY", which means the IOM system is usable now.
# The second iom-pod has just started and is currently initialized.
kubectl get pods -n mycompany-iom
NAME                                             READY   STATUS     RESTARTS   AGE
prod-iom-0                                       1/1     Running    0          9m35s
prod-iom-1                                       0/1     Init:0/2   0          2s
prod-ingress-nginx-controller-76db7cfc6d-2h4w9   1/1     Running    0          9m35s
prod-ingress-nginx-controller-76db7cfc6d-tzzsl   1/1     Running    0          9m35s

# Both iom-pods are "Running" and "READY". The installation of IOM is finished.
kubectl get pods -n mycompany-iom
NAME                                             READY   STATUS    RESTARTS   AGE
prod-iom-0                                       1/1     Running   0          15m
prod-iom-1                                       1/1     Running   0          5m49s
prod-ingress-nginx-controller-76db7cfc6d-2h4w9   1/1     Running   0          15m
prod-ingress-nginx-controller-76db7cfc6d-tzzsl   1/1     Running   0          15m
When all pods are Running and Ready, the installation process has finished. You should check the first terminal window, where the installation process was running.
Now we repeat the upgrade process, which was already shown in the previous example. This simple example was chosen since from a Helm perspective, the rollout of any change in values or charts is an upgrade process. The process is identical, no matter if only a simple value is changed or if new docker images of a new IOM release are rolled out.
Also, setting the downtime parameter (see: Restrictions on upgrade) is considered. A change of a log-level is an uncritical change which can be applied without downtime. Since we have more than one IOM application server, the upgrade process can now be executed without downtime.
Add the following lines to the values.yaml:
log:
  level:
    quartz: INFO
These changes are now rolled out by running the Helm upgrade process to the existing IOM installation. Start the process within a terminal window.
helm upgrade prod intershop/iom --values=values.yaml --namespace mycompany-iom --timeout 20m0s --wait
The upgrade process will take some minutes before it is finished.
In the previous section you might have noticed that the behavior of pods during the installation process is identical no matter which Kubernetes environment was used (Docker Desktop, Minikube). The same applies to the upgrade process. For this reason, the box "Observe progress" will be skipped in the current section.
The last process demonstrates how to uninstall IOM. Please keep in mind that the uninstall process only covers the objects defined in IOM Helm-charts. In the current production example many external resources and external services are referenced. These resources and services remain untouched by the uninstall process of IOM.
# uninstall IOM release
helm uninstall prod -n mycompany-iom
release "prod" uninstalled

# delete Kubernetes namespace used for IOM
kubectl delete namespace mycompany-iom
namespace "mycompany-iom" deleted
Each example shown in the Examples section before used a values file to define the specific setup of IOM (see: demo, ci, prod). The current section now describes each parameter you have already used before in detail. There are also many more parameters that were not used in the examples.
In the Examples section, you have already learned that IOM Helm-charts also provide optional components: integrated PostgreSQL server, integrated SMTP server, integrated NGINX controller, and support for the execution of tests. These optional components are covered by separate sections.
Parameter | Description | Default Value |
---|---|---|
replicaCount | The number of IOM application server instances to run in parallel. | 2 |
downtime | The downtime parameter is a very critical one. Its goal and behavior are already described in Restrictions on Upgrade. | true |
image.repository | Repository of the IOM app product/project image. | docker.intershop.de/intershophub/iom-app |
image.pullPolicy | Pull policy, to be applied when getting IOM product/project Docker image. For more information, see official Kubernetes documentation. | IfNotPresent |
image.tag | The tag of IOM app product/project image. | 3.6.0.0 |
dbaccount | Parameters bundled by dbaccount control the dbaccount init-container, which creates the IOM database and the according database user. Once the IOM database is created, the dbaccount init-container should be disabled or removed from the values file. | |
dbaccount.enabled | Controls if the dbaccount init-container should be executed or not. If enabled, dbaccount will only be executed when installing IOM, not on upgrade operations. | false |
dbaccount.image.repository | Repository of the dbaccount image. | docker.intershop.de/intershophub/iom-dbaccount |
dbaccount.image.pullPolicy | Pull policy, to be applied when getting dbaccount Docker image. For more information, see official Kubernetes documentation. | IfNotPresent |
dbaccount.image.tag | The tag of dbaccount image. | 1.3.0.0 |
dbaccount.resetData | Controls if dbaccount init-container should reset an already existing IOM database during the installation process of IOM. If set to true , existing data is deleted without backup and further warning. | false |
dbaccount.options | When creating the IOM database, more options in addition to OWNER are required. Depending on the configuration of the PostgreSQL server, these options might have to be adapted. See Options and Requirements of IOM database for details. | "ENCODING='UTF8' LC_COLLATE='en_US.utf8' LC_CTYPE='en_US.utf8' CONNECTION LIMIT=-1 TEMPLATE=template0" |
dbaccount.searchPath | In some circumstances, the search path for database objects has to be extended. This is the case if custom schemas are used for customizations or tests. To add more schemas to the search-path, set the current parameter to a string containing all additional schemas, separated by a comma, e.g. "tests, customschema". The additional entries are inserted at the beginning of the search-path, hence objects with the same name as standard objects of IOM are found first. | |
dbaccount.tablespace | Use the passed tablespace as default for the IOM database user and the IOM database. The tablespace has to exist; it will not be created. Section Options and Requirements of IOM database will give you some more information. | |
dbaccount.resources | Resource requests & limits | {} |
config | Parameters bundled by config control the config init-container, which applies stored procedures, migration scripts, and project configuration to the IOM database. | |
config.image.repository | Repository of the IOM config product/project image. | docker.intershop.de/intershophub/iom-config |
config.image.pullPolicy | Pull policy, to be applied when getting the IOM config product/project Docker image. For more information, see official Kubernetes documentation. | IfNotPresent |
config.image.tag | The tag of IOM config product/project image. | 3.6.0.0 |
config.resources | Resource requests & limits | {} |
config.skipProcedures | Normally, when updating the config image of IOM, stored procedures, migration scripts, and project configuration are executed. Setting this parameter to true skips the execution of stored procedures. | false |
config.skipMigration | Normally, when updating the config image of IOM, stored procedures, migration scripts, and project configuration are executed. Setting this parameter to true skips the execution of migration scripts. | false |
config.skipConfig | Normally, when updating the config image of IOM, stored procedures, migration scripts, and project configuration are executed. Setting this parameter to true skips the execution of the project configuration. | false |
pg | This group of parameters bundles the information required to connect the PostgreSQL server, information about the superuser, and the default database (management database, not the IOM database). Not all clients need all information: the dbaccount init-container is the only client that needs access to the PostgreSQL server as a superuser. Hence, if you do not enable dbaccount, the superuser information is not required. If the integrated PostgreSQL server is enabled (postgres.enabled set to true), the defaults already match the integrated server. | |
pg.user | Name of the superuser. | postgres |
pg.userSecretKeyRef | Instead of storing the name of the user as plain text in the values file, a reference to a key within a secret can be used. For more information see section References to entries of Kubernetes secrets. | |
pg.passwd | The password of the superuser. | postgres |
pg.passwdSecretKeyRef | Instead of storing the password as plain text in the values file, a reference to a key within a secret can be used. For more information see section References to entries of Kubernetes secrets. | |
pg.db | Name of the default (management) database. | postgres |
pg.host | The hostname of the PostgreSQL server. | postgres-service |
pg.port | Port of the PostgreSQL server. | "5432" |
pg.userConnectionSuffix | When using the Azure Database for PostgreSQL service, user names have to be extended by a suffix, beginning with '@'. For more information, refer to the official Azure Database for PostgreSQL documentation. This suffix is not a part of the user name. It has to be used only when connecting to the database. For this reason, it is kept in the separate parameter pg.userConnectionSuffix. Example: "@mydemoserver" | |
pg.sslMode | pg.sslMode has to contain one of the following values: disable , allow , prefer , require , verify-ca , verify-full . For a detailed description of settings, please see official PostgreSQL documentation. | prefer |
pg.sslCompression | If set to "1", data sent over SSL connections is compressed. If set to "0", compression is disabled. | "0" |
pg.sslRootCert | The Azure Database for PostgreSQL service might require verification of the server certificate, see official Azure Database for PostgreSQL documentation. To handle this case, it is possible to pass the SSL root certificate in the current parameter. | set to the content of BaltimoreCyberTrustRoot.crt.pem |
oms | Parameters of group oms are all related to the configuration of IOM. | |
oms.publicUrl | The publicly accessible base URL of IOM which could be the DNS name of the load balancer, etc. It is used internally for link generation. | https://localhost |
oms.mailResourcesBaseUrl | The base path for e-mail resources that are loaded from the e-mail client, e.g., images or stylesheets. Also, see Concept - IOM Customer E-Mails . | https://localhost/mailimages/customers |
oms.jwtSecret | The shared secret for JSON Web Token (JWT) creation/validation. JWTs will be generated with the HMAC algorithm (HS256). Intershop strongly recommends changing the default shared secret used for JSON Web Token creation/validation. To secure the JWT, a key of the same size as the hash output or larger must be used with the JWS HMAC SHA-2 algorithms (i.e., 256 bits for "HS256"), see JSON Web Algorithms (JWA), 3.2. HMAC with SHA-2 Functions. | length_must_be_at_least_32_chars |
oms.jwtSecretKeyRef | Instead of storing the JWT secret as plain text in the values file, a reference to a key within a secret can be used. For more information, see section References to entries of Kubernetes secrets. | |
oms.archiveOrderMessageLogMinAge | Number of days after which the entries in table "OrderMessageLogDO" should be exported. Exported data are stored under share/archive. | "90" |
oms.deleteOrderMessageLogMinAge | Number of days after which the entries in table "OrderMessageLogDO" will definitely be deleted in order to reduce the table size. Must be greater than oms.archiveOrderMessageLogMinAge. | "180" |
oms.archiveShopCustomerMailMinAge | Number of days after which the entries in table "ShopCustomerMailTransmissionDO" should be exported (Quartz job "ShopCustomerMailTransmissionArchive") and the column "message" set to 'deleted' in order to reduce the table size. Exported data are stored under share/archive. | "1826" |
oms.archiveShopCustomerMailMaxCount | Maximum number of entries in table "ShopCustomerMailTransmissionDO" to be exported per run of the Quartz job "ShopCustomerMailTransmissionArchive". The export will not take place if this property and archive_ShopCustomerMailMinAge are not set. Minimum accepted value: 10. | "10000" |
oms.deleteShopCustomerMailMinAge | The number of days after which the entries in table "ShopCustomerMailTransmissionDO" will definitely be deleted in order to reduce the table size (Quartz job "ShopCustomerMailTransmissionArchive"). | "2190" |
oms.secureCookiesEnabled | If set to true, cookies are marked as secure and are only transmitted over HTTPS connections. | < IOM Helm charts 1.5.0: false; >= IOM Helm charts 1.5.0: true |
oms.execBackendApps | If set to true, backend applications are executed on this IOM installation. | true |
oms.db | Group oms.db bundles all parameters that are required to access the IOM database. General information required to connect to the PostgreSQL server is stored in group pg . | |
oms.db.name | The name of the IOM database. | oms_db |
oms.db.user | The IOM database user. | oms_user |
oms.db.userSecretKeyRef | Instead of storing the name of the user as plain text in the values file, a reference to a key within a secret can be used. For more information, see section References to entries of Kubernetes secrets. | |
oms.db.passwd | The password of the IOM database user. | OmsDB |
oms.db.passwdSecretKeyRef | Instead of storing the password as plain text in the values file, a reference to a key within a secret can be used. For more information, see section References to entries of Kubernetes secrets. | |
oms.db.hostlist | A comma-separated list of database servers. Each server entry consists of a hostname and port, separated by a colon. Setting the port is optional; if not set, the standard port 5432 will be used. | |
oms.db.connectionMonitor | Parameters in group oms.db.connectionMonitor bundle the configuration of a Kubernetes cronjob that provides connection monitoring messages. | |
oms.db.connectionMonitor.enabled | Enables/disables the Kubernetes cronjob providing the connection monitoring messages. | false |
oms.db.connectionMonitor.schedule | Controls the frequency of the Kubernetes cronjob providing the connection monitoring messages. | "*/1 * * * *" |
oms.db.connectTimeout | Controls the connect timeout of database connections (JDBC- and psql-initiated connections). The value is defined in seconds. A value of 0 means to wait infinitely. | 10 |
oms.smtp | Parameters in group oms.smtp bundle the configuration of the SMTP server used by IOM to send e-mails. If an integrated SMTP server is enabled (mailhog.enabled set to true), these values are overwritten in order to use the integrated server. | |
oms.smtp.host | The hostname of the mail server IOM uses to send e-mails. | mail-service |
oms.smtp.port | The port of the mail server IOM uses to send e-mails. | "1025" |
oms.smtp.user | The user name for mail server authentication. | |
oms.smtp.userSecretKeyRef | Instead of storing the user name as plain text in the values file, a reference to a key within a secret can be used. For more information, see section References to entries of Kubernetes secrets. | |
oms.smtp.passwd | The password for mail server authentication. | |
oms.smtp.passwdSecretKeyRef | Instead of storing the password as plain text in the values file, a reference to a key within a secret can be used. For more information, see section References to entries of Kubernetes secrets. | |
livenessProbe | Group of parameters to fine-tune the liveness probe of Kubernetes. The basic kind of probe is fixed and cannot be changed. | |
livenessProbe.periodSeconds | How often (in seconds) to perform the probe. Minimum value is 1. | 10 |
livenessProbe.initialDelaySeconds | Number of seconds after the container has started before liveness probes are initiated. Minimum value is 0. | 60 |
livenessProbe.timeoutSeconds | Number of seconds after which the probe times out. Minimum value is 1. | 5 |
livenessProbe.failureThreshold | When a probe fails, Kubernetes will try failureThreshold times before giving up. Giving up in case of the liveness probe means restarting the container. Minimum value is 1. | 3 |
readinessProbe | Group of parameters to fine-tune the readiness probe of Kubernetes. The basic kind of probe is fixed and cannot be changed. | |
readinessProbe.periodSeconds | How often (in seconds) to perform the probe. Minimum value is 1. | 10 |
readinessProbe.initialDelaySeconds | Number of seconds after the container has started before readiness probes are initiated. Minimum value is 0. | 60 |
readinessProbe.timeoutSeconds | Number of seconds after which the probe times out. Minimum value is 1. | 8 |
readinessProbe.failureThreshold | When a probe fails, Kubernetes will try failureThreshold times before giving up. Giving up in case of the readiness probe means the pod is marked as not ready. Minimum value is 1. | 1 |
readinessProbe.successThreshold | Minimum consecutive successes for the probe to be considered successful after having failed. Minimum value is 1. | 1 |
jboss | Parameters of group jboss are all related to the configuration of WildFly/JBoss. | |
jboss.javaOpts | Defines the value of the JAVA_OPTS environment variable, which is passed to the WildFly application server. The default value used by Helm charts 1.5.0 and newer means that you no longer have to care about Java memory settings: just set the memory size in parameter resources and the JVM will recognize this and adapt its memory configuration accordingly. | < IOM Helm charts 1.5.0: … ; >= IOM Helm charts 1.5.0: … |
jboss.opts | Additional command-line arguments to be used when starting the WildFly application server. | |
jboss.xaPoolsizeMin | The minimum value of the pool-size of XA-datasources. | "50" |
jboss.xaPoolsizeMax | The maximum value of the pool-size of XA-datasources. | "125" |
jboss.activemqClientPoolSizeMax | Maximum size of the ActiveMQ client thread pool. | "50" |
jboss.nodePrefix | Prefix used to construct the names of WildFly nodes. If jboss.nodePrefix is not set, node names are derived automatically. There are two use-cases which might make it necessary to define jboss.nodePrefix explicitly. | |
log | Parameters of group log are all related to the configuration of the logging of IOM. | |
log.access.enabled | Controls the creation of access log messages. Allowed values are: true, false | true |
log.level.scripts | Controls the log-level of all shell-scripts running in one of the IOM-related containers. Allowed values are: ERROR, WARN, INFO, DEBUG | INFO |
log.level.iom | Controls log-level of IOM log-handler, which covers all Java-packages beginning with bakery, com.intershop.oms, com.theberlinbakery, org.jboss.ejb3.invocation. Allowed values are: FATAL, ERROR, WARN, INFO, DEBUG, TRACE, ALL | WARN |
log.level.hibernate | Controls the log-level of the HIBERNATE log-handler, which covers all Java-packages beginning with org.hibernate. Allowed values are: FATAL, ERROR, WARN, INFO, DEBUG, TRACE, ALL | WARN |
log.level.quartz | Controls log-level of QUARTZ log-handler, which covers all Java-packages beginning with org.quartz. Allowed values are: FATAL, ERROR, WARN, INFO, DEBUG, TRACE, ALL | WARN |
log.level.activeMQ | Controls the log-level of the ACTIVEMQ log-handler, which covers all Java-packages beginning with org.apache.activemq. Allowed values are: FATAL, ERROR, WARN, INFO, DEBUG, TRACE, ALL | WARN |
log.level.console | The CONSOLE handler has no explicit assignments of Java packages. It is assigned to the root-logger and therefore also handles all Java packages that are not assigned to any other handler. Allowed values are: FATAL, ERROR, WARN, INFO, DEBUG, TRACE, ALL | WARN |
log.level.customization | Another handler without package assignments is CUSTOMIZATION. In contrast to CONSOLE, this handler does not log any messages as long as no Java packages are assigned to it. The assignment of Java packages has to be done in the project configuration and is described in Guide - IOM Standard Project Structure | Configuration of Logging. Allowed values are: FATAL, ERROR, WARN, INFO, DEBUG, TRACE, ALL | WARN |
log.metadata | Group of parameters that add metadata to log messages. Note: Deprecated since IOM Helm Charts 1.3.0. Datadog will inject the according information in the future, without the need to loop it through IOM. | |
log.metadata.tenant | The name of the tenant, added to every log message. Example: Intershop. Note: Deprecated since IOM Helm Charts 1.3.0. | company-name |
log.metadata.environment | The name of the environment, added to every log message. Example: production. Note: Deprecated since IOM Helm Charts 1.3.0. | system-name |
log.rest | This parameter can hold a list of operation-IDs of REST interfaces. If the operation-ID of a REST interface is listed here, information about the request and response of according REST calls is written into DEBUG messages. Operation-IDs are part of the YAML specification of the IOM REST interfaces. | [] |
datadogApm | Group datadogApm bundles the parameters required to configure Datadog Application Performance Monitoring (APM). | |
datadogApm.enabled | This parameter is mapped to the environment variable DD_APM_ENABLED. For more information, please consult the official Datadog documentation. If set to true, APM tracing is enabled. | false |
datadogApm.backendOnly | If set to true, APM tracing is enabled for backend processes only. | true |
datadogApm.traceAgentHost | This parameter is mapped to the environment variable DD_AGENT_HOST. For more information, please consult the official Datadog documentation. Normally this environment variable is injected with the right value by the locally installed Datadog daemon-set. | |
datadogApm.traceAgentPort | This parameter is mapped to the environment variable DD_TRACE_AGENT_PORT. For more information, please consult the official Datadog documentation. Normally this environment variable is injected with the right value by the locally installed Datadog daemon-set. | |
datadogApm.traceAgentTimeout | This parameter is mapped to the environment variable DD_TRACE_AGENT_TIMEOUT. For more information, please consult the official Datadog documentation. | |
datadogApm.logsInjection | This parameter is mapped to the environment variable DD_LOGS_INJECTION. For more information, please consult the official Datadog documentation. | false |
datadogApm.debug | This parameter is mapped to the environment variable DD_TRACE_DEBUG. For more information, please consult the official Datadog documentation. | false |
datadogApm.startupLogs | This parameter is mapped to the environment variable DD_TRACE_STARTUP_LOGS. For more information, please consult the official Datadog documentation. | true |
datadogApm.tags | This parameter is mapped to the environment variable DD_TAGS. For more information, please consult the official Datadog documentation. | |
datadogApm.serviceMapping | This parameter is mapped to the environment variable DD_SERVICE_MAPPING. For more information, please consult the official Datadog documentation. | |
datadogApm.writerType | This parameter is mapped to the environment variable DD_WRITER_TYPE. For more information, please consult the official Datadog documentation. | |
datadogApm.partialFlushMinSpan | This parameter is mapped to the environment variable DD_TRACE_PARTIAL_FLUSH_MIN_SPANS. For more information, please consult the official Datadog documentation. | |
datadogApm.dbClientSplitByInstance | This parameter is mapped to the environment variable DD_TRACE_DB_CLIENT_SPLIT_BY_INSTANCE. For more information, please consult the official Datadog documentation. | |
datadogApm.healthMetricsEnabled | This parameter is mapped to the environment variable DD_TRACE_HEALTH_METRICS_ENABLED. For more information, please consult the official Datadog documentation. | false |
datadogApm.servletAsyncTimeoutError | This parameter is mapped to the environment variable DD_TRACE_SERVLET_ASYNC_TIMEOUT_ERROR. For more information, please consult the official Datadog documentation. | true |
datadogApm.sampleRate | This parameter is mapped to the environment variable DD_TRACE_SAMPLE_RATE. For more information, please consult the official Datadog documentation. | '1.0' |
datadogApm.jmsFetchEnabled | This parameter is mapped to the environment variable DD_JMXFETCH_ENABLED. For more information, please consult the official Datadog documentation. | true |
caas | Within the caas group of parameters, the configuration of Intershop Commerce Platform (previously known as CaaS) projects can be controlled. | |
caas.envName | Intershop Commerce Platform (previously known as CaaS) projects support different settings for different environments. caas.envName selects the environment whose settings should be applied. | env-name |
caas.importTestData | Controls the import of test data, which are part of the project. See Guide - IOM Standard Project Structure | Test Data for more information. If enabled, test-data is only imported during the installation process, not when executing an upgrade process. | false |
caas.importTestDataTimeout | Timeout in seconds for the import of test data. If the import has not finished before the according number of seconds has passed, the container will end with an error. This parameter replaces a deprecated file. | "300" |
persistence | Parameters of group persistence control how IOM's shared data is persisted. | |
persistence.storageClass | Name of an existing storage class to be used for IOM's shared data. | azurefile |
persistence.annotations | Annotations for the persistent volume claim to be created. See https://helm.sh/docs/topics/charts_hooks/ for more information about the default annotations. The default value of this parameter was changed in IOM Helm charts 1.6.0 in order to avoid deletion of the according storage on helm delete: a second annotation helm.sh/resource-policy: keep was added. | < IOM Helm charts 1.6.0 (fixed value with no ability to overwrite): … ; >= IOM Helm charts 1.6.0: … |
persistence.storageSize | Requested storage size. For more information, see official Kubernetes documentation. | 1Gi |
persistence.hostPath | For very simple installations, persistent data can be stored directly on a local disk. In this case, the path on the local host has to be stored in this parameter. | |
persistence.pvc | For transregional installations of IOM, it has to be possible to define the persistent volume claim (PVC) directly. This way, IOM's shared data can be persisted at one place by two or more IOM clusters. | |
ingress | Group ingress bundles configuration of IOM's ingress, which is required to get access to IOM from outside of Kubernetes. | |
ingress.enabled | Enables ingress for IOM. If not enabled, IOM cannot be accessed from outside of Kubernetes. | true |
ingress.className | The ingress class has to be specified by this parameter. If the integrated NGINX controller should be used to serve incoming requests, the parameter has to be set to the ingress class of the integrated controller (see ingress-nginx.controller.ingressClass). | nginx |
ingress.annotations | Annotations for the ingress. | {} |
ingress.hosts | A list of ingress hosts. The default value grants access to IOM. | < IOM Helm charts 1.5.0: … ; >= IOM Helm charts 1.5.0: … |
ingress.tls | A list of IngressTLS items | [] |
resources | Resource requests & limits. | < IOM Helm charts 1.5.0: … ; >= IOM Helm charts 1.5.0: … |
imagePullSecrets | Name of the secret to get credentials from. | [] |
nameOverride | Overwrites chart name. | |
fullnameOverride | Overwrites complete name, constructed from release, and chart name. | |
serviceAccount.create | If true , create a backend service account. Only useful if you need a pod security policy to run the backend. | true |
serviceAccount.annotations | Annotations for the service account. Only used if create is true . | {} |
serviceAccount.name | The name of the backend service account to use. If not set and create is true , a name is generated using the fullname template. Only useful if you need a pod security policy to run the backend. | |
podAnnotations | Annotations to be added to pods. | {} |
podSecurityContext | Security context policies to add to the IOM pod. | {} |
securityContext | List of required privileges. | {} |
service.type | Type of service to create. | ClusterIP |
service.port | Port to be exposed by service. | 80 |
nodeSelector | Node labels for pod assignment. | {} |
tolerations | Node taints to tolerate (requires Kubernetes >=1.6). | [] |
affinity | Node/pod affinities (requires Kubernetes >=1.6). | {} |
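To illustrate how the parameters above fit together, the following box sketches a minimal custom values file for the IOM chart. All parameter names are taken from the table above; the public URL, secret name, and database host are placeholder assumptions, not defaults.

```yaml
# Hypothetical values file excerpt; hostnames and secret names are examples.
oms:
  publicUrl: https://iom.example.com          # placeholder public URL
  jwtSecretKeyRef:                            # JWT secret kept in a Kubernetes secret
    name: iom-secrets                         # assumed secret name
    key: jwtSecret
  db:
    name: oms_db
    user: oms_user
    passwdSecretKeyRef:
      name: iom-secrets                       # assumed secret name
      key: dbPasswd
    hostlist: iom-db.example.com:5432         # assumed database host
pg:
  host: iom-db.example.com                    # assumed database host
  sslMode: require
ingress:
  enabled: true
```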
A complete list of parameters can be found here: https://github.com/codecentric/helm-charts/tree/master/charts/mailhog
The table below only lists parameters that have to be changed for different operation options of IOM.
Parameter | Description | Default Value |
---|---|---|
mailhog.enabled | Controls whether an integrated SMTP server should be used or not. This SMTP server is not intended to be used for any kind of serious IOM installation. It should only be used for demo-, CI- or similar types of setups. | false |
mailhog.resources | Resource requests & limits. | {} |
mailhog.ingress.hosts | A list of ingress hosts. | { host: mailhog.example.com, |
mailhog.ingress.tls | A list of IngressTLS items. | [] |
mailhog.ingress.annotations | Annotations for the ingress. There is one important annotation: if the integrated NGINX controller should be used to serve incoming requests, the annotation kubernetes.io/ingress.class has to be set to the ingress class of the integrated controller (see ingress-nginx.controller.ingressClass). | {} |
A complete list of parameters can be found here: https://github.com/kubernetes/ingress-nginx/tree/master/charts/ingress-nginx
The table below only lists parameters that have to be changed for different operation options of IOM and also those that must not be changed at all.
Parameter | Description | Default Value |
---|---|---|
nginx.enabled | Controls whether an integrated NGINX ingress controller should be installed or not. This ingress controller can serve two purposes: it can act as a proxy between the cluster-wide ingress controller and IOM, or it can be used as an ingress controller instead of the cluster-wide one (see nginx.proxy.enabled). | false |
nginx.proxy.enabled | Controls if the integrated NGINX ingress controller should act as a proxy between cluster-wide ingress controller and IOM, or as an ingress controller used instead of the cluster-wide one. | true |
nginx.proxy.annotations | Annotations for the ingress. | {} |
ingress-nginx.controller.replicaCount | Desired number of controller pods. | 2 |
ingress-nginx.controller.service.type | Type of controller service to create. When using the integrated NGINX controller as a proxy, | ClusterIP |
ingress-nginx.controller.extraArgs | Additional command line arguments to pass to nginx-ingress-controller. Example to increase verbosity: { v: 3 } | |
ingress-nginx.controller.config | Adds custom configuration options to Nginx https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/configmap/ | { use-forwarded-headers: "true", proxy-add-original-uri-header: "true" } |
ingress-nginx.rbac.create | If true , create & use RBAC resources. | true |
ingress-nginx.rbac.scope | If true , the scope of the RBAC resources is limited to the release namespace. | true |
ingress-nginx.controller.ingressClass | Name of the ingress class to route through this controller. | nginx-iom |
ingress-nginx.controller.scope.enabled | Limits the scope of the ingress controller. If set to true, the controller only watches ingresses within its own namespace. | true |
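As a sketch of the proxy scenario described above, the following values excerpt enables the integrated NGINX controller and routes IOM's ingress through it. The parameter names come from the tables above; the replica count is just an example.

```yaml
# Hypothetical values excerpt: integrated NGINX controller acting as a proxy
# between the cluster-wide ingress controller and IOM.
nginx:
  enabled: true
  proxy:
    enabled: true
ingress:
  className: nginx-iom      # ingress class served by the integrated controller
ingress-nginx:
  controller:
    replicaCount: 2
    service:
      type: ClusterIP       # sufficient when acting as a proxy, see above
```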
Parameter | Description | Default Value |
---|---|---|
postgres.enabled | Controls whether an integrated PostgreSQL server should be used or not. This PostgreSQL server is not intended to be used for any kind of serious IOM installation. It should only be used for demo-, CI- or similar types of setups. | false |
postgres.args | An array containing command-line arguments, which are passed to the Postgres server at start. For more information, see official PostgreSQL 11 documentation. | ["-N", "200", "-c", "max_prepared_transactions=100"] |
postgres.image.repository | Repository of the PostgreSQL image. For more information, see official Docker hub. | postgres |
postgres.image.tag | Tag of PostgreSQL image. For more information, see official Docker hub. | "11" |
postgres.image.pullPolicy | Pull policy to be applied when getting PostgreSQL Docker images. For more information, see official Kubernetes documentation. | IfNotPresent |
postgres.pg | This group of parameters bundles the information about the superuser and default database (management database, not the IOM database). This information is used to configure the Postgres server on start, but is also used by clients which require superuser access to the Postgres server. The only client that needs this kind of access is the dbaccount init-image that creates/updates the IOM database. | |
postgres.pg.user | Name of the superuser. The superuser will be created when starting the Postgres server. | postgres |
postgres.pg.userSecretKeyRef | Instead of storing the name of the user as plain text in the values file, a reference to a key within a secret can be used. For more information, see section References to entries of Kubernetes secrets. | |
postgres.pg.passwd | The password of the superuser. The password will be set when starting the Postgres server. | postgres |
postgres.pg.passwdSecretKeyRef | Instead of storing the password as plain text in values file, a reference to a key within a secret can be used. For more information, see section References to entries of Kubernetes secrets. | |
postgres.pg.db | Name of default (management) database which will be created when starting the Postgres server. | postgres |
postgres.persistence | Parameters of group postgres.persistence control whether and how the database data are persisted. | |
postgres.persistence.enabled | If set to false , data of the PostgreSQL server are not persisted at all. They are only written to memory and get lost if the Postgres pod ends. | false |
postgres.persistence.accessMode | The default value allows binding the persistent volume in read/write mode to a single pod only, which is exactly what should be done for the PostgreSQL server. For more information, see the official Kubernetes documentation. | ReadWriteOnce |
postgres.persistence.storageClass | Name of an existing storage class to be used by the PostgreSQL server. | |
postgres.persistence.annotations | Annotations to be added to the according PersistentVolumeClaim. For more information, see the official Kubernetes documentation. | {} |
postgres.persistence.storageSize | Requested storage size. For more information, see official Kubernetes documentation. | 20Gi |
postgres.persistence.hostPath | For very simple installations, persistent data can be stored directly on a local disk. In this case, the path on the local host has to be stored in this parameter. | |
postgres.resources | Resource requests & limits. | {} |
postgres.imagePullSecrets | The name of the secret to get credentials from. | [] |
postgres.nameOverride | Overwrites chart name. | |
postgres.fullnameOverride | Overwrites complete name, constructed from release, and chart name. | |
postgres.nodeSelector | Node labels for pod assignment. | {} |
postgres.tolerations | Node taints to tolerate (requires Kubernetes >=1.6). | [] |
postgres.affinity | Node/pod affinities (requires Kubernetes >=1.6). | {} |
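For demo or CI setups, the integrated PostgreSQL server could be enabled with a values excerpt like the following; as stated above, it is not intended for any kind of serious IOM installation. All parameter names and values are taken from the table above.

```yaml
# Hypothetical values excerpt for a demo/CI setup.
postgres:
  enabled: true
  persistence:
    enabled: false          # data is kept in memory only and is lost when the pod ends
  pg:
    user: postgres
    passwd: postgres        # plain-text credentials are acceptable for demo setups only
    db: postgres
```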
The iom-tests sub-chart provides a very generic way to run tests on an IOM installation. The sub-chart and the according parameters are simply the pure skeleton resulting from a helm create call. The section Execution of tests, which is part of the example CI System running in Minikube on virtualized Linux, demonstrates how this could be used.
Parameter | Description | Default Value |
---|---|---|
iom-tests.enabled | Enables rollout of iom-tests sub-chart. | false |
iom-tests.env | List of environment variables, required by the tests pod. | |
iom-tests.replicaCount | Desired number of iom-tests pods. | 1 |
iom-tests.image.repository | Docker image repository. | iom-tests |
iom-tests.image.pullPolicy | Docker image pull policy. | IfNotPresent |
iom-tests.image.tag | Docker image tag. | |
iom-tests.imagePullSecrets | Name of the secret to get credentials from. | [] |
iom-tests.nameOverride | Overwrites chart name. | |
iom-tests.fullnameOverride | Overwrites complete name, constructed from release, and chart name. | |
iom-tests.serviceAccount.create | If true , create a backend service account. Only useful if you need a pod security policy to run the backend. | true |
iom-tests.serviceAccount.annotations | Annotations for the service account. Only used if create is true. | {} |
iom-tests.serviceAccount.name | The name of the backend service account to use. If not set and create is true, a name is generated using the fullname template. Only useful if you need a pod security policy to run the backend. | |
iom-tests.podAnnotations | Annotations to be added to pods. | {} |
iom-tests.podSecurityContext | Security context policies to add to the iom-tests pod. | {} |
iom-tests.securityContext | List of required privileges. | {} |
iom-tests.service.type | Type of service to create. | ClusterIP |
iom-tests.service.port | Port to be exposed by service. | 80 |
iom-tests.ingress.enabled | Enables ingress for iom-tests. It is suggested to get access to test results this way. | true |
iom-tests.ingress.className | The ingress class has to be specified by this parameter. If the integrated NGINX controller should be used to serve incoming requests, the parameter has to be set to the ingress class of the integrated controller (see ingress-nginx.controller.ingressClass). | nginx |
iom-tests.ingress.annotations | Annotations for the ingress. There is one important annotation: if the integrated NGINX controller should be used to serve incoming requests, the annotation kubernetes.io/ingress.class has to be set to the ingress class of the integrated controller. | {} |
iom-tests.ingress.hosts | A list of ingress hosts. The default value grants access to IOM. | < IOM Helm charts 1.5.0: … ; >= IOM Helm charts 1.5.0: … |
iom-tests.ingress.tls | A list of IngressTLS items. | [] |
iom-tests.containerPort | Port used by the container to provide its service. | 8080 |
iom-tests.resources | Resource requests & limits. | {} |
iom-tests.autoscaling.enabled | If true , creates Horizontal Pod Autoscaler . | false |
iom-tests.autoscaling.minReplicas | If autoscaling enabled, this field sets the minimum replica count. | 1 |
iom-tests.autoscaling.maxReplicas | If autoscaling enabled, this field sets the maximum replica count. | 100 |
iom-tests.autoscaling.targetCPUUtilizationPercentage | Target CPU utilization percentage to scale. | 80 |
iom-tests.nodeSelector | Node labels for pod assignment. | {} |
iom-tests.tolerations | Node taints to tolerate (requires Kubernetes >=1.6). | [] |
iom-tests.affinity | Node/pod affinities (requires Kubernetes >=1.6). | {} |
All parameters ending with SecretKeyRef serve as an alternative way to provide secret information. Instead of storing entries as plain text in the values file, these parameters allow referencing entries within Kubernetes secrets. For more information about secrets, see the public Kubernetes documentation.
SecretKeyRef parameters require a hash structure, consisting of two entries with the following hash-keys:
- name : the name of the Kubernetes secret containing the referenced key
- key : the name of the entry within the secret
The following two boxes show an example, which consists of two parts:
- a Kubernetes secret, which contains entries for different secret values, and
- an excerpt of a values file referencing these entries.
secret, which contains entries for different secret values, andapiVersion: v1 kind: Secret metadata: name: pgsecrets type: Opaque data: pguser: cG9zdGdyZXM= pgpasswd: ZGJ1c2VycGFzc3dk
```yaml
...
# general postgres settings, required to connect to postgres server
# and root db.
pg:
  userSecretKeyRef:
    name: pgsecrets
    key: pguser
  passwdSecretKeyRef:
    name: pgsecrets
    key: pgpasswd
  db: postgres
  sslMode: prefer
  sslCompression: "1"
  sslRootCert: ...
```
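Assuming the secret shown in the first box does not exist yet, it could be created with kubectl as follows. The literal values correspond to the base64-encoded entries above; the namespace iom is an example:

```sh
kubectl create secret generic pgsecrets -n iom \
  --from-literal=pguser=postgres \
  --from-literal=pgpasswd=dbuserpasswd
```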
To improve the observability of IOM in cloud environments, a number of metrics provided by the application server as well as custom metrics are exposed by WildFly in Prometheus format. They can be collected / scraped from the “http://[pod]:9990/metrics” endpoint. From an “Ops” perspective, the most important metrics can be found in the “base” and “application” namespaces - i.e. metrics prefixed by “base_” or “application_”.
Base metrics exposed by WildFly correspond to the “Required Metrics” as defined by the microprofile-metrics specification. This includes basic JVM parameters such as uptime, thread counts and garbage collector statistics.
Application metrics are “custom” metrics exposed by the IOM platform or project code. The list below includes metrics that can be used to monitor the application/infrastructure in case there are no feasible alternatives for external monitoring.
Metric Name | Description | Intended Usage |
---|---|---|
application_iom_shared_fs_disk_usage_ratio | Disk usage ratio of the shared filesystem - float value in range [0..1] - 0 = 0%, 1 = 100% | Monitoring of the current usage of the shared filesystem - define thresholds for warning/error notifications |
application_iom_shared_fs_disk_total_bytes | Total shared filesystem size in bytes | Can be used to collect general statistics or for dashboards - available vs. total size |
application_iom_shared_fs_disk_used_bytes | Used shared filesystem size in bytes | Can be used to collect general statistics or for dashboards - available vs. total size |
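How these metrics are scraped depends entirely on the monitoring setup in place. As a minimal sketch, assuming a plain Prometheus installation with static targets, a scrape configuration for the endpoint mentioned above could look like this; the job name and target address are placeholders:

```yaml
# Hypothetical Prometheus scrape configuration for the WildFly metrics endpoint.
scrape_configs:
  - job_name: iom-wildfly              # arbitrary job name
    metrics_path: /metrics             # endpoint on the WildFly management port
    static_configs:
      - targets:
          - demo-iom-0.iom:9990        # placeholder pod address; port 9990 as stated above
```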
The ideal configuration mainly depends on the server resources and on the activity. Therefore, we can only provide a general guideline. The configuration ranges indicated below may not be applicable in all cases, especially on small systems. These values are intended for a mid-size system with about 32 GB RAM and 24 cores.
If PostgreSQL is used as a service (e.g. Azure Database for PostgreSQL servers), not all PostgreSQL server parameters can be set. When using a service, the method of how to change PostgreSQL server parameters might be different, too.
To achieve the best performance, almost all the required data (tables and indexes) for the ongoing workload should be able to reside within the file system cache. Monitoring the I/O activity will help to identify insufficient memory resources.
IOM is built with Hibernate as an API between the application logic and the database. This mainly results in strong OLTP activity with a large number of tiny SQL statements. Larger statements occur during import/export jobs and for some OMT search requests.
The following main parameters in $PGDATA/postgresql.conf should be adapted, see PostgreSQL 12 | Chapter 19. Server Configuration.
You can consider PGConfig 2.0 as a guideline (using the OLTP Model).
Some aspects of data reliability are discussed in PostgreSQL 12 | Chapter 29. Reliability and the Write-Ahead Log. Understanding VACUUM is also essential when configuring/monitoring Postgres, see PostgreSQL 12 | Chapter 24. Routine Database Maintenance Tasks.
Parameter | Description |
---|---|
max_connections | The number of concurrent connections from the application is controlled by the connection pool configuration of the application server (see jboss.xaPoolsizeMin and jboss.xaPoolsizeMax); size max_connections accordingly. Info: Highly concurrent connections have a negative impact on performance. It is more efficient to queue the requests than to process them all in parallel. |
max_prepared_transactions | Required for IOM installations. Set its value to about 150% of max_connections . |
shared_buffers | Between 1/4 and 1/3 of the total RAM, but not more than about 8 GB. Otherwise, the cache management will use too many resources. The remaining RAM is more valuable as a file system cache. |
work_mem | Higher work_mem can increase performance significantly. The default is way too low. Consider using 100-400 MB. |
maintenance_work_mem | Increase the default similarly to work_mem to favor quicker vacuums. With IOM, this parameter will be used almost exclusively for this task (unless you also set autovacuum_work_mem ). Consider something like 2% of your total RAM per autovacuum worker (see autovacuum_max_workers ), e.g., 32 GB RAM * 2% * 3 workers = 2 GB. |
vacuum_cost_* | The feature can stay disabled at the beginning. You should keep an eye on the vacuum activity under high load. |
wal_level | Depends on your backup, recovery, and failover strategy. Should be at least archive . |
wal_sync_method | Depends on your platform, check PostgreSQL 12 | 19.5. Write Ahead Log | wal_sync_method (enum). |
max_wal_size | 8 (small system) - 128 (large system) |
max_parallel_workers (since Postgres 9.6) | 0 |
checkpoint_completion_target | Use 0.8 or 0.9 . |
archive_* and REPLICATION | Depends on your backup & failover strategy. |
random_page_cost | The default (4 ) is usually too high. Better choose 2.5 or 3 . |
effective_cache_size | Indicates the expected size of the file system cache. On a dedicated server: should be about total_RAM - shared_buffers - 1GB. |
log_min_duration_statement | Set it between |
log_filename | Better use an explicit name to help when communicating. Not applicable if the PostgreSQL server is running in Kubernetes since all messages are written to stdout in this case. |
log_rotation_age | Set it to 60 min or less. Not applicable if the PostgreSQL server is running in Kubernetes since all messages are written to stdout in this case. |
log_line_prefix | Better use a more verbose format than the default, e.g., %m|%a|%c|%p|%u|%h| . |
log_lock_waits | Activate it (=on). |
stats_temp_directory | Better redirect it to a RAM disk. |
log_autovacuum_min_duration | Set it to a few seconds to monitor the vacuum activity. |
idle_in_transaction_session_timeout (since Postgres 9.6) | An equivalent parameter exists for the WildFly connection pool (query-timeout), where it is set to 1 hour per default. Set idle_in_transaction_session_timeout to a larger value, e.g., 9 hours, to clean up possible leftover sessions. |
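Applied to the mid-size reference system mentioned above (about 32 GB RAM and 24 cores), these recommendations could translate into a postgresql.conf excerpt like the following. The concrete numbers are illustrative assumptions derived from the table, not mandatory values:

```
# postgresql.conf excerpt for a mid-size system (~32 GB RAM, 24 cores); values are examples
max_connections = 200                      # keep concurrency moderate
max_prepared_transactions = 300            # ~150% of max_connections
shared_buffers = 8GB                       # 1/4 - 1/3 of RAM, capped at ~8 GB
work_mem = 200MB                           # within the suggested 100-400 MB range
maintenance_work_mem = 2GB                 # ~2% of RAM x 3 autovacuum workers
wal_level = replica                        # 'replica' covers the 'archive' level since 9.6
checkpoint_completion_target = 0.9
max_parallel_workers = 0
random_page_cost = 2.5
effective_cache_size = 23GB                # total RAM - shared_buffers - ~1 GB
log_lock_waits = on
log_autovacuum_min_duration = 5s           # a few seconds, as suggested above
idle_in_transaction_session_timeout = 9h   # larger than the WildFly query-timeout of 1 hour
```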
The database initialization made by the dbaccount image creates a user and database using the system-wide default tablespace pg_default. If you want to use a custom tablespace, you have to create it prior to the database initialization, see PostgreSQL: Documentation: 12: CREATE TABLESPACE.
To make the database initialization process aware of this newly created tablespace, the parameter dbaccount.tablespace has to be set to its name. If this is done, this tablespace will be set as the default tablespace for the IOM database user and for the IOM database during the initialization process.
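For example, a custom tablespace could be created like this before running the database initialization. The tablespace name and directory are assumptions; the directory must already exist and be owned by the postgres OS user:

```sql
-- run as PostgreSQL superuser before the dbaccount init-container starts
CREATE TABLESPACE iom_tablespace LOCATION '/var/lib/postgresql/tablespaces/iom';
```

The parameter dbaccount.tablespace would then be set to iom_tablespace.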
All database clients and the IOM database have to use the same timezone. For this reason, all IOM Docker images are configured on OS-level to use the timezone Etc/UTC. The process executed by the dbaccount init-image sets this timezone for the IOM database user as well.
The locale of database clients and the locale of the IOM database have to be identical. For this reason, all IOM Docker images set the environment variable LANG to en_US.utf8.
The according setting on the database is made by the dbaccount init-image. Using parameter dbaccount.options, it is possible to configure this process.
When the dbaccount init-image creates the IOM database, using the wrong Encoding, Collate, or Ctype is the most common reason for a failed initialization of the IOM database. The according values have to be exactly identical to the values used by the template databases. Hence, if there are any problems with Encoding, Collate, or Ctype when creating the IOM database, the existing databases should be listed to get the correct values. To do so, just use the psql database client with parameter -l to list them.
The following box shows how to do this after an initialization error if IOM is running on Docker-Desktop.
```sh
# get name of PostgreSQL pod
kubectl get pods -n iom
NAME                                             READY   STATUS       RESTARTS   AGE
demo-ingress-nginx-controller-6c6f5b88cc-6wsfh   1/1     Running      0          67s
demo-iom-0                                       0/1     Init:Error   3          67s
demo-mailhog-5d7677c7c5-zl8gl                    1/1     Running      0          67s
demo-postgres-96676f4b-mt8nl                     1/1     Running      0          67s

# execute psql -U postgres -l within PostgreSQL pod
kubectl exec demo-postgres-96676f4b-mt8nl -n iom -t -- psql -U postgres -l
                                 List of databases
   Name    |  Owner   | Encoding |  Collate   |   Ctype    |   Access privileges
-----------+----------+----------+------------+------------+-----------------------
 postgres  | postgres | UTF8     | en_US.utf8 | en_US.utf8 |
 template0 | postgres | UTF8     | en_US.utf8 | en_US.utf8 | =c/postgres          +
           |          |          |            |            | postgres=CTc/postgres
 template1 | postgres | UTF8     | en_US.utf8 | en_US.utf8 | =c/postgres          +
           |          |          |            |            | postgres=CTc/postgres
(3 rows)
```
In some circumstances, the search path for database objects has to be extended. The search path is set by the dbaccount init-image. This process can be configured by parameter dbaccount.searchPath.
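Taken together, the initialization aspects described in this and the previous sections map to dbaccount parameters in the values file. The following sketch is illustrative only: the parameter names are documented above, but the value formats are assumptions and have to be checked against the actual chart documentation:

```yaml
# Hypothetical dbaccount excerpt of a values file; value formats are assumptions.
dbaccount:
  tablespace: iom_tablespace               # custom tablespace created beforehand (see above)
  searchPath: '"$user", public, custom'    # placeholder extended search path
  options: "LC_COLLATE='en_US.utf8' LC_CTYPE='en_US.utf8'"  # placeholder locale options
```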
The information provided in the Knowledge Base may not be applicable to all systems and situations. Intershop Communications will not be liable to any party for any direct or indirect damages resulting from the use of the Customer Support section of the Intershop Corporate Web site, including, without limitation, any lost profits, business interruption, loss of programs or other data on your information handling system.