This guide is addressed to administrators and software developers who want to operate IOM. It enables them to understand the components of IOM, how to configure them, and how to run processes like installations and updates.
For a technical overview, please see References.
Wording | Description |
---|---|
Docker | An operating system-level virtualization software. See also Kubernetes and Helm. |
Helm | A package manager for Kubernetes. See also Docker. |
CLI | Command Line Interface |
IOM | The abbreviation for Intershop Order Management |
JBoss | Synonym for WildFly (former name of the WildFly application server) |
Kubernetes | An open-source system for automating deployment, scaling, and management of containerized applications. See also Docker and Helm. |
OMS | The abbreviation for Order Management System, the technical name of IOM |
URL | Uniform Resource Locator |
WildFly | The application server that IOM runs on |
Production systems for Intershop Order Management (IOM) are usually provided as a service in the Azure Cloud. This service is part of the corresponding Intershop Commerce Platform. Non-production environments require separate agreements with Intershop.
To adapt the software to specific customer requirements and/or customer-specific environments, it is also possible to operate IOM (for example for corresponding CI environments, test systems, etc.) outside the Azure Cloud and independently of the Azure Kubernetes Service (AKS). In support of this, this document is intended for IOM administrators and software developers.
The exact required version of Kubernetes can be found in the system requirements (see References).
IOM requires a Kubernetes runtime environment. Intershop cannot provide support on how to set up, maintain, or operate a Kubernetes runtime environment.
When using the Intershop Commerce Platform, Kubernetes is included. In this case, Intershop is fully responsible for setting up, maintaining, and operating the Kubernetes cluster as part of the Intershop Commerce Platform.
The exact required version of Helm can be found in the system requirements (see References).
IOM requires Helm to be operated in a Kubernetes environment. Intershop cannot provide support on how to set up and use Helm properly.
When using the Intershop Commerce Platform, Helm is included. In this case, Intershop is fully responsible for setting up and using Helm as part of the Intershop Commerce Platform.
The exact requirements of the mail server can be found in the system requirements (see References).
IOM requires an existing mail server that processes e-mails sent from IOM via the SMTP protocol. Intershop cannot provide support on how to set up, maintain, or operate a mail server. A mail server is not part of the Intershop Commerce Platform.
The exact requirements of the PostgreSQL server can be found in the system requirements (see References).
IOM requires a PostgreSQL database hosted by a PostgreSQL database server. Intershop cannot provide support on how to set up and operate a PostgreSQL server. Some configuration hints are given in section PostgreSQL Server Configuration of this document.
When using the Intershop Commerce Platform, a PostgreSQL database is included. In this case, Intershop is fully responsible for setting up and maintaining the database as well as setting up and operating the corresponding PostgreSQL server as part of the Intershop Commerce Platform.
In order to understand this document, it is essential to know some basic tools and concepts. It is not the goal of this document to teach you all these tools and concepts. However, it is intended to provide an insight into how these tools and concepts are used in the context of Intershop Order Management.
Kubernetes is an open-source container-orchestration system for automating computer application deployment, scaling, and management. Many cloud services offer a Kubernetes-based platform or infrastructure as a service (PaaS or IaaS) on which Kubernetes can be deployed as a platform-providing service. Many vendors also provide their own branded Kubernetes distributions. (https://en.wikipedia.org/wiki/Kubernetes).
Since Kubernetes is a standard for cloud operations, using it for IOM promises the best compatibility with a wide range of cloud providers. Nevertheless, functionality is guaranteed for Microsoft Azure Kubernetes service as part of the Intershop Commerce Platform only. You can use other environments at your own risk.
A full description of Kubernetes can be found at https://kubernetes.io/docs/home/.
Kubectl is a command-line interface to control Kubernetes clusters. It is part of Kubernetes, see https://kubernetes.io/docs/reference/kubectl/overview/.
Since it is a client that runs on the machine used to control the Kubernetes cluster, it has to be installed separately. For this reason, it is listed as a separate tool. Strictly speaking, it is not required to operate IOM, but it is used in the section Examples of this document to view the status of Kubernetes objects.
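For example, once kubectl is configured for your cluster, a quick check could look like this (the namespace iom is the one used in the examples below):

```sh
# show client and server version of Kubernetes
kubectl version

# show basic information about the cluster
kubectl cluster-info

# list all pods of an IOM installation in namespace "iom"
kubectl get pods -n iom
```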
Helm (https://helm.sh) sits on top of Kubernetes. Helm is a tool to manage the life cycle (install, upgrade, rollback, uninstall) of complex Kubernetes applications. To do so, it enables the development and provision of so-called Helm charts, which are basically descriptions of Kubernetes objects, combined with a templating and scripting language.
IOM is provided in the form of Docker images. These images can be used directly, as shown in section Examples of this document, or can be the base for further customization in the context of projects.
The images are available at docker.intershop.de (e.g., docker.intershop.de/intershophub/iom and docker.intershop.de/intershophub/iom-dbaccount, as used in the examples below).
Note
Adapt the tag (version number) if you use a newer version of IOM. For a full list of available versions see Overview - IOM Public Release Notes.
docker.intershop.de is a private Docker registry. Private Docker registries require authentication and sufficient rights to pull images from them. The corresponding authentication data can be passed in a Kubernetes secret object, which has to be set using the Helm parameter imagePullSecrets.
The document Pull an Image from a Private Registry from Kubernetes documentation explains in general how to create Kubernetes secret objects, suitable to authenticate at a private Docker registry. Pull images from an Azure container registry to a Kubernetes cluster from Microsoft Azure documentation explains how to apply this concept to private Azure container registries.
The following box shows an example of how to create a Kubernetes secret to be used to access the private Docker registry docker.intershop.de. The name of the newly created secret is intershop-pull-secret, which has to be passed to the Helm parameter imagePullSecrets. The secret has to reside within the same Kubernetes namespace as the IOM cluster that uses it.
```sh
kubectl create secret docker-registry intershop-pull-secret \
    --docker-server=docker.intershop.de \
    --docker-username='<user name>' \
    --docker-password='<password>' \
    -n <kubernetes namespace>
```
IOM Helm-charts are a package containing the description of all Kubernetes objects required to run IOM in Kubernetes. IOM Helm-charts are provided by Intershop at https://repository.intershop.de/helm. To use IOM Helm-charts, you have to execute the following commands:
```sh
# Add all Intershop charts
helm repo add intershop https://repository.intershop.de/helm \
    --password '<password>' \
    --username '<user name>'

# Now the repo can be used to install IOM.
# The following command was taken from the examples section. Without the preconditions described there, it will not work.
# It is shown here only to demonstrate how to reference the IOM Helm-chart after adding the according repository.
helm install demo intershop/iom --values=values.yaml --namespace iom --timeout 20m0s --wait
```
The following illustration shows the most important components and personas when operating IOM with Helm. The project owner has to define a values file (available configuration parameters are explained in section Parameters), which can be used along with IOM Helm-charts to install, upgrade, rollback, and uninstall IOM within a Kubernetes runtime environment.
This is a very generalized view which has some restrictions when used with IOM. The next section explains these restrictions in detail.
IOM uses a database that is constantly evolving along with new releases of IOM. For this reason, every version of IOM brings its own migration scripts, which lift the database to the new level. In general, old versions of the IOM database are not compatible with new versions of IOM application servers and vice versa. Also, projects change the database when rolling out new or changed project configurations.
Helm does not know anything about changes inside the database. When rolling back a release, only the changes in values and IOM Helm-charts are rolled back. To avoid inconsistencies and failures (e.g., rollback to an old IOM application server version after updating the database structures to the new version), it is strongly recommended to avoid rollbacks in general.
The same reasons that make the rollback process problematic also limit the upgrade process.
When executing the upgrade process, the standard behavior of Helm is to keep the application always online. The different IOM application servers are updated one after another. In case of incompatible database changes, this would lead to problems, since one of the following cases is unavoidable: an old IOM application server tries to work with an already updated IOM database or vice versa.
To overcome this problem, IOM Helm-charts provide the parameter downtime, which controls the behavior of the upgrade process. If downtime is set to true, the whole IOM cluster is stopped during the upgrade process. The IOM database is upgraded first, and after that the IOM application servers are started again. This setting should always be used when upgrading to a new IOM version unless stated otherwise.
Within the context of projects, many changes can be applied to the running IOM cluster without requiring a downtime. In this case, the value of downtime has to be set to false before starting the upgrade process.
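As a sketch, an upgrade to a new IOM version with downtime only requires the according setting in the values file before running the standard Helm upgrade command (the release name demo and namespace iom are taken from the examples below):

```yaml
# values.yaml: stop the whole IOM cluster during the upgrade (default behavior)
downtime: true
```

```sh
helm upgrade demo intershop/iom --values=values.yaml --namespace iom --timeout 20m0s --wait
```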
The previous section IOM Helm-Charts gave a general view on Helm, the IOM Helm-charts, and the according processes. The Intershop Commerce Platform environment modifies this concept a little bit, as shown in the following illustration.
Project owners are not able to trigger any processes directly. They can only manage a sub-set of values to be applied along with the IOM Helm-charts. The processes are triggered by a Flux controller that observes the Git repository holding the values files. Depending on the type of IOM installation (INT, Pre-PROD, PROD, etc.), processes might need to be triggered manually by Intershop Operations. Intershop Operations also maintains a values file, which has higher precedence than the file of the project owner. This ensures that the project owner is not able to change any critical settings. Which settings are affected depends on the type of IOM installation (INT, Pre-PROD, PROD, etc.). For example, a project owner should never be able to set the log level to DEBUG or TRACE on PROD environments.
In short, this concept is well known as GitOps.
Despite the fact that Kubernetes and IOM Helm-charts make it very easy to set up and upgrade IOM installations, a reference to all the existing parameters that are available to control IOM Helm-charts is a very uncomfortable starting point. For this reason, three typical usage scenarios were chosen to provide an easy-to-understand entry point into IOM Helm-charts. All examples were designed in a way that the Intershop Commerce Platform is not required. The following examples strictly follow the concept described in section IOM Helm-Charts.
In order to understand the optional and required components defined in IOM Helm-Charts, it is strongly recommended to read Guide - Intershop Order Management - Technical Overview first.
Requirements and characteristics are numbered. You will find these numbers also in the values file listed below in order to see the relation between requirement and current configuration.
This values file cannot be copied as is. Before it can be used, persistence.hostPath and postgres.persistence.hostPath have to be changed to existing paths, which are shared with Docker Desktop.
The values file contains minimal settings only, except for dbaccount.resetData, which is listed explicitly even though it only contains the default value.
```yaml
# use one IOM server only (requirement #8).
replicaCount: 1

imagePullSecrets:
  - name: intershop-pull-secret

image:
  repository: "docker.intershop.de/intershophub/iom"
  tag: "4.0.0"

# remove resource binding for cpu. This makes the system significantly
# faster, especially the startup.
resources:
  limits:
    cpu:
  requests:
    cpu:

# configure ingress to forward requests for host "localhost" to IOM (requirements #9, #11).
# since the integrated NGINX controller should be used, its class has to be set explicitly.
ingress:
  enabled: true
  className: nginx-iom
  hosts:
    - host: localhost
      paths:
        - path: "/"
          pathType: Prefix

# IOM has to know its own public URL
oms:
  publicUrl: "https://localhost/"

# store data of shared file system in a local directory (requirements #6, #7)
persistence:
  hostPath: /Users/username/iom-share

# create IOM database and according database user before starting IOM.
# do not reset existing data during installation (requirement #3)
dbaccount:
  enabled: true
  resetData: false # optional, since false is the default
  image:
    repository: "docker.intershop.de/intershophub/iom-dbaccount"
    tag: "1.4.0"

# use integrated PostgreSQL server (requirement #1).
# store database data persistently in a local directory (requirement #2).
postgres:
  enabled: true
  persistence:
    enabled: true
    hostPath: /Users/username/pgdata

# enable integrated NGINX ingress controller.
# this controller should not act as a proxy (requirement #9).
nginx:
  enabled: true
  proxy:
    enabled: false

# configure integrated NGINX ingress controller.
# one instance of NGINX is sufficient for the demo scenario (requirement #10).
# set type to LoadBalancer to be accessible from the public network (requirement #9).
ingress-nginx:
  controller:
    replicaCount: 1
    service:
      type: LoadBalancer

# enable integrated SMTP server (requirement #4).
# configure ingress to forward requests for any host to the Mailhog GUI (requirement #9).
# since the ingress for IOM defined a more specific rule, the Mailhog GUI can be reached using any hostname except localhost.
# since the integrated NGINX controller should be used, its class has to be set explicitly.
mailhog:
  enabled: true
  ingress:
    enabled: true
    className: nginx-iom
    hosts:
      - host:
        paths:
          - path: "/"
            pathType: Prefix
```
Windows: IOM Share
The current example also works when using Docker Desktop on Windows. When working on Windows, you have to take care to use Unix-style path names, e.g., if the IOM share is located at C:\Users\username\iom-share, the according entry in values.yaml has to be noted as /c/Users/username/iom-share.
Windows: persistent PostgreSQL data
Setting postgres.persistence.hostPath to a local directory does not work on Windows, even if the directory is correctly shared with Docker Desktop. When starting, the PostgreSQL server tries to take ownership of the data directory, which does not work in this case. There are two possibilities to overcome this problem:
- Do not persist the database data at all by setting postgres.persistence.enabled to false.
- Use a named Docker volume and set postgres.persistence.hostPath to its mount point, as shown in the box below.
```sh
# create docker volume "iom-pgdata"
docker volume create --name=iom-pgdata -d local

# get mount-point of newly created docker volume
# use mount-point as value for helm-parameter postgres.persistence.hostPath
docker volume inspect --format='{{.Mountpoint}}' iom-pgdata
/var/lib/docker/volumes/iom-pgdata/_data

# to remove docker volume, execute the following command
docker volume rm iom-pgdata
```
Create a file values.yaml and fill it with the content listed above. Adapt the settings of persistence.hostPath and postgres.persistence.hostPath to point to directories on your computer which are shared with Docker Desktop. After that, the installation process of IOM can be started.
# create namespace "iom" kubectl create namespace iom # install IOM into namespace "iom" helm install demo intershop/iom --values=values.yaml --namespace iom --timeout 20m0s --wait
This installation process will take some minutes to finish. In the meantime, the progress of the installation process can be observed within a second terminal window. Using kubectl, you can see the status of every Kubernetes object. For simplicity, the following example shows the status of pods only.
Open a second terminal window and enter the following commands.
```sh
# A few seconds after start of IOM, only the integrated PostgreSQL server is in "Init" phase.
# All other pods are in earlier phases.
kubectl get pods -n iom
NAME                                            READY   STATUS              RESTARTS   AGE
demo-iom-0                                      0/1     Pending             0          2s
demo-mailhog-5dd4565b98-jphkm                   0/1     ContainerCreating   0          2s
demo-ingress-nginx-controller-f5bf56d64-cp9b5   0/1     ContainerCreating   0          2s
demo-postgres-7b796887fb-j4hdr                  0/1     Init:0/1            0          2s

# After some seconds all pods except IOM are "Running" and READY (integrated PostgreSQL server,
# integrated SMTP server, integrated NGINX). IOM is in Init phase, which means the init-containers
# are currently executed.
kubectl get pods -n iom
NAME                                            READY   STATUS     RESTARTS   AGE
demo-iom-0                                      0/1     Init:1/2   0          38s
demo-mailhog-5dd4565b98-jphkm                   1/1     Running    0          38s
demo-ingress-nginx-controller-f5bf56d64-cp9b5   1/1     Running    0          38s
demo-postgres-7b796887fb-j4hdr                  1/1     Running    0          38s

# The init-container executed in the iom-pod is dbaccount. Log messages can be seen
# by executing the following command. If everything works well, the last message will announce the
# successful execution of the create_dbaccount.sh script.
kubectl logs demo-iom-0 -n iom -f -c dbaccount
...
{"tenant":"company-name","environment":"system-name","logHost":"demo-iom-0","logVersion":"1.0","appName":"iom-dbaccount","appVersion":"1.4.0","logType":"script","timestamp":"2021-01-06T11:33:17+00:00","level":"INFO","processName":"create_dbaccount.sh","message":"success","configName":null}

# When the init-container has finished successfully, the iom-pod is in "Running" state, too. But it
# is not "READY" yet. Now the IOM database is set up, and applications and project customizations
# are deployed into the WildFly application server.
kubectl get pods -n iom
NAME                                            READY   STATUS    RESTARTS   AGE
demo-iom-0                                      0/1     Running   0          1m50s
demo-mailhog-5dd4565b98-jphkm                   1/1     Running   0          1m50s
demo-ingress-nginx-controller-f5bf56d64-cp9b5   1/1     Running   0          1m50s
demo-postgres-7b796887fb-j4hdr                  1/1     Running   0          1m50s

# When all pods are "Running" and "READY", the installation process of IOM is finished.
kubectl get pods -n iom
NAME                                            READY   STATUS    RESTARTS   AGE
demo-iom-0                                      1/1     Running   0          3m20s
demo-mailhog-5dd4565b98-jphkm                   1/1     Running   0          3m20s
demo-ingress-nginx-controller-f5bf56d64-cp9b5   1/1     Running   0          3m20s
demo-postgres-7b796887fb-j4hdr                  1/1     Running   0          3m20s
```
When all pods are Running and Ready, the installation process is finished. You should check the first terminal window, where the installation process was running.
Now the web GUI of the new IOM installation can be accessed. In fact, there are two web GUIs: one for IOM and one for Mailhog. According to the configuration, all requests dedicated to localhost are forwarded to the IOM application server; any other requests are meant for the integrated SMTP server (Mailhog). Open the URL https://localhost/omt in a web browser on your Mac. After accepting the self-signed certificate (the configuration did not include a valid certificate), you will see the login page of IOM. Log in as admin/!InterShop00! to proceed.
Any other request that is not dedicated to localhost will be forwarded to Mailhog. To access the web GUI of Mailhog, open the URL https://127.0.0.1/ in your web browser. Once again you have to accept the self-signed certificate, and after that you will see the Mailhog GUI.
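If you prefer the command line, the same checks can be done with curl. This is just a sketch; the -k flag is needed to accept the self-signed certificate:

```sh
# request the IOM login page (requests for host "localhost" are forwarded to IOM)
curl -k -I https://localhost/omt/

# requests for any other hostname, e.g. the plain IP address, are forwarded to the Mailhog GUI
curl -k -I https://127.0.0.1/
```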
From a Helm perspective, the rollout of any change in values or charts is an upgrade process. The process is identical, no matter if only a simple value is changed or new Docker images of a new IOM release are rolled out. The example shown here will demonstrate how to change the log-level of the Quartz subsystem, running in the WildFly application server.
Before the start, keep the restrictions on upgrade in mind. A change of a log level is an uncritical change that can be applied without downtime. But we have decided to use a single IOM application server only (see requirement #8). When using a single IOM application server only, an upgrade process with downtime is inevitable. Hence, we do not have to think about the setting of the parameter downtime.
Modify values.yaml by adding the following lines to the file:
```yaml
log:
  level:
    quartz: INFO
```
These changes are now rolled out by running Helm's upgrade process to the existing IOM installation.
Start the upgrade process within a terminal window.
```sh
helm upgrade demo intershop/iom --values=values.yaml --namespace iom --timeout 20m0s --wait
```
The upgrade process will take some minutes before it is finished.
Enter the following commands in a second terminal window to watch the progress.
As already used in the installation process before, this example is restricted to the status of pods only.
```sh
# Only the Kubernetes object of IOM has changed. Therefore Helm only upgrades IOM; the integrated SMTP
# server, integrated PostgreSQL server, and integrated NGINX keep running unchanged. A few seconds after
# starting the upgrade process, the only existing iom-pod is stopped.
kubectl get pods -n iom
NAME                                            READY   STATUS        RESTARTS   AGE
demo-iom-0                                      1/1     Terminating   0          40m
demo-mailhog-5dd4565b98-jphkm                   1/1     Running       0          40m
demo-ingress-nginx-controller-f5bf56d64-cp9b5   1/1     Running       0          40m
demo-postgres-7b796887fb-j4hdr                  1/1     Running       0          40m

# After the iom-pod is terminated, a new iom-pod is started with the new configuration.
kubectl get pods -n iom
NAME                                            READY   STATUS    RESTARTS   AGE
demo-iom-0                                      0/1     Running   0          56s
demo-mailhog-5dd4565b98-jphkm                   1/1     Running   0          41m
demo-ingress-nginx-controller-f5bf56d64-cp9b5   1/1     Running   0          41m
demo-postgres-7b796887fb-j4hdr                  1/1     Running   0          41m

# Finally the pod is "Running" and "READY" again, which means IOM is up again.
kubectl get pods -n iom
NAME                                            READY   STATUS    RESTARTS   AGE
demo-iom-0                                      1/1     Running   0          2m40s
demo-mailhog-5dd4565b98-jphkm                   1/1     Running   0          46m
demo-ingress-nginx-controller-f5bf56d64-cp9b5   1/1     Running   0          46m
demo-postgres-7b796887fb-j4hdr                  1/1     Running   0          46m
```
The last process demonstrates how to uninstall IOM.
```sh
helm uninstall demo -n iom
release "demo" uninstalled

kubectl delete namespace iom
namespace "iom" deleted
```
Since the database data and the shared file system of IOM were stored in local directories of the current host, they still exist after uninstalling IOM. In fact, this data represents the complete state of IOM. If IOM were installed again with the same directories for the shared file system and database data, the old IOM installation would be restored.
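As a sketch, reinstalling IOM with the unchanged values file restores the previous state from these directories:

```sh
# re-create the namespace and install IOM again, reusing the old shared file system and database data
kubectl create namespace iom
helm install demo intershop/iom --values=values.yaml --namespace iom --timeout 20m0s --wait
```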
Please keep in mind that these preconditions reflect the use case described in section IOM Helm-Charts. When using the Intershop Commerce Platform, these preconditions are all covered by Intershop.
Requirements and characteristics are numbered again. You will find these numbers in the values file listed below in order to see the relation between requirement and current configuration.
The values file shown below reflects the requirements of the straight Helm approach as described in section IOM Helm-Charts to demonstrate this process in all its details. Within the Intershop Commerce Platform environment you would edit the values file only. Any further actions are triggered automatically when pushing changes made in the file.
Of course, this values file cannot be copied as it is. It references external resources and external services, which do not exist in other environments. Additionally, the hostname iom.mycompany.com needs to be replaced to match your requirements.
```yaml
# start 2 IOM application servers (requirement #1)
replicaCount: 2

# run upgrade processes without downtime (requirement #7)
downtime: false

imagePullSecrets:
  - name: project-pull-secret

# an image containing the project-specific customizations and
# configurations will be used.
image:
  repository: "project-repository/iom-project"
  tag: "1.0.0"

# increase the time that is available for initialization, migration, and
# configuration (requirement #8)
startupProbe:
  failureThreshold: 120

# configure ingress to forward requests to IOM, which are sent to
# host iom.mycompany.com. The integrated NGINX ingress controller is
# disabled per default (requirement #6).
ingress:
  enabled: true
  hosts:
    - host: iom.mycompany.com
      paths:
        - path: /
          pathType: Prefix
  tls:
    - secretName: mycompany-com-tls
      hosts:
        - iom.mycompany.com

# information about external postgresql service (requirement #2)
pg:
  host: postgres-prod.postgres.database.azure.com
  port: 5432
  userConnectionSuffix: "@postgres-prod"
  # root-database and superuser information. The very first installation initializes
  # the database of IOM. After that, this information should be removed from the values
  # file completely (and dbaccount should be disabled/removed too).
  user: postgres
  passwdSecretKeyRef:
    name: mycompany-prod-secrets
    key: pgpasswd
  db: postgres

# IOM has to know its own public URL
oms:
  publicUrl: "https://iom.mycompany.com/"
  db:
    name: oms_db
    user: oms_user
    passwdSecretKeyRef:
      name: mycompany-prod-secrets
      key: dbpasswd
  # configuration of external smtp server (requirement #4)
  smtp:
    host: smtp.external-provider.com
    port: 25
    user: my-company-prod
    passwdSecretKeyRef:
      name: mycompany-prod-secrets
      key: smtppasswd

log:
  metadata:
    tenant: mycompany
    environment: prod

project:
  envName: prod

# store data of shared file system at azurefile service (requirement #5)
persistence:
  storageClass: azurefile
  storageSize: 60G

# Create IOM database and according user before starting IOM. Creates the IOM database
# while running the install process. After that, dbaccount should be completely removed
# from the values file. If not set explicitly, data is not reset during start
# (requirement #3).
dbaccount:
  enabled: true
  image:
    repository: docker.intershop.de/intershophub/iom-dbaccount
    tag: "1.4.0"
```
Create a file values.yaml and fill it with the content listed above. Adapt the settings as required by your environment. After that, the installation process can be started.
```sh
# create namespace mycompany-iom
kubectl create namespace mycompany-iom

# install IOM into namespace mycompany-iom
helm install prod intershop/iom --values=values.yaml --namespace mycompany-iom --timeout 20m0s --wait
```
This installation process will now take some minutes to finish. In the meantime, the progress of the installation process can be observed within a second terminal window. Using kubectl, you can see the status of every Kubernetes object. For simplicity, the following example shows the status of pods only.
Just open a second terminal window and enter the following commands.
```sh
# One second after start, all pods are in very early phases.
kubectl get pods -n mycompany-iom
NAME         READY   STATUS    RESTARTS   AGE
prod-iom-0   0/1     Pending   0          1s

# A little bit later, IOM is in the initialization phase, which means the init-container is currently executed.
kubectl get pods -n mycompany-iom
NAME         READY   STATUS     RESTARTS   AGE
prod-iom-0   0/1     Init:0/1   0          24s

# After a few minutes IOM is "Running", but not "READY" yet. The init-container is finished
# now and the database is initialized, migrated, and configured; IOM and project applications are
# deployed into the WildFly application server.
kubectl get pods -n mycompany-iom
NAME         READY   STATUS    RESTARTS   AGE
prod-iom-0   0/1     Running   0          2m43s

# The first iom-pod is "Running" and "READY", which means the IOM system is usable now.
# The second iom-pod has just started and is not ready yet.
kubectl get pods -n mycompany-iom
NAME         READY   STATUS    RESTARTS   AGE
prod-iom-0   1/1     Running   0          5m35s
prod-iom-1   0/1     Running   0          10s

# Both iom-pods are "Running" and "READY". The installation of IOM is finished.
kubectl get pods -n mycompany-iom
NAME         READY   STATUS    RESTARTS   AGE
prod-iom-0   1/1     Running   0          10m
prod-iom-1   1/1     Running   0          5m49s
```
When all pods are Running and Ready, the installation process has finished. You should check the first terminal window where the installation process was started.
Now we repeat the upgrade process, which was already shown in the previous example. This simple example was chosen because from a Helm perspective, the rollout of any change in values or charts is an upgrade process. The process is identical, no matter if only a simple value is changed or if new Docker images of a new IOM release are rolled out.
The setting of the downtime parameter (see Restrictions on Upgrade) also has to be considered. A change of a log level is an uncritical change that can be applied without downtime. Since we have more than one IOM application server, the upgrade process can now be executed without downtime.
Add the following lines to the values.yaml:
```yaml
log:
  level:
    quartz: INFO
```
These changes are now rolled out by running the Helm upgrade process to the existing IOM installation. Start the process within a terminal window.
```sh
helm upgrade prod intershop/iom --values=values.yaml --namespace mycompany-iom --timeout 20m0s --wait
```
The upgrade process will take some minutes before it is finished.
In the previous section you may have noticed that the behavior of pods during the installation process is identical no matter which Kubernetes environment was used (Docker Desktop, AKS). The same applies to the upgrade process. For this reason, the box "Observe progress" will be skipped in the current section.
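If you nevertheless want to verify that the application servers are replaced one after another without downtime, the following sketch can be used. The StatefulSet name prod-iom is an assumption derived from the pod names shown above:

```sh
# watch the iom-pods being replaced one by one
kubectl get pods -n mycompany-iom --watch

# wait until the rolling update has completed
# (assumes the StatefulSet is named prod-iom, matching the pod names above)
kubectl rollout status statefulset/prod-iom -n mycompany-iom
```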
The last process demonstrates how to uninstall IOM. Please keep in mind that the uninstall process only covers the objects defined in IOM Helm-charts. In the current production example many external resources and external services are referenced. These resources and services remain untouched by the uninstall process of IOM.
```sh
# uninstall IOM release
helm uninstall prod -n mycompany-iom
release "prod" uninstalled

# delete Kubernetes namespace used for IOM
kubectl delete namespace mycompany-iom
namespace "mycompany-iom" deleted
```
Each example shown in the Examples section before used a values file to define the specific setup of IOM (see: demo, prod). The current section now describes each parameter you have already used before in detail. There are also many more parameters that were not used in the examples.
In the Examples section, you have already learned that IOM Helm-Charts also provide optional components: integrated PostgreSQL server, integrated SMTP server, integrated NGINX controller and support for the execution of tests. These optional components are covered by separate sections.
Parameter | Description | Default Value |
---|---|---|
replicaCount | The number of IOM application server instances to run in parallel. | 2 |
downtime | Controls the behavior of the upgrade process (see Restrictions on Upgrade). If set to true, the whole IOM cluster is stopped during the upgrade; the IOM database is upgraded first, and afterwards the IOM application servers are started again. | true |
image.repository | Repository of the IOM app product/project image. | docker.intershop.de/intershophub/iom |
image.pullPolicy | Pull policy, to be applied when getting IOM product/project Docker image. For more information, see the official Kubernetes documentation. | IfNotPresent |
image.tag | The tag of IOM product/project image. | 4.0.0 |
dbaccount | Parameters bundled by dbaccount are all related to the dbaccount init-container, which creates the IOM database and the according database user. Once the IOM database is created, the dbaccount entry should be removed from the values file. | |
dbaccount.enabled | Controls if the dbaccount init-container should be executed or not. If enabled, dbaccount will only be executed when installing IOM, not on upgrade operations. | false |
dbaccount.image.repository | Repository of the dbaccount image. | docker.intershop.de/intershophub/iom-dbaccount |
dbaccount.image.pullPolicy | Pull policy, to be applied when getting dbaccount Docker image. For more information, see the official Kubernetes documentation. | IfNotPresent |
dbaccount.image.tag | The tag of dbaccount image. | 1.4.0 |
dbaccount.resetData | Controls if dbaccount init-container should reset an already existing IOM database during the installation process of IOM. If set to true , existing data is deleted without backup and further warning. | false |
dbaccount.options | When creating the IOM database, more options in addition to OWNER might be required, depending on the configuration of the PostgreSQL server. See Options and Requirements of IOM database for details. | "ENCODING='UTF8' LC_COLLATE='en_US.utf8' LC_CTYPE='en_US.utf8' CONNECTION LIMIT=-1 TEMPLATE=template0" |
dbaccount.searchPath | In some circumstances, the search path for database objects has to be extended. This is the case if custom schemas are used for customizations or tests. To add more schemas to the search-path, set the current parameter to a string containing all additional schemas, separated by a comma, e.g. "tests, customschema". The additional entries are inserted at the beginning of the search-path, hence objects with the same name as standard objects of IOM are found first. | |
dbaccount.tablespace | Use the passed tablespace as default for the IOM database user and the IOM database. The tablespace has to exist, it will not be created. Section Options and Requirements of IOM database gives some more information. | |
dbaccount.resources | Resource requests & limits | {} |
config | Parameters bundled by config are all related to the config init-container. The config init-container was removed along with IOM 4.0.0; the according functionality is now executed by the IOM container itself. The config entry only remains for backward compatibility. | |
config.enabled | The config init-container was removed along with IOM 4.0.0. For backward compatibility it can still be used, but has to be enabled explicitly now. | false |
config.image.repository | Repository of the IOM config product/project image. | docker.intershop.de/intershophub/iom-config |
config.image.pullPolicy | Pull policy, to be applied when getting the IOM config product/project Docker image. For more information, see the official Kubernetes documentation. | IfNotPresent |
config.image.tag | The tag of IOM config product/project image. | |
config.resources | Resource requests & limits | {} |
oms.skipProcedures | Normally, when updating the config image of IOM, stored procedures, migration scripts, and project configuration are executed. Setting this parameter to true skips the execution of stored procedures. | false |
oms.skipMigration | Normally, when updating the config image of IOM, stored procedures, migration scripts, and project configuration are executed. Setting this parameter to true skips the execution of migration scripts. | false |
oms.skipConfig | Normally, when updating the config image of IOM, stored procedures, migration scripts, and project configuration are executed. Setting this parameter to true skips the application of the project configuration. | false |
pg | This group of parameters bundles the information required to connect the PostgreSQL server, information about the superuser, and the default database (management database, not the IOM database). Not all clients need all information: the dbaccount init-container is the only client that needs access to the PostgreSQL server as a superuser. Hence, if dbaccount is not enabled, the superuser information is not needed. If the integrated PostgreSQL server is enabled (postgres.enabled set to true), the connection parameters are set accordingly and do not need to be provided. | |
pg.user | Name of the superuser. | postgres |
pg.userSecretKeyRef | Instead of storing the name of the user as plain text in the values file, a reference to a key within a secret can be used. For more information, see section References to entries of Kubernetes secrets. | |
pg.passwd | The password of the superuser. | postgres |
pg.passwdSecretKeyRef | Instead of storing the password as plain text in the values file, a reference to a key within a secret can be used. For more information, see section References to entries of Kubernetes secrets. | |
pg.db | Name of the default (management) database. | postgres |
pg.host | The hostname of the PostgreSQL server. | postgres-service |
pg.port | Port of the PostgreSQL server. | "5432" |
pg.userConnectionSuffix | When using the Azure Database for PostgreSQL service, user names have to be extended by a suffix, beginning with '@'. For more information, refer to the official Azure Database for PostgreSQL documentation. This suffix is not a part of the user name; it has to be used only when connecting to the database. For this reason, it is kept in the separate parameter pg.userConnectionSuffix. Example: "@mydemoserver" | |
pg.sslMode | pg.sslMode has to contain one of the following values: disable , allow , prefer , require , verify-ca , verify-full . For a detailed description of settings, please see the official PostgreSQL documentation. | prefer |
pg.sslCompression | If set to "1", data sent over SSL connections will be compressed. If set to "0", compression will be disabled. | "0" |
pg.sslRootCert | Azure Database for PostgreSQL service might require verification of the server certificate, see the official Azure Database for PostgreSQL documentation. To handle this case, it is possible to pass the SSL root certificate in pg.sslRootCert. | |
oms | Parameters of group oms are all related to the configuration of IOM. | |
oms.publicUrl | The publicly accessible base URL of IOM which could be the DNS name of the load balancer, etc. It is used internally for link generation. | https://localhost |
oms.mailResourcesBaseUrl | The base path for e-mail resources that are loaded from the e-mail client, e.g., images or stylesheets. Also, see Concept - IOM Customer Emails. | https://localhost/mailimages/customers |
oms.jwtSecret | The shared secret for JSON Web Token (JWT) creation/validation. JWTs will be generated with the HMAC algorithm (HS256). Intershop strongly recommends to change the default shared secret used for the JSON Web Token creation/validation. To secure the JWT, a key of the same size as the hash output or larger must be used with the JWS HMAC SHA-2 algorithms (i.e., 256 bits for "HS256"), see JSON Web Algorithms (JWA), Section 3.2. HMAC with SHA-2 Functions. | length_must_be_at_least_32_chars |
oms.jwtSecretKeyRef | Instead of storing the JWT secret as plain text in the values file, a reference to a key within a secret can be used. For more information, see section References to entries of Kubernetes secrets. | |
oms.archiveOrderMessageLogMinAge | Number of days after which the entries in table "OrderMessageLogDO" should be exported in order to reduce the table size. Exported data are stored under share/archive. | "90" |
oms.deleteOrderMessageLogMinAge | Number of days after which the entries in table "OrderMessageLogDO" will definitely be deleted in order to reduce the table size. Must be greater than oms.archiveOrderMessageLogMinAge. | "180" |
oms.archiveShopCustomerMailMinAge | Number of days after which the entries in table "ShopCustomerMailTransmissionDO" should be exported (Quartz job "ShopCustomerMailTransmissionArchive") and the column "message" set to 'deleted' in order to reduce the table size. Exported data are stored under share/archive. | "1826" |
oms.archiveShopCustomerMailMaxCount | Maximum number of entries in table "ShopCustomerMailTransmissionDO" to be exported per run of the Quartz job "ShopCustomerMailTransmissionArchive". However, the export will not take place if this property and oms.archiveShopCustomerMailMinAge are not set. Minimum accepted value: 10. | "10000" |
oms.deleteShopCustomerMailMinAge | The number of days after which the entries in table "ShopCustomerMailTransmissionDO" will definitely be deleted in order to reduce the table size (Quartz job "ShopCustomerMailTransmissionArchive"). | "2190" |
oms.secureCookiesEnabled | If set to true, cookies are marked as secure, i.e., they are only transmitted over encrypted (HTTPS) connections. | true |
oms.execBackendApps | If set to true, the backend applications of IOM are executed by the application servers. | true |
oms.db | Group oms.db bundles all parameters which are required to access the IOM database. General information required to connect the PostgreSQL server are stored at group pg . | |
oms.db.name | The name of the IOM database. | oms_db |
oms.db.user | The IOM database user. | oms_user |
oms.db.userSecretKeyRef | Instead of storing the name of the user as plain text in the values file, a reference to a key within a secret can be used. For more information, see section References to entries of Kubernetes secrets. | |
oms.db.passwd | The password of the IOM database user. | OmsDB |
oms.db.passwdSecretKeyRef | Instead of storing the password as plain text in the values file, a reference to a key within a secret can be used. For more information, see section References to entries of Kubernetes secrets. | |
oms.db.hostlist | A comma-separated list of database servers. Each server entry consists of a hostname and port, separated by a colon. Setting the port is optional. If not set, standard port 5432 will be used. | |
oms.db.connectionMonitor | Parameters in group oms.db.connectionMonitor configure a Kubernetes cronjob that provides connection monitoring messages. | |
oms.db.connectionMonitor.enabled | Enables/disables the Kubernetes cronjob providing the connection monitoring messages. | false |
oms.db.connectionMonitor.schedule | Controls the frequency of the Kubernetes cronjob providing the connection monitoring messages. | "*/1 * * * *" |
oms.db.connectTimeout | Controls the connect timeout of database connections (jdbc- and psql-initiated connections). The value is defined in seconds. A value of 0 means to wait infinitely. | 10 |
oms.smtp | Parameters in group oms.smtp bundle the information required to connect the mail server. If an integrated SMTP server is enabled (mailhog.enabled set to true), the connection parameters are set accordingly and do not need to be provided. | |
oms.smtp.host | The hostname of the mail server IOM uses to send e-mails. | mail-service |
oms.smtp.port | The port of the mail server IOM uses to send e-mails. | "1025" |
oms.smtp.user | The user name for mail server authentication. | |
oms.smtp.userSecretKeyRef | Instead of storing the user name as plain text in the values file, a reference to a key within a secret can be used. For more information, see section References to entries of Kubernetes secrets. | |
oms.smtp.passwd | The password for mail server authentication. | |
oms.smtp.passwdSecretKeyRef | Instead of storing the password as plain text in the values file, a reference to a key within a secret can be used. For more information, see section References to entries of Kubernetes secrets. | |
startupProbe | Group of parameters to fine-tune the startup probe of Kubernetes. The basic kind of probe is fixed and cannot be changed. For an overview of probes and pod lifecycle, see the official Kubernetes documentation. The startup probe was introduced with IOM Helm charts 2.0.0, when the IOM config image was removed. All the functionality that was executed by the config image before is part of the IOM image in IOM version 4.0.0 and newer. The startup probe must now be used to observe all the tasks (create db account, roll out dump, execute stored procedures, run database migrations, apply project configuration) that are done before the WildFly application server is started. The startup probe must not finally fail before the end of the startup phase, otherwise the pod will be ended and restarted. The startup phase ends when the startup probe succeeds. To achieve this, you need to configure startupProbe.failureThreshold and startupProbe.periodSeconds to cover the complete startup phase. | |
startupProbe.enabled | Enables to switch on/off the startup probe. | true |
startupProbe.periodSeconds | How often (in seconds) to perform the probe. Minimum value is 1. | 10 |
startupProbe.initialDelaySeconds | Number of seconds after the container has started before startup probes are initiated. Minimum value is 0. | 60 |
startupProbe.timeoutSeconds | Number of seconds after which the probe times out. Minimum value is 1. | 5 |
startupProbe.failureThreshold | When a probe fails, Kubernetes will try failureThreshold times before giving up. Giving up means that the container will be restarted. | 60 |
livenessProbe | Group of parameters to fine-tune the liveness probe of Kubernetes. The basic kind of probe is fixed and cannot be changed. For an overview of probes and pod lifecycle, see the official Kubernetes documentation. | |
livenessProbe.enabled | Enables to switch on/off the liveness probe. | true |
livenessProbe.periodSeconds | How often (in seconds) to perform the probe. Minimum value is 1. | 10 |
livenessProbe.initialDelaySeconds | Number of seconds after the container has started before liveness probes are initiated. Minimum value is 0. | 60 |
livenessProbe.timeoutSeconds | Number of seconds after which the probe times out. Default is set to 1 second. Minimum value is 1. | 5 |
livenessProbe.failureThreshold | When a probe fails, Kubernetes will try failureThreshold times before giving up. Giving up means that the container will be restarted. | 3 |
readinessProbe | Group of parameters, to fine-tune the readiness probe of Kubernetes. The basic kind of probe is fixed and cannot be changed. For an overview of probes and pod lifecycle, see the official Kubernetes documentation. | |
readinessProbe.enabled | Enables to switch on/off the readiness probe. | true |
readinessProbe.periodSeconds | How often (in seconds) to perform the probe. Minimum value is 1. | 10 |
readinessProbe.initialDelaySeconds | Number of seconds after the container has started before readiness probes are initiated. Minimum value is 0. | 60 |
readinessProbe.timeoutSeconds | Number of seconds after which the probe times out. Default is set to 1 second. Minimum value is 1. | 8 |
readinessProbe.failureThreshold | When a probe fails, Kubernetes will try failureThreshold times before giving up. Giving up means that the pod is marked Unready. | 1 |
readinessProbe.successThreshold | Minimum consecutive successes for the probe to be considered successful after having failed. Minimum value is 1. | 1 |
jboss | Parameters of group jboss are all related to the configuration of Wildfly/JBoss. | |
jboss.javaOpts | Java options to be used when starting the IOM application server. The default value used by Helm charts 1.5.0 and newer allows for not having to care about Java memory settings any longer: just set the memory size in parameter resources, and the Java process adapts to it. | |
jboss.javaOptsAppend | Java options, to be passed to the application server, are built from the two parameters jboss.javaOpts and jboss.javaOptsAppend . It is recommended not to overwrite jboss.javaOpts, or to only overwrite it if really necessary. This way, the maintenance effort of your values file will be reduced, since it is not necessary to track changes of the default value of jboss.javaOpts that have to be reapplied to the overwritten value. | |
jboss.opts | Additional command-line arguments to be used when starting the WildFly application server. | |
jboss.xaPoolsizeMin | The minimum value of the pool size of XA datasources. | "50" |
jboss.xaPoolsizeMax | The maximum value of the pool size of XA datasources. | "125" |
jboss.activemqClientPoolSizeMax | Maximum size of the ActiveMQ client thread pool. | "50" |
jboss.nodePrefix | Prefix used for the node names of the IOM application servers. There are two use cases which might make it necessary to define jboss.nodePrefix explicitly. | |
log | Parameters of group log are all related to the configuration of the logging of IOM. | |
log.access.enabled | Controls creation of access log messages. Allowed values are: true, false | true |
log.level.scripts | Controls log level of all shell scripts running in one of the IOM-related containers. Allowed values are: ERROR, WARN, INFO, DEBUG | INFO |
log.level.iom | Controls log level of IOM log handler, which covers all Java packages beginning with bakery, com.intershop.oms, com.theberlinbakery, org.jboss.ejb3.invocation. Allowed values are: FATAL, ERROR, WARN, INFO, DEBUG, TRACE, ALL | WARN |
log.level.hibernate | Controls log level of the HIBERNATE log handler, which covers all Java packages beginning with org.hibernate. Allowed values are: FATAL, ERROR, WARN, INFO, DEBUG, TRACE, ALL | WARN |
log.level.quartz | Controls log level of QUARTZ log handler, which covers all Java packages beginning with org.quartz. Allowed values are: FATAL, ERROR, WARN, INFO, DEBUG, TRACE, ALL | WARN |
log.level.activeMQ | Controls log level of the ACTIVEMQ log handler, which covers all Java packages beginning with org.apache.activemq. Allowed values are: FATAL, ERROR, WARN, INFO, DEBUG, TRACE, ALL | WARN |
log.level.console | The CONSOLE handler has no explicit assignments of Java packages. This handler is assigned to root loggers which do not need any assignments. Instead, this log handler handles all unassigned Java packages, too. Allowed values are: FATAL, ERROR, WARN, INFO, DEBUG, TRACE, ALL | WARN |
log.level.customization | Another handler without package assignments is CUSTOMIZATION. In difference to CONSOLE, this handler will not log any messages as long as no Java packages are assigned. The assignment of Java packages has to be done in the project configuration and is described in Guide - IOM Standard Project Structure. Allowed values are: FATAL, ERROR, WARN, INFO, DEBUG, TRACE, ALL | WARN |
log.metadata | Group of parameters bundling metadata that is added to every log message. Note: Deprecated since IOM Helm Charts 1.3.0. Datadog will inject according information in the future, without the need to loop them through IOM. | |
log.metadata.tenant | The name of the tenant is added to every log message. Example: Intershop. Note: Deprecated since IOM Helm Charts 1.3.0. Datadog will inject according information in the future, without the need to loop them through IOM. | company-name |
log.metadata.environment | The name of the environment is added to every log message. Example: production. Note: Deprecated since IOM Helm Charts 1.3.0. Datadog will inject according information in the future, without the need to loop them through IOM. | system-name |
log.rest | This parameter can hold a list of operation IDs of REST interfaces. If the operation ID of a REST interface is listed here, information about request and response of the according REST calls is written into DEBUG messages. Operation IDs are part of the YAML specification of the IOM REST interfaces. | [] |
datadogApm | Group of parameters controlling the Datadog APM integration of IOM. | |
datadogApm.enabled | This parameter is mapped to environment variable DD_APM_ENABLED. For more information, please consult the official Datadog documentation. If set to true, APM tracing is enabled. | false |
datadogApm.backendOnly | If set to true, tracing is enabled for the backend applications only. | true |
datadogApm.traceAgentHost | This parameter is mapped to environment variable DD_AGENT_HOST. For more information, please consult the official Datadog documentation. Normally this environment variable is injected with the right value by the locally installed Datadog daemon-set. | |
datadogApm.traceAgentPort | This parameter is mapped to environment variable DD_TRACE_AGENT_PORT. For more information, please consult the official Datadog documentation. Normally this environment variable is injected with the right value by the locally installed Datadog daemon-set. | |
datadogApm.traceAgentTimeout | This parameter is mapped to environment variable DD_TRACE_AGENT_TIMEOUT. For more information, please consult the official Datadog documentation. | |
datadogApm.logsInjection | This parameter is mapped to environment variable DD_LOGS_INJECTION. For more information, please consult the official Datadog documentation. | false |
datadogApm.debug | This parameter is mapped to environment variable DD_TRACE_DEBUG. For more information, please consult the official Datadog documentation. | false |
datadogApm.startupLogs | This parameter is mapped to environment variable DD_TRACE_STARTUP_LOGS. For more information, please consult the official Datadog documentation. | true |
datadogApm.tags | This parameter is mapped to environment variable DD_TAGS. For more information, please consult the official Datadog documentation. | |
datadogApm.serviceMapping | This parameter is mapped to environment variable DD_SERVICE_MAPPING. For more information, please consult the official Datadog documentation. | |
datadogApm.writerType | This parameter is mapped to environment variable DD_WRITER_TYPE. For more information, please consult the official Datadog documentation. | |
datadogApm.partialFlushMinSpan | This parameter is mapped to environment variable DD_TRACE_PARTIAL_FLUSH_MIN_SPANS. For more information, please consult the official Datadog documentation. | |
datadogApm.dbClientSplitByInstance | This parameter is mapped to environment variable DD_TRACE_DB_CLIENT_SPLIT_BY_INSTANCE. For more information, please consult the official Datadog documentation. | |
datadogApm.healthMetricsEnabled | This parameter is mapped to environment variable DD_TRACE_HEALTH_METRICS_ENABLED. For more information, please consult the official Datadog documentation. | false |
datadogApm.servletAsyncTimeoutError | This parameter is mapped to environment variable DD_TRACE_SERVLET_ASYNC_TIMEOUT_ERROR. For more information, please consult the official Datadog documentation. | true |
datadogApm.sampleRate | This parameter is mapped to environment variable DD_TRACE_SAMPLE_RATE. For more information, please consult the official Datadog documentation. | '1.0' |
datadogApm.jmsFetchEnabled | This parameter is mapped to environment variable DD_JMXFETCH_ENABLED. For more information, please consult the official Datadog documentation. | true |
project | Within the project group, all parameters related to project-specific configurations are bundled. | |
project.envName | Intershop Commerce Platform (previously known as CaaS) projects support different settings for different environments. | env-name |
project.importTestData | Controls the import of test data, which are part of the project. See Guide - IOM Standard Project Structure for more information. If enabled, test data is imported during installation and upgrade processes. | false |
project.importTestDataTimeout | Timeout in seconds for the import of test data. If the import has not finished before the according amount of seconds has passed, the container will end with an error. | "300" |
persistence | Parameters of group persistence control how IOM's shared data is persisted. | |
persistence.storageClass | Name of the existing storage class to be used for IOM's shared data. | azurefile |
persistence.annotations | Annotations for the persistence volume claim to be created. See https://helm.sh/docs/topics/charts_hooks/ for more information about default annotations. | |
persistence.storageSize | Requested storage size. For more information, see the official Kubernetes documentation. | 1Gi |
persistence.hostPath | For very simple installations, persistent data can be stored directly on a local disk. In this case, the path on the local host has to be stored in this parameter. | |
persistence.pvc | For transregional installations of IOM, it has to be possible to define the Persistence Volume Claim (pvc) directly. This way IOM's shared data can be persisted at one place by two or more IOM clusters. | |
ingress | Group ingress bundles configuration of IOM's ingress, which is required to get access to IOM from outside of Kubernetes. | |
ingress.enabled | Enables ingress for IOM. If not enabled, IOM cannot be accessed from outside of Kubernetes. | true |
ingress.className | Ingress class has to be specified by If the integrated NGINX controller should be used to serve incoming requests, the parameter | nginx |
ingress.annotations | Annotations for the ingress. | {} |
ingress.hosts | A list of ingress hosts. The default value grants access to IOM. The syntax of ingress objects has to match the requirements of Kubernetes 1.19 (see https://kubernetes.io/docs/concepts/services-networking/ingress/). | |
ingress.tls | A list of IngressTLS items | [] |
resources | Resource requests & limits. | |
imagePullSecrets | Name of the secret to get credentials from. | [] |
nameOverride | Overwrites the chart name. | |
fullnameOverride | Overwrites the complete name, constructed from release, and chart name. | |
serviceAccount.create | If true , creates a backend service account. Only useful if you need a pod security policy to run the backend. | true |
serviceAccount.annotations | Annotations for the service account. Only used if create is true . | {} |
serviceAccount.name | The name of the backend service account to use. If not set and create is true , a name is generated using the fullname template. Only useful if you need a pod security policy to run the backend. | |
podAnnotations | Annotations to be added to pods. | {} |
podSecurityContext | Security context policies to add to the iom-tests pod. | {} |
securityContext | List of required privileges. | {} |
service.type | Type of service to create. | ClusterIP |
service.port | Port to be exposed by service. | 80 |
nodeSelector | Node labels for pod assignment. | {} |
tolerations | Node taints to tolerate. | [] |
affinity | Node/pod affinities. | {} |
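Several parameters above (e.g., pg.passwdSecretKeyRef, oms.db.passwdSecretKeyRef, oms.smtp.passwdSecretKeyRef) allow referencing entries of Kubernetes secrets instead of plain-text values. The following is a minimal sketch, reusing the secret name and key from the production example above; see section References to entries of Kubernetes secrets for the exact structure.

```sh
# create a secret holding the password of the IOM database user
kubectl create secret generic mycompany-prod-secrets \
    --from-literal=dbpasswd='<password>' \
    -n mycompany-iom
```

```yaml
# values.yaml: reference the secret entry instead of using oms.db.passwd
oms:
  db:
    passwdSecretKeyRef:
      name: mycompany-prod-secrets
      key: dbpasswd
```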
A complete list of parameters can be found here: https://github.com/codecentric/helm-charts/tree/master/charts/mailhog
The table below only lists parameters that have to be changed for different operation options of IOM.
Parameter | Description | Default Value |
---|---|---|
mailhog.enabled | Controls whether an integrated SMTP server should be used or not. This SMTP server is not intended to be used for any kind of serious IOM installation. It should only be used for demo-, CI- or similar types of setups. | false |
mailhog.probes.enabled | This parameter allows to switch on/off liveness and readiness probes of Mailhog. These probes are producing a lot of messages, which can be avoided if the probes are disabled. | true |
mailhog.resources | Resource requests & limits. | {} |
mailhog.ingress.hosts | A list of ingress hosts. | |
mailhog.ingress.tls | A list of IngressTLS items. | [] |
mailhog.ingress.annotations | Annotations for the ingress. | {} |
A complete list of parameters can be found here: https://github.com/kubernetes/ingress-nginx/tree/master/charts/ingress-nginx
The table below only lists parameters that have to be changed for different operation options of IOM and those that must not be changed at all.
Parameter | Description | Default Value |
---|---|---|
nginx.enabled | Controls whether an integrated NGINX ingress controller should be installed or not. This ingress controller can serve two purposes: act as a proxy between the cluster-wide ingress controller and IOM, or act as an ingress controller itself, used instead of the cluster-wide one (see nginx.proxy.enabled). | false |
nginx.proxy.enabled | Controls if the integrated NGINX ingress controller should act as a proxy between cluster-wide ingress controller and IOM, or as an ingress controller used instead of the cluster-wide one. | true |
nginx.proxy.annotations | Annotations for the ingress. | {} |
ingress-nginx.controller.replicaCount | Desired number of controller pods. | 2 |
ingress-nginx.controller.service.type | Type of controller service to create. When using the integrated NGINX controller as a proxy, the type has to remain ClusterIP. | ClusterIP |
ingress-nginx.controller.extraArgs | Additional command line arguments to pass to nginx-ingress-controller. Example to increase verbosity: { v: 3 } | |
ingress-nginx.controller.config | Adds custom configuration options to Nginx, see ingress-nginx user-guide. | { use-forwarded-headers: "true", proxy-add-original-uri-header: "true" } |
ingress-nginx.rbac.create | If true, create & use RBAC resources. | true |
ingress-nginx.rbac.scope | If true, the RBAC resources are limited to the namespace the ingress controller is deployed in. | true |
ingress-nginx.controller.ingressClass | Name of the ingress class to route through this controller. | nginx-iom |
ingress-nginx.controller.scope.enabled | Limits the scope of the ingress controller. If set to true, the controller only watches ingress resources in its own namespace. | true |
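As an illustration, the following values file excerpt enables the integrated NGINX controller in its default proxy role. Only parameters from the table above are used; this is a sketch, not a complete configuration.

```yaml
# excerpt of a values file: integrated NGINX as a proxy behind the
# cluster-wide ingress controller
nginx:
  enabled: true
  proxy:
    enabled: true        # act as a proxy, not as a standalone ingress controller
ingress-nginx:
  controller:
    service:
      type: ClusterIP    # has to remain ClusterIP in the proxy setup
```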
Parameter | Description | Default Value |
---|---|---|
postgres.enabled | Controls whether an integrated PostgreSQL server should be used or not. This PostgreSQL server is not intended to be used for any kind of serious IOM installation. It should only be used for demo-, CI- or similar types of setups. | false |
postgres.args | An array containing command line arguments, which are passed to the Postgres server at start. For more information, see the official PostgreSQL 12 documentation. | ["-N", "200", "-c", "max_prepared_transactions=100"] |
postgres.image.repository | Repository of the PostgreSQL image. For more information, see the official Docker Hub. | postgres |
postgres.image.tag | Tag of the PostgreSQL image. For more information, see the official Docker Hub. | "12" |
postgres.image.pullPolicy | Pull policy to be applied when getting PostgreSQL Docker images. For more information, see the official Kubernetes documentation. | IfNotPresent |
postgres.pg | This group of parameters bundles the information about the superuser and default database (management database, not the IOM database). This information is used to configure the Postgres server on start, but is also used by clients which require superuser access to the Postgres server. The only client that needs this kind of access is the dbaccount init-image that creates/updates the IOM database. | |
postgres.pg.user | Name of the superuser. The superuser will be created when starting the Postgres server. | postgres |
postgres.pg.userSecretKeyRef | Instead of storing the name of the user as plain text in the values file, a reference to a key within a secret can be used. For more information, see section References to entries of Kubernetes secrets. | |
postgres.pg.passwd | The password of the superuser. The password will be set when starting the Postgres server. | postgres |
postgres.pg.passwdSecretKeyRef | Instead of storing the password as plain text in the values file, a reference to a key within a secret can be used. For more information, see section References to entries of Kubernetes secrets. | |
postgres.pg.db | Name of default (management) database which will be created when starting the Postgres server. | postgres |
postgres.persistence | The parameters of group postgres.persistence control whether and how the database data is persisted. | |
postgres.persistence.enabled | If set to false, the data of the PostgreSQL server is not persisted at all. It is only kept in memory and is lost when the Postgres pod ends. | false |
postgres.persistence.accessMode | The default value allows binding the persistent volume in read/write mode to a single pod only, which is exactly what should be done for the PostgreSQL server. For more information, see the official Kubernetes documentation. | ReadWriteOnce |
postgres.persistence.storageClass | Name of an existing storage class to be used by the PostgreSQL server. | |
postgres.persistence.annotations | Annotations to be added to the according PersistentVolumeClaim. For more information, see the official Kubernetes documentation. | {} |
postgres.persistence.storageSize | Requested storage size. For more information, see the official Kubernetes documentation. | 20Gi |
postgres.persistence.hostPath | For very simple installations, persistent data can be stored directly on a local disk. In this case, the path on the local host has to be set in this parameter. | |
postgres.resources | Resource requests & limits. | {} |
postgres.imagePullSecrets | The name of the secret to get credentials from. | [] |
postgres.nameOverride | Overwrites the chart name. | |
postgres.fullnameOverride | Overwrites the complete name, constructed from release and chart names. | |
postgres.nodeSelector | Node labels for pod assignment. | {} |
postgres.tolerations | Node taints to tolerate (requires Kubernetes >=1.6). | [] |
postgres.affinity | Node/pod affinities (requires Kubernetes >=1.6). | {} |
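A minimal sketch of a values file excerpt for a demo setup follows. It only uses parameters from the table above; credentials are shown in plain text for brevity, see the SecretKeyRef mechanism described below for a safer alternative.

```yaml
# excerpt of a values file: integrated PostgreSQL server, demo/CI setups only
postgres:
  enabled: true
  persistence:
    enabled: false   # demo only: data is kept in memory and lost with the pod
  pg:
    user: postgres
    passwd: postgres
    db: postgres
```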
The iom-tests sub-chart provides a very generic way to run tests on an IOM installation. The sub-chart and the according parameters are simply the bare skeleton resulting from a helm create call.
Parameter | Description | Default Value |
---|---|---|
iom-tests.enabled | Enables the rollout of the iom-tests sub-chart. | false |
iom-tests.env | List of environment variables required by the tests pod. | |
iom-tests.replicaCount | Desired number of iom-tests pods. | 1 |
iom-tests.image.repository | Docker image repository. | iom-tests |
iom-tests.image.pullPolicy | Docker image pull policy. | IfNotPresent |
iom-tests.image.tag | Docker image tag. | |
iom-tests.imagePullSecrets | Name of the secret to get credentials from. | [] |
iom-tests.nameOverride | Overwrites chart name. | |
iom-tests.fullnameOverride | Overwrites the complete name, constructed from release and chart names. | |
iom-tests.serviceAccount.create | If true, creates a backend service account. Only useful if you need a pod security policy to run the backend. | true |
iom-tests.serviceAccount.annotations | Annotations for the service account. Only used if create is true. | {} |
iom-tests.serviceAccount.name | The name of the backend service account to use. If not set and create is true, a name is generated using the fullname template. Only useful if you need a pod security policy to run the backend. | |
iom-tests.podAnnotations | Annotations to be added to pods. | {} |
iom-tests.podSecurityContext | Security context policies to add to the iom-tests pod. | {} |
iom-tests.securityContext | List of required privileges. | {} |
iom-tests.service.type | Type of service to create. | ClusterIP |
iom-tests.service.port | Port to be exposed by service. | 80 |
iom-tests.ingress.enabled | Enables ingress for iom-tests. It is recommended to access the test results this way. | true |
iom-tests.ingress.className | The ingress class to be used by the iom-tests ingress. If the integrated NGINX controller should be used to serve incoming requests, this parameter has to be set to the value of ingress-nginx.controller.ingressClass. | nginx |
iom-tests.ingress.annotations | Annotations for the ingress. | {} |
iom-tests.ingress.hosts | A list of ingress hosts. The default value grants access to IOM. | < IOM Helm charts 1.5.0: … / >= IOM Helm charts 1.5.0: … |
iom-tests.ingress.tls | A list of IngressTLS items. | [] |
iom-tests.containerPort | The port used by the container to provide its service. | 8080 |
iom-tests.resources | Resource requests & limits. | {} |
iom-tests.autoscaling.enabled | If true, creates a Horizontal Pod Autoscaler. | false |
iom-tests.autoscaling.minReplicas | If autoscaling is enabled, this field sets the minimum replica count. | 1 |
iom-tests.autoscaling.maxReplicas | If autoscaling is enabled, this field sets the maximum replica count. | 100 |
iom-tests.autoscaling.targetCPUUtilizationPercentage | Target CPU utilization percentage to scale. | 80 |
iom-tests.nodeSelector | Node labels for pod assignment. | {} |
iom-tests.tolerations | Node taints to tolerate (requires Kubernetes >=1.6). | [] |
iom-tests.affinity | Node/pod affinities (requires Kubernetes >=1.6). | {} |
All parameters ending in SecretKeyRef serve as an alternative way to provide secret information. Instead of storing entries as plain text in the values file, these parameters allow referencing entries within Kubernetes secrets. For more information about secrets, see the public Kubernetes documentation.
SecretKeyRef parameters require a hash structure, consisting of two entries with the following hash-keys:
- name: the name of the Kubernetes secret containing the referenced key
- key: the name of the entry within the secret
The following two boxes show an example which consists of two parts:
- a Kubernetes secret, which contains entries for different secret values, and
- an excerpt of the values file, which references these entries.
secret, which contains entries for different secret values, andapiVersion: v1 kind: Secret metadata: name: pgsecrets type: Opaque data: pguser: cG9zdGdyZXM= pgpasswd: ZGJ1c2VycGFzc3dk
```yaml
...
# general postgres settings, required to connect to postgres server
# and root db.
pg:
  userSecretKeyRef:
    name: pgsecrets
    key: pguser
  passwdSecretKeyRef:
    name: pgsecrets
    key: pgpasswd
  db: postgres
  sslMode: prefer
  sslCompression: "1"
  sslRootCert:
...
```
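Instead of writing the secret manifest by hand, an equivalent secret can also be created with kubectl, which takes care of the base64 encoding. The namespace iom matches the other examples in this document; the literal values are the decoded entries of the manifest above.

```sh
kubectl create secret generic pgsecrets -n iom \
  --from-literal=pguser=postgres \
  --from-literal=pgpasswd=dbuserpasswd
```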
The ideal configuration mainly depends on the server resources and on the activity. Therefore, we can only provide a general guideline. The configuration ranges indicated below may not be applicable in all cases, especially on small systems. These values are intended for a mid-size system with about 32 GB RAM and 24 cores.
If PostgreSQL is used as a service (e.g., Azure Database for PostgreSQL), not all PostgreSQL server parameters can be set. When using a service, the method for changing PostgreSQL server parameters might differ, too.
To achieve the best performance, almost all the required data (tables and indexes) for the ongoing workload should be able to reside within the file system cache. Monitoring the I/O activity will help to identify insufficient memory resources.
IOM uses Hibernate as an API between the application logic and the database. This mainly results in strong OLTP activity with a large number of tiny SQL statements. Larger statements occur during import/export jobs and for some OMT search requests.
The following main parameters in $PGDATA/postgresql.conf should be adapted, see PostgreSQL 12 | Chapter 19. Server Configuration.
You can consider PGConfig 2.0 as a guideline (using the OLTP Model).
Some aspects of data reliability are discussed in PostgreSQL 12 | Chapter 29. Reliability and the Write-Ahead Log. Understanding VACUUM is also essential when configuring/monitoring Postgres, see PostgreSQL 12 | Chapter 24. Routine Database Maintenance Tasks.
Parameter | Description |
---|---|
max_connections | The number of concurrent connections from the application is controlled by the connection pool settings of the WildFly application servers; max_connections has to be set accordingly. Info: Highly concurrent connections have a negative impact on performance. It is more efficient to queue the requests than to process them all in parallel. |
max_prepared_transactions | Required for IOM installations. Set its value to about 150% of max_connections. |
shared_buffers | Between 1/4 and 1/3 of the total RAM, but not more than about 8 GB. Otherwise, the cache management will use too many resources. The remaining RAM is more valuable as a file system cache. |
work_mem | Higher work_mem can increase performance significantly. The default is far too low. Consider using 100-400 MB. |
maintenance_work_mem | Increase the default similar to work_mem to favor quicker vacuums. With IOM, this parameter will be used almost exclusively for this task (unless you also set autovacuum_work_mem). Consider something like 2% of your total RAM per autovacuum_max_workers, e.g., 32 GB RAM * 2% * 3 workers ≈ 2 GB in total. |
vacuum_cost_* | The feature can stay disabled at the beginning. You should keep an eye on the vacuum activity under high load. |
wal_level | Depends on your backup, recovery, and failover strategy. Should be at least replica (called archive before PostgreSQL 9.6). |
wal_sync_method | Depends on your platform, check PostgreSQL 12 | 19.5. Write Ahead Log | wal_sync_method (enum). |
max_wal_size | 8 (small system) - 128 (large system) |
max_parallel_workers (since Postgres 9.6) | 0 |
checkpoint_completion_target | Use 0.8 or 0.9. |
archive_* and REPLICATION | Depends on your backup & failover strategy. |
random_page_cost | The default (4) is usually too high. Better choose 2.5 or 3. |
effective_cache_size | Indicates the expected size of the file system cache. On a dedicated server: should be about total_RAM - shared_buffers - 1GB. |
log_min_duration_statement | Set it between |
log_filename | Better use an explicit name to help when communicating. Not applicable if the PostgreSQL server is running in Kubernetes, since all messages are written to stdout in this case. |
log_rotation_age | Set it to 60 min or less. Not applicable if the PostgreSQL server is running in Kubernetes since all messages are written to stdout in this case. |
log_line_prefix | Better use a more verbose format than the default, e.g., %m|%a|%c|%p|%u|%h| . |
log_lock_waits | Activate it (=on). |
stats_temp_directory | Better redirect it to a RAM disk. |
log_autovacuum_min_duration | Set it to a few seconds to monitor the vacuum activity. |
idle_in_transaction_session_timeout (since Postgres 9.6) | An equivalent parameter exists for the WildFly connection pool (query-timeout), where it is set to 1 hour by default. Set idle_in_transaction_session_timeout to a larger value, e.g., 9 hours, to clean up possible leftover sessions. |
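Put together, a postgresql.conf excerpt for the mid-size reference system (32 GB RAM, 24 cores) could look as follows. This is only a sketch derived from the guidelines above: the connection count mirrors the -N 200 default of the integrated server, and the log_min_duration_statement threshold is an assumption that has to be adapted to your monitoring needs.

```
# sketch for a mid-size system (32 GB RAM, 24 cores) - verify under real load
max_connections = 200                      # must cover the pools of all IOM pods
max_prepared_transactions = 300            # about 150% of max_connections
shared_buffers = 8GB                       # upper bound; the rest serves the FS cache
work_mem = 256MB                           # within the suggested 100-400 MB range
maintenance_work_mem = 640MB               # ~2% of total RAM per autovacuum worker
wal_level = replica                        # adapt to backup/failover strategy
checkpoint_completion_target = 0.9
max_parallel_workers = 0
random_page_cost = 2.5
effective_cache_size = 23GB                # total RAM - shared_buffers - 1 GB
log_min_duration_statement = 1s            # assumption - choose your own threshold
log_line_prefix = '%m|%a|%c|%p|%u|%h|'
log_lock_waits = on
log_autovacuum_min_duration = 5s
idle_in_transaction_session_timeout = 9h
```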
The database initialization performed by the dbaccount init-image creates a user and a database, which use the system-wide default tablespace pg_default. If you want to use a custom tablespace, you have to create it prior to the database initialization, see PostgreSQL: Documentation: 12: CREATE TABLESPACE.
To make the database initialization process aware of the newly created tablespace, the parameter dbaccount.tablespace has to be set to its name. If this is done, this tablespace is set as the default tablespace for the IOM database user and the IOM database during the initialization process, as illustrated by the sketch below.
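For illustration only: with the integrated PostgreSQL server, a custom tablespace could be created as follows before the database initialization runs. The pod name is taken from the listing further below; the tablespace name iomtbs and its directory are placeholders.

```sh
# create a directory for the tablespace, owned by the postgres OS user
kubectl exec demo-postgres-96676f4b-mt8nl -n iom -- \
  sh -c 'mkdir -p /var/lib/postgresql/iomtbs && chown postgres:postgres /var/lib/postgresql/iomtbs'
# create the tablespace itself
kubectl exec demo-postgres-96676f4b-mt8nl -n iom -- \
  psql -U postgres -c "CREATE TABLESPACE iomtbs LOCATION '/var/lib/postgresql/iomtbs'"
```

Afterwards, dbaccount.tablespace has to be set to iomtbs in the values file.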
All database clients and the IOM database have to use the same timezone. For this reason, all IOM Docker images are configured at OS level to use the timezone Etc/UTC. The process executed by the dbaccount init-image sets this timezone for the IOM database user as well.
The locale of the database clients and the locale of the IOM database have to be identical. For this reason, all IOM Docker images set the environment variable LANG to en_US.utf8.
The corresponding setting on the database is made by the dbaccount init-image. Using the parameter dbaccount.options, it is possible to configure this process; a hedged sketch follows below.
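The exact structure of dbaccount.options depends on the version of the IOM Helm charts, so the following values file excerpt is only an assumption of how matching CREATE DATABASE options could look. Verify it against the values file shipped with your charts.

```yaml
# assumption: dbaccount.options passes CREATE DATABASE options to the init process
dbaccount:
  options:
    - "ENCODING='UTF8'"
    - "LC_COLLATE='en_US.utf8'"
    - "LC_CTYPE='en_US.utf8'"
```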
When the dbaccount init-image creates the IOM database, a wrong Encoding, Collate, or Ctype is the most common reason for a failed initialization. The according values have to be exactly identical to the values used by the template databases. Hence, if there are any problems with Encoding, Collate, or Ctype when creating the IOM database, the existing databases should be listed to get the correct values. To do so, use the psql database client with the parameter -l.
The following box shows how to do this after an initialization error if IOM is running on Docker Desktop.
```sh
# get name of PostgreSQL pod
kubectl get pods -n iom
NAME                                             READY   STATUS       RESTARTS   AGE
demo-ingress-nginx-controller-6c6f5b88cc-6wsfh   1/1     Running      0          67s
demo-iom-0                                       0/1     Init:Error   3          67s
demo-mailhog-5d7677c7c5-zl8gl                    1/1     Running      0          67s
demo-postgres-96676f4b-mt8nl                     1/1     Running      0          67s

# execute psql -U postgres -l within PostgreSQL pod
kubectl exec demo-postgres-96676f4b-mt8nl -n iom -t -- psql -U postgres -l
                                 List of databases
   Name    |  Owner   | Encoding |  Collate   |   Ctype    |   Access privileges
-----------+----------+----------+------------+------------+-----------------------
 postgres  | postgres | UTF8     | en_US.utf8 | en_US.utf8 |
 template0 | postgres | UTF8     | en_US.utf8 | en_US.utf8 | =c/postgres          +
           |          |          |            |            | postgres=CTc/postgres
 template1 | postgres | UTF8     | en_US.utf8 | en_US.utf8 | =c/postgres          +
           |          |          |            |            | postgres=CTc/postgres
(3 rows)
```
In some circumstances, the search path for database objects has to be extended. The search path is set by the dbaccount init-image. This process can be configured by the parameter dbaccount.searchPath; a hedged sketch follows below.
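As with dbaccount.options, the exact value format should be verified against the values file of your charts; the schema name customschema below is a placeholder.

```yaml
# assumption: dbaccount.searchPath holds the search path applied to the IOM database
dbaccount:
  searchPath: '"$user", public, customschema'
```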