IOM projects cannot have individual project layouts if they are to run in the Intershop CaaS environment, as this environment cannot be customized individually for each project.
Instead, IOM projects must use a predefined project layout to support the generic installation of projects in the Intershop CaaS environment.
Devenv-4-iom is a small package consisting of a shell script, configuration, and templates that supports development tasks based on IOM Docker images. The tool has its own life cycle and does not follow the versioning of IOM. It can be downloaded from Intershop's Maven repository using the following coordinates:
GroupID: com.intershop.oms
ArtifactID: devenv4iom
Packaging: tgz
Version: 1.1.0.0 (please check Release Notes for latest version of devenv-4-iom)
Also see Public Release Note - Devenv-4-iom 1.1.0.0.
IOM is provided in the form of Docker images. IOM projects have to add custom applications and configuration to these images. The customized images then have to be deployed to the Intershop CaaS environment for execution. To work with these images, Docker v19 is required.
The images are available at:
Note
Adapt the tag (version number), if you are using a newer version of IOM. For a full list of available versions see Overview - IOM Public Release Notes.
Caas2docker is a small package consisting of a shell script and configuration that helps to create customized IOM project images. This tool is delivered and labeled with each IOM version. It can be downloaded from Intershop's Maven repository using the following coordinates:
GroupID: com.intershop.oms
ArtifactID: caas2docker
Packaging: tar.gz
Version: 3.1.0.0 (please use the latest IOM version to get the latest version of caas2docker)
The section First Steps is intended to guide you through all main parts of devenv-4-iom based on simple examples. You will learn how to:
Once you are able to set up IOM with devenv-4-iom and have an insight into its main ideas, it should become easy for you to explore further on your own and to handle the development tasks at hand.
devenv-4-iom uses a very simple concept to manage developer instances of IOM. One configuration file holds all the information required to run one instance of IOM. As a first step, a new configuration file has to be created. To do so, the script devenv-cli.sh has to be called with the options get config. In order to get the following examples to work, you have to extend the PATH variable by the directory containing devenv-cli.sh, or call the script using its absolute path.
# extend PATH variable
# PATH_TO_DEVENV_CLI has to be replaced by the real value.
export PATH="${PATH_TO_DEVENV_CLI}:$PATH"

# create configuration file, filled with default values
devenv-cli.sh get config > config.properties
There is one value in config.properties that has to be set manually: ID. Every instance of IOM, hence every configuration file, needs a unique value for ID. Once you have set the ID and started the according IOM, you must not change it anymore. Otherwise you will lose the ability to access and control the resources associated with the IOM installation. Now set the ID to first-steps.
# set ID in config.properties to "first-steps"
vi config.properties
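Before going on, it can be useful to verify that the ID was actually set. The following sketch is illustrative only (the sample file and the grep-based check are not part of devenv-4-iom); it extracts the ID from a configuration file with plain shell tools:

```shell
# Create a sample configuration file (illustrative only).
cat > /tmp/config.properties <<'EOF'
ID=first-steps
EOF

# Extract the value of ID and warn if it is empty.
ID_VALUE=$(grep -E '^ID=' /tmp/config.properties | cut -d= -f2)
if [ -z "$ID_VALUE" ]; then
  echo "ERROR: ID is not set in config.properties" >&2
else
  echo "ID is set to: $ID_VALUE"
fi
```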
The other values of the new configuration file are filled with default settings defined by devenv-4-iom. The most important settings are the *_IMAGE properties, since they define what will be executed by devenv-4-iom. By choosing the images, you control what runs on your local computer: a specific project, the standard IOM product without any customizations, an IOM version that is currently in development, or even containers of the IOM product or project you have built yourself.
The default settings use the pure IOM product. It is not necessary to change any of these settings for the first steps. The Docker registry used by the default settings requires a login, hence you have to log in to the registry. Additionally, you should check whether you are able to access the Docker images specified in the configuration file. To do so, try to pull the images manually in a shell.
Open the newly created configuration file config.properties, copy the values of the *_IMAGE properties, and use them to pull the Docker images manually, as shown in the box below.
# login into Docker registry
docker login docker.intershop.de

# pull images from registry
docker pull postgres:11
docker pull mailhog/mailhog
docker pull docker.intershop.de/intershop/iom-dbaccount:1.1.0.0
docker pull docker.intershop.de/intershop/iom-config:3.0.0.0
docker pull docker.intershop.de/intershop/iom-app:3.0.0.0
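Instead of copying each image reference by hand, the pull commands can also be derived from the configuration file itself. This is only a sketch: the property names below are illustrative (taken from the style of the example above), and the loop merely prints the commands instead of executing them.

```shell
# Sample configuration with illustrative *_IMAGE properties.
cat > /tmp/config.properties <<'EOF'
IOM_DBACCOUNT_IMAGE=docker.intershop.de/intershop/iom-dbaccount:1.1.0.0
IOM_CONFIG_IMAGE=docker.intershop.de/intershop/iom-config:3.0.0.0
IOM_APP_IMAGE=docker.intershop.de/intershop/iom-app:3.0.0.0
EOF

# Print a docker pull command for every *_IMAGE property.
grep -E '^[A-Z_]+_IMAGE=' /tmp/config.properties | cut -d= -f2- |
while read -r image; do
  echo "docker pull $image"
done
```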
Before using devenv-cli.sh to manage our IOM developer instance, we need to have a look at how configuration files are passed to the script. There are two different ways:
For this guide, we will use the second variant. It is recommended to store the absolute path of the configuration file in DEVENV4IOM_CONFIG; otherwise devenv-cli.sh would find the file only if it is called from the directory in which the configuration file resides.
export DEVENV4IOM_CONFIG="$(pwd)/config.properties"
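A quick sanity check that the variable really contains an absolute path is a plain shell idiom (not a devenv-4-iom feature): an absolute path always starts with a slash.

```shell
export DEVENV4IOM_CONFIG="$(pwd)/config.properties"

# An absolute path always starts with a slash.
case "$DEVENV4IOM_CONFIG" in
  /*) echo "DEVENV4IOM_CONFIG is absolute: ok" ;;
  *)  echo "WARNING: DEVENV4IOM_CONFIG is not an absolute path" >&2 ;;
esac
```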
For IOM to run in Kubernetes, several (sub-)systems are required:
devenv-4-iom provides an easy way to setup all these systems and make them work together. Just create the cluster by executing the following command:
devenv-cli.sh create cluster
The process of cluster creation will take some minutes (between 3 and 10, depending on your hardware). During this time we should take a look at the statuses of the (sub-)systems.
# get status of storage
devenv-cli.sh info storage

# get info about mail server
devenv-cli.sh info mailserver

# get info about Postgres server
devenv-cli.sh info postgres

# get info about IOM server
devenv-cli.sh info iom
Mail server and PostgreSQL server start very fast. The output of the according info commands contains a section 'Kubernetes', which shows the state. For these two systems, the state should be Running even shortly after creating the cluster. The box below shows an example output:
devenv-cli.sh info postgres
...
--------------------------------------------------------------------------------
Kubernetes:
===========
    namespace: 21700snapshot
    KEEP_DATABASE_DATA: true

NAME       READY   STATUS    RESTARTS   AGE
postgres   1/1     Running   0          8s
--------------------------------------------------------------------------------
...
The start of IOM takes much longer. You can use the info iom command to check the state periodically. After some minutes, IOM should be in Running state too. The according output should look like this:
devenv-cli.sh info iom
...
--------------------------------------------------------------------------------
Kubernetes:
===========
    namespace: 21700snapshot

NAME                   READY   STATUS    RESTARTS   AGE
iom-6c587ddd87-d7qb2   1/1     Running   0          5m5s
--------------------------------------------------------------------------------
...
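Instead of re-running info iom by hand, the check can be wrapped in a small polling loop. The get_iom_status function below is a stub standing in for a real status query (e.g. parsing the STATUS column of the info iom output); the loop structure is the point of this sketch.

```shell
# Stub standing in for a real status query; replace it with a call that
# extracts the STATUS column from 'devenv-cli.sh info iom'.
get_iom_status() { echo "Running"; }

# Poll until the pod reports Running, at most 60 attempts.
attempt=0
while [ "$attempt" -lt 60 ]; do
  status=$(get_iom_status)
  if [ "$status" = "Running" ]; then
    echo "IOM is Running"
    break
  fi
  attempt=$((attempt + 1))
  sleep 10
done
```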
Once IOM is running, we can access its GUI. The info iom command provides the URL you have to use. The following box shows an example:
devenv-cli.sh info iom
...
--------------------------------------------------------------------------------
Links:
======
    OMT:                   http://computername.local:8080/omt
    DBDoc:                 http://computername.local:8080/dbdoc/
    Wildfly (admin:admin): http://computername.local:9990/console
--------------------------------------------------------------------------------
...
Just copy the OMT link into your browser and open the page. You should now see the login screen. The combination admin:!InterShop00! should give you access to IOM.
IOM is running and we are able to use it in the browser. It is time to learn how to access some log messages. Since we can browse IOM, the access-log message will serve as a good example. The following command prints access-log entries and also waits for new entries.
# press ^C to stop printing logs
devenv-cli.sh log access all -f
...
{ "eventSource": "web-access", "hostName": "default-host", "tenant": "Intershop", "environment": "2.17.0.0-SNAPSHOT", "logHost": "iom-6c587ddd87-d7qb2", "logVersion": "1.0", "appVersion": "2.17.0.0-SNAPSHOT@1234", "appName": "iom-app", "logType": "access", "configName": "ci", "bytesSent": 33586, "dateTime": "2019-12-17T14:12:36153Z", "localIp": "10.1.1.210", "localPort": 8080, "remoteHost": "192.168.65.3", "remoteUser": null, "requestHeaderReferer": "http://computername.local:8080/omt/app/order/landingpage", "requestHeaderUser-Agent": "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_14_6) AppleWebKit/605.1.15 (KHTML, like Gecko) Version/13.0.4 Safari/605.1.15", "requestHeaderHost": "computername.local:8080", "requestHeaderCookie": "OMS_IDENTITY=eyJhbGciOiJIUzI1NiJ9.eyJpc3MiOiJPTVQiLCJleHAiOjE1NzY2NzgzNDksImlhdCI6MTU3NjU5MTk0OSwic3ViIjoiQXV0aGVudGljYXRpb24iLCJ1c2VyIjoiYWRtaW4ifQ.c5XuyKZM1FbrwRRGTg4CqaXog3WN6K-kuSTYSp6WEio; SessionKey=11a11a64-9ee9-44cc-bd4d-7ffbb1e12fac; JSESSIONID=k_AxMo_ElpYnjaTTLAkOeFoF3LF_W6VW67PpYcG1.iom-6c587ddd87-d7qb2; org.springframework.web.servlet.i18n.CookieLocaleResolver.LOCALE=en", "requestLine": "GET /omt/WEB-INF/views/widgets/shortOrderSearchContainer.jsp?_=1576591952608 HTTP/1.1", "requestProtocol": "HTTP/1.1", "requestScheme": "http", "responseCode": 200, "responseHeaderContent-Type": "text/html;charset=utf-8", "responseHeaderSet-Cookie": null, "responseTime": 3420 }
{ "eventSource": "web-access", "hostName": "default-host", "tenant": "Intershop", "environment": "2.17.0.0-SNAPSHOT", "logHost": "iom-6c587ddd87-d7qb2", "logVersion": "1.0", "appVersion": "2.17.0.0-SNAPSHOT@1234", "appName": "iom-app", "logType": "access", "configName": "ci", "bytesSent": 472, "dateTime": "2019-12-17T14:12:41026Z", "localIp": "10.1.1.210", "localPort": 8080, "remoteHost": "10.1.0.1", "remoteUser": null, "requestHeaderReferer": null, "requestHeaderUser-Agent": "kube-probe/1.14", "requestHeaderHost": "10.1.1.210:8080", "requestHeaderCookie": null, "requestLine": "GET /monitoring/services/health/status HTTP/1.1", "requestProtocol": "HTTP/1.1", "requestScheme": "http", "responseCode": 200, "responseHeaderContent-Type": "application/json", "responseHeaderSet-Cookie": null, "responseTime": 2 }
...
The execution of an SQL file is a very simple example of a development task. To execute this task, we have to create an SQL file first. To do so, just create a file with the extension .sql and copy the following content into it:
select * from "CountryDefDO";
You have to make sure that the file can be shared with Docker Desktop. Just check the settings of Docker Desktop: go to Docker Desktop | Preferences | File Sharing and check whether the file is located in a shared directory. If not, move it to a shared directory or change the preferences (this requires a restart of Docker Desktop). For more information about sharing with Docker Desktop, refer to Configuring Docker for Windows Shared Drives / Volume Mounting with AD in the Microsoft documentation.
The following box shows an example where the file is named /home/user/test.sql. If you have jq installed, you can pipe the output through jq to get pretty-printed messages.
devenv-cli.sh apply sql-script /home/user/test.sql
{ "tenant":"Intershop", "environment":"devenv4iom", "logHost":"computername.local", "logVersion":"1.0", "appName":"devenv4iom", "appVersion":"1.0.0.0-SNAPSHOT", "logType":"script", "timestamp":"2019-12-17T14:42:23Z", "level":"INFO", "message":"apply-sql-scripts: job successfully started", "processName":"devenv-cli.sh", "additionalInfo":"job.batch/apply-sql-job created", "configName":"ci" }
{"tenant":"Intershop","environment":"2.17.0.0-SNAPSHOT","logHost":"apply-sql-job-kqcnh","logVersion":"1.0","appName":"iom-config","appVersion":"2.17.0.0-SNAPSHOT@1234","logType":"script","timestamp":"2019-12-17T14:42:24+00:00","level":"INFO","processName":"apply_sql.sh","message":"Properties","configName":"ci","additionalInfo":"--src=/tmp/sql-dir-volume/test.sql\nOMS_DB_HOST=postgres-service\nOMS_DB_PORT=5432\nOMS_DB_NAME=oms_db\nOMS_DB_USER=oms_user\nOMS_DB_PASS=oms_pw\nOMS_USER_CONNECTION_SUFFIX=\nOMS_LOGLEVEL_SCRIPTS=INFO\nTENANT=Intershop\nENVIRONMENT=2.17.0.0-SNAPSHOT"}
{"tenant":"Intershop","environment":"2.17.0.0-SNAPSHOT","logHost":"apply-sql-job-kqcnh","logVersion":"1.0","appName":"iom-config","appVersion":"2.17.0.0-SNAPSHOT@1234","logType":"script","timestamp":"2019-12-17T14:42:24+00:00","level":"INFO","processName":"apply_sql.sh","message":"processing file '/tmp/sql-dir-volume/test.sql'","configName":"ci"}
{"tenant":"Intershop","environment":"2.17.0.0-SNAPSHOT","logHost":"apply-sql-job-kqcnh","logVersion":"1.0","appName":"iom-config","appVersion":"2.17.0.0-SNAPSHOT@1234","logType":"script","timestamp":"2019-12-17T14:42:24+00:00","level":"INFO","processName":"apply_sql.sh","message":"success","configName":"ci"}
{ "tenant":"Intershop", "environment":"devenv4iom", "logHost":"computername.local", "logVersion":"1.0", "appName":"devenv4iom", "appVersion":"1.0.0.0-SNAPSHOT", "logType":"script", "timestamp":"2019-12-17T14:42:29Z", "level":"INFO", "message":"apply-sql-scripts: successfully deleted job", "processName":"devenv-cli.sh", "additionalInfo":"job.batch \"apply-sql-job\" deleted", "configName":"ci" }
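If jq is not available, individual fields can still be pulled out of the one-line JSON records with basic shell tools. The sed expression below is a rough sketch that works for simple, unescaped string fields only; the sample record is abbreviated from the output above.

```shell
# One log record as emitted by the job (abbreviated example line).
line='{"level":"INFO","message":"success","configName":"ci"}'

# Extract the value of the "message" field (no JSON escaping handled).
echo "$line" | sed -n 's/.*"message":"\([^"]*\)".*/\1/p'
```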
As you can see, the method shown above is not intended to show the results of your select statement. For such purposes, the interactive usage of psql to communicate with the PostgreSQL server is the better solution. Just use the command info postgres to get the according command line.
devenv-cli.sh info postgres
...
Usefull commands:
=================
    Login into Pod:    kubectl exec --namespace firststeps postgres -it bash
    psql into root-db: kubectl exec --namespace firststeps postgres -it -- bash -c "PGUSER=postgres PGDATABASE=postgres psql"
    psql into IOM-db:  kubectl exec --namespace firststeps postgres -it -- bash -c "PGUSER=oms_user PGDATABASE=oms_db psql"
...

# now use the command for "psql into IOM-db" and
# enter the select statement interactively
kubectl exec --namespace firststeps postgres -it -- bash -c "PGUSER=oms_user PGDATABASE=oms_db psql"
psql (11.8 (Debian 11.8-1.pgdg90+1))
Type "help" for help.

oms_db=> select * from "CountryDefDO";
 id | currency |   currencyName    | currencySymbol | isoCode2 | isoCode3 | isoNumeric |    name
----+----------+-------------------+----------------+----------+----------+------------+-------------
  1 | JMD      | Jamaica Dollar    | J$             | JM       | JAM      | 388        | Jamaica
  2 | EUR      | Euro              | €              | DE       | DEU      | 276        | Germany
  3 | EUR      | Euro              | €              | AT       | AUT      | 040        | Austria
  4 | EUR      | Euro              | €              | NL       | NLD      | 528        | Netherlands
  5 | CHF      | Schweizer Franken | SFr            | CH       | CHE      | 756        | Switzerland
...
Now it is time to clean up the environment. To do so, we have to execute the following two steps:
Unlike the cluster creation step, which included the creation of the persistent storage as well, the cluster deletion step does not affect the persistent storage. This way you could simply create a new cluster which uses the old database data. To delete the persistent storage, we have to do it explicitly by executing the according command.
devenv-cli.sh delete cluster
devenv-cli.sh delete storage
After deleting all resources belonging to our IOM developer instance, it is also safe to delete the configuration file. Do not forget to unset DEVENV4IOM_CONFIG as well.
rm config.properties
export DEVENV4IOM_CONFIG=
We have used devenv-cli.sh a lot. If you want to explore all features of this program, just call it with -h as argument.
devenv-4-iom uses a very simple concept to manage developer instances of IOM. One configuration file holds all the information required to run one instance of IOM. Along with the many configuration values required to control the behavior of IOM, there is one property that is required to mark Kubernetes and Docker resources as belonging to a certain configuration: the ID. Each configuration has to have its own unique ID. Hence, the creation of a new configuration file consists of these steps:
- Create a new configuration file filled with default values.
- Set ID in the newly created configuration file to a unique value.

# Make sure no other configuration file is currently used.
export DEVENV4IOM_CONFIG=

# Create a new configuration file that contains default values only.
devenv-cli.sh get config > config.properties

# Set ID to a unique value and adapt other values according to your needs.
vi config.properties
devenv-cli.sh is used to control your IOM developer instances. Hence, the configuration has to be passed on each call of this script. There are two different ways to link devenv-cli.sh to a certain configuration:
In case both methods are used at once, the configuration file passed on the command line has precedence.
# Provide the absolute name of the configuration file in DEVENV4IOM_CONFIG
export DEVENV4IOM_CONFIG="$(pwd)/config.properties"

# devenv-cli.sh will now use the config defined by the environment variable
devenv-cli.sh info iom

# Or set the configuration file as first parameter at the command line
devenv-cli.sh another-config.properties info iom
Changing a value in the configuration file does not automatically change the according developer instance of IOM. The only process guaranteeing that changes are applied is the complete recreation of the IOM installation.
# Change configuration file
vi config.properties

# Delete IOM
devenv-cli.sh delete iom

# Create IOM again
devenv-cli.sh create iom
If you want to reset the whole configuration, simply create a new one and set the ID within the properties file to the old value (see Create a New Configuration).
To reset only parts of the configuration, just delete the according entries from your configuration file. Then create a new configuration file, making sure the old configuration is used during this process. This way, only the missing/empty properties of the old configuration are filled with default values in the new configuration file.
# Remove entries from the configuration file that should be filled with default values.
vi config.properties

# Create a new configuration file based on the old one.
devenv-cli.sh config.properties get config > new-config.properties

# Check the reset entries and change them according to your requirements.
Running different IOM installations within devenv-4-iom is no problem as long as they are not running simultaneously. Just run delete cluster on one installation before running create cluster on another.
Different IOM installations are perfectly isolated by different namespaces on Kubernetes level. The precondition is the usage of unique IDs in the configurations (see Create a New Configuration). However, at the operating-system level of your host machine, the ports required to access the IOM installations from the outside collide.
Devenv-4-iom provides a simple mechanism to avoid port collisions. The configuration variable INDEX controls the port usage when providing services at OS level. Just make sure that every IOM configuration uses a different value for INDEX. After each change of INDEX you have to delete and create the cluster (see Change Configuration Values).
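As a purely hypothetical illustration of why distinct INDEX values avoid collisions, assume that each externally exposed port is derived from a base port plus a per-instance offset. The actual mapping is defined by devenv-4-iom and may differ from this sketch; the base port 8080 is only an assumption here.

```shell
# Hypothetical port derivation: base port plus a per-instance offset.
# The real mapping used by devenv-4-iom may differ.
base_port=8080
for index in 0 1 2; do
  echo "INDEX=$index -> web port $((base_port + index))"
done
```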
After updating devenv-4-iom, the content of the current configuration file has to be updated too. The new version of devenv-4-iom might bring a new template for configuration files, which may contain new properties or improved comments. You have to create a new configuration file based on this template, which is filled with your current configuration. To do so, just create a new configuration file, but make sure your current configuration is used during this process (see Reset Configuration Partially).
# Create a new configuration file based on the old one
devenv-cli.sh config.properties get config > migrated-config.properties

# Check the migrated configuration file for new properties and change them according to your requirements.
vi migrated-config.properties
Before deleting a configuration file, you must ensure that all associated Kubernetes and Docker resources are deleted as well. You will not be able to delete them using devenv-cli.sh afterwards. Executing 'delete cluster' and 'delete storage' will remove all resources assigned to a configuration. Additionally, it is recommended to delete unused Docker images as well.
# Delete IOM cluster
devenv-cli.sh delete cluster

# Delete storage
devenv-cli.sh delete storage

# Now the configuration file can be deleted
rm config.properties

# Do not forget to unset the environment variable pointing to the configuration file
export DEVENV4IOM_CONFIG=

# Clean up unused Docker images (cleans up all unused images, not only the ones related to the current configuration)
docker system prune -a -f
If you have accidentally removed a configuration file before deleting the according Kubernetes and Docker resources, you have to cleanup these resources manually. Section Manual Cleanup describes this process in detail.
This functionality is available since version 1.1.0.0 of devenv-4-iom.
Private Docker registries require authentication and sufficient rights to pull images from them. The according authentication data can be passed in a Kubernetes secret object. The configuration of devenv-4-iom provides the variable IMAGE_PULL_SECRET, which has to hold the name of the Kubernetes secret object if authentication is required.
Devenv-4-iom does not manage the Kubernetes secret in any way. The user is fully responsible for creating, updating, and deleting the Kubernetes secret object. Kubernetes secret objects to be used by devenv-4-iom always need to be created within the default namespace. During the creation of IOM, the secret is copied from the default namespace to the namespace used by IOM.
The document Pull an Image from a Private Registry from Kubernetes documentation explains how to create Kubernetes secret objects in general, suitable to authenticate at a private Docker registry. Pull images from an Azure container registry to a Kubernetes cluster from Microsoft Azure documentation explains how to apply this concept to private Azure Container Registries.
The following box shows an example of how to create a Kubernetes secret within the default namespace to access the private Docker registry docker.intershop.de. The name of the newly created secret is intershop-pull-secret, which has to be set as the value of the variable IMAGE_PULL_SECRET.
kubectl create secret docker-registry intershop-pull-secret \
    --docker-server=docker.intershop.de \
    --docker-username='<user name>' \
    --docker-password='<password>'
Once the secret is created and the variable IMAGE_PULL_SECRET is set, devenv-4-iom can authenticate at the Docker registry docker.intershop.de.
When accessing a private Azure Container Registry (ACR), the same mechanism can be used. In this case, the service principal ID has to be set as docker-username and the service principal password as docker-password.
The creation of a whole IOM cluster consists of several steps. These are:

- Create the namespace.
- Create the storage (skipped if KEEP_DATABASE_DATA is set to false).
- Create the mail server.
- Create the Postgres database (skipped if an external database is used, i.e. if PGHOST is set).
- Create the IOM application server.

The command line client provides all these commands separately, but it also provides the shortcut create cluster, which performs all these steps at once.
Depending on the Docker registry you are using, it might be required to set IMAGE_PULL_SECRET first.
# Now create the cluster
devenv-cli.sh create cluster
Removing the whole IOM development environment consists of several steps. These are:

- Delete the IOM application server.
- Delete the Postgres database (skipped if an external database is used, i.e. if PGHOST is set).
- Delete the mail server.
- Delete the namespace.

All these steps are provided as single commands by devenv-4-iom's command line client. The command line client also provides the shortcut delete cluster, which performs all these operations at once.
Please note that persistent storage will never be deleted by the delete cluster command.
devenv-cli.sh delete cluster
A namespace is required to isolate devenv-4-iom from other resources and from resources of other configurations. The following command creates a namespace based on the ID you have specified in your properties.
devenv-cli.sh create namespace
The following command deletes the namespace and all resources assigned to this namespace.
devenv-cli.sh delete namespace
The following command creates a mail server which is used to receive mails from IOM.
devenv-cli.sh create mailserver
The following command deletes the mail server.
devenv-cli.sh delete mailserver
The following command creates a local Docker volume to be used to keep database data. This command is only effective if KEEP_DATABASE_DATA is set to true.
devenv-cli.sh create storage
Note

The following command deletes the local Docker volume holding the database data. This command is only effective if KEEP_DATABASE_DATA is set to true.

devenv-cli.sh delete storage
The following command creates the Postgres database. This command is only effective if an internal database server is used (i.e., PGHOST is not set).
devenv-cli.sh create postgres
The following command deletes the Postgres database. This command is only effective if an internal database was created before (i.e., PGHOST is not set).
devenv-cli.sh delete postgres
The following command creates the IOM application server.
Depending on the Docker registry you are using, it might be required to set IMAGE_PULL_SECRET first.
# now create IOM
devenv-cli.sh create iom
The following command deletes the IOM application server.
devenv-cli.sh delete iom
Each component (IOM, Postgres, mail server, storage) has a lot of information to provide, e.g.:
The command line client of devenv-4-iom provides a very simple interface to get this information:
# Get information about IOM
devenv-cli.sh info iom

# Get information about mail server
devenv-cli.sh info mailserver

# Get information about PostgreSQL
devenv-cli.sh info postgres

# Get information about storage
devenv-cli.sh info storage
It is recommended to start with a very simple version of the new custom built artifact. At the very beginning, it only has to meet one requirement: it has to be deployable.
Before it can be deployed, the file deployment.cluster.properties within the running application container has to be adapted. To do so, you have to process the following steps:

- Set CUSTOM_APPS_DIR in your configuration file to the directory holding your custom deployment artifact. Make sure that the directory is shared in Docker Desktop.
- After changing CUSTOM_APPS_DIR, the IOM application server must be restarted.

When you have made these changes, you have to log in to the app container, add the new artifact to deployment.cluster.properties and finally try to redeploy all artifacts.
# Determine the command line, how to login into the running IOM pod.
devenv-cli.sh info iom
...
Login into Pod: kubectl exec --namespace customerprojectiom3000 iom-7b99d8c9df-trctc -it bash
...

# Login into IOM pod
kubectl exec --namespace customerprojectiom3000 iom-7b99d8c9df-trctc -it bash

# Add the custom artifact to deployment.cluster.properties within the running IOM pod
vi /opt/oms/etc/deployment.cluster.properties
exit

# Deploy the custom deployment artifact into the running IOM pod
devenv-cli.sh apply deployment
If this worked, both changes have to be made to the customization artifact of your project:
Now a new version of the project images must be built using caas2docker. Configure the IOM development environment to use the new images and restart the IOM pod. From now on, it is possible to redeploy the custom built artifact as described in sections Deployment of Custom Built Artifacts Using the Wildfly Admin Console and Deployment of Custom Built Artifacts Using CLI.
Using the Wildfly Admin Console is the easiest way to add or update deployments. The deployment process is simply triggered by drag & drop.
In contrast to the approach described in Deployment of Custom Built Artifacts Using CLI, deployments added or updated this way will not survive a restart of the IOM pod.
The Wildfly Admin Console has to be opened in a web browser. The according URL can be found in the output of the info iom command.
# Get information about IOM
devenv-cli.sh info iom
...
--------------------------------------------------------------------------------
Links:
======
    OMT:                   http://computername.local:8080/omt/
    Online help:           http://computername.local:8080/omt-help/
    DBDoc:                 http://computername.local:8080/dbdoc/
    Wildfly (admin:admin): http://computername.local:9990/console/
--------------------------------------------------------------------------------
...

# Copy the 'Wildfly' link to your web browser.
To deploy custom built artifacts using the command line interface, you have to:

- Set CUSTOM_APPS_DIR in your configuration file and make sure that the directory is shared in Docker Desktop.
- After changing CUSTOM_APPS_DIR, restart the IOM application server.

Once you have configured your developer VM this way, your custom built artifacts are deployed right at the start of IOM. To update/add deployments in a running developer VM, you have the following options:
# Redeploy OMT selectively by adding one more parameter:
# a regular expression to select the artifact to be redeployed
devenv-cli.sh apply deployment omt

# Redeploy all
devenv-cli.sh apply deployment
Of course, you can combine both methods of deploying custom built artifacts to get the best of both. If you set CUSTOM_APPS_DIR and make sure that the according directory contains your custom built artifacts, your developer VM will always use these artifacts, even right after IOM starts. Additionally, you can use the Wildfly Admin Console to update/add deployments during runtime.
To roll out custom mail templates in a running developer VM, you have to:

- Set CUSTOM_TEMPLATES_DIR in your configuration file and make sure that the directory is shared in Docker Desktop.
- After changing CUSTOM_TEMPLATES_DIR, restart the IOM application server.

Once you have configured your developer VM this way, you can apply custom mail templates by using the following command:
devenv-cli.sh apply mail-templates
If CUSTOM_TEMPLATES_DIR is configured, the templates are also copied when starting IOM.
To roll out custom XSL templates in a running developer VM, you have to:

- Set CUSTOM_XSLT_DIR in your configuration file and make sure that the directory is shared in Docker Desktop.
- After changing CUSTOM_XSLT_DIR, restart the IOM application server.

Once you have configured your developer VM this way, you can apply custom XSL templates by using the following command:
devenv-cli.sh apply xsl-templates
If CUSTOM_XSLT_DIR is configured, the templates are also copied when starting IOM.
The Docker image defined by IOM_CONFIG_IMAGE contains all the necessary tools to apply SQL scripts to the IOM database. Devenv-4-iom enables you to use these tools as easily as possible. For this purpose, it provides a Kubernetes job (apply-sql-job) that applies SQL file(s) to the IOM database. Creation and deletion of the job and access to its logs are provided by the command apply sql-scripts of the command line interface.
There are two different modes that can be used:
The information about the SQL file or directory is passed as third parameter to the command line client. The box below shows an example that executes all SQL scripts found in oms.tests/tc_stored_procedures (of course, the directory has to exist in your current working directory).
The logs are printed in JSON format. Verbosity can be controlled by the configuration variable OMS_LOGLEVEL_SCRIPTS.
# Adapt the third parameter according to your needs
devenv-cli.sh apply sql-scripts oms.tests/tc_stored_procedures
To develop and test a single SQL script or a couple of SQL scripts (which can be migration scripts too), the developer task Apply SQL Scripts is the first choice. However, at some point during development, the DBMigrate process as a whole has to be tested as well. The DBMigrate process is somewhat more complex than simply applying SQL scripts from a directory. It first loads stored procedures from the stored_procedures directory and then applies the migration scripts found in the migrations directory. The order of execution is controlled by the names of sub-directories within migrations and by the names of the migration scripts themselves (numerically sorted, smallest first).
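The numeric ordering described above can be reproduced with plain shell tools. The file names below are made up for the sake of the example; the point is that a numeric sort puts 2 before 10, unlike a plain lexicographic sort:

```shell
# Migration script names are applied numerically, smallest first.
printf '%s\n' 10_add_index.sql 2_add_column.sql 1_create_table.sql | sort -n
```

A plain `sort` (without `-n`) would instead place 10_add_index.sql before 2_add_column.sql, which is why the numeric prefix convention matters.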
The IOM_CONFIG_IMAGE contains a shell script that applies the migration scripts supplied with the Docker image. The developer task Apply DBMigrate Scripts enables you to use this DBMigrate script together with the migration scripts located at CUSTOM_DBMIGRATE_DIR. Hence, if you want to roll out custom DBMigrate scripts, you have to:
- Set CUSTOM_DBMIGRATE_DIR in your configuration file and make sure that the directory is shared in Docker Desktop.
You can and should have an eye on the logs created by the migration process. These logs are printed in JSON format. Verbosity can be controlled by the configuration variable OMS_LOGLEVEL_SCRIPTS.
devenv-cli.sh apply dbmigrate
If CUSTOM_DBMIGRATE_DIR
is configured, the custom DBMigrate scripts are also applied when starting IOM.
Scripts for SQL configuration are simple SQL scripts that can easily be developed and tested with the help of the developer task Apply SQL Scripts. However, SQL configuration in a CaaS project context is more complex, e.g. the scripts are executed depending on the currently activated environment. To be able to test SQL configuration scripts in exactly the same context as in a real IOM installation, the developer task Apply SQL Configuration Scripts is provided.
To be able to roll out complete SQL configurations, you have to:
- Set CUSTOM_SQLCONF_DIR in your configuration file and make sure that the directory is shared in Docker Desktop.
- Set CAAS_ENV_NAME in your configuration file to the environment you want to test.
You should have an eye on the logs created by the configuration process. These logs are printed in JSON format. Verbosity can be controlled by the configuration variable OMS_LOGLEVEL_SCRIPTS.
devenv-cli.sh apply sql-config
If CUSTOM_SQLCONF_DIR is configured, the custom SQL configuration is also applied when starting IOM.
JSON configuration of IOM is not publicly available. There is no task to support the development of single JSON configuration scripts. Additionally, the current implementation of JSON configuration does not use the concept of environments (configuration variable CAAS_ENV_NAME). The current developer task Apply JSON Configuration Scripts enables you to apply complete JSON configurations in exactly the same context as in a real IOM installation.
To roll out JSON configurations, you have to:
- Set CUSTOM_JSONCONF_DIR in your configuration file and make sure that the directory is shared in Docker Desktop.
You should have an eye on the logs created by the configuration process. These logs are printed in JSON format. Verbosity can be controlled by the configuration variable OMS_LOGLEVEL_SCRIPTS.
devenv-cli.sh apply json-config
If CUSTOM_JSONCONF_DIR is configured, the custom JSON configuration is also applied when starting IOM.
Project-specific properties and CLI scripts are applied when building the project image using caas2docker. If you use this image within devenv-4-iom, the changed settings are already applied when starting IOM.
Before creating a new project image, the properties and CLI scripts have to be tested within a running IOM. The following box shows how to execute a CLI script in devenv-4-iom:
# Determine how to access jboss-cli.sh in the running IOM pod
devenv-cli.sh info iom
...
jboss-cli: kubectl exec --namespace customerprojectiom3000 iom-7b99d8c9df-trctc -it -- /opt/jboss/wildfly/bin/jboss-cli.sh -c
...

# Execute jboss-cli.sh in the running IOM pod
kubectl exec --namespace customerprojectiom3000 iom-7b99d8c9df-trctc -it -- /opt/jboss/wildfly/bin/jboss-cli.sh -c

# Test your CLI commands
[standalone@localhost:9990 /] ls -l /deployment
bakery.base-app-3.0.0.0.ear
bakery.communication-app-3.0.0.0.ear
bakery.control-app-3.0.0.0.war
bakery.impex-app-3.0.0.0.war
bakery.omt-app-3.0.0.0.war
gdpr-app-3.0.0.0.war
oms.monitoring-app-3.0.0.0.war
oms.rest.communication-app-3.0.0.0.war
order-state-app-3.0.0.0.war
postgresql-jdbc4
process-app-3.0.0.0.ear
rma-app-3.0.0.0.war
schedule-app-3.0.0.0.war
transmission-app-3.0.0.0.war
When IOM is started and the connected database is empty, the config container loads the initial dump. Devenv-4-iom allows you to load a custom dump during this process. This custom dump is treated exactly like any other dump that is part of the Docker image. If you want to load a custom dump, you have to:
- Set CUSTOM_DUMPS_DIR in your configuration file and make sure that the directory is shared in Docker Desktop. The dump you want to load has to be located within this directory. To be recognized as a dump, it has to have the extension .sql.gz. If the directory contains more than one dump file, the script of the configuration container selects the one with the numerically largest name. You can check this with the following command: ls *.sql.gz | sort -nr | head -n 1
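The selection rule can be verified with a quick sketch (the dump file names are made up for the demonstration):

```shell
# Create empty files that mimic several dumps (made-up names); the script
# of the configuration container picks the numerically largest name.
mkdir -p /tmp/dumps-demo
cd /tmp/dumps-demo
touch 1.sql.gz 2.sql.gz 10.sql.gz
ls *.sql.gz | sort -nr | head -n 1
# -> 10.sql.gz
```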
The dump load command of the command line client executes all the necessary steps to restart IOM with an empty database, but only if no external database is used (only if PGHOST is not set):
- IOM is deleted and created again.
- Postgres is deleted and created again, also if KEEP_DATABASE_DATA is set to true.
- The persistently stored database data is deleted, if KEEP_DATABASE_DATA is set to true.
If an external database is used (PGHOST is set), the steps listed above will not have any effect. You must take care of purging the external database and recreating the IOM installation yourself.
You should inspect the logs created when running the config container to know if the dump was actually loaded. The logs of the configuration process are printed in JSON format. Verbosity can be controlled by the configuration variable OMS_LOGLEVEL_SCRIPTS.
devenv-cli.sh dump load
Devenv-4-iom provides a job to create a dump of the IOM database. This job uses the variable CUSTOM_DUMPS_DIR too. It writes the dumps to this directory. The created dumps use the following naming pattern: OmsDump.<year-month-day>.<hour.minute.second>-<hostname>.sql.gz. To create dumps, you have to:
- Set CUSTOM_DUMPS_DIR in your configuration file and make sure that the directory is shared in Docker Desktop.
You should check the output of the dump job. The logs of the job are printed in JSON format. Verbosity can be controlled by the configuration variable OMS_LOGLEVEL_SCRIPTS.
devenv-cli.sh dump create
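The naming pattern of created dumps can be reproduced with standard shell tools. The following sketch is for illustration only; the hostname is fixed to a made-up value here, and the dump job composes the actual name itself:

```shell
# Build a file name following the documented pattern:
# OmsDump.<year-month-day>.<hour.minute.second>-<hostname>.sql.gz
host="iom-demo-host"   # made-up value; normally the pod's hostname
name="OmsDump.$(date +%Y-%m-%d).$(date +%H.%M.%S)-${host}.sql.gz"
echo "$name"
```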
If CUSTOM_DUMPS_DIR is configured, the latest custom dump is loaded when IOM is started with an empty database (according to the load rules).
Do not set CUSTOM_DUMPS_DIR to a directory that does not contain any dumps when starting IOM with an uninitialized database. In this case the initialization of the database would fail, since no dump to be loaded can be found. Hence, set CUSTOM_DUMPS_DIR right before creating the dump, not before starting IOM.
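To guard against this pitfall, a pre-start check could verify that the directory actually holds a dump. The following helper is purely hypothetical; it is not part of devenv-4-iom:

```shell
# Hypothetical helper: fail early if the given dumps directory
# contains no *.sql.gz file.
check_dumps_dir() {
  if ls "$1"/*.sql.gz >/dev/null 2>&1; then
    echo "ok: dump found in $1"
  else
    echo "error: no *.sql.gz dump in $1" >&2
    return 1
  fi
}

# Demo with made-up directories
mkdir -p /tmp/empty-dumps /tmp/filled-dumps
touch /tmp/filled-dumps/1.sql.gz
check_dumps_dir /tmp/filled-dumps
check_dumps_dir /tmp/empty-dumps || echo "do not start IOM yet"
```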
When developing e-mail templates, testing whether e-mails are successfully sent by business processes, and in other use cases, it is necessary to access the e-mails. Links to the mail server UI and its REST interface are provided by the info mailserver command of the command line interface.
devenv-cli.sh info mailserver
PDF documents are stored within the shared file system of IOM. To get easy access to the content of the shared file system, you have to:
- Set CUSTOM_SHARE_DIR in your configuration file and make sure that the directory is shared in Docker Desktop.
- After changing CUSTOM_SHARE_DIR, restart the IOM application server.
After that, you will have direct access to IOM's shared file system through the directory you have set for CUSTOM_SHARE_DIR.
The processes described in this section are specific to IOM product development. Nevertheless, the concept can be adapted in the context of projects as well. The tasks of devenv-4-iom in the context of testing are very simple.
The tests and the test framework (in the case of IOM, this is Geb/Spock) are part of the IOM product sources. In the context of projects, this has to be handled the same way: tests and the corresponding framework have to be defined by the project. The tests can then use the property files provided by devenv-4-iom to access the project-specific IOM installation, which runs locally in a development environment.
To apply stored procedures, simply use the apply sql-scripts command and set the parameter to the directory containing the stored procedures required for testing.
# oms.tests has to exist in the current working directory
devenv-cli.sh apply sql-scripts oms.tests/tc_stored_procedures
To run a single test, use the feature name or a substring of it. E.g.:
# Make sure that geb.properties reflects the latest version of configuration
devenv-cli.sh get geb-props > geb.properties

# Go to the oms.tests directory in your oms source directory
# PATH_TO_IOM_SOURCES and PATH_TO_GEB_PROPERTIES have to be replaced by real values.
cd ${PATH_TO_IOM_SOURCES}/oms.tests

# Run a single Geb test
./gradlew gebTest -Pgeb.propFile=${PATH_TO_GEB_PROPERTIES}/geb.properties --tests="IOM: Role Assignment Management: admin_Oms_1 lists users for role-assignment"

# Run a group of Geb tests
./gradlew gebTest -Pgeb.propFile=${PATH_TO_GEB_PROPERTIES}/geb.properties --tests="*admin_Oms_1 lists users for role-assignment*"
To run a single test, use the feature name or a substring of it. E.g.:
# Make sure that ws.properties reflects the latest version of configuration
devenv-cli.sh get ws-props > ws.properties

# Go to the oms.tests directory in your oms source directory
# PATH_TO_IOM_SOURCES and PATH_TO_WS_PROPERTIES have to be replaced by real values.
cd ${PATH_TO_IOM_SOURCES}/oms.tests

# Run a single ws test
./gradlew wsTest -Pws.propFile=${PATH_TO_WS_PROPERTIES}/ws.properties --tests="IOM-7421-1: OrderService v1.2: Create an order with one position and billing address == shipping address"

# Run a group of ws tests
./gradlew wsTest -Pws.propFile=${PATH_TO_WS_PROPERTIES}/ws.properties --tests="*OrderService v1.2: Create an order with one position and billing address*"
To run all tests of a specification, use the name of the specification. E.g.:
# Go to the oms.tests directory in your oms source directory
# PATH_TO_IOM_SOURCES, PATH_TO_GEB_PROPERTIES and PATH_TO_WS_PROPERTIES have to be replaced by real values.
cd ${PATH_TO_IOM_SOURCES}/oms.tests

# Run all tests of a Geb test specification
./gradlew gebTest -Pgeb.propFile=${PATH_TO_GEB_PROPERTIES}/geb.properties --tests="*RoleAssignmentManagementListUsersSpec*"

# Run all tests of a ws test specification
./gradlew wsTest -Pws.propFile=${PATH_TO_WS_PROPERTIES}/ws.properties --tests="*ReverseServiceSpec*"
To run all tests of a group of specifications, use the name of the package. E.g.:
# Go to the oms.tests directory in your oms source directory
# PATH_TO_IOM_SOURCES and PATH_TO_GEB_PROPERTIES have to be replaced by real values.
cd ${PATH_TO_IOM_SOURCES}/oms.tests

# Run all tests of a specification group
./gradlew gebTest -Pgeb.propFile=${PATH_TO_GEB_PROPERTIES}/geb.properties --tests="*com.intershop.oms.tests.roleassignment*"
To run all soap tests, use the following method:
# Make sure that soap.properties reflects the latest version of configuration
devenv-cli.sh get soap-props > soap.properties

# Go to the oms.soap.tests directory in your oms source directory
# PATH_TO_IOM_SOURCES and PATH_TO_SOAP_PROPERTIES have to be replaced by real values.
cd ${PATH_TO_IOM_SOURCES}/oms.soap.tests

# Run all soap tests
mvn -Dhost=$(cat "${PATH_TO_SOAP_PROPERTIES}/soap.properties") clean test
jq is a command line tool for working with JSON messages. Since all messages created by devenv-4-iom and IOM are JSON messages, it is a very useful tool. jq is not included in devenv-4-iom, and devenv-4-iom does not depend on it (except for the log * commands), but it is strongly recommended that you install jq as well.
The most important features used in the context of devenv-4-iom are formatting and filtering. The following box shows some examples of these use cases. These examples are not intended to be used as they are; they are only meant to give you an impression of jq and to encourage you to look into the subject yourself.
# Print raw JSON messages
cmd_producing_json
...
{"tenant":"Intershop","environment":"2.17.0.0-SNAPSHOT","logHost":"iom-6c587ddd87-42k4f","logVersion":"1.0","appName":"iom-config","appVersion":"2.17.0.0-SNAPSHOT@1234","logType":"script","timestamp":"2019-12-13T13:35:45+00:00","level":"INFO","processName":"apply_json_config.sh","message":"processing file '/opt/caas-config/json-config/config/P_shopTX/G_Invoicing_and_Documents/060_InvoicingNoConfigDO/InvoicingNoConfigDO_test_shop_TX.iombc'","configName":"ci"}
{"tenant":"Intershop","environment":"2.17.0.0-SNAPSHOT","logHost":"iom-6c587ddd87-42k4f","logVersion":"1.0","appName":"iom-config","appVersion":"2.17.0.0-SNAPSHOT@1234","logType":"script","timestamp":"2019-12-13T13:35:45+00:00","level":"INFO","processName":"apply_json_config.sh","message":"processing file '/opt/caas-config/json-config/config/P_shopTX/G_Invoicing_and_Documents/120_DocumentTransformerConfigDO/DocumentTransformerConfigDO_Shop_test_shop_TX.iombc'","configName":"ci"}
{"tenant":"Intershop","environment":"2.17.0.0-SNAPSHOT","logHost":"iom-6c587ddd87-42k4f","logVersion":"1.0","appName":"iom-config","appVersion":"2.17.0.0-SNAPSHOT@1234","logType":"script","timestamp":"2019-12-13T13:35:45+00:00","level":"INFO","processName":"apply_json_config.sh","message":"processing file '/opt/caas-config/json-config/config/P_shopTX/H_Shop2PaymentProvider2Payment/010_Shop2PaymentProvider2PaymentDefDO/Shop2PaymentProvider2PaymentDefDO_test_shop_TX.iombc'","configName":"ci"}
...

# Print formatted JSON messages
cmd_producing_json | jq
...
{
  "tenant": "Intershop",
  "environment": "2.17.0.0-SNAPSHOT",
  "logHost": "iom-6c587ddd87-42k4f",
  "logVersion": "1.0",
  "appName": "iom-config",
  "appVersion": "2.17.0.0-SNAPSHOT@1234",
  "logType": "script",
  "timestamp": "2019-12-13T13:35:45+00:00",
  "level": "INFO",
  "processName": "apply_json_config.sh",
  "message": "processing file '/opt/caas-config/json-config/config/P_shopTX/G_Invoicing_and_Documents/120_DocumentTransformerConfigDO/DocumentTransformerConfigDO_Shop_test_shop_TX.iombc'",
  "configName": "ci"
}
{
  "tenant": "Intershop",
  "environment": "2.17.0.0-SNAPSHOT",
  "logHost": "iom-6c587ddd87-42k4f",
  "logVersion": "1.0",
  "appName": "iom-config",
  "appVersion": "2.17.0.0-SNAPSHOT@1234",
  "logType": "script",
  "timestamp": "2019-12-13T13:35:45+00:00",
  "level": "INFO",
  "processName": "apply_json_config.sh",
  "message": "processing file '/opt/caas-config/json-config/config/P_shopTX/H_Shop2PaymentProvider2Payment/010_Shop2PaymentProvider2PaymentDefDO/Shop2PaymentProvider2PaymentDefDO_test_shop_TX.iombc'",
  "configName": "ci"
}
...

# Get entries, where key "level" has value "ERROR"
cmd_producing_json | jq 'select(.level == "ERROR")'
...
{
  "timestamp": "2019-12-13T11:55:23.608Z",
  "sequence": 1349200,
  "loggerClassName": "org.jboss.logging.DelegatingBasicLogger",
  "loggerName": "org.hibernate.engine.jdbc.spi.SqlExceptionHelper",
  "level": "ERROR",
  "message": "javax.resource.ResourceException: IJ000453: Unable to get managed connection for java:/OmsDB",
  "threadName": "EJB default - 61",
  "threadId": 1113,
  "mdc": {},
  "ndc": "",
  "hostName": "iom-6c587ddd87-zlznn",
  "processName": "jboss-modules.jar",
  "processId": 288,
  "sourceClassName": "org.hibernate.engine.jdbc.spi.SqlExceptionHelper",
  "sourceFileName": "SqlExceptionHelper.java",
  "sourceMethodName": "logExceptions",
  "sourceLineNumber": 142,
  "sourceModuleName": "org.hibernate",
  "sourceModuleVersion": "5.3.10.Final",
  "tenant": "Intershop",
  "environment": "2.17.0.0-SNAPSHOT",
  "logHost": "iom-6c587ddd87-zlznn",
  "logVersion": "1.0",
  "appVersion": "2.17.0.0-SNAPSHOT@1234",
  "appName": "iom-app",
  "logType": "message",
  "configName": "ci"
}
...

# Get entries, where key "level" has value "ERROR" and "sourceModuleName" has value "deployment.oms.monitoring-app-2.17.0.0-SNAPSHOT.war"
cmd_producing_json | jq 'select((.level == "ERROR") and (.sourceModuleName == "deployment.oms.monitoring-app-2.17.0.0-SNAPSHOT.war"))'
...
{
  "timestamp": "2019-12-13T11:50:59.698Z",
  "sequence": 1338700,
  "loggerClassName": "org.slf4j.impl.Slf4jLogger",
  "loggerName": "com.intershop.oms.monitoring.internal.rest.HealthServiceTimer",
  "level": "ERROR",
  "message": "Server not available because of missing database connection.",
  "threadName": "EJB default - 92",
  "threadId": 1222,
  "mdc": {},
  "ndc": "",
  "hostName": "iom-6c587ddd87-zlznn",
  "processName": "jboss-modules.jar",
  "processId": 288,
  "sourceClassName": "com.intershop.oms.monitoring.internal.rest.HealthServiceTimer",
  "sourceFileName": "HealthServiceTimer.java",
  "sourceMethodName": "doHealthCheck",
  "sourceLineNumber": 305,
  "sourceModuleName": "deployment.oms.monitoring-app-2.17.0.0-SNAPSHOT.war",
  "sourceModuleVersion": null,
  "tenant": "Intershop",
  "environment": "2.17.0.0-SNAPSHOT",
  "logHost": "iom-6c587ddd87-zlznn",
  "logVersion": "1.0",
  "appVersion": "2.17.0.0-SNAPSHOT@1234",
  "appName": "iom-app",
  "logType": "message",
  "configName": "ci"
}
...

# Get only values "timestamp" and "message" of entries, where key "level" has value "ERROR"
cmd_producing_json | jq 'select(.level == "ERROR") | .timestamp, .message'
...
"2019-12-13T11:49:27.58Z"
"WFLYEJB0034: EJB Invocation failed on component MonitoringPersistenceBean for method public abstract bakery.persistence.dataobject.monitoring.HealthCheckStatusDO bakery.persistence.service.monitoring.MonitoringPersistenceService.getServerStatus(java.lang.String)"
"2019-12-13T11:49:27.624Z"
"WFLYEJB0034: EJB Invocation failed on component MonitoringLogicBean for method public abstract void com.intershop.oms.monitoring.capi.logic.MonitoringLogicService.setServerStatus(com.intershop.oms.monitoring.capi.rest.HealthCheckStatus)"
"2019-12-13T11:49:27.625Z"
" (systemPU) exception found for object 'class bakery.persistence.dataobject.monitoring.HealthCheckStatusDO'"
"2019-12-13T11:49:27.625Z"
"Server not available because of missing database connection."
"2019-12-13T11:49:27.631Z"
"WFLYEJB0022: Error during retrying timeout for timer: [id=9cd75873-66b1-4fdd-8155-fb859d7dc73e timedObjectId=oms.monitoring-app-2.17.0.0-SNAPSHOT.oms.monitoring-app-2.17.0.0-SNAPSHOT.HealthServiceTimer auto-timer?:false persistent?:false timerService=org.jboss.as.ejb3.timerservice.TimerServiceImpl@d537ad6 initialExpiration=Fri Dec 13 11:12:17 UTC 2019 intervalDuration(in milli sec)=5000 nextExpiration=Fri Dec 13 11:49:32 UTC 2019 timerState=RETRY_TIMEOUT info= startAT=Fri Dec 13 11:12:17 UTC 2019, runInterval=5000, cacheTime=11000]"
"2019-12-13T11:49:30.006Z"
"Error"
...

# Get only values "timestamp", "message" and "sourceFileName" of entries, where key "level" has value "ERROR", in a new JSON structure
cmd_producing_json | jq 'select(.level == "ERROR") | {timestamp: .timestamp, message: .message, sourceFileName: .sourceFileName}'
...
{
  "timestamp": "2019-12-13T11:50:11.467Z",
  "message": "WFLYEJB0034: EJB Invocation failed on component CancelOrderControllerBean for method public abstract void bakery.control.controller.ControllerJob.execute()",
  "sourceFileName": "LoggingInterceptor.java"
}
{
  "timestamp": "2019-12-13T11:50:11.472Z",
  "message": "javax.resource.ResourceException: IJ000453: Unable to get managed connection for java:/OmsDB",
  "sourceFileName": "SqlExceptionHelper.java"
}
{
  "timestamp": "2019-12-13T11:50:11.478Z",
  "message": "WFLYEJB0034: EJB Invocation failed on component CheckBonusPointsControllerBean for method public abstract void bakery.control.controller.ControllerJob.execute()",
  "sourceFileName": "LoggingInterceptor.java"
}
{
  "timestamp": "2019-12-13T11:50:11.486Z",
  "message": "WFLYEJB0034: EJB Invocation failed on component ProcessControlConfigBean for method public abstract java.util.Collection bakery.persistence.service.configuration.process.ProcessControlConfigService.loadModifiedConfigs()",
  "sourceFileName": "LoggingInterceptor.java"
}
...
Logging of devenv-cli.sh is controlled by the configuration variable OMS_LOGLEVEL_DEVENV. Since every execution of devenv-cli.sh reads the configuration file, changes to this variable become effective immediately.
As mentioned above, all log messages of devenv-cli.sh are written in JSON format. Hence, it is a good idea to pipe the output of devenv-cli.sh through jq for better readability of messages.
Unfortunately, it is not as simple as it seems at first glance. There are three reasons that make things more complicated:
- The output of the info * and get * commands and all help messages is written as plain text. The intended output of these commands is written to stdout. Any additional log messages are written to stderr. In fact, all log messages of devenv-cli.sh are written to stderr.
- Not all commands (e.g. the apply * commands) behave like that. E.g., when applying a SQL configuration using apply sql-config, there are log messages of devenv-cli.sh that report the progress of the process (create kubernetes-job, delete kubernetes-job, etc.). In addition, there are other log messages coming directly from IOM, which provide information about applying the SQL configuration. In this case, only the log messages of devenv-cli.sh are controlled by the variable OMS_LOGLEVEL_DEVENV; the messages of IOM are controlled by the other OMS_LOGLEVEL_* variables, see the section below. Beyond that, you should know that the log messages of IOM are written to stdout, unlike the log messages of devenv-cli.sh, which are written to stderr.
- devenv-cli.sh provides the log * commands, which facilitate access to the messages created by the IOM containers. For more information, see the section below.
Hence, the following hints should be taken into account when using devenv-cli.sh along with jq:
- For commands with intended plain-text output (info, get), only stderr must be redirected to jq.
- For other commands (apply, dump), stdout and stderr must be piped into jq.

# Example of jq usage with a devenv-cli.sh command whose intended output is printed to stdout
# The intended output is written to the file config.properties
# The log messages are piped to jq
# This creates a pretty-printed version of the log messages
devenv-cli.sh get config 2>&1 >config.properties | jq

# Example of jq usage with mixed log messages from devenv-cli.sh and IOM
# Log messages of devenv-cli.sh are written to stderr
# Log messages of IOM are written to stdout
# Both are piped to jq
devenv-cli.sh apply sql-scripts test.sql 2>&1 | jq
In addition to the IOM application container, two init containers also belong to IOM. All these containers write messages in JSON format to stdout. The log levels of these messages are controlled by the following variables:
OMS_LOGLEVEL_CONSOLE
OMS_LOGLEVEL_IOM
OMS_LOGLEVEL_HIBERNATE
OMS_LOGLEVEL_QUARTZ
OMS_LOGLEVEL_ACTIVEMQ
OMS_LOGLEVEL_CUSTOMIZATION
OMS_LOGLEVEL_SCRIPTS
The values of these log-levels cannot be changed at runtime. For a change to take effect, IOM must be deleted and created again.
Besides these application-level messages, access logs are written to stdout in JSON format too. Hence, the output of the IOM containers is a mixture of different logs.
You can use kubectl to access these messages. In general, these messages can be provided in two different ways:
- Get all messages written so far.
- Follow new messages as they are written.
The corresponding kubectl command lines are provided by the info iom command.
Hence, if you use kubectl to get the log messages of IOM, you will get everything mixed in one stream (messages and access log), exactly as defined by the current logging configuration. E.g., if a log level is currently set to INFO, but you are interested in FATAL, ERROR and WARN messages only, you have to write a corresponding jq command line on your own to receive only the requested messages (see section jq).
The following box shows some examples on how to access log messages of IOM containers and how to filter and format them with the help of jq.
# Get all FATAL, ERROR and WARN messages produced by the IOM application container (do not follow new messages)
# The kubectl command line was taken from the output of the 'info iom' command
# The output is filtered by jq:
# - ignore any lines that are not valid JSON structures
# - print only JSON messages whose 'level' element has the value 'FATAL', 'ERROR' or 'WARN'
kubectl logs iom-6c587ddd87-42k4f --namespace 21700snapshot -c iom | jq -R 'fromjson? | select(type == "object")' | jq 'select((.level == "FATAL") or (.level == "ERROR") or (.level == "WARN"))'

# Follow new access log entries having a status code that indicates an error
# The kubectl command line was taken from the output of the 'info iom' command
# The output is filtered by jq:
# - ignore any lines that are not valid JSON structures
# - show only JSON messages whose 'logType' element has the value 'access'
# - show only JSON messages whose 'responseCode' element has a value greater than or equal to 400
kubectl logs --tail=1 -f iom-6c587ddd87-42k4f --namespace 21700snapshot -c iom | jq -R 'fromjson? | select(type == "object")' | jq 'select((.logType == "access" ) and (.responseCode >= 400))'
The previous section showed how to get messages out of the IOM containers and how to further process them with the help of jq. This is a valid procedure if special requirements have to be met. However, there are some standard situations that should be easier to handle. For this reason devenv-cli.sh provides the log * commands.
The log * commands facilitate access to the logs of the different IOM containers and to the different types of logs of IOM's application container. They are the only commands that use jq internally to provide basic filtering and formatting of messages. By using the log * commands you can do the following:
- Filter messages by log level: if, e.g., WARN is passed as an argument to the log command, only messages of levels FATAL, ERROR and WARN will be printed.
The following box shows some examples of how to use the log * commands.
# Show FATAL, ERROR and WARN messages of IOM's config container and format them
devenv-cli.sh log config

# Show INFO messages of IOM's config container and format them
devenv-cli.sh log config info

# Follow FATAL, ERROR and WARN messages of IOM's application container and format the messages
devenv-cli.sh log app -f

# Follow all access log entries
# Do not format messages, to be able to process the output by a second jq stage that filters for response time > 100 ms
devenv-cli.sh log access all -f | jq 'select(.responseTime > 100)'
devenv-cli.sh has a simple system of command line arguments. In general, each call to devenv-cli.sh requires two arguments, which are best understood as topic and sub-topic. If you append -h or --help to the command line, you will get detailed help. This works even if no topic or sub-topic is passed on the command line; in this case, the help provides information about the available topics or sub-topics.
devenv-cli.sh -h
devenv-cli.sh command line interface for configuration with ID test.

SYNOPSIS
devenv-cli.sh [CONFIG-FILE] COMMAND

CONFIG-FILE
Name of config-file to be used. If not set, the environment variable DEVENV4IOM_CONFIG will be checked. If no config-file can be found, devenv-cli.sh ends with an error, with one exception: 'get config'.

COMMANDS
get|g*       get devenv4iom specific resource
info|i*      get information about kubernetes resources
create|c*    create Kubernetes/Docker resources
delete|de*   delete Kubernetes/Docker resources
wait|w*      wait for Kubernetes resources to get ready
apply|a*     apply customization
dump|du*     create or load dump
log|l*       simple access to log-messages

Run 'devenv-cli.sh [CONFIG-FILE] COMMAND --help|-h' for more information on a command.
"IOM is not working" is a very unspecific description, but it is the most common. For a systematic search for the root cause, you have to know the different stages of starting IOM and how to get detailed information about each stage.
The IOM development environment consists of three main components: database, mail server and IOM. If the mail server is not working properly, it will not affect the startup of IOM. Hence, in a situation where IOM is not working at all, we only have to take a look at the database and IOM itself.
Before searching for a problem, the status of the Postgres database should be checked. The easiest way to do this is to use the info postgres command. The output of this command contains a section named Kubernetes, which shows the status of Postgres. If Postgres is working, the entry READY should be 1/1 and the entry STATUS should be Running, see the example below:
# Get information about the postgres component
devenv-cli.sh info postgres
...
--------------------------------------------------------------------------------
Kubernetes:
===========
namespace: test
KEEP_DATABASE_DATA: true

NAME       READY   STATUS    RESTARTS   AGE
postgres   1/1     Running   0          74m
--------------------------------------------------------------------------------
...
If there is no Kubernetes section at all, the problem occurred at a very early stage, before the Kubernetes resources were created. E.g. you might have tried to start the system with an invalid configuration. Please check the output of the command you used to create the Postgres database (create cluster or create postgres) for error messages.
If there is a Kubernetes section but postgres is not running or not ready, the start process of the Postgres database has to be investigated. There are two possible causes of the problem:
- Kubernetes is not able to start the pod.
- The container was started but never reached the READY state (e.g. the Postgres version is not compatible with the persistently stored data).
To get information about these two different stages, two different strategies are necessary:
- Problems at the Kubernetes level can be investigated with the kubectl describe command.
- Problems within the container can be investigated by reading the container's logs.
The info postgres command of devenv-cli.sh provides you with the necessary command lines for further investigation. You will find them in the section "Useful commands" as "Describe pod" and "Get logs".
# Get information about the postgres component
devenv-cli.sh info postgres
...
--------------------------------------------------------------------------------
Useful commands:
=================
...
Describe pod:
    kubectl describe --namespace test pod postgres
Get logs:
    kubectl logs postgres --namespace test
...
--------------------------------------------------------------------------------

# Execute these commands for further investigation
kubectl describe --namespace test pod postgres
...
Events:
  Type    Reason     Age    From                     Message
  ----    ------     ----   ----                     -------
  Normal  Scheduled         default-scheduler        Successfully assigned test/postgres to docker-desktop
  Normal  Pulling    9m11s  kubelet, docker-desktop  Pulling image "postgres:11"
  Normal  Pulled     9m7s   kubelet, docker-desktop  Successfully pulled image "postgres:11"
  Normal  Created    9m7s   kubelet, docker-desktop  Created container postgres
  Normal  Started    9m7s   kubelet, docker-desktop  Started container postgres

kubectl logs postgres --namespace test
...
' 2020-05-22 11:36:15.750 UTC [1] 'LOG: listening on IPv4 address "0.0.0.0", port 5432
' 2020-05-22 11:36:15.750 UTC [1] 'LOG: listening on IPv6 address "::", port 5432
' 2020-05-22 11:36:15.754 UTC [1] 'LOG: listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432"
' 2020-05-22 11:36:15.780 UTC [26] 'LOG: database system was interrupted; last known up at 2020-05-22 11:13:26 UTC
' 2020-05-22 11:36:17.036 UTC [26] 'LOG: database system was not properly shut down; automatic recovery in progress
' 2020-05-22 11:36:17.042 UTC [26] 'LOG: redo starts at 0/31FE868
' 2020-05-22 11:36:17.043 UTC [26] 'LOG: invalid record length at 0/3204028: wanted 24, got 0
' 2020-05-22 11:36:17.043 UTC [26] 'LOG: redo done at 0/3203FE8
' 2020-05-22 11:36:17.043 UTC [26] 'LOG: last completed transaction was at log time 2020-05-22 11:15:51.823575+00
' 2020-05-22 11:36:17.071 UTC [1] 'LOG: database system is ready to accept connections
...
The process of searching for problems of IOM is in general identical to the one used for the Postgres database, with one slight addition: IOM has two init containers, which may cause problems too. Hence, the process of searching for errors consists of these steps:
- Check the status of IOM.
- If the Kubernetes section is missing: check the output of the command you used to create IOM for error messages.
- If IOM is not running or not ready: investigate the start process of the pod and its containers.
According to the checklist above, the first step is to get the status of IOM. This can easily be done by using the info iom command. The Kubernetes section in the output shows the current status. If everything is fine, READY should be 1/1 and STATUS should be Running.
devenv-cli.sh info iom
...
--------------------------------------------------------------------------------
Kubernetes:
===========
namespace: test

NAME                   READY   STATUS    RESTARTS   AGE
iom-849dcb5d88-6dnss   1/1     Running   0          60m
--------------------------------------------------------------------------------
...
If there is no Kubernetes section at all, the problem occurred at a very early stage, before the Kubernetes resources were created. E.g. you might have tried to start the system with an invalid configuration. Please check the output of the command you used to create IOM (create cluster or create iom) for error messages.
If there is a `Kubernetes` section but IOM is not running or not ready, the start process of IOM has to be investigated. Possible causes are an error in one of the two init-containers (dbaccount or config), or an IOM application server that does not reach the `ready` state (e.g. due to an erroneous deployment artifact). To get information about these different stages, different strategies have to be used.
The actions performed by Kubernetes when starting IOM can be inspected with the `kubectl describe` command; the `info iom` command provides you with the necessary command lines. The `log *` commands of devenv-cli.sh provide access to the log messages created by the containers belonging to IOM.
```sh
# Get information about IOM
devenv-cli.sh info iom
...
--------------------------------------------------------------------------------
Useful commands:
=================
...
Describe iom pod:
    kubectl describe --namespace test pod iom-849dcb5d88-6dnss
Describe iom deployment:
    kubectl describe --namespace test deployment iom
...
--------------------------------------------------------------------------------

# Execute these commands for further investigation
kubectl describe --namespace test pod iom-849dcb5d88-6dnss
...
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:          <none>

kubectl describe --namespace test deployment iom
...
Conditions:
  Type           Status  Reason
  ----           ------  ------
  Available      True    MinimumReplicasAvailable
  Progressing    True    NewReplicaSetAvailable
OldReplicaSets:  <none>
NewReplicaSet:   iom-849dcb5d88 (1/1 replicas created)
Events:          <none>
```
The easiest way to get log messages out of the containers is to use the `log *` commands provided by devenv-cli.sh. If called without further parameters, only messages of the levels `ERROR` and `FATAL` are displayed, which makes it easy to find errors. Note that these commands only work if jq is properly installed.
```sh
# Show error messages of the dbaccount init-container
devenv-cli.sh log dbaccount
...

# Show error messages of the config init-container
devenv-cli.sh log config
...

# Show error messages of the iom application server
devenv-cli.sh log app
...
```
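To illustrate the idea behind this kind of level-based filtering, the following sketch extracts error messages from structured log output. It is an illustration only: the log lines shown are made up, and the actual log format of the IOM containers and the jq filter used by devenv-cli.sh may differ.

```sh
# Hypothetical log lines, one JSON object per line with a "level" field.
logs='{"level":"INFO","message":"server started"}
{"level":"ERROR","message":"deployment failed"}
{"level":"FATAL","message":"cannot connect to database"}'

# Keep only ERROR and FATAL messages (crude text match; jq would parse
# the JSON properly instead).
errors=$(printf '%s\n' "$logs" | grep -E '"level":"(ERROR|FATAL)"')
printf '%s\n' "$errors"
```

A real jq-based filter would parse each line as JSON and select on the level field, which is more robust than the plain text match shown here.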
Unexpected errors may occur that are not handled properly. Such errors cannot be found by using the `log *` commands; to find them, the raw output of the containers has to be investigated. If you do not have jq properly installed, you have to use this basic access to log messages as well. The corresponding command lines are provided by `info iom` within the section `Useful commands`.
```sh
devenv-cli.sh info iom
...
--------------------------------------------------------------------------------
Useful commands:
=================
...
Get dbaccount logs:
    kubectl logs iom-849dcb5d88-6dnss --namespace test -c dbaccount
Get config logs:
    kubectl logs iom-849dcb5d88-6dnss --namespace test -c config
Get iom logs:
    kubectl logs iom-849dcb5d88-6dnss --namespace test -c iom
...
--------------------------------------------------------------------------------
```
According to section Delete a Configuration, a configuration must not be deleted as long as corresponding Kubernetes and Docker resources still exist. In a situation where the configuration file is deleted and resources belonging to this configuration still exist, you have to delete these resources manually. To do so, perform the following steps:
All Kubernetes resources belonging to a configuration are assigned to one Kubernetes namespace. The name of this namespace is derived from the ID defined in the configuration file: to create a valid namespace name, all non-alphanumerical characters are stripped from the ID and the remaining characters are transformed to lowercase. E.g., if you had used CustomerProject IOM 3.0.0.0 as ID, the derived name of the namespace is `customerprojectiom3000`.
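This naming rule can be reproduced with standard shell tools. The following is just a sketch to make the rule tangible; devenv-4-iom performs the derivation internally.

```sh
# Derive the namespace name from a configuration ID: strip all
# non-alphanumerical characters, then convert to lowercase.
id="CustomerProject IOM 3.0.0.0"
ns=$(printf '%s' "$id" | tr -cd '[:alnum:]' | tr '[:upper:]' '[:lower:]')
echo "$ns"    # customerprojectiom3000
```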
Kubernetes uses namespaces for its own purposes. To avoid any conflict with these namespaces, devenv-4-iom will not allow you to use an ID that starts with default, docker, or kube. Hence, the orphaned Kubernetes namespace we are searching for will never start with any of these three prefixes.
The following box shows how to list all existing namespaces. According to the naming rules for namespaces created by devenv-4-iom, only two entries in the list of results are of interest: `customerprojectiom3000` and `oldprojectiom3000`. If you know the IDs of your currently existing configurations, you can find out the name of the orphaned namespace. In our example, `oldprojectiom3000` is the one we have been searching for.
```sh
# list all existing Kubernetes namespaces
kubectl get namespace
NAME                     STATUS   AGE
customerprojectiom3000   Active   40m
default                  Active   28d
docker                   Active   28d
kube-node-lease          Active   28d
kube-public              Active   28d
kube-system              Active   28d
oldprojectiom3000        Active   10d

# delete orphaned Kubernetes namespace
kubectl delete namespace oldprojectiom3000
namespace "oldprojectiom3000" deleted
```
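Since the reserved prefixes are known, the candidate namespaces can also be filtered out automatically. The sketch below uses a fixed list of names standing in for the output of `kubectl get namespace`, so it can be followed without a running cluster.

```sh
# Names as they might be reported by 'kubectl get namespace' (sample data).
namespaces='customerprojectiom3000
default
docker
kube-node-lease
kube-public
kube-system
oldprojectiom3000'

# Drop everything starting with a reserved prefix; what remains could
# only have been created by devenv-4-iom.
candidates=$(printf '%s\n' "$namespaces" | grep -vE '^(default|docker|kube)')
printf '%s\n' "$candidates"
```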
Docker volumes are used to provide persistent storage for the PostgreSQL database server. Since the usage of persistent storage is optional, an orphaned Docker volume might not exist at all. Once you have found the name of an orphaned Kubernetes namespace, it is very simple to find out whether a corresponding Docker volume exists. The name of the Docker volume is derived from the ID, too: the same rules are applied as described above, and additionally the suffix `-pgdata` is appended.

Hence, if the orphaned Kubernetes namespace is `oldprojectiom3000`, the corresponding Docker volume is named `oldprojectiom3000-pgdata`. The following command lists all Docker volumes and shows you how to delete the one you have identified before.
```sh
# list all existing Docker volumes
docker volume ls -q
008e5dc60890b954a68de526da1ba73113143b8dcb9edbf382db585cb7cf2736
customerprojectiom3000-pgdata
oldprojectiom3000-pgdata

# delete orphaned Docker volume
docker volume rm oldprojectiom3000-pgdata
oldprojectiom3000-pgdata
```
After resetting your password, you may experience problems with your shared drives. In that case, go to Settings | Shared Drives | Reset credentials.
`kubectl` can interact with more than one Kubernetes cluster by setting the context. If devenv-cli.sh does not work properly, a wrong context might be the cause.
```sh
# List the existing contexts.
# Look out for the entry "current-context". It should be set to docker-desktop.
kubectl config view

# Change context to docker-desktop
kubectl config use-context docker-desktop
```
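If you switch between clusters regularly, a small guard in your own scripts can avoid switching the context unnecessarily. The following is a minimal sketch; `ensure_context` is a hypothetical helper, and in a real script its first argument would come from `kubectl config current-context`. Here it only prints the command that would be needed instead of executing it.

```sh
# ensure_context CURRENT DESIRED: print the kubectl command needed to
# reach the desired context, or a note if nothing has to be done.
ensure_context() {
  if [ "$1" = "$2" ]; then
    echo "context already set to $2"
  else
    echo "kubectl config use-context $2"
  fi
}

ensure_context "minikube" "docker-desktop"
```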
When using Docker Desktop, this setting can be easily changed using the Kubernetes menu entry. It lists all existing contexts. You just have to select the right one: "docker-desktop".
When you try a docker login from a Linux-like terminal on Windows, such as Git Bash or the Docker Quickstart Terminal, you will get the following error:
```sh
docker login docker.intershop.de
> Error: Cannot perform an interactive login from a non TTY device

# The trick here is to use winpty:
winpty docker login docker.intershop.de
```
The information provided in the Knowledge Base may not be applicable to all systems and situations. Intershop Communications will not be liable to any party for any direct or indirect damages resulting from the use of the Customer Support section of the Intershop Corporate Web site, including, without limitation, any lost profits, business interruption, loss of programs or other data on your information handling system.