Document Properties
Last Modified: 04-Mar-2020
Added to KB: 02-Dec-2013
Public Access: Everyone
Doc Type: Guidelines, Concepts & Cookbooks
  • ICM 7.6
  • ICM 7.7
  • ICM 7.8
  • ICM 7.9
  • ICM 7.10

Concept - Logging

1 Introduction

The scope of this document is to give an overview of what the logging framework in Enfinity / Intershop can provide. It is not meant to be a reference listing all options, features, and settings of a specific Enfinity / Intershop version.

Logging in general means recording events in order to document and analyze processes, especially to find the cause of an issue when something goes wrong.

Within a software system, logging generally works as follows:

  • Code is enriched with log statements.
  • At run time the log statements produce log events (messages).
  • Those log events are processed by a logging engine that filters, enriches, and finally passes the events to targets where they are stored.

Since Enfinity Suite 6.3 the logging system is based on SLF4J and Logback.

2 Glossary

This glossary describes the terms used here (terms denoted in typewriter style directly correspond to class/interface names):



Log Statement

A code line that produces a log event

Log Event

An atomic set of data to be recorded

Log Appender

A class that actually writes a log event to a log target, e.g., a file or a database, or sends it as an email or to a monitoring application

Log Category

Also often called logger. In most logging systems each log event is directly assigned to a category. The set of all categories forms a tree, which for most use cases makes it easier to select log messages for filtering and routing to the appenders.

Logger

Has two meanings (sometimes used synonymously): 1. a class/object that provides methods for submitting log events, and 2. a log category

SMC

System Management Console (or System Management as of IS 7.5 or later)

3 Basic Concepts of SLF4J and Logback

SLF4J (Simple Logging Facade for Java) provides an API for sending log messages that is independent of the actual implementation of the engine processing those messages. Furthermore, the SLF4J project provides bridge libraries so that all commonly used APIs for submitting log messages (such as Log4J or Commons Logging) are supported and pass their messages to the desired implementation.

Logback provides a native implementation for SLF4J and is intended as a successor to the popular Log4J project.

The following sections give only a brief overview of the basic concepts; for detailed information, refer to the official documentation of those two projects: the latest SLF4J documentation and the latest Logback documentation.

3.1 Log Event Creation

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class Wombat
{
    final Logger logger = LoggerFactory.getLogger(Wombat.class);
    private Integer current;

    public void setTemperature(Integer temperature)
    {
        final Integer old = current;
        current = temperature;

        logger.debug("Temperature set to {}. Old temperature was {}.", current, old);
        if (current.intValue() > 50)
        {
            logger.info("Temperature has risen above 50 degrees.");
        }
    }
}

This example shows the basic way of submitting log events via SLF4J:

  • A Logger object is retrieved from the LoggerFactory, typically with a class name (retrieved from the class object).
  • The Logger object provides methods for submitting events, whereby the method name is one of the five log levels (error, warn, info, debug, and trace).
  • Message texts can be parameterized.
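
Parameterized messages avoid the cost of string concatenation when the corresponding level is disabled. The following is a simplified sketch of what the `{}` substitution does; the real implementation (org.slf4j.helpers.MessageFormatter) additionally handles escaping and array arguments:

```java
// Simplified sketch of SLF4J-style "{}" placeholder substitution.
// Not SLF4J's actual MessageFormatter; illustration only.
public class PlaceholderSketch {
    static String format(String pattern, Object... args) {
        StringBuilder out = new StringBuilder();
        int argIndex = 0;
        int i = 0;
        while (i < pattern.length()) {
            int anchor = pattern.indexOf("{}", i);
            if (anchor < 0 || argIndex >= args.length) {
                // no more placeholders (or arguments): copy the rest verbatim
                out.append(pattern, i, pattern.length());
                break;
            }
            out.append(pattern, i, anchor);   // text before the placeholder
            out.append(args[argIndex++]);     // substitute the next argument
            i = anchor + 2;                   // skip past "{}"
        }
        return out.toString();
    }

    public static void main(String[] args) {
        System.out.println(
            format("Temperature set to {}. Old temperature was {}.", 55, 42));
        // Temperature set to 55. Old temperature was 42.
    }
}
```

The key point is that the arguments are only converted to strings if the message is actually formatted, i.e., after the level check has passed.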

3.1.1 Log Levels

It is important to use the correct log level, otherwise the message may be ignored.


TRACE (target user: developer)

Very detailed information, content of variables.

DEBUG (target user: developer)

Summary information, high-level content of variables (like external IDs).

INFO (target user: administrator)

Main business goal reached - the description states which business goal was reached, including an identification of the business object. This is a signal to the administrator that the application is working well. Additionally, developers can see milestones of the business process during the analysis of failures.

WARN (target user: administrator)

Something went wrong because of missing resources or timeouts to external systems. The message should contain enough information for the administrator to identify the environment issue.

ERROR (target user: developer)

Something went wrong; the stack trace needs an analyst to avoid these kinds of problems in the future.

A good explanation of the levels and many other logging-related hints can be found at javacodegeeks.

Note: Information for the business user (e.g., an import file contains wrong content) must be provided to the business user (mostly via the UI or specific log files for import/export). Information for the business user must not end up in the debug, warn, or error log files.

3.2 The Mapped Diagnostic Context (MDC)

The Mapped Diagnostic Context (MDC) is a way to enrich logging events with contextual runtime information that is not in the scope of the code containing the log statements. It is basically a map of key/value pairs of type String held in an InheritableThreadLocal, which means that child threads automatically get all values from the parent thread as they existed at thread creation time.
Values of the MDC can be used to filter logging events, to enrich the log messages or to route the logging event to different targets.

package org.slf4j;

public class MDC
{
    public static void put(String key, String value);
    public static String get(String key);
    public static void remove(String key);
    public static void clear();
}
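
The inheritance behavior mentioned above can be illustrated with a plain InheritableThreadLocal. This is a minimal, self-contained sketch of the mechanism, not the actual SLF4J MDC implementation:

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of an MDC-like store backed by an InheritableThreadLocal.
// Child threads receive a copy of the parent's values as they existed
// at thread creation time; later changes in the parent do not propagate.
public class MdcSketch {
    private static final InheritableThreadLocal<Map<String, String>> CTX =
        new InheritableThreadLocal<Map<String, String>>() {
            @Override
            protected Map<String, String> initialValue() {
                return new HashMap<>();
            }
            @Override
            protected Map<String, String> childValue(Map<String, String> parent) {
                return new HashMap<>(parent); // copied at thread creation time
            }
        };

    public static void put(String key, String value) { CTX.get().put(key, value); }
    public static String get(String key) { return CTX.get().get(key); }
    public static void remove(String key) { CTX.get().remove(key); }

    public static void main(String[] args) throws InterruptedException {
        put("requestId", "4711");
        Thread child = new Thread(() ->
            System.out.println("child sees: " + get("requestId")));
        child.start();
        child.join();
    }
}
```

Because the copy happens at thread creation, values put into the MDC of a worker thread never leak back into the parent; conversely, long-lived thread pools require care, since pooled threads keep whatever context they inherited when they were created.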

3.3 Logback Event Processing

Logback is optimized for performance, which in practice means filtering out and dropping logging events as early as possible. The process is as follows:

  1. Turbo Filter Chain
    • A global decision if a logging event can be dropped
    • If the result is ACCEPT then the category level filter is skipped.
  2. Category Level Filter
    • Each category has an effective level that is compared to the level of the logging event.
  3. Create a LoggingEvent object
    • Up to this point, Logback passes all parameters to the filters unmodified.
  4. Invoking applicable appenders
    • Appenders are classes that append the log message to the log target (e.g., a file, the database, console, remote applications).
    • Appenders can have their own filter chain.
    • Applicable appenders are found by traversing the category tree from the leaf to the root.
    • Categories can be marked to stop traversing the tree (additivity flag).
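
The traversal in step 4 follows the dot-separated category names: a category such as com.intershop.beehive inherits from com.intershop, then com, and finally the root. A small sketch of how the ancestor chain is derived from a category name (illustrative only, not Logback's internal data structure):

```java
import java.util.ArrayList;
import java.util.List;

// Sketch: the ancestor chain of a dot-separated category name.
// Logback walks this chain (unless a category sets additivity=false)
// to collect all applicable appenders for a logging event.
public class CategoryTreeSketch {
    static List<String> ancestorChain(String category) {
        List<String> chain = new ArrayList<>();
        chain.add(category);
        int dot;
        while ((dot = category.lastIndexOf('.')) >= 0) {
            category = category.substring(0, dot); // strip the last name segment
            chain.add(category);
        }
        chain.add("ROOT"); // the root logger is always the final ancestor
        return chain;
    }

    public static void main(String[] args) {
        System.out.println(ancestorChain("com.intershop.beehive"));
        // [com.intershop.beehive, com.intershop, com, ROOT]
    }
}
```

The additivity flag of a category simply stops this walk early, so appenders attached to categories closer to the root no longer receive the event.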

4 Enfinity Suite / Intershop Integration

The SLF4J and Logback JAR files are located in the tools cartridge. This means that the Logback logging system is instantiated for the Enfinity web application and not for the (outer) servlet container. The basic default configuration contained in logback-default-config.jar logs all events of the levels ERROR, WARN, and INFO to the console. Some additional server-independent tools also come with those JAR files.


The console output of the server is redirected to the log file IS_HOME/<server name>.log.

The logging framework of the Intershop solution also provides some additional features that are not part of Logback:

  • Cartridge based configuration
  • Dynamic configuration changes via the SMC and the LoggingAdminMBean
  • Logging status view in the SMC
  • Log Processes
  • Enhanced MDC
  • Routing of Java util logging messages into Logback

4.1 Configuration

During server startup (at a very early stage) the logging system is initially configured.


The configuration of the logging system for the servers of the Intershop solution is customized. This means Logback is configured programmatically instead of relying on the basic mechanisms built into Logback.

4.1.1 Configuration Files

The Intershop solution uses Logback Joran configuration files. These are XML files that are interpreted like scripts to configure the logging system. The Intershop solution internally generates a main file that contains includes of all found / required XML files deployed in the file system.

The following files are involved:

  • <IS_SHARE>/config/cluster/
  • <IS_SHARE>/config/cluster/logback-main.xml
  • <IS_SHARE>/config/cluster/logback-dbinit.xml
  • <IS_SHARE>/config/cartridges/logback/logback-<CartridgeName>.xml
  • <IS_SHARE>/config/cluster/loggingextension/<Additional XML File>
  • <IS_SHARE>/config/servers/<Host>/<InstallationID>/<ServerName>.properties
  • <IS_SHARE>/config/servers/<Host>/<InstallationID>/<ServerName>/loggingextension/<Additional XML File>

The XML files are native Logback configuration files. All the mentioned files are included via the internally generated main file, like:

<?xml version="1.0" encoding="UTF-8" ?>
<configuration>
  <property name="intershop.logging.start" value="2012-07-12_12-59" />
  <property name="intershop.InstallationID" value="ES1" />
  <property name="intershop.HostName" value="localhost" />
  <property name="intershop.ServerName" value="appserver0" />
  <property name="intershop.serverlogfile.Directory" value="D:/eserver/engine/tomcat/servers/appserver0/logs" />
  <property name="intershop.logfile.Directory" value="D:/eserver/share/system/log" />
  <property name="intershop.logfile.NamePostfix" value="localhost-ES1-appserver0" />
  <include file="D:\\eserver\\share\\system\\config\\cluster\\logback-main.xml"/>
  <include file="D:\\eserver\\share\\system\\config\\cartridges\\logback-bc_auditing.xml"/>
  <include file="D:\\eserver\\share\\system\\config\\cartridges\\logback-bc_pricing.xml"/>
  <include file="D:\\eserver\\share\\system\\config\\cartridges\\logback-dev_lilith.xml"/>
</configuration>

When running DBInit or DBMigrate, the file logback-dbinit.xml is additionally included directly after logback-main.xml.

After the main configuration the optionally existing additional XML files are executed. Those files can be added and removed via the Advanced Logging Settings within the SMC.

The cluster-wide properties file and the <ServerName>.properties contain additional global configurations for certain custom features and also contain the modifications made via the SMC. The Logback XML files are never changed by the Intershop solution.

4.1.2 Global Settings in MDC

The Intershop solution provides the utility class com.intershop.beehive.core.capi.log.MDC, which allows putting objects of any type into the SLF4J MDC, which itself only supports string values. This is achieved by putting multiple attributes of the passed object into the MDC besides the original object itself. Which attributes of a given type are additionally added can be configured.
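
The recursive attribute expansion described above can be sketched roughly as follows. This is a simplified, self-contained illustration only: the real implementation reads the attribute expressions (such as Value.get(0)) from the configuration, whereas here the List attributes "first" and "last" are hard-coded:

```java
import java.util.Arrays;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

// Sketch of the enhanced-MDC idea: besides the object itself, derived
// attributes are stored under "<key>.<attribute>", and the expansion
// recurses into values that are themselves of a configured type.
public class EnhancedMdcSketch {
    static void put(Map<String, String> mdc, String key, Object value) {
        mdc.put(key, String.valueOf(value)); // the object itself, as a string
        if (value instanceof List) {
            List<?> list = (List<?>) value;
            if (!list.isEmpty()) {
                // recurse: the attribute values may again be expandable
                put(mdc, key + ".first", list.get(0));
                put(mdc, key + ".last", list.get(list.size() - 1));
            }
        }
    }

    public static void main(String[] args) {
        Map<String, String> mdc = new LinkedHashMap<>();
        List<Integer> a = Arrays.asList(1, 2, 3);
        List<Object> b = Arrays.asList("a", a);
        put(mdc, "xyz", b);
        mdc.forEach((k, v) -> System.out.println(k + " = " + v));
        // xyz = [a, [1, 2, 3]]
        // xyz.first = a
        // xyz.last = [1, 2, 3]
        // xyz.last.first = 1
        // xyz.last.last = 3
    }
}
```
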

intershop.logging.mdc.types           = List
intershop.logging.mdc.List.class      = java.util.List
intershop.logging.mdc.List.attr.first = Value.isEmpty() ? null : Value.get(0)
intershop.logging.mdc.List.attr.last  = Value.isEmpty() ? null : Value.get(Value.size() - 1)

List<Integer> a = Arrays.asList(new Integer[] {1, 2, 3});
List<Object> b = Arrays.asList(new Object[] {"a", a});
MDC.put("xyz", b);

The SLF4J MDC will then contain:

xyz             = [a, [1, 2, 3]]
xyz.first       = a
xyz.last        = [1, 2, 3]
xyz.last.first  = 1
xyz.last.last   = 3

Java Util Logging

The issue with Java util logging is that its configuration is global to the JVM. This may not work well together with a servlet container where multiple applications are running.
There are two properties that control how the configuration of the Java logging is influenced:

# should logging messages from Java logging be passed to the Intershop logging system (true / false)?
# should a possibly existing Java logging configuration be overwritten by a new one so that the only
# receiver of all those messages is the Intershop logging system
intershop.logging.javaloggingadapter.exclusive=true

Misc

# Dynamic file appender
# e.g., used when creating an impex log


intershop.logging.dynamicfiletarget.pattern=[%date{yyyy-MM-dd HH:mm:ss.SSS z}] [%thread] %-5level %logger{36} - %msg%n

# Auto flush of buffered appenders. 
# The flush interval time has to be given in seconds, default is 30 seconds.
# A value less or equal to zero disables this feature


# Logging of messages from the logging engine 
# Print messages to console with a level higher or equal to the configured level (ERROR, WARN, INFO)
# Size of the buffer for the last messages kept in memory 


# Enrichment of stack traces with class packaging data



The auto flush feature is only supported for appenders that inherit from ch.qos.logback.core.OutputStreamAppender and use ch.qos.logback.core.encoder.LayoutWrappingEncoder.

4.1.3 SMC

The Logging module within the SMC provides:

  • An overview of the status of the logging systems for all running application servers in the cluster
  • The detailed status of the logging systems for each application server including the list of the last status messages of the logging system (default 500)
  • The possibility to change if an application server uses an individual logging configuration or the default for the whole cluster
  • The possibility to upload and remove additional logging configurations; uploaded additional configuration files become persistent (they are used after server restarts)
  • The possibility to change some properties for appenders statically defined via the XML configuration files

Changes made here are stored in the cluster-wide properties file or in the <ServerName>.properties (depending on whether the change was made for a certain server or for the whole cluster).

# Values that overwrite the configuration of the native logback configuration files
# Entries can be added and changed via the SMC
# intershop.logging.appender.<Appender Name>.level = [THRESHOLD,FIXED],[ERROR, WARN, INFO, DEBUG, TRACE]
# intershop.logging.appender.<Appender Name>.categories = <category1> <category2> ...


Possible changes of appenders via the SMC:

Level Filter
The changes displayed within the SMC are based only on the first filter assigned to the appender. Only if it is a LevelFilter or a ThresholdFilter will the SMC show values. A change made via the SMC either modifies the first filter of the appender's filter chain or adds a new filter in the first place. In all other cases the SMC will show "Level filter = None".

Assigned Categories
Here the logging categories the appender is assigned to (and therefore receives logging events from) can be viewed and changed. Additionally, a select box can be checked to only show categories the logging system currently knows, meaning that such a category was either mentioned in a configuration or is used by some code (to send an event).

4.2 Log Processes

Via a log process, additional temporary filters or appenders can be added to the configuration of the logging system. This is useful, e.g., to create a new log file for a certain business process, or to apply filters for test cases that are intended to trigger and test the correctness of error log messages.

To allow business-task-specific filtering of the log messages, a key/value pair is introduced within the MDC, namely the process type as key and the process ID as value.


Only one log process per unique process type can be active within a thread at any time.

There currently exist three different types:

  • com.intershop.beehive.core.capi.log.ProcessSingleFileAppender
    • Can be created with pipelet CreateProcessFileLogAppender
    • Produces a single text file
    • File format via properties intershop.logging.dynamicfiletarget.encoding and intershop.logging.dynamicfiletarget.pattern
  • com.intershop.beehive.core.capi.log.ProcessSingleXMLFileAppender
    • Can be created with pipelet CreateProcessFileLogAppender
    • Produces a single XML file
    • Can be read via com.intershop.beehive.core.capi.log.ProcessSingleXMLFileAppender or pipelet ReadXMLLogFile
  • com.intershop.beehive.core.capi.log.ProcessTemplateBasedAppender
    • Can be created with pipelet CreateProcessTemplateBasedLogAppender
    • Is intended to create any kind of appender via a native Logback XML fragment stored under a certain property key

If such a process appender is not explicitly assigned to certain log categories, the categories stored in the property intershop.logging.dynamictarget.categories are used per default.


Process appenders are not visible in the appender lists in the SMC, but the logging status view may provide messages that reflect the creation and/or activity of such appenders.


The information provided in the Knowledge Base may not be applicable to all systems and situations. Intershop Communications will not be liable to any party for any direct or indirect damages resulting from the use of the Customer Support section of the Intershop Corporate Web site, including, without limitation, any lost profits, business interruption, loss of programs or other data on your information handling system.
