Document Properties

Kbid: 30937S
Last Modified: 23-Jun-2023
Added to KB: 26-Jun-2023
Public Access: Everyone
Status: Online
Doc Type: Concepts
Product: ICM 7.10
Concept - Logging (valid to 7.10)

Introduction

This document gives an overview of what the logging framework in Enfinity / Intershop / Intershop Commerce Management can provide.

Logging in general means recording events for documentation purposes and for analyzing processes, especially to find the cause of an issue when something goes wrong.

Within a software system, logging in general works as follows:

  • Code is enriched with log statements.
  • At run time the log statements produce log events (messages).
  • Those log events are processed by a logging engine that filters, enriches, and finally passes the events to targets where they are stored.

Since Enfinity Suite 6.3, the logging system has been based on SLF4J and Logback.

Glossary

This glossary describes the terms used here (terms set in monospace correspond directly to class/interface names):

  • Log Statement: A code line that produces a log event.
  • Log Event: An atomic set of data to be recorded.
  • Appender: A class that actually writes a log event to a log target, e.g., writes it to a file or a database, sends it as an e-mail, or passes it to a monitoring application.
  • Log Category: Also often called logger. In most logging systems each log event is directly assigned to a category. The set of all categories forms a tree, which for most use cases makes it easier to select log messages for filtering and routing to the appenders.
  • Logger: Two meanings (sometimes used synonymously): 1. a class/object that has methods for submitting log events, and 2. a log category.
  • SMC: System Management Console (or System Management as of IS 7.5 or later).

Basic Concepts of SLF4J and Logback

SLF4J (Simple Logging Facade for Java) provides an API for sending log messages. It is independent of the actual implementation of the engine that processes those log messages. Furthermore, the SLF4J project provides bridge libraries so that all commonly used APIs for submitting log messages, such as Log4J or Apache Commons Logging, are supported and pass their messages to the desired implementation.

Logback provides a native implementation for SLF4J and is intended as a successor to the popular Log4J project.

The following sections only give a brief overview of the basic concepts. For detailed information, have a look at the official documentation of those two projects: the latest SLF4J documentation and the latest Logback documentation.

Log Event Creation

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class Wombat
{
    // Retrieve a logger named after this class from the LoggerFactory.
    final Logger logger = LoggerFactory.getLogger(Wombat.class);
    private Integer current;

    public void setTemperature(Integer temperature)
    {
        final Integer old = current;
        current = temperature;

        // Parameterized message: the {} placeholders are replaced by the arguments.
        logger.debug("Temperature set to {}. Old temperature was {}.", current, old);
        if (current.intValue() > 50)
        {
            logger.info("Temperature has risen above 50 degrees.");
        }
    }
}

This example shows the basic way of submitting log events via SLF4J:

  • A Logger object is retrieved from the LoggerFactory, typically with a class name (retrieved from the class object).
  • The Logger object provides methods for submitting events, where the method name is one of the five log levels (error, warn, info, debug, and trace).
  • Message texts can be parameterized, as shown in the sketch below.
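
Parameterization avoids the cost of building the message string when the level is disabled. The following sketch (class and message names are invented for illustration) also shows a behavior worth knowing: SLF4J treats a trailing Throwable argument specially and logs it with its stack trace.

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class OrderLogging
{
    private static final Logger logger = LoggerFactory.getLogger(OrderLogging.class);

    public void process(String orderId)
    {
        // The {} placeholder is only substituted if DEBUG is enabled,
        // so no string concatenation cost is paid otherwise.
        logger.debug("Processing order {}.", orderId);
        try
        {
            // ... business logic ...
        }
        catch (RuntimeException e)
        {
            // A trailing Throwable is logged together with its full stack trace.
            logger.error("Processing of order {} failed.", orderId, e);
        }
    }
}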

Log Levels

It is important to use the correct log level; otherwise the message may be ignored.

  • Trace (target user: developer): Very detailed information, content of variables.
  • Debug (target user: developer): Summary information, high-level content of variables (like external IDs).
  • Info (target user: administrator): A main business goal was reached; the message describes which business goal was reached, including the identification of the business object. This is a signal to the administrator that the application is working well. Additionally, developers can see milestones of the business process when analyzing failures.
  • Warn (target user: administrator): Something went wrong because of missing resources or timeouts to external systems. The message should contain enough information for the administrator to identify the environment issue.
  • Error (target user: developer): Something went wrong; the stack trace needs to be analyzed to avoid this kind of problem in the future.
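
A minimal sketch of how these levels might be applied in practice (class, method, and message names are invented for illustration, not taken from the product):

import java.util.concurrent.TimeoutException;

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class OrderSubmitter
{
    private static final Logger logger = LoggerFactory.getLogger(OrderSubmitter.class);

    public void submit(String orderId, String payload)
    {
        logger.trace("Order payload: {}", payload);    // Trace: full variable content for developers
        logger.debug("Submitting order {}.", orderId); // Debug: high-level identifiers
        try
        {
            send(orderId, payload);
            logger.info("Order {} submitted.", orderId); // Info: business milestone reached
        }
        catch (TimeoutException e)
        {
            // Warn: environment issue the administrator can act on
            logger.warn("Timeout while sending order {} to the external system.", orderId);
        }
        catch (RuntimeException e)
        {
            // Error: unexpected failure, stack trace needed for analysis
            logger.error("Unexpected failure while submitting order {}.", orderId, e);
        }
    }

    private void send(String orderId, String payload) throws TimeoutException
    {
        // Placeholder for the call to the external system.
    }
}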

A good explanation of the levels and many other logging-related hints can be found at javacodegeeks.

Note

Information for the business user (e.g., that an import file contains wrong content) must be provided to the business user, mostly via the UI or via specific log files for import/export. Information for business users must not end up in the debug, warn, or error log files.

Enrichment of the Log Events with Mapped Diagnostic Context (MDC)

The Mapped Diagnostic Context (MDC) is a way to enrich logging events with contextual runtime information that is not in the scope of the code containing the log statements, such as:

  • current request identifier
  • current application
  • current session
  • ...

It is basically a map of key/value pairs of type String held as an InheritableThreadLocal, which means that child threads automatically get all values from the parent thread as they existed at thread creation time.
Values in the MDC can be used to filter logging events, to enrich the log messages, or to route the logging events to different targets.

package org.slf4j;

public class MDC
{
    public static void put(String key, String value); // add or replace a context value for the current thread
    public static String get(String key);             // read a context value
    public static void remove(String key);            // remove a single key
    public static void clear();                       // remove all values for the current thread
}
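
A typical usage pattern (the key name is an invented example) is to set MDC values at the beginning of a unit of work and remove them in a finally block, so that no stale context leaks to later work on the same (possibly pooled) thread:

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.slf4j.MDC;

public class RequestWorker
{
    private static final Logger logger = LoggerFactory.getLogger(RequestWorker.class);

    public void handle(String requestId)
    {
        MDC.put("requestId", requestId); // enriches all subsequent events of this thread
        try
        {
            // A pattern such as %X{requestId} can render the value in the log output.
            logger.info("Handling request.");
        }
        finally
        {
            MDC.remove("requestId"); // avoid leaking context to pooled threads
        }
    }
}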

Logback Event Processing

Logback is optimized for performance, which in practice means filtering out and dropping logging events as early as possible. The process is as follows:

  1. Turbo filter chain
    • A global decision whether a logging event can be dropped.
    • If the result is ACCEPT, the category level filter is skipped.
  2. Category level filter
    • Each category has an effective level that is compared to the level of the logging event.
  3. Creation of a LoggingEvent object
    • Up to this point, Logback passes all parameters to the filters unmodified.
  4. Invocation of applicable appenders
    • Appenders are classes that append the log message to the log target (e.g., a file, the database, the console, remote applications).
    • Appenders can have their own filter chain.
    • Applicable appenders are found by traversing the category tree from the leaf to the root.
    • Categories can be marked to stop traversing the tree (the "additivity" flag); see the configuration sketch below.
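
The following Logback configuration fragment (appender name, file name, and category are invented for illustration) shows these concepts together: a category with an effective level, an appender with its own filter chain, and the additivity flag that stops traversal towards the root:

<configuration>
    <appender name="PIPELINE_FILE" class="ch.qos.logback.core.FileAppender">
        <file>pipeline.log</file>
        <!-- appender-local filter chain: drop everything below WARN -->
        <filter class="ch.qos.logback.classic.filter.ThresholdFilter">
            <level>WARN</level>
        </filter>
        <encoder>
            <pattern>%date %-5level %logger - %msg%n</pattern>
        </encoder>
    </appender>

    <!-- effective level DEBUG for this category and its children;
         additivity="false" stops the traversal towards the root category -->
    <logger name="com.example.pipeline" level="DEBUG" additivity="false">
        <appender-ref ref="PIPELINE_FILE" />
    </logger>

    <root level="INFO" />
</configuration>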

ICM Integration

Preparation and Class Loading

Before the logging configuration is initialized, especially during server startup and class loader setup, it is not possible to use SLF4J, so System.out.println is the only way to log messages from the application. Since these messages are unformatted and non-encoded output, please reduce or avoid such output to System.out or System.err.

Note

The nodemanager collects these messages from stdout (in case the nodemanager started the application server, which is the case in production environments) and stores them in IS_HOME/logs/<server name>.log.

Logback Configuration

The SLF4J and Logback JAR files are integrated via dependencies. The log configuration is combined from different sources: logback-main.xml and log configurations from cartridges.

Note

The console output of the server is redirected to the log file IS_HOME/<server name>.log.

The logging framework of the Intershop solution also provides some additional features that are not part of Logback:

  • Cartridge-based configuration
  • Dynamic configuration changes via the SMC and the LoggingAdminMBean
  • Logging status via the SMC
  • Log Processes
  • Enhanced MDC
  • Routing of Java util logging messages into Logback

Configuration

During the server startup (at a very early stage) the logging system is initially configured.

Note

The configuration of the logging system for the servers of the Intershop solution is customized. This means Logback is configured programmatically instead of relying on the basic mechanisms built into Logback.

Configuration Files

The Intershop solution uses Logback Joran configuration files. These are XML files that are interpreted like scripts to configure the logging system. The Intershop solution internally generates a main file that includes all found/required XML files deployed in the file system.

The following files are involved:

  • <IS_SHARE>/system/config/cluster/logging.properties
  • <IS_SHARE>/system/config/cluster/logback-main.xml
  • <IS_SHARE>/system/config/cluster/logback-dbinit.xml
  • <IS_SHARE>/system/config/cartridges/logback/logback-<CartridgeName>.xml
  • <IS_SHARE>/system/config/cluster/loggingextension/<Additional XML File>
  • <IS_SHARE>/system/config/servers/<Host>/<InstallationID>/<ServerName>.properties
  • <IS_SHARE>/system/config/servers/<Host>/<InstallationID>/<ServerName>/loggingextension/<Additional XML File>

Files under cluster/loggingextension/ or <ServerName>/loggingextension/ must be listed in their related properties file (the value is the path of the file relative to the loggingextension directory).

cluster/logging.properties
# read <IS_SHARE>/system/config/cluster/loggingextension/trace-global-issue.xml
intershop.logging.additionalfiles.cluster=trace-global-issue.xml

or

servers/<Host>/<InstallationID>/<ServerName>.properties
# read <IS_SHARE>/system/config/servers/<Host>/<InstallationID>/<ServerName>/loggingextension/trace-local-issue.xml
intershop.logging.additionalfiles.server=trace-local-issue.xml

The XML files are native Logback configuration files. All of the mentioned files are pulled in via includes; the internally generated main file looks like this:

<?xml version="1.0" encoding="UTF-8" ?>
<configuration>
    <property name="intershop.logging.start" value="2012-07-12_12-59" />
    <property name="intershop.InstallationID" value="ES1" />
    <property name="intershop.HostName" value="localhost" />
    <property name="intershop.ServerName" value="appserver0" />
    <property name="intershop.serverlogfile.Directory" value="D:/eserver/engine/tomcat/servers/appserver0/logs" />
    <property name="intershop.logfile.Directory" value="D:/eserver/share/system/log" />
    <property name="intershop.logfile.NamePostfix" value="localhost-ES1-appserver0" />
    <include file="D:\\eserver\\share\\system\\config\\cluster\\logback-main.xml"/>
    <include file="D:\\eserver\\share\\system\\config\\cartridges\\logback-bc_auditing.xml"/>
    <include file="D:\\eserver\\share\\system\\config\\cartridges\\logback-bc_pricing.xml"/>
    <include file="D:\\eserver\\share\\system\\config\\cartridges\\logback-dev_lilith.xml"/>
</configuration>

When running DBinit or DBMigrate, the file logback-dbinit.xml is additionally included directly after logback-main.xml.

After the main configuration, any existing additional XML files are executed. Those files can be added and removed via the Advanced Logging Settings within the SMC.

The logging.properties and <ServerName>.properties files contain additional global configuration for certain custom features and also contain the modifications made via the SMC. The Logback XML files are never changed by the Intershop solution.

Global Settings in logging.properties

MDC

The Intershop solution provides the utility class com.intershop.beehive.core.capi.log.MDC that allows putting objects of any type into the SLF4J MDC, which natively only supports string values. This is achieved by putting multiple attributes of the passed object into the MDC in addition to the object itself. Which attributes of a given type are added can be configured.

Configuration:

intershop.logging.mdc.types           = List
intershop.logging.mdc.List.class      = java.util.List
intershop.logging.mdc.List.attr.first = Value.isEmpty() ? null : Value.get(0)
intershop.logging.mdc.List.attr.last  = Value.isEmpty() ? null : Value.get(Value.size() - 1)

Code:

List<Integer> a = Arrays.asList(new Integer[] {1, 2, 3});
List<Object> b = Arrays.asList(new Object[] {"a", a});
MDC.put("xyz", b);

SLF4J MDC will contain:

xyz             = [a, [1, 2, 3]]
xyz.first       = a
xyz.last        = [1, 2, 3]
xyz.last.first  = 1
xyz.last.last   = 3

Java Util Logging

The issue with java.util.logging is that its configuration is global for the JVM. This may not work well in a servlet container where multiple applications are running.
There are two properties that control how the configuration of the Java logging is influenced:

# should logging messages from Java logging be passed to the Intershop logging system (true / false)?
intershop.logging.javaloggingadapter.enable=true
# should a possibly existing Java logging configuration be overwritten by a new one so that the only
# receiver of all those messages is the Intershop logging system
intershop.logging.javaloggingadapter.exclusive=true
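
The Intershop adapter itself is internal, but conceptually it works like SLF4J's standard jul-to-slf4j bridge. A minimal sketch of that standard mechanism (it requires the jul-to-slf4j library on the classpath and is shown here only to illustrate the idea, not as the actual ICM implementation):

import org.slf4j.bridge.SLF4JBridgeHandler;

public class JulBridgeSetup
{
    public static void install()
    {
        // Remove existing java.util.logging handlers from the root logger,
        // so the bridge becomes the only receiver ("exclusive" behavior).
        SLF4JBridgeHandler.removeHandlersForRootLogger();

        // Route all java.util.logging records into SLF4J and thus Logback.
        SLF4JBridgeHandler.install();
    }
}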

Misc

#
# Dynamic file appender
#
# e.g., used when creating an impex log
#

intershop.logging.dynamictarget.categories=root

intershop.logging.dynamicfiletarget.encoding=
intershop.logging.dynamicfiletarget.pattern=[%date{yyyy-MM-dd HH:mm:ss.SSS z}] [%thread] %-5level %logger{36} - %msg%n


#
# Auto flush of buffered appenders. 
#
# The flush interval time has to be given in seconds, default is 30 seconds.
# A value less or equal to zero disables this feature
#

intershop.logging.appender.buffer.flushInterval=30

#
# Logging of messages from the logging engine 
#
# Print messages to console with a level higher or equal to the configured level (ERROR, WARN, INFO)
# Size of the buffer for the last messages kept in memory 
#

intershop.logging.engine.statuslogging.level=WARN
intershop.logging.engine.statusbuffer.size=500


#
# Enrichment of stack traces with class packaging data
#

intershop.logging.engine.stacktrace.packaginginfo.enable=false 

Limitation

The auto flush feature is only supported for appenders that inherit from ch.qos.logback.core.OutputStreamAppender and use a ch.qos.logback.core.encoder.LayoutWrappingEncoder.
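
For example, a standard FileAppender with a PatternLayoutEncoder qualifies, since FileAppender inherits from OutputStreamAppender and PatternLayoutEncoder extends LayoutWrappingEncoder. A sketch (appender name and file path are invented for illustration):

<appender name="BUFFERED_FILE" class="ch.qos.logback.core.FileAppender">
    <file>${intershop.logfile.Directory}/example.log</file>
    <!-- With buffering enabled (immediateFlush=false; in older Logback versions
         this setting lives on the encoder), the auto flush feature flushes the
         buffer every intershop.logging.appender.buffer.flushInterval seconds. -->
    <immediateFlush>false</immediateFlush>
    <encoder class="ch.qos.logback.classic.encoder.PatternLayoutEncoder">
        <pattern>[%date] %-5level %logger - %msg%n</pattern>
    </encoder>
</appender>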

SMC

The Logging module within the SMC provides:

  • An overview of the status of the logging systems of all running application servers in the cluster
  • The detailed status of the logging system of each application server, including the list of the last status messages of the logging system (default: 500)
  • The possibility to change whether an application server uses an individual logging configuration or the default for the whole cluster
  • The possibility to upload and remove additional logging configurations; uploaded configuration files are persistent (i.e., they are still used after server restarts)
  • The possibility to change some properties of appenders statically defined via the XML configuration files

Changes made here are stored in logging.properties or <ServerName>.properties (depending on whether the change was made for a certain server or for the whole cluster).

#
# Values that overwrite the configuration of the native logback configuration files
# Entries can be added and changed via the SMC
#
# intershop.logging.appender.<Appender Name>.level = [THRESHOLD,FIXED],[ERROR, WARN, INFO, DEBUG, TRACE]
# intershop.logging.appender.<Appender Name>.categories = <category1> <category2> ...
#

intershop.logging.additionalfiles.cluster=SQL_Pipeline_Appender.xml
intershop.logging.appender.SQL_Pipeline@SQL_Pipeline_Appender.xml@c.level=FIXED,DEBUG
intershop.logging.appender.SQL_Pipeline@SQL_Pipeline_Appender.xml@c.categories=com.intershop.beehive.orm.internal.query.SQLManagerImpl

Possible changes of appenders via the SMC are:

Level Filter
The values displayed within the SMC are based only on the first filter assigned to the appender, and only if that filter is a LevelFilter or ThresholdFilter does the SMC show values; in all other cases the SMC shows "Level filter = None". When a change is made via the SMC, the first filter of the appender's filter chain is changed, or an additional filter is added in the first position.

Assigned Categories
Here the logging categories that the appender is assigned to (and therefore receives logging events from) can be viewed and changed. Additionally, a checkbox can be selected to only show categories the logging system currently knows, i.e., categories that were mentioned in any configuration or that have been used by code to send an event.

Log Processes

Via a log process, additional temporary filters or appenders can be added to the configuration of the logging system. This is useful, e.g., to create a new log file for a certain business process, or to apply filters for test cases that are intended to trigger and verify the correctness of error log messages.

To allow business-task-specific filtering of the log messages, a key/value pair is introduced into the MDC, namely the process type as key and the process ID as value.

Note

Only one log process per unique process type can be active within a thread at any time.
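
Since the process type/ID pair lives in the MDC, a standard Logback MDCFilter (a turbo filter) could accept or deny events based on it. The following fragment is only a hedged illustration; the key and value are invented examples, and the actual keys used by ICM log processes are internal:

<turboFilter class="ch.qos.logback.classic.turbo.MDCFilter">
    <!-- accept events whose MDC carries the given process type/ID pair -->
    <MDCKey>ImpexProcess</MDCKey>
    <Value>process-4711</Value>
    <OnMatch>ACCEPT</OnMatch>
</turboFilter>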

Currently, three different types exist:

  • com.intershop.beehive.core.capi.log.ProcessSingleFileAppender
    • Can be created with pipelet CreateProcessFileLogAppender
    • Produces a single text file
    • File format via properties intershop.logging.dynamicfiletarget.encoding and intershop.logging.dynamicfiletarget.pattern
  • com.intershop.beehive.core.capi.log.ProcessSingleXMLFileAppender
    • Can be created with pipelet CreateProcessFileLogAppender
    • Produces a single XML file
    • Can be read via com.intershop.beehive.core.capi.log.ProcessSingleXMLFileAppender or pipelet ReadXMLLogFile
  • com.intershop.beehive.core.capi.log.ProcessTemplateBasedAppender
    • Can be created with pipelet CreateProcessTemplateBasedLogAppender
    • Is intended to create any kind of appender via a native Logback XML fragment stored under a certain property key

If such a process appender is not explicitly assigned to certain log categories, the categories configured with the property intershop.logging.dynamictarget.categories are used by default.

Note

Process appenders are not visible in the appender lists in the SMC. However, the logging status view may provide messages that reflect the creation and/or actions of such appenders.
