Document Properties

Kbid: 25395D
Last Modified: 04-Apr-2024
Added to KB: 02-Dec-2013
Public Access: Everyone
Status: Online
Doc Type: Concepts
Product:
  • ICM 11
  • ICM 12
Concept - Logging

Introduction

The purpose of this document is to give an overview of what the logging framework in Intershop Commerce Management can do.

Logging in general means recording events for documentation purposes and for analyzing processes, especially for finding the cause of an issue when something has gone wrong.

Within a software system, logging generally works as follows:

  • Code is enriched with log statements.
  • At runtime the log statements produce log events (messages).
  • Those log events are processed by a logging engine that filters, enriches, and ultimately passes the events to targets where they are stored.

Since Enfinity Suite 6.3, the logging system is based on SLF4J and Logback.

Glossary

This glossary describes the terms used in this document (terms set in monospace correspond directly to class/interface names):

Log Statement: A line of code that produces a log event.

Log Event: An atomic set of data to be recorded.

Appender: A class that actually writes a log event to a log target, such as a file or a database, sends an email, or forwards the event to a monitoring application.

Log Category: Also often called a logger. In most logging systems, each log event is directly assigned to a category. The set of all categories forms a tree, which in most use cases makes it easier to select log messages for filtering and routing to the appenders.

Logger: Two meanings (sometimes used interchangeably): 1. a class/object that provides methods for submitting log events, and 2. a log category.

SMC: System Management Console (or System Management as of IS 7.5 or later).

Log Levels

It is important to use the correct log level; otherwise, the message may be ignored.

Level (Target User): Content / Reason

Trace (developer): Very detailed information, content of variables.

Debug (developer): Summary information, high-level content of variables (such as external IDs).

Info (administrator): A main business goal was reached. The message describes which business goal was reached and identifies the business object involved. This signals to the administrator that the application is working well; additionally, developers can see milestones of the business process when analyzing failures.

Warn (administrator): Something went wrong because of missing resources or timeouts to external systems. The message should contain enough information for the administrator to identify the environment issue.

Error (developer): Something went wrong; the stack trace needs to be analyzed to avoid this kind of problem in the future.

A good explanation of the levels and many other logging-related hints can be found at javacodegeeks.
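
To illustrate, the following is a minimal sketch of how these levels might be applied in application code (the class, method, and message texts are hypothetical):

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class OrderProcessor // hypothetical example class
{
    private static final Logger logger = LoggerFactory.getLogger(OrderProcessor.class);

    public void process(String orderID)
    {
        logger.trace("Entering process(), orderID={}", orderID); // developer: very detailed
        try
        {
            doProcess(orderID);
            logger.info("Order {} processed successfully.", orderID); // administrator: business goal reached
        }
        catch (java.util.concurrent.TimeoutException e)
        {
            logger.warn("External system timed out for order {}.", orderID); // administrator: environment issue
        }
        catch (Exception e)
        {
            logger.error("Unexpected failure while processing order " + orderID, e); // developer: analyze stack trace
        }
    }

    private void doProcess(String orderID) throws java.util.concurrent.TimeoutException
    {
        logger.debug("Processing order {}.", orderID); // developer: summary information
        // ... business logic ...
    }
}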

Note

Business user information (e.g., that an import file contains incorrect content) must be provided to the business user, usually via the UI or specific log files for import/export. Business user information must not end up in the debug, warn, or error log files.

Basic Concepts of SLF4J and Logback

SLF4J (Simple Logging Facade for Java) provides an API for submitting log messages that is independent of the actual implementation of the engine that processes them. Furthermore, the SLF4J project provides bridge libraries for the other commonly used logging APIs, such as Log4J, Apache Commons Logging, and java.util.logging, so that messages submitted via those APIs are also passed to the desired implementation.

Logback provides a native implementation for SLF4J and is intended as a successor to the popular Log4J project.

The following sections give only a brief overview of the basic concepts. For detailed information, please refer to the latest official documentation of these two projects: the SLF4J documentation and the Logback documentation.

Log Event Creation

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class Wombat
{
    final Logger logger = LoggerFactory.getLogger(Wombat.class);
    private Integer current;

    public void setTemperature(Integer temperature)
    {
        final Integer old = current;
        current = temperature;

        logger.debug("Temperature set to {}. Old temperature was {}.", current, old);
        if (current.intValue() > 50)
        {
            logger.info("Temperature has risen above 50 degrees.");
        }
    }
}

This example shows the basic way of submitting log events via SLF4J:

  • A Logger object is retrieved from the LoggerFactory, typically with a class name (retrieved from the class object).
  • The Logger object provides methods for submitting events, where the method name is one of the five log levels (error, warn, info, debug, and trace).
  • Message texts can be parameterized, as shown in the sketch below.
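
Parameterization matters for performance: with the parameterized form, the final message string is only built if the event actually passes the level check. A minimal sketch, continuing the Wombat example (computeExpensiveDiagnostics() is a hypothetical expensive call):

// Preferred: the message is only formatted if the debug level is enabled.
logger.debug("Temperature set to {}. Old temperature was {}.", current, old);

// Avoid: the string concatenation is evaluated even if the debug level is disabled.
logger.debug("Temperature set to " + current + ". Old temperature was " + old + ".");

// For expensive argument computations, an explicit level check is still useful.
if (logger.isDebugEnabled())
{
    logger.debug("Detailed state: {}", computeExpensiveDiagnostics());
}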

Enrichment of the Log Events with Mapped Diagnostic Context (MDC)

The Mapped Diagnostic Context (MDC) is a way to enrich logging events with contextual runtime information that is not in the scope of the code containing the log statements, for example:

  • Current request identifier
  • Current application
  • Current session
  • ...

It is basically a map of key/value pairs of type String held in an InheritableThreadLocal, which means that child threads automatically receive all values of the parent thread as they existed at thread creation time.
Values of the MDC can be used to filter logging events, to enrich log messages, or to route logging events to different targets.

package org.slf4j;

public class MDC
{
    public static void put(String key, String value);
    public static String get(String key);
    public static void remove(String key);
    public static void clear();
}
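
A typical usage pattern is to set MDC values at the beginning of a unit of work and to remove them in a finally block, so they do not leak to unrelated work on the same thread. A minimal sketch (the class and the key name requestID are only illustrations):

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.slf4j.MDC;

public class RequestWorker // hypothetical example class
{
    private static final Logger logger = LoggerFactory.getLogger(RequestWorker.class);

    public void handle(String requestID)
    {
        MDC.put("requestID", requestID); // visible to all log events on this thread
        try
        {
            logger.info("Handling request."); // the logging engine can read requestID from the MDC
        }
        finally
        {
            MDC.remove("requestID"); // clean up to avoid leaking the value
        }
    }
}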

Logback Event Processing

Logback is optimized for performance, which in practice means filtering out and dropping logging events as early as possible. The process is as follows:

  1. Turbo Filter Chain
    • A global decision whether a logging event can be dropped (see the sketch after this list)
    • If the result is ACCEPT, then the category level filter is skipped.
  2. Category Level Filter
    • Each category has an effective level that is compared to the level of the logging event.
  3. Creating a LoggingEvent object
    • Up to this point Logback passes all parameters to the filters as they are (unmodified).
  4. Invoking applicable appenders
    • Appenders are classes that append the log message to the log target (e.g., a file, the database, console, remote applications).
    • Appenders can have their own filter chains.
    • Applicable appenders are found by traversing the category tree from the leaf to the root.
    • Categories can be marked to stop traversing the tree ("additivity" flag).
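
To illustrate step 1, the following is a minimal sketch of a custom turbo filter using the plain Logback API (the class and category names are hypothetical; ICM itself configures Logback programmatically, as described below):

import ch.qos.logback.classic.Level;
import ch.qos.logback.classic.Logger;
import ch.qos.logback.classic.turbo.TurboFilter;
import ch.qos.logback.core.spi.FilterReply;
import org.slf4j.Marker;

// Drops events below WARN for a noisy category before a LoggingEvent object is even created.
public class NoisyCategoryTurboFilter extends TurboFilter
{
    @Override
    public FilterReply decide(Marker marker, Logger logger, Level level, String format, Object[] params, Throwable t)
    {
        if (logger.getName().startsWith("com.example.noisy") && !level.isGreaterOrEqual(Level.WARN))
        {
            return FilterReply.DENY; // dropped as early as possible
        }
        return FilterReply.NEUTRAL; // defer to the category level filter
    }
}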

ICM Integration

Note

The configuration of the logging system for the server is customized to provide log configuration enhancements for implementation partners. This means Logback is configured programmatically instead of relying on the basic mechanisms built into Logback.

Preparation and Class Loading

Before the logging configuration is initialized, especially during server startup and class loading, it is not possible to use SLF4J, so System.out.println is the only way to log messages from the application. Such messages are written as unformatted, non-encoded output. Please reduce or avoid this type of output to System.out or System.err.

Note

By default, these log messages are written directly to standard out.

Logback Configuration

The logging framework of the Intershop solution also provides some additional features that are not part of Logback:

  • Cartridge-based configuration
  • Dynamic configuration changes via the SMC
  • Logging status via the SMC
  • Log filtering to avoid flooding logs with the same message
  • Enhanced MDC

The SLF4J and Logback JAR files are integrated via dependencies. The log configuration is combined from different sources: logback-main.xml and the log configurations from cartridges:

  • <CARTRIDGE_SOURCE>/src/main/resources/resources/<CartridgeName>/logback/logback-<CartridgeName>.xml


Note

By default, all log messages are written to standard out using the JSON log format.

MDC

The Intershop solution provides a utility class com.intershop.beehive.core.capi.log.MDC that allows objects of any type to be placed in the SLF4J-MDC, which actually only supports string values. This is achieved by putting multiple attributes of the passed object into the MDC in addition to the original object itself. Which attributes of a given type are added can be configured.

Configuration:

intershop.logging.mdc.types           = List
intershop.logging.mdc.List.class      = java.util.List
intershop.logging.mdc.List.attr.first = Value.isEmpty() ? null : Value.get(0)
intershop.logging.mdc.List.attr.last  = Value.isEmpty() ? null : Value.get(Value.size() - 1)

Code:

List<Integer> a = Arrays.asList(new Integer[] {1, 2, 3});
List<Object> b = Arrays.asList(new Object[] {"a", a});
MDC.put("xyz", b);

The SLF4J MDC will then contain:

xyz             = [a, [1, 2, 3]]
xyz.first       = a
xyz.last        = [1, 2, 3]
xyz.last.first  = 1
xyz.last.last   = 3

Java Util Logging

The issue with Java util logging is that its configuration is global to the JVM, which may not work well with a servlet container running multiple applications.
Two properties control how the Java logging configuration is affected:

# should logging messages from Java logging be passed to the Intershop logging system (true / false)?
intershop.logging.javaloggingadapter.enable=true
# should a possibly existing Java logging configuration be overwritten by a new one so that the only
# receiver of all those messages is the Intershop logging system
intershop.logging.javaloggingadapter.exclusive=true
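
Internally, routing java.util.logging messages to SLF4J is typically achieved with the SLF4J bridge handler. The following sketch shows what the adapter conceptually does when both properties are set to true (this is the plain SLF4J bridge API, not the ICM adapter itself):

import org.slf4j.bridge.SLF4JBridgeHandler;

public class JulBridgeSetup // hypothetical example class
{
    public static void main(String[] args)
    {
        // "exclusive": remove any existing handlers from the JUL root logger ...
        SLF4JBridgeHandler.removeHandlersForRootLogger();
        // ... and "enable": route all java.util.logging records to SLF4J.
        SLF4JBridgeHandler.install();

        java.util.logging.Logger.getLogger("example").info("Now handled by the SLF4J backend.");
    }
}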

Misc

#
# Dynamic file appender
#
# e.g., used when creating an impex log
#

intershop.logging.dynamictarget.categories=root

intershop.logging.dynamicfiletarget.encoding=
intershop.logging.dynamicfiletarget.pattern=[%date{yyyy-MM-dd HH:mm:ss.SSS z}] [%thread] %-5level %logger{36} - %msg%n


#
# Auto flush of buffered appenders. 
#
# The flush interval has to be given in seconds; the default is 30 seconds.
# A value less than or equal to zero disables this feature.
#

intershop.logging.appender.buffer.flushInterval=30

#
# Logging of messages from the logging engine 
#
# Print messages to console with a level higher or equal to the configured level (ERROR, WARN, INFO)
# Size of the buffer for the last messages kept in memory 
#

intershop.logging.engine.statuslogging.level=WARN
intershop.logging.engine.statusbuffer.size=500


#
# Enrichment of stack traces with class packaging data
#

intershop.logging.engine.stacktrace.packaginginfo.enable=false 

Limitation

The auto flush feature is only supported for appenders that inherit from ch.qos.logback.core.OutputStreamAppender and use the ch.qos.logback.core.encoder.LayoutWrappingEncoder.

SMC

The Logging module within the SMC provides:

  • An overview of the status of the logging systems for all running application servers in the cluster
  • The detailed status of the logging systems for each application server including the list of the last status messages of the logging system (default 500)
  • The possibility to change whether an application server uses an individual logging configuration or the default for the whole cluster
  • The possibility to upload and remove additional logging configurations; uploaded configuration files are persistent (they are still used after the server restarts)
  • The possibility to change some properties for appenders statically defined via the XML configuration files

Changes made here are stored in logging.properties or <ServerName>.properties (depending on whether the change was made for a certain server or for the whole cluster).

#
# Values that overwrite the configuration of the native logback configuration files
# Entries can be added and changed via the SMC
#
# intershop.logging.appender.<Appender Name>.level = [THRESHOLD,FIXED],[ERROR, WARN, INFO, DEBUG, TRACE]
# intershop.logging.appender.<Appender Name>.categories = <category1> <category2> ...
#

intershop.logging.additionalfiles.cluster=SQL_Pipeline_Appender.xml
intershop.logging.appender.SQL_Pipeline@SQL_Pipeline_Appender.xml@c.level=FIXED,DEBUG
intershop.logging.appender.SQL_Pipeline@SQL_Pipeline_Appender.xml@c.categories=com.intershop.beehive.orm.internal.query.SQLManagerImpl

Possible changes to appenders via the SMC are:

Level Filter
The level filter displayed in the SMC is based only on the first filter assigned to the appender, and a value is only displayed if that filter is a LevelFilter or a ThresholdFilter. Changes are usually applied by modifying the first filter of the appender's filter chain or by adding an additional filter. In all other cases, the SMC shows "Level filter = None".
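
These two filter types are the plain Logback classes; presumably, FIXED corresponds to a LevelFilter and THRESHOLD to a ThresholdFilter. Configured programmatically, they would look roughly like this (a sketch, not ICM code):

import ch.qos.logback.classic.Level;
import ch.qos.logback.classic.filter.LevelFilter;
import ch.qos.logback.classic.filter.ThresholdFilter;
import ch.qos.logback.core.spi.FilterReply;

public class FilterExamples
{
    public static void main(String[] args)
    {
        // THRESHOLD: accepts all events at or above the configured level.
        ThresholdFilter threshold = new ThresholdFilter();
        threshold.setLevel("DEBUG");
        threshold.start();

        // FIXED: accepts events of exactly the configured level and denies all others.
        LevelFilter fixed = new LevelFilter();
        fixed.setLevel(Level.DEBUG);
        fixed.setOnMatch(FilterReply.ACCEPT);
        fixed.setOnMismatch(FilterReply.DENY);
        fixed.start();
    }
}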

Assigned Categories

Here you can view/modify the logging categories to which the appender is assigned and from which it therefore receives logging events. Additionally, a checkbox can be used to show only categories currently known to the logging system, i.e., categories that are mentioned in a configuration or that have been used by code to send an event.

Log Processes

A log process can be used to add additional temporary filters or appenders to the configuration of the logging system. This is useful, for example, to create a new log file for a particular business process, or to apply filters to test cases that are intended to trigger and test the correctness of error log messages.

To enable business-task-specific filtering of log messages, a key/value pair is introduced within the MDC, namely the process type as the key and the process ID as the value.
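
For illustration, a filter that lets only the events of one particular log process pass could look roughly like the following sketch, written against the plain Logback API (the class and constructor arguments are hypothetical; the actual ICM process appenders are listed below):

import ch.qos.logback.classic.spi.ILoggingEvent;
import ch.qos.logback.core.filter.Filter;
import ch.qos.logback.core.spi.FilterReply;

// Accepts only events whose MDC carries the given process type/ID pair.
public class LogProcessFilter extends Filter<ILoggingEvent>
{
    private final String processType; // MDC key, e.g., an import process type
    private final String processID;   // MDC value, the concrete process instance

    public LogProcessFilter(String processType, String processID)
    {
        this.processType = processType;
        this.processID = processID;
    }

    @Override
    public FilterReply decide(ILoggingEvent event)
    {
        String value = event.getMDCPropertyMap().get(processType);
        return processID.equals(value) ? FilterReply.ACCEPT : FilterReply.DENY;
    }
}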

Note

Only one log process per unique process type can be active within a thread at a time.

Currently, three different types exist:

  • com.intershop.beehive.core.capi.log.ProcessSingleFileAppender
    • Can be created with pipelet CreateProcessFileLogAppender
    • Produces a single text file
    • File format via properties intershop.logging.dynamicfiletarget.encoding and intershop.logging.dynamicfiletarget.pattern
  • com.intershop.beehive.core.capi.log.ProcessSingleXMLFileAppender
    • Can be created with pipelet CreateProcessFileLogAppender
    • Produces a single XML file
    • Can be read via com.intershop.beehive.core.capi.log.ProcessSingleXMLFileAppender or pipelet ReadXMLLogFile
  • com.intershop.beehive.core.capi.log.ProcessTemplateBasedAppender
    • Can be created with pipelet CreateProcessTemplateBasedLogAppender
    • Is intended to create any kind of appender via a native Logback XML fragment stored under a certain property key

If such a process appender is not explicitly assigned to specific log categories, the categories defined by the property intershop.logging.dynamictarget.categories are used by default.

Note

Process appenders are not visible in the appender lists in the SMC. However, the logging status view may show messages that reflect the creation and/or actions of such appenders.
