This document gives an overview of what the logging framework in Enfinity / Intershop / Intershop Commerce Management provides.
Logging in general means recording events for documentation purposes and for analyzing processes, especially for finding the cause of an issue when something went wrong.
Within a software system, logging generally works as follows: log statements in the code produce log events, which are assigned to log categories and written to log targets by appenders.
Since Enfinity Suite 6.3, the logging system has been based on SLF4J and Logback.
The following glossary describes the terms used here (terms in monospaced style directly correspond to class/interface names):
Term | Description |
---|---|
Log Statement | A code line that produces a log event |
Log Event | An atomic set of data to be recorded |
Appender | A class that actually writes a log event to a log target, e.g., it writes to a file or a database, sends an e-mail, or forwards the event to a monitoring application |
Log Category | Also often called logger. In most logging systems, each log event is directly assigned to a category. The set of all categories forms a tree, which in most use cases makes it easier to select log messages for filtering and routing to the appenders. |
Logger | Two meanings (sometimes used synonymously): 1. A class/object that provides methods for submitting log events, and 2. A log category |
SMC | System Management Console (or System Management as of IS 7.5 or later) |
SLF4J (Simple Logging Facade for Java) provides an API for sending log messages that is independent of the actual implementation of the engine that processes those messages. Furthermore, the SLF4J project provides bridge libraries so that all commonly used logging APIs, such as Log4J, Jakarta Commons Logging, and so on, can pass their messages to the desired implementation.
Logback provides a native implementation for SLF4J and is intended as a successor to the popular Log4J project.
The following sections give only a brief overview of the basic concepts. For detailed information, refer to the official documentation of these two projects: the latest SLF4J documentation and the latest Logback documentation.
```java
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class Wombat
{
    final Logger logger = LoggerFactory.getLogger(Wombat.class);
    private Integer current;

    public void setTemperature(Integer temperature)
    {
        final Integer old = current;
        current = temperature;
        logger.debug("Temperature set to {}. Old temperature was {}.", current, old);
        if (current.intValue() > 50)
        {
            logger.info("Temperature has risen above 50 degrees.");
        }
    }
}
```
This example shows the basic way of submitting log events via SLF4J:
- A `Logger` object is retrieved from the `LoggerFactory`, typically with a class name (obtained from the class object).
- The `Logger` object provides methods for submitting events, where the method name corresponds to one of the five log levels (error, warn, info, debug, and trace).

It is important to use the correct log level; otherwise, the message may be ignored.
Level | Target User | Content |
---|---|---|
Trace | developer | Very detailed information, content of variables |
Debug | developer | Summary information, high-level content of variables (like external IDs) |
Info | administrator | A main business goal was reached; the description states which business goal was reached, including an identification of the business object. This is a signal to the administrator that the application is working well. Additionally, developers can see milestones of the business process when analyzing failures. |
Warn | administrator | Something went wrong because of missing resources or timeouts to external systems. The message should contain enough information for the administrator to identify the environment issue. |
Error | developer | Something went wrong; the stack trace needs to be analyzed to avoid these kinds of problems in the future. |
A good explanation of the levels and many other logging-related hints can be found at javacodegeeks.
Note
Information for the business user (e.g., that an import file contains wrong content) must be provided to the business user, mostly via the UI or specific log files for import/export. Information for business users must not end up in the debug, warn, or error log files.
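To make the level guidelines above concrete, here is a minimal, hypothetical sketch (the class, method, and messages are illustrative assumptions, not Intershop API):

```java
import java.util.concurrent.TimeoutException;

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class OrderService
{
    private static final Logger logger = LoggerFactory.getLogger(OrderService.class);

    public void placeOrder(String orderNo)
    {
        // Trace / debug: developer-oriented details.
        logger.trace("Entering placeOrder, orderNo={}", orderNo);
        logger.debug("Processing order with external id {}", orderNo);
        try
        {
            chargePayment(orderNo);
            // Info: signals to the administrator that a business goal was reached.
            logger.info("Order {} successfully placed.", orderNo);
        }
        catch (TimeoutException e)
        {
            // Warn: an environment issue (timeout to an external system).
            logger.warn("Payment service timed out while placing order {}.", orderNo, e);
        }
        catch (RuntimeException e)
        {
            // Error: unexpected failure; the stack trace needs to be analyzed.
            logger.error("Unexpected failure while placing order {}.", orderNo, e);
        }
    }

    private void chargePayment(String orderNo) throws TimeoutException
    {
        // Placeholder for a call to an external payment system.
    }
}
```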
The Mapped Diagnostic Context (MDC) is a way to enrich logging events with contextual runtime information that is not in the scope of the code containing the log statements, for example, information about the current session, request, or user.
It is basically a map of key/value pairs of type `String`, held as an `InheritableThreadLocal`, which means that child threads automatically get all values from the parent thread as they existed at thread creation time.
Values of the MDC can be used to filter logging events, to enrich the log messages, or to route the logging event to different targets.
```java
package org.slf4j;

public class MDC {
    public static void put(String key, String value);
    public static String get(String key);
    public static void remove(String key);
    public static void clear();
}
```
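A minimal usage sketch (the class and the key name `requestId` are illustrative assumptions, not Intershop API):

```java
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.slf4j.MDC;

public class RequestWorker
{
    private static final Logger logger = LoggerFactory.getLogger(RequestWorker.class);

    public void handle(String requestId)
    {
        // Enrich all log events submitted by this thread with the request id.
        MDC.put("requestId", requestId);
        try
        {
            logger.info("Request processing started."); // carries requestId in its MDC
            // ... actual work ...
        }
        finally
        {
            // Clean up so the value does not leak to the next task
            // executed on this (possibly pooled) thread.
            MDC.remove("requestId");
        }
    }
}
```

In a Logback pattern layout, such a value can then be referenced via `%X{requestId}`.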
Logback is optimized for performance, which in practice means filtering out and dropping logging events as early as possible. The process is roughly as follows:

1. The turbo filter chain attached to the logging context is invoked; it can accept or deny the request before any further work is done.
2. The basic selection rule is applied: the level of the request is compared to the effective level of the log category; if the level is not enabled, processing stops here.
3. Only now is a `LoggingEvent` object created.
4. The event is handed to the appenders attached to the category (and, depending on additivity, to those of its ancestors); each appender may apply its own filters.
5. The appender formats the event via its encoder/layout and writes it to the log target.

Before the logging configuration is initialized, especially during server startup and class loading, it is not possible to use SLF4J, so `System.out.println` is the only way to log messages out of the application. These messages are unformatted and non-encoded output; please reduce or avoid such output to `System.out` or `System.err`.
Note
The nodemanager collects these messages from stdout (in case the nodemanager started the application server, which is done in production environments) and stores them in IS_HOME/logs/<server name>.log.
The SLF4J and Logback JAR files are integrated via dependencies. The log configuration is combined from different sources: logback-main.xml and the log configurations from the cartridges.
Note
The console output of the server is redirected to the log file IS_HOME/<server name>.log.
The logging framework of the Intershop solution also provides some additional features that are not part of Logback; they are described in the following sections.
During the server startup (at a very early stage) the logging system is initially configured.
Note
The configuration of the logging system for the Intershop solution's server is customized: Logback is configured programmatically instead of relying on the basic mechanisms built into Logback.
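For illustration, here is a minimal sketch of what programmatic configuration via Joran looks like in plain Logback (this is not the actual Intershop bootstrap code; the class and file name are assumptions):

```java
import ch.qos.logback.classic.LoggerContext;
import ch.qos.logback.classic.joran.JoranConfigurator;
import ch.qos.logback.core.joran.spi.JoranException;
import org.slf4j.LoggerFactory;

public class LoggingBootstrap
{
    public static void configure(String generatedMainFile) throws JoranException
    {
        // Obtain the Logback context behind the SLF4J facade.
        LoggerContext context = (LoggerContext) LoggerFactory.getILoggerFactory();
        context.reset(); // drop any previously applied configuration

        JoranConfigurator configurator = new JoranConfigurator();
        configurator.setContext(context);
        configurator.doConfigure(generatedMainFile); // interpret the XML "script"
    }
}
```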
The Intershop solution uses Logback Joran configuration files: XML files that are interpreted like scripts to configure the logging system.
The following files are involved:

- <IS_SHARE>/system/config/cluster/logback-main.xml
- <IS_SHARE>/system/config/cartridges/logback-<cartridge name>.xml (one per cartridge that ships a log configuration)
- <IS_SHARE>/system/config/cluster/loggingextension/*.xml
- <IS_SHARE>/system/config/servers/<Host>/<InstallationID>/<ServerName>/loggingextension/*.xml
Files under cluster/loggingextension/ or <ServerName>/loggingextension/ must be listed in their related properties file (the value is the path of the file relative to the loggingextension directory):
```properties
# read <IS_SHARE>/system/config/cluster/loggingextension/trace-global-issue.xml
intershop.logging.additionalfiles.cluster=trace-global-issue.xml
```
or
```properties
# read <IS_SHARE>/system/config/servers/<Host>/<InstallationID>/<ServerName>/loggingextension/trace-local-issue.xml
intershop.logging.additionalfiles.server=trace-local-issue.xml
```
The XML files are native Logback configuration files; all the files mentioned above are pulled in as includes. The Intershop solution internally generates a main file that contains includes for all found/required XML files deployed in the file system, like:
```xml
<?xml version="1.0" encoding="UTF-8" ?>
<configuration>
  <property name="intershop.logging.start" value="2012-07-12_12-59" />
  <property name="intershop.InstallationID" value="ES1" />
  <property name="intershop.HostName" value="localhost" />
  <property name="intershop.ServerName" value="appserver0" />
  <property name="intershop.serverlogfile.Directory" value="D:/eserver/engine/tomcat/servers/appserver0/logs" />
  <property name="intershop.logfile.Directory" value="D:/eserver/share/system/log" />
  <property name="intershop.logfile.NamePostfix" value="localhost-ES1-appserver0" />

  <include file="D:\\eserver\\share\\system\\config\\cluster\\logback-main.xml"/>
  <include file="D:\\eserver\\share\\system\\config\\cartridges\\logback-bc_auditing.xml"/>
  <include file="D:\\eserver\\share\\system\\config\\cartridges\\logback-bc_pricing.xml"/>
  <include file="D:\\eserver\\share\\system\\config\\cartridges\\logback-dev_lilith.xml"/>
</configuration>
```
When running DBInit or DBMigrate, the file logback-dbinit.xml is additionally included, directly after logback-main.xml.
After the main configuration, any additional XML files that exist are executed. These files can be added and removed via the Advanced Logging Settings within the SMC.
The logging.properties and the <ServerName>.properties contain additional global configurations for certain custom features and also the modifications made via the SMC. Logback XML files are never changed by the Intershop solution.
The Intershop solution provides the utility class `com.intershop.beehive.core.capi.log.MDC`, which allows putting objects of any type into the SLF4J MDC, which itself only supports string values. This is achieved by putting several attributes of the passed object into the MDC in addition to the object itself. Which attributes of a given type are added can be configured:
```properties
intershop.logging.mdc.types = List
intershop.logging.mdc.List.class = java.util.List
intershop.logging.mdc.List.attr.first = Value.isEmpty() ? null : Value.get(0)
intershop.logging.mdc.List.attr.last = Value.isEmpty() ? null : Value.get(Value.size() - 1)
```
```java
import java.util.Arrays;
import java.util.List;

import com.intershop.beehive.core.capi.log.MDC;

// Put a nested list into the MDC; Intershop's MDC accepts arbitrary objects
// (unlike org.slf4j.MDC), and the configured attributes are added automatically.
List<Integer> a = Arrays.asList(new Integer[] {1, 2, 3});
List<Object> b = Arrays.asList(new Object[] {"a", a});
MDC.put("xyz", b);
```
The configuration above then results in the following MDC entries:

```text
xyz = [a, [1, 2, 3]]
xyz.first = a
xyz.last = [1, 2, 3]
xyz.last.first = 1
xyz.last.last = 3
```
The issue with java.util.logging is that its configuration is global for the JVM. This may not work well with a servlet container where multiple applications are running.
Two properties control how the Java logging configuration is influenced:
```properties
# Should logging messages from Java logging be passed to the Intershop logging system (true / false)?
intershop.logging.javaloggingadapter.enable=true

# Should a possibly existing Java logging configuration be overwritten by a new one so that the only
# receiver of all those messages is the Intershop logging system?
intershop.logging.javaloggingadapter.exclusive=true
```
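To illustrate what the adapter affects, consider third-party code that logs via `java.util.logging` (the class below is a made-up example); with `intershop.logging.javaloggingadapter.enable=true`, such messages end up in the Intershop logging system:

```java
import java.util.logging.Level;
import java.util.logging.Logger;

public class ThirdPartyComponent
{
    private static final Logger julLogger = Logger.getLogger(ThirdPartyComponent.class.getName());

    public void doWork()
    {
        // With the adapter enabled, this message is forwarded to the
        // Intershop (SLF4J/Logback) logging system instead of being handled
        // by the global java.util.logging configuration.
        julLogger.log(Level.INFO, "Message submitted via java.util.logging");
    }
}
```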
```properties
#
# Dynamic file appender
#
# e.g., used when creating an impex log
#
intershop.logging.dynamictarget.categories=root
intershop.logging.dynamicfiletarget.encoding=
intershop.logging.dynamicfiletarget.pattern=[%date{yyyy-MM-dd HH:mm:ss.SSS z}] [%thread] %-5level %logger{36} - %msg%n

#
# Auto flush of buffered appenders.
#
# The flush interval time has to be given in seconds, default is 30 seconds.
# A value less or equal to zero disables this feature.
#
intershop.logging.appender.buffer.flushInterval=30

#
# Logging of messages from the logging engine
#
# Print messages to console with a level higher or equal to the configured level (ERROR, WARN, INFO).
# Size of the buffer for the last messages kept in memory.
#
intershop.logging.engine.statuslogging.level=WARN
intershop.logging.engine.statusbuffer.size=500

#
# Enrichment of stack traces with class packaging data
#
intershop.logging.engine.stacktrace.packaginginfo.enable=false
```
Limitation
The auto flush feature is only supported for appenders that inherit from `ch.qos.logback.core.OutputStreamAppender` and use the `ch.qos.logback.core.encoder.LayoutWrappingEncoder`.
The Logging module within the SMC provides a runtime view of the configured appenders and allows changing their settings.
Changes made here are stored in the logging.properties or the <ServerName>.properties file (depending on whether the change was made for a certain server or for the whole cluster).
```properties
#
# Values that overwrite the configuration of the native Logback configuration files.
# Entries can be added and changed via the SMC.
#
# intershop.logging.appender.<Appender Name>.level = [THRESHOLD,FIXED],[ERROR, WARN, INFO, DEBUG, TRACE]
# intershop.logging.appender.<Appender Name>.categories = <category1> <category2> ...
#
intershop.logging.additionalfiles.cluster=SQL_Pipeline_Appender.xml
intershop.logging.appender.SQL_Pipeline@SQL_Pipeline_Appender.xml@c.level=FIXED,DEBUG
intershop.logging.appender.SQL_Pipeline@SQL_Pipeline_Appender.xml@c.categories=com.intershop.beehive.orm.internal.query.SQLManagerImpl
```
Possible changes of appenders via the SMC are:
Level Filter
The values displayed within the SMC are based only on the first filter assigned to the appender; only if it is a `LevelFilter` or a `ThresholdFilter` does the SMC show any values. Usually, changes made here modify the first filter of the appender's filter chain, or add an additional filter in the first position. In all other cases the SMC shows "Level filter = None".
Assigned Categories
Here, the logging categories the appender is assigned to (and therefore receives logging events from) can be viewed and changed. Additionally, check boxes allow restricting the view to the categories the logging system currently knows, i.e., categories that were either mentioned in some configuration or have been used by some code to send an event.
Via a log process, additional temporary filters or appenders can be added to the configuration of the logging system. This is useful, e.g., to create a new log file for a certain business process, or to apply filters for test cases that are intended to trigger error log messages and verify their correctness.
To allow business-task-specific filtering of the log messages, a key/value pair is introduced into the MDC, namely the process type as key and the process ID as value.
Note
Only one log process per unique process type can be active within a thread at any time.
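Conceptually, a log process corresponds to the following MDC usage (a simplified sketch of what the framework does internally when a log process is started; the key and value are illustrative):

```java
// Simplified sketch; the framework manages this internally.
// Process type as MDC key, process id as MDC value.
org.slf4j.MDC.put("ImportProcess", "ImportProcess-4711");
try {
    // ... business process: all log events of this thread now carry the
    // pair and can be captured by a temporary process appender ...
} finally {
    org.slf4j.MDC.remove("ImportProcess");
}
```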
Currently, three different types exist:

Appender Class | Creation Pipelet | Remarks |
---|---|---|
`com.intershop.beehive.core.capi.log.ProcessSingleFileAppender` | `CreateProcessFileLogAppender` | Uses the properties `intershop.logging.dynamicfiletarget.encoding` and `intershop.logging.dynamicfiletarget.pattern` |
`com.intershop.beehive.core.capi.log.ProcessSingleXMLFileAppender` | `CreateProcessFileLogAppender` | The written XML log file can be read again via `com.intershop.beehive.core.capi.log.ProcessSingleXMLFileAppender` or the pipelet `ReadXMLLogFile` |
`com.intershop.beehive.core.capi.log.ProcessTemplateBasedAppender` | `CreateProcessTemplateBasedLogAppender` | |
If such a process appender is not explicitly assigned to any certain log category, the categories configured with the property `intershop.logging.dynamictarget.categories` are used by default.
Note
Process appenders are not visible in the appender lists in the SMC. However, the logging status view may provide messages that reflect the creation and/or actions of such appenders.