Document Properties
Kbid: 277L32
Last Modified: 02-Dec-2022
Added to KB: 20-Jul-2016
Public Access: Everyone
Status: Online
Doc Type: Guidelines
Product: ICM 7.10

Guide - Node Manager

Introduction

This guide outlines the configuration and administration options for the Node Manager and the Apache Jakarta Tomcat application server. It is intended for system administrators and DevOps engineers who configure and maintain Intershop Commerce Management instances.

Info

Prior to Intershop version 7.7, the information provided in this document was part of the Administration and Configuration Guide, which can be found in the Knowledge Base.

Note

All relevant setup options must be configured in the dedicated deployment script files before executing the deployment. Be aware that if you modify the Intershop Commerce Management configuration after it has been deployed, the next deployment will override all changes with the settings specified for your deployment.

Glossary

Node Manager
    The Node Manager is a standalone Java program that is used to control all server processes in an Intershop Commerce Management instance. The Node Manager starts, stops and (in case of abnormal process termination) restarts application server processes. In addition, it provides the communication interface for Cluster Management, which is used for remote control purposes.

Application Server
    The Apache Jakarta Tomcat application server provides the operating environment for all Intershop Commerce Management applications, including (at least) the JSP and servlet engine and HTTP(S) connectivity. The application server comes out of the box with Intershop Commerce Management and is installed with every Intershop Application Server.

Cluster Management
    In Intershop Commerce Management, an application that allows the system administrator to control the application server instances running in the cluster, as well as the applications on top of them.

References

You should be familiar with the main concepts of the Intershop Commerce Management infrastructure. Refer to Overview - Infrastructure, Scaling and Performance.

Tomcat, Node Manager and Cluster Management Interaction

The Node Manager and Cluster Management interact when starting and managing the Tomcat application server processes, and when checking the cluster state. The figure below illustrates the interaction using a distributed installation as an example.

Tomcat cluster management overview

  • Startup (1)
    The Node Manager starts the Tomcat server processes (and thus, the applications on top of them) as configured. It monitors the servers permanently and restarts them when necessary.
  • Application and Process Management (2), (3)
    To shut down a remote server or to control applications on a remote server, Cluster Management communicates with the remote Cluster Management instance, which then fulfills the request in the local server instance (2).
    To get a list of all remote server instances and their state, or to terminate a remote server, Cluster Management communicates with the responsible Node Manager, which then executes the requested operations (3).
  • Cluster State (4)
    Cluster Management instances and Node Managers send out heartbeat information (including name, host and port) via event messaging (multicast by default). In addition, each Cluster Management instance evaluates incoming heartbeats. This way, it obtains the required information about the available server instances in the entire cluster, and about how to contact a remote Cluster Management instance or Node Manager for application and process management.

Cluster Management Settings

Shared Cluster Management Settings

For the communication and remote management to work, the Cluster Management instances and the Node Managers must share the event messaging settings and the user database.

  • Settings necessary for cluster-wide management are stored in the common configuration file tcm.properties and the user database users.xml, located in <IS.INSTANCE.SHARE>/system/tcm/config.
  • Additional, instance-specific configuration information is read from the configuration file tcm.properties located in <IS.INSTANCE.SHARE>/system/tcm/config/local/<IS.AS.HOSTNAME>/<IS.INSTANCE.ID>.

The following table lists the required settings in tcm.properties that the Cluster Management instance and the Node Manager use for cluster-wide management.

intershop.tcm.event.messengerClass
    Specifies the messenger class to be used. The default is com.intershop.beehive.messaging.internal.multicast.MulticastMessenger.

intershop.tcm.event.multicastAddress
    Defines the group address used for multicast messaging.

intershop.tcm.event.multicastPort
    Defines the event distribution port for multicast messaging.

intershop.tcm.event.multicastListenerThreads
    Defines the number of handler threads that process incoming events. The default value is 5.

intershop.tcm.registration.registrationTime
    Defines the interval (in seconds) after which the Cluster Management instance or the Node Manager sends out a heartbeat packet to re-register with all other Cluster Management instances. The default value is 10.

intershop.tcm.registration.expirationTime
    Defines the interval (in seconds) after which Cluster Management unregisters a Node Manager or another Cluster Management instance if no heartbeat packets were received. The default value is 50.

intershop.tcm.jmx.protocol
    Defines the protocol used to transport JMX control commands to other Cluster Management and Node Manager instances. Currently, only HTTP is supported.

intershop.tcm.password.digest
    Defines the algorithm used for Cluster Management user password encryption. The default value is MD5.

If you intend to use a messaging system other than the default multicast, you must enable and adjust the corresponding intershop.tcm.event properties.
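For illustration, a shared tcm.properties using the default multicast messaging might look like the following sketch. The property names and defaults are taken from the table above; the multicast address and port are example placeholders that must be replaced with the values of your environment:

Example tcm.properties
# Event messaging (multicast by default)
intershop.tcm.event.messengerClass=com.intershop.beehive.messaging.internal.multicast.MulticastMessenger
# Example placeholder values; use the group address and port of your cluster
intershop.tcm.event.multicastAddress=239.192.10.1
intershop.tcm.event.multicastPort=40001
intershop.tcm.event.multicastListenerThreads=5
# Heartbeat registration and expiration (in seconds)
intershop.tcm.registration.registrationTime=10
intershop.tcm.registration.expirationTime=50
# Remote control transport and password digest
intershop.tcm.jmx.protocol=HTTP
intershop.tcm.password.digest=MD5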

Local Cluster Management Settings

In larger installations, it is typically necessary to separate the event messaging traffic (multicast is the default) between Cluster Management and Node Managers from other traffic. This can be done by binding multicast messaging to a dedicated network adapter.

If multiple network adapters are installed on a machine, you can specify the IP address of the network interface to use for multicast messaging as the value of the property intershop.tcm.event.multicastInterface.

Note

This machine-specific configuration is defined in a local configuration file tcm.properties, located in <IS.INSTANCE.SHARE>/system/tcm/config/local/<IS.AS.HOSTNAME>/<IS.INSTANCE.ID>.
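For example, binding multicast messaging to a dedicated adapter requires a single line in the local tcm.properties (the IP address is an example placeholder):

Example local tcm.properties
# Bind multicast messaging to the dedicated network adapter (example IP)
intershop.tcm.event.multicastInterface=192.168.10.5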

Node Manager Settings

In addition to the shared and local cluster-wide configuration properties, each Node Manager reads its local configuration file, which defines the server processes to be started and the process control options.

Node Manager Configuration

The local Node Manager configuration is defined in the nodemanager.properties file (in <IS.INSTANCE.LOCAL>/engine/nodemanager/config/).

The location of the nodemanager.properties file can be set as a JVM property via the command line option -Dnodemanager.config.dir=<path>.
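For example, to make the Node Manager read its configuration from a non-default directory, the JVM option would look like this (the path is illustrative):

Example
-Dnodemanager.config.dir=/opt/intershop/eserver1/nodemanager/config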

As one Node Manager controls all server processes in an Intershop Commerce Management instance, these settings apply to all servers. The following table lists the Node Manager properties in the nodemanager.properties file.

network.protocol
    Defines the protocol to use for communication. The default is HTTP.

network.interface
    Specifies the IP address of the network interface to use. By default, the primary IP address is set.

network.port
    Sets the port for receiving requests from the Cluster Management instance (default: 10050). If not set, the Node Manager starts without communication functionality.

config.update.timeout
    Defines the interval (in seconds) between two configuration look-ups to enable configuration changes at runtime. If not specified or set to 0, the Node Manager reads its configuration only once at startup.

The table below lists the properties that can be set for the processes controlled by the Node Manager.

process.allowedExitCodes
    Specifies exit codes of the Node Manager sub-processes that remain untreated. If a sub-process exits with one of these exit codes, the Node Manager will not restart it.

process.list
    Specifies the names of the sub-processes that are controlled by the current Node Manager instance as a comma-separated list, e.g., appserver0,appserver1.

process.<process_name>.command
    Specifies the command line string used to start the sub-process. This string can include command line arguments to be passed to the sub-process.

process.<process_name>.autostart
    A Boolean value indicating whether the Node Manager should start the specified sub-process at startup. The default value is true. If set to false, only the Cluster Management instance can start this sub-process.

To pass additional properties to the application server process, either edit the startup script (this would apply to all server processes) or enter the required arguments in the command shell when starting a single instance. Additional properties include specific memory allocation, additional classpaths, etc.
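Putting the properties above together, a minimal nodemanager.properties for two application server processes might look like the following sketch. The network values match the defaults described above; the process command lines are illustrative assumptions and depend on your installation:

Example nodemanager.properties
# Communication with the Cluster Management instance
network.protocol=HTTP
network.port=10050
# Re-read this configuration every 60 seconds (0 or unset: read once at startup)
config.update.timeout=60
# Sub-processes controlled by this Node Manager
process.list=appserver0,appserver1
# Exit codes that do not trigger a restart
process.allowedExitCodes=0
# Start commands (illustrative; actual commands depend on the installation)
process.appserver0.command=<IS.INSTANCE.LOCAL>/bin/tomcat.sh appserver0
process.appserver0.autostart=true
process.appserver1.command=<IS.INSTANCE.LOCAL>/bin/tomcat.sh appserver1
process.appserver1.autostart=true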

Node Manager Startup Command

As the command line call to start the Node Manager is very complex, a command line script is provided for convenience (nodemanager.sh on Unix platforms, nodemanager.bat on Windows, located in <IS.INSTANCE.LOCAL>/bin/).

The startup script performs the following tasks:

  • Prepares the Node Manager system environment via the environment shell script
  • Assembles the class path for the watchdog Java process
  • Starts the Node Manager process

Intershop recommends not changing the Node Manager startup scripts.

Once running, the Node Manager starts the Intershop Commerce Management server processes, as defined in the nodemanager.properties file.

Intershop recommends using the nodemanager.sh|bat script only for development or debugging purposes.
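On a Unix development system, starting the Node Manager manually then comes down to invoking the script from the instance's bin directory, for example:

Example
cd <IS.INSTANCE.LOCAL>/bin
./nodemanager.sh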

Application Server Configuration

The configuration for each Tomcat instance that runs a specific Intershop Commerce Management instance is saved in the directory <IS.INSTANCE.LOCAL>/engine/tomcat/servers/appserver<ID>/conf.

The default configuration, which is used, for example, when cloning a Tomcat instance, is stored in <IS.INSTANCE.LOCAL>/engine/tomcat/servers/_default_/conf.

Application Server Shutdown

The Tomcat application server receives shutdown requests via a dedicated shutdown port. The default port number for instance ES1 is 10051. However, the shutdown port can be freely defined.

The string which is used internally to request a shutdown is configured in the server.xml file. By default, the string is set to SHUTDOWN, as shown below:

<Server port="10051" shutdown="SHUTDOWN">

For security reasons, it is strongly recommended to change the default shutdown request string in the server.xml file. Otherwise, any local network user can shut down the application server instance by simply sending the string SHUTDOWN to the respective shutdown port.
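For example, you could replace the default string with a random value known only to administrators (the string below is just an illustration):

Example server.xml
<Server port="10051" shutdown="x7RqL2vZk9">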

Application Server Network Settings

The Apache Jakarta Tomcat server provides numerous options for configuring its network connection. These properties are set in the Tomcat configuration file server.xml as attributes of the Connector element. The Connector component represents Tomcat's connection interface for serving HTTP(S) requests.

For detailed information on how to configure the Apache Jakarta Tomcat, refer to the Tomcat documentation.

For the purpose of serving Intershop Commerce Management, the following options are relevant and may require customizing:

  • Application Server Network Connection
  • Application Server Ports
  • HTTPS Keystore Settings

Application Server Network Connection

The TCP port number on which the Tomcat Connector will create a server socket and await incoming connections is defined by the attribute port. With Intershop Commerce Management, the default application server ports are 10052 for HTTP and 10053 for HTTPS. The port numbers have to be modified when cloning an application server.

In the context of Intershop Commerce Management, the TCP port numbers discussed above are used for communication with Cluster Management. Note that the Web Adapter routes all requests to the application server's internal JSP and servlet engine, using the port specified by the property intershop.servletEngine.connector.port.
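For illustration, an HTTP Connector definition in server.xml using the Intershop default port could look like this (the connectionTimeout attribute is a standard Tomcat setting shown only as an example):

Example server.xml
<Connector port="10052" protocol="HTTP/1.1" connectionTimeout="20000" />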

Application Server Ports

The following table lists the recommended application server port scheme.

ICM Instance    AS Instance    Port     Description
ES1             -              10050    Node Manager port
ES1             appserver0     10051    Tomcat shutdown port
ES1             appserver0     10052    Tomcat HTTP port
ES1             appserver0     10053    Tomcat HTTPS port
ES1             appserver0     10054    Intershop Commerce Management port
ES1             appserver1     10061    Tomcat shutdown port
ES1             appserver1     10062    Tomcat HTTP port
ES1             appserver1     10063    Tomcat HTTPS port
ES1             appserver1     10064    Intershop Commerce Management port
ES2             -              10100    Node Manager port
ES2             appserver0     10101    Tomcat shutdown port
ES2             appserver0     10102    Tomcat HTTP port
ES2             appserver0     10103    Tomcat HTTPS port
ES2             appserver0     10104    Intershop Commerce Management port
ES2             appserver1     10111    Tomcat shutdown port
ES2             appserver1     10112    Tomcat HTTP port
ES2             appserver1     10113    Tomcat HTTPS port
ES2             appserver1     10114    Intershop Commerce Management port

HTTPS Keystore Settings

The location of the security key for the HTTPS Connector is specified in the attribute keystoreFile. This attribute gives the absolute path and file name of the keystore file. The default is:

<IS.INSTANCE.LOCAL>/engine/tomcat/servers/appserver<ID>/conf/server.xml
keystoreFile="<IS.INSTANCE.SHARE>/system/tcm/config/keystore"

To edit the key file, you need a password. This password is defined by the attribute keystorePass, with intershop as the default value.

Intershop Commerce Management provides a fully functional demo security key. To prevent warnings about certificate/IP mismatches, Intershop recommends creating your own key file for each server.

To edit the existing keystore file, e.g., to create new certificates, use the keytool program, located in <JAVA_HOME>/bin. For information on managing keystores using keytool, refer to the JDK documentation.
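For example, a new self-signed key pair could be generated in the shared keystore as follows (the alias and validity values are illustrative; the password intershop is the default mentioned above):

Example
keytool -genkeypair -alias tomcat -keyalg RSA -validity 365 -keystore <IS.INSTANCE.SHARE>/system/tcm/config/keystore -storepass intershop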

The keystore settings are relevant only for the communication with Cluster Management. They are not relevant to securing Commerce Management or your storefront applications.

Tomcat Logging Options

The Tomcat log files (e.g., tcc_access_log) are stored separately for every server instance in the Tomcat log directory <IS.INSTANCE.LOCAL>/engine/tomcat/servers/appserver<ID>/logs. For information on these logs, consult the official documentation of the Apache Jakarta Tomcat.

Managing Tomcat Server Instances Across Intershop Commerce Management Clusters

In complex deployments involving multiple Intershop Commerce Management clusters, it may be desirable to manage the Tomcat server processes of different Intershop Commerce Management clusters in a single Cluster Management instance. For example, consider a data replication scenario with a source system and a target system. Being part of different Intershop Commerce Management clusters, the source and target systems access separate Intershop Shared Files instances.

In a standard setup, the Cluster Management configuration files are also separate, as they reside in the respective Intershop Shared Files area of cluster 1 and cluster 2. Using the Intershop Commerce Management environment variable IS_TCM_SHARE, defined in the intershop.properties file below <IS.INSTANCE.HOME>, both clusters can be forced to use the same set of configuration files. As a consequence, the Tomcat server instances can be managed from the same Cluster Management instance.

Sample scenario with two Intershop Commerce Management clusters accessing a joint set of Cluster Management configuration files

For different clusters to use the same Cluster Management configuration files, the IS_TCM_SHARE variable in the intershop.properties file of each instance has to point to the same location.
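For example, the intershop.properties file of every affected instance could point to the same shared location (the path below is an illustrative assumption):

Example intershop.properties
IS_TCM_SHARE=/mnt/shared/common/system/tcm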

Managing server instances from different Intershop Commerce Management clusters in the same Cluster Management instance

You can use any Tomcat server instance to connect to the Cluster Management instance, using the instance's Tomcat HTTP port (e.g., port 10052). In larger deployments, consider using a dedicated Tomcat server instance which serves administrative purposes only and does not run the Intershop Commerce Management application.

OutOfMemory Error Handling

Info

This section replaces the outdated Knowledge Base article with the ID 23K026 and the title Using the OnOutOfMemoryError JVM Option.

The OnOutOfMemoryError option helps you deal with OutOfMemory situations in your Java Virtual Machine by defining an action to be executed when an OOM error occurs.

Simply adjust your tomcat.bat / tomcat.sh in <ESERVER_HOME>/bin by adding the following line to the JVM options section:

Adjust tomcat.bat / tomcat.sh
set JAVA_OPTS=%JAVA_OPTS% -XX:OnOutOfMemoryError="<YOUR_ACTION>"

Example:

Example
set JAVA_OPTS=%JAVA_OPTS% -XX:OnOutOfMemoryError="C:/WINDOWS/system32/taskkill /F /PID %%p"

This example kills the application server process in case of an OOM error in a Windows environment. For more information about this and other JVM options, refer to Java HotSpot VM Options.
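On a Unix platform, the corresponding JVM option would look like the following sketch. Here, kill -9 %p terminates the JVM, with %p being the JVM's placeholder for its own process ID (the doubled %%p above is only needed in Windows batch files). Depending on how tomcat.sh expands JAVA_OPTS, additional quoting or escaping may be required:

Example
-XX:OnOutOfMemoryError="kill -9 %p"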
