Concept - Technical Architecture of Intershop Commerce Management 11+

Introduction

This document describes the technical architecture of Intershop Commerce Management (ICM) 11. It provides a high-level overview of the most important terms, concepts and practices. For more details about a certain topic, refer to the design specifications, concept documents or cookbooks.

System Architecture

Overview

System Landscape

An ICM system is never used as a stand-alone, self-sufficient application. Instead, it is always integrated into a larger e-commerce landscape. The exact architecture of this landscape is customer-specific, but typically the ICM system is integrated with dozens to hundreds of external systems. Such systems include:

  • Product information systems (PIM)

  • Content management systems (CMS)

  • Order management systems (OMS)

  • Enterprise resource planning systems (ERP)

  • Customer relationship management systems (CRM)

  • User management / authentication / authorization services

  • Payment services

  • Personalization services

  • Address validation services

  • Fulfillment backends

  • Creditworthiness / scoring services

  • Reporting and business intelligence services

  • E-mail / newsletter services

  • External marketplace connectors

  • Image management services

  • ...

The possibilities are endless.

How ICM is integrated, which sub-processes of the overall e-commerce process are covered by ICM, and which processes are provided by external systems depend on the business model and the existing IT and service environment of the customer.

In addition to the integration with external systems, multiple ICM systems are often integrated with each other. Typically, larger installations consist of:

  • Editing systems for preparing new content

  • Live systems for handling storefront traffic

  • Development systems for developing new versions of the software

  • Test systems for testing software and content

Tiers

ICM comes with a 3-tier architecture, which consists of:

  • Web tier

  • Application tier

  • Data tier

Here the term tier refers to the physical distribution of the software installation. Typically, each tier is deployed on a separate server machine for a production environment. For development and test installations, all tiers of ICM can also be deployed on a single machine.

Web Tier

The web tier consists of the web servers (Apache) with the additional ICM web adapter. In an ICM cluster, there are usually multiple web servers that receive their load from a hardware load balancer in front of them. The web adapter, which is plugged into the Apache web server, has several tasks:

  • It acts as a reverse proxy and comes with a page cache for fast delivery of cacheable content (web pages, images, style sheets, other static content) to the clients.

  • It composes response pages from smaller includes that can either be cacheable or non-cacheable.

  • It provides dynamic load balancing for the application servers, based on an optimization algorithm that minimizes the average response times.

  • It performs session management with sticky sessions for the application servers.

  • It supports translation of SEO-optimized URLs into ICM application server URLs.

The web adapter comes with a separate web adapter agent that is responsible for background maintenance tasks:

  • Invalidation of page cache content

Application Tier

The application tier consists of the ICM application servers. In a production cluster, there are usually multiple application servers for performance and availability reasons.

The main application is represented by the ICM application, which receives the load from the web adapter and performs the business functions.

Each application server process is controlled by a Kubernetes (K8s) deployment. The K8s controller can automatically restart an application server in case of failure and scale the number of application servers up or down, i.e., it acts like a watchdog process.

The ICM web application comes with its own embedded servlet engine. All requests are handled by this "inner" servlet engine, which also opens its own HTTP ports. (With ICM 11, no external Tomcat application server is needed.)

Data Tier

The data tier consists of the database and a shared file system (a central network drive), which is accessed by all application servers. The database is used for storing all mass data and all transactional data, like product information, customer accounts, their orders etc. The shared file system holds static files that are needed by the business applications, like product images, style sheets, PDF documents, configuration files and so on.

As the database, Microsoft SQL Server (or Azure SQL Managed Instance) is currently supported.

Cluster Management

The group of web server, application server and database server instances that work together is called a cluster. A minimal ICM cluster consists of a single web server with the web adapter, a single application server with the ICM application, a shared file system and a database server.

The web adapters that forward dynamic requests to the application servers must know about the structure of the cluster. To this end, they are configured with a list of so-called configuration services, which are provided by each application server and which the web adapters poll on a regular basis. The application servers communicate with each other using an event channel. Each server regularly broadcasts an event about its presence (its ID, IP addresses, ports, etc.) and its operational state to the local network, where it is received by the other application servers. Consequently, every application server knows about the others and can provide this information to the web adapters.

ICM application servers are subject to licensing.

Application Architecture

Overview

The structure of an ICM application server can be visualized as a set of architectural layers and cross-cutting concerns. A layer here describes the logical view on the application, in contrast to tiers, which describe the physical (deployment) view.

Every layer is supported by numerous frameworks that can be used to implement the functionality of the layer.

Platform Layer

Cartridges

An ICM application server forms a runtime environment for cartridges. Cartridges are deployment containers for all kinds of elements that exist in an ICM system, like Java jar files, pipelines, ISML templates, web services, query files, images, XML files, and so on. A cartridge has a well-defined structure in the file system. At runtime, it is represented by an instance of a cartridge class, which can have initialization hooks that are called during server startup.

A cartridge has a version number and a build number. It defines dependencies to other cartridges that are needed during the build/deployment process and/or at runtime. The version number follows the API stability contract, i.e., it consists of a major version, a minor version and a patch version number. However, this API stability contract cannot always be guaranteed: required libraries whose versions indicate compatibility are sometimes not actually compatible, and a customization may use an internal or rarely used API that was accidentally broken.

The cartridges that must be loaded by an application server are specified in the cartridge list. This list defines which cartridges should be initialized, and it also defines a loading order. The cartridge order is important for the fallback lookup of many ICM elements like pipelines or ISML templates, which can be overridden in customer projects. When a cartridge is loaded, its cartridge class is instantiated and its initialization hooks are called. Any necessary registration/initialization steps can be triggered in such hook methods.
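
The following sketch illustrates the idea of a cartridge class with an initialization hook. All class and method names are made up for illustration and do not reflect the actual ICM cartridge API:

// Illustrative sketch only; the actual ICM cartridge base class and hook
// signatures differ. The point is that a cartridge registers its contents
// at the responsible engines during server startup.
public class AcmeShopCartridge {

    // Hypothetical hook called by the cartridge engine during server startup.
    public void onInit() {
        registerPipelets();
        registerPersistentObjects();
    }

    private void registerPipelets() {
        // Would register the pipelets contained in this cartridge.
    }

    private void registerPersistentObjects() {
        // Would register the persistent object descriptors at the ORM engine.
    }
}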

Cartridges can be associated with an architectural layer to which they belong. The layer is expressed in the naming of the cartridges by a prefix. There are strict conventions about which prefixes can be used. Depending on the layer, different elements are allowed as contents of the cartridge. Examples:

  • pf_*: platform cartridges, typical contents: Java code, 3rd party libraries

  • bc_*: business component cartridges, typical contents: business objects, pipelets, process pipelines

  • app_*: application cartridges, typical contents: ISML templates, images, style sheets, view pipelines

Cartridges may depend on each other. All dependencies must be declared. There are strict conventions about which dependencies are allowed. For example, a business cartridge may depend on a platform cartridge, but a platform cartridge must not depend on a business cartridge.

Customizations

An ICM application server deployment can include different customizations that enhance the standard ICM functionality. A customization is a set of cartridges that are deployed together; the cartridges inside a customization are always updated as a whole.

Components

The ICM component framework makes it possible to initialize a graph of Java objects that represent the running application types. Component files are XML files with instructions that define which Java classes must be instantiated and how the members of the instances must be initialized and connected with each other. Component files make it possible to hide internal implementations. In contrast to existing IoC frameworks like Spring, instances created by the component framework and Guice can be application-specific, i.e., different wiring can be achieved per application type.

Object Graphs

In addition to the instantiation of Java objects in component files, dependency injection with standard JSR 330 annotations can be used in ICM for certain objects, such as managers, business objects and pipelets. For injections, the Google Guice framework is used. So-called object graph files configure which Guice modules must be used for an application server.
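
As an illustration, the following self-contained sketch shows standard JSR 330/Guice wiring as it is used conceptually in ICM. The PriceService and CheckoutHandler types are made up for this example and are not part of the ICM API; in ICM, the set of Guice modules to load is configured in object graph files rather than created directly as shown here:

import javax.inject.Inject; // standard JSR 330 annotation

import com.google.inject.AbstractModule;
import com.google.inject.Guice;
import com.google.inject.Injector;

// Hypothetical service interface and implementation.
interface PriceService {
    double grossPrice(double netPrice);
}

class DefaultPriceService implements PriceService {
    @Override
    public double grossPrice(double netPrice) {
        return netPrice * 1.19; // simplistic example calculation
    }
}

// A Guice module binds the interface to an implementation.
class PricingModule extends AbstractModule {
    @Override
    protected void configure() {
        bind(PriceService.class).to(DefaultPriceService.class);
    }
}

// A consumer declares its dependency with the standard JSR 330 annotation.
class CheckoutHandler {
    private final PriceService priceService;

    @Inject
    CheckoutHandler(PriceService priceService) {
        this.priceService = priceService;
    }

    double totalGross(double net) {
        return priceService.grossPrice(net);
    }
}

public class ObjectGraphDemo {
    public static void main(String[] args) {
        Injector injector = Guice.createInjector(new PricingModule());
        CheckoutHandler handler = injector.getInstance(CheckoutHandler.class);
        System.out.println(handler.totalGross(100.0)); // 119.0
    }
}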

Extension Points

ICM comes with several standard implementations for business features that often must be enhanced in customer projects. To this end, there are extension points at several places in the processing logic, into which extensions can be plugged. Extension points can be provided in pipelines, templates or in Java code. Which extension is to be plugged into an existing extension point is defined in an extension binding file, which is an XML file. The visibility of extensions is application-specific. For example, it is possible to apply an extension to a standard pipeline within one application, but not in another application that executes the same pipeline.

Data Layer

Database

ICM supports "Microsoft SQL Server" and "Azure SQL Managed Instance"  as a database for all business objects and transactional data. An application server is connected to a single database account that contains all data for all ICM applications running on that server.

For higher performance and better data integrity, several database features are used:

  • Stored procedures for doing specific operations like cascading delete operations

  • Synonyms for tables for quick switching between live content and edit content

  • Foreign key constraints for referential integrity

  • Several indexes for fast lookup

All requests to the database go through the ORM engine or through the Query engine via JDBC.

ORM Engine

The ORM engine is responsible for the object-relational mapping of persistent Java objects to relational database tables. The ORM engine forms an object cache that communicates with the database via JDBC. It is written in pure Java. It consists of multiple sub-systems that perform tasks like loading and parsing deployment descriptors, providing meta-information about persistent objects, generating and executing SQL statements or switching between different transactional and non-transactional states of an object.

An ORM bean (i.e., a persistent object) consists of 4 files:

  • A Java class representing a persistent object

  • A Factory class that manages the life cycle of the persistent objects

  • A Key class that represents the primary key of the persistent object

  • An XML deployment descriptor that describes the attributes and relations of a persistent object and the mapping to database tables and columns

The code for persistent objects that are managed by the ORM engine can be generated from EDL models using the Intershop Studio code generator. EDL (Enfinity Definition Language) is a textual DSL for modeling persistent objects.
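
Purely as an illustration of how these four artifacts relate to each other, the following sketch uses made-up names; it is not the code that the generator actually produces:

// Illustrative sketch of the ORM bean artifacts; not the generated ICM code.

// 1. The persistent object itself.
class ProductPO {
    private String uuid;
    private String sku;
    // getters/setters omitted
}

// 2. The key class representing the primary key.
class ProductPOKey {
    private final String uuid;
    ProductPOKey(String uuid) { this.uuid = uuid; }
    String getUuid() { return uuid; }
}

// 3. The factory class that manages the life cycle of the persistent objects.
class ProductPOFactory {
    ProductPO create(ProductPOKey key) { /* insert a row, return the new instance */ return new ProductPO(); }
    ProductPO lookup(ProductPOKey key) { /* select by primary key */ return null; }
    void remove(ProductPO po)          { /* delete the row */ }
}

// 4. An XML deployment descriptor (not shown here) maps the attributes and
//    relations of ProductPO to database tables and columns.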

Queries

The ORM factories make it possible to run SQL queries on persistent objects, but sometimes it is necessary to query the database for information that does not belong to a persistent object. Such queries can be expressed in query files. ICM query files are XML files with control elements that allow SQL queries to be constructed dynamically and executed in the database. The response from the database can be mapped to various Java objects, like primitives or ORM objects.

Again, query files are application-specific and are subject to the cartridge fallback lookup. They can be overridden for customization.

Import and Export

ICM is always integrated into a bigger environment with various other systems from which business data originate or which consume business data. To support this, ICM provides import and export capabilities for several business objects. Depending on the kind of object, the import/export feature is capable of handling mass data and long-running asynchronous and/or parallel import processes. Usually, XML files are used as a transfer format, but other formats like CSV or custom formats are common, too. When objects are imported, they can be checked for consistency with validators. The import of persistent objects is mostly done through the ORM engine.

Export of data relies on XML frameworks like JAXB, or in some cases on ISML templates for formatting the text.
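
The following self-contained sketch shows the kind of JAXB marshalling such an export can be based on. The ProductExport type is made up for this example and does not correspond to an actual ICM export format; newer JAXB versions use the jakarta.xml.bind namespace instead of javax.xml.bind:

import java.io.StringWriter;

import javax.xml.bind.JAXBContext;
import javax.xml.bind.Marshaller;
import javax.xml.bind.annotation.XmlElement;
import javax.xml.bind.annotation.XmlRootElement;

public class ExportDemo {

    // Hypothetical transfer object for illustration only.
    @XmlRootElement(name = "product")
    public static class ProductExport {
        @XmlElement public String sku;
        @XmlElement public String name;
    }

    public static void main(String[] args) throws Exception {
        ProductExport product = new ProductExport();
        product.sku = "10001";
        product.name = "Example product";

        Marshaller marshaller = JAXBContext.newInstance(ProductExport.class).createMarshaller();
        marshaller.setProperty(Marshaller.JAXB_FORMATTED_OUTPUT, Boolean.TRUE);

        StringWriter out = new StringWriter();
        marshaller.marshal(product, out);
        System.out.println(out); // <product><sku>10001</sku>...</product>
    }
}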

Data Replication (for early adopters)

Data replication is used in ICM installations with separate instances for edit and live systems. An edit system is used by business operators to create and prepare content like product and catalog information, web content, etc. The live system hosts the active web shop application and holds the active content that is shown to the customers, while the edit system holds the "future" data and has less traffic to handle (only operators). With data replication, it is possible to transfer the database content from the edit system to the live system once it has been completed, tested and approved. Data replication can synchronize groups of persistent objects that belong together (so-called replication groups) at once, or synchronize single objects. Switching from old content to new content in the live system can be done in an atomic step by first importing the content into shadow tables and then renaming synonyms in the database.

Sharing

With the channel concept, ICM can model large enterprises and their organizational structures and responsibilities for managing business content. Often, the same data (like product data) must be "rolled out" to different sales channels and applications, while still being managed at a central place. For example, a large company may choose to manage its product and catalog portfolio at the global headquarters, while all sales activity and order processing is done locally in several countries/regions. The distribution of such centrally managed content can be controlled with the sharing concept.

Syndication

Syndication is also about reusing centrally managed data in sub-divisions, but instead of linking to shared objects this is achieved by creating copies. Consequently, it is easier to perform local adjustments (like changing single attributes) of business objects and to support complex data distribution scenarios. But as a tradeoff, the amount of data in the database is much higher and the import performance is degraded.

Search

Several business objects like products and content of the WCM must be searchable in the storefront or Commerce Management applications. To this end, external search services can be employed. ICM comes with an Apache Solr Cloud integration for full-text searches; other engines like Intershop Sparque can be supported if there is business demand.

The index for searchable objects can be built after import/replication processes. It can also be updated on changes of single objects in the Commerce Management application.

Business Layer

Business Objects

The business object layer provides an explicit business-oriented domain model as a Java API. The business objects usually form an abstraction over an underlying persistence layer (or data layer), which in ICM is provided by the ORM objects. The business object API is accessed from the various business applications in ICM, like web shop or management applications. The business objects provide for an object-oriented view on the data that is optimized for representing domain concepts and for "programming usability", while the underlying persistence model is optimized for the database mapping and storage/query performance. The business object layer comes with concepts to change and extend the behavior of business objects without breaking the API. The business object API hides the underlying internal implementation that can still be based on the existing ORM model or on any other back end. There may be multiple implementations of the same business object API, each accessing and holding the data in another back end.

The business object concept defines various types of objects:

  • An Entity forms a business object whose identity is important (in contrast to a Value Object, where only the value matters).

  • An Aggregate is a set of related business objects that belong together (e.g.: Basket, Basket Line Item).

  • A Root Entity is the entry point into an Aggregate; it is the main business object (e.g., Basket).

  • A Repository is a container for root entities. Similar to the ORM factories, it manages the life cycle of the root entities. The implementation of a repository can also be based on another business object, i.e., some business object may represent the repository for another business object.

  • An Extension is an attachment to a business object with additional methods. It can be used to extend the API of a business object.

  • Each business object lives within a Business Object Context.

The business object API is visible globally, but the extensions can be made application-specific. This means they may only be visible to certain applications.
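
The following sketch illustrates how these concepts relate to each other as Java types. All interfaces and method names are made up for illustration and do not represent the actual ICM business object API:

// Hypothetical interfaces for illustration only.

// The root entity of the "Basket" aggregate.
interface Basket {
    String getId();
    java.util.List<BasketLineItem> getLineItems();   // aggregated entities
    <T> T getExtension(Class<T> extensionType);      // access an attached extension
}

interface BasketLineItem {
    String getProductSku();
    int getQuantity();
}

// A repository manages the life cycle of its root entities.
interface BasketRepository {
    Basket createBasket();
    java.util.Optional<Basket> getBasketById(String id);
    void removeBasket(Basket basket);
}

// An extension attaches additional methods to a business object
// without changing its core API.
interface BasketPromotionExtension {
    void applyPromotionCode(String code);
}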

Pipelines

Pipelines are executable flowchart models that can be used to implement the business functionality. They can be edited using the Visual Pipeline Manager, which is part of the Intershop Studio IDE. For execution, pipelines are first loaded by the application servers and then interpreted for every request.

A pipeline consists of nodes and transitions. Nodes can be control nodes, like decision nodes, start nodes, end nodes, call nodes, or pipelets. Transitions are the links between two nodes and represent the execution flow. Pipelets are Java classes that implement small processing steps. They usually operate on business objects and their repositories or on other underlying Java APIs. Pipelets are written for maximum reusability in different business scenarios. Consequently, they can be arranged into different flows in other business applications. Data exchange in pipelines is done using a pipeline dictionary. All input parameters of a pipeline call are put into the dictionary, from which pipelets or other nodes read their input values and into which they put their output values. Dictionary objects can also be accessed with Object Path expressions in order to map them as input parameters for a node.

In ICM, two major types of pipelines are distinguished:

  • View pipelines are used to implement navigational flows in the (web-based) user interface. They end in an interaction node, which renders an ISML template. The rendered HTML code in turn contains links to view pipelines.

  • Process pipelines are used as sub-routines to implement reusable business behavior. They are invoked by view pipelines.

Besides these major types, there are a number of other areas in web applications where pipelines are used:

  • Web service pipelines implement the functions of a Web Service API.

  • Job pipelines represent background jobs that can be scheduled and perform recurring background operations like import/export, database cleanup etc.

Pipelines can also control the database transactions. To this end, a pipelet can declare that it is transactional, i.e., that it will change data in the database. Transaction boundaries can be set at the pipeline transitions, where transactions can be started, committed or rolled back. All changes of the pipelets that are executed within such a transaction frame will be atomic.
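
To illustrate the pipelet idea, the following sketch shows a pipelet-like class that reads its input from a dictionary and writes its output back. A plain Map serves as a stand-in for the pipeline dictionary; the actual ICM pipelet base class and dictionary API differ:

import java.util.List;
import java.util.Map;

// Illustrative sketch only; not based on the actual ICM pipelet API.
public class CalculateOrderTotal {

    // A pipelet reads its input parameters from the pipeline dictionary
    // and puts its output values back into it.
    public void execute(Map<String, Object> dictionary) {
        @SuppressWarnings("unchecked")
        List<Double> itemPrices = (List<Double>) dictionary.get("ItemPrices"); // input

        double total = itemPrices.stream().mapToDouble(Double::doubleValue).sum();

        dictionary.put("OrderTotal", total); // output
    }
}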

Customizations that use the pipeline concept increase their migration effort. REST controllers do not need to use the pipeline concept to provide custom-specific endpoints. The introduction of the business layer has moved the business logic from pipelets and managers to business objects and their repositories.

Applications

The term application has already been used several times in this document, so it is necessary to clarify what it actually means. Applications in ICM are instances of application types and represent the binding of an application type to data. An application type combines cartridges into meaningful bundles. As an example, there are Web Shop application types, which consist of all code elements that make up a web shop, and Management/Back Office application types for managing products, content and so on. Application types can be instantiated multiple times (each with a different data context), so each web shop application can operate on a different set of products or other content, and with separate or shared customer and order data.

Application types consist of cartridges, which are a subset of the global cartridge list in the application server.

Repositories

Business objects like products and customer data are stored in repositories, which are assigned to applications. There are two kinds of repositories: master repositories and channel repositories. A master repository belongs to a parent organization, while channel repositories represent sub-divisions that run their own sales applications (like web shops). Between master repositories and channel repositories, there may be sharing or syndication relations for distributing and reusing the data among them. Typically, the exact layout of the repository structure and the direction of the data flow differ for each customer project.

Presentation Layer

ISML

The Intershop Markup Language (ISML) is a template language that can be used to render dynamic HTML pages. The syntax is similar to XML, but it does not represent strict XML. The language consists of ISML tags and Object Path expressions. ISML tags implement general functions like loops, includes, formatting etc., while Object Path expressions are used to retrieve dynamic values from business objects. ISML also comes with some ISML functions that can be used for frequently needed tasks like string concatenation, comparison of values etc.

ISML Example

<isloop iterator="Category:Products" alias="p">
    <isif condition="#p:Online#">
        <isprint value="#p:SKU#">
    </isif>
</isloop>

ISML is based on JSP. Before execution, an ISML template is compiled into a JSP template, which in turn is compiled into an executable form by the JSP compiler of the JSP container. Since the ISML grammar defines the available tags, it is not possible to add custom tags. However, it is possible to use JSP taglibs within ISML templates. With taglibs, the system can be enhanced with additional tags and functions. However, ISML uses different variable scopes and spaces than JSP/JSP expressions, so mixing the ISML world with the JSP world is not straightforward.

ISML templates are contained in cartridges and are executed in the scope of an ICM application. When a template is looked up, a fallback approach is used: all cartridges that are assigned to the application are checked in reverse order of the cartridge list to see whether they contain the wanted template. Consequently, it is possible to override existing templates of other cartridges for customization.

For translating static texts within templates to other languages, there are two approaches:

  • Externalization of strings into separate files and using them with the <istext> tag (recommended).

  • Duplication of the whole template and translation of the texts using the tLoc tool (used in previous versions of IS7, not recommended anymore).

In special cases, ISML templates may not only be used for rendering HTML responses, but also for exporting files and for rendering e-mails.

Object Path Expression Language

The Object Path expression language is an expression language for navigation on Java objects and for accessing their properties. Originally, it was implemented as part of ISML, but later other uses were introduced. Object Path expressions can be used in ISML tags for resolving their parameters, in pipelines for mapping pipeline dictionary values to pipelet input parameters, and in query files for mapping query parameters.

Web Forms

The Web Form Framework makes it possible to define forms for web pages. A web form consists of various input fields and validation rules to check whether the input is valid. The web form definitions are stored in XML files and are subject to the application-specific cartridge fallback lookup.

Web Content Management

ICM comes with a complex web content management system (WCM), which allows business users to design and assemble dynamic web pages at runtime. Pages are composed of pagelets, which are the basic building blocks and can be developed in Intershop Studio. Pages can be arranged in a kind of "inheritance" hierarchy in order to reuse existing layouts and looks. "Super pages" can have multiple page variants that contain so-called slots, which are filled with components that hold the actual content. As the content model must be changeable at runtime, it is stored in the database. The individual pagelets can invoke pipelines to determine their content data for rendering, and they are rendered using ISML templates. WCM pages can be displayed as the result of a view pipeline.

In ICM, the whole web shop storefront application is built using the WCM technology.

SEO

Of course, e-commerce shops are designed for humans. But apart from the user experience, search engine optimization (SEO) is tremendously important for the business success of a web shop storefront. This is precisely why ICM supports various SEO techniques, e.g., XML sitemaps or URL rewriting.

Service Layer

REST

Currently, RESTful web services are the most popular kind of service interface. ICM supports the Jersey framework for implementing REST services. The behavior of such services can be adjusted using Java annotations on the service classes. It is possible to invoke a pipeline from within a service implementation, or to implement the functionality directly in Java. Authentication and authorization are done using Jersey mechanisms.
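
A minimal resource class in the JAX-RS/Jersey style could look like the following sketch. The path and payload are made up for illustration and do not correspond to an actual ICM REST endpoint; depending on the Jersey version in use, the jakarta.ws.rs namespace may apply instead of javax.ws.rs:

import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.PathParam;
import javax.ws.rs.Produces;
import javax.ws.rs.core.MediaType;
import javax.ws.rs.core.Response;

// Hypothetical resource class for illustration only.
@Path("products")
public class ProductResource {

    @GET
    @Path("{sku}")
    @Produces(MediaType.APPLICATION_JSON)
    public Response getProduct(@PathParam("sku") String sku) {
        // In a real implementation, the product would be looked up via the
        // business object layer (or a pipeline) and mapped to a response object.
        String json = "{\"sku\":\"" + sku + "\"}";
        return Response.ok(json).build();
    }
}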

Managed Services

For the integration of external services like payment services, personalization services, mail services etc. into ICM, the managed service framework provides a common infrastructure. Frequently needed functionality such as the configuration of services, enabling/disabling services for certain applications, monitoring of services and so on is provided by the framework, into which the individual service adapters can be plugged. Service implementations can use their own protocols and communication libraries to communicate with external systems.

Execution Model

Startup Sequence

The application server startup is controlled by the node manager. When the "outer" Tomcat starts the ICM web application, this triggers the initialization of the embedded servlet engine, several important subsystems and the cartridge engine. All cartridges from the cartridge list are loaded and initialized by calling their initialization hooks for the various initialization states. In the hooks, a cartridge registers its contained elements like pipelets, persistent objects and so on at the responsible engines. Depending on the runtime mode, elements like pipelines and pipelets can be loaded lazily (i.e., on first access, which is good for development because of the short startup times) or eagerly (which is good for production, because the system must be fully initialized and responsive immediately when it is loaded).

Request Processing

All requests from web users go to the web server with the ICM web adapter. The web adapter checks the page cache for a cached response for this URL from previous requests. If a page was found there and is still valid (i.e., it is not expired), it will be prepared for the client and served immediately. If no cached page was found, an application server is selected and its request handler servlet is called to handle the request. Which application server will be called depends on session affinity (i.e., all requests from the same session go to the same application server), server group configurations and load conditions in the cluster. For initial requests (i.e., new sessions or session-less requests), the load-balancing algorithm selects a server so that the overall average response times are minimized.

If the request is a pipeline URL, the request handler servlet triggers the execution of the pipeline. Depending on the requested operation, the pipeline consists of pipelets that operate on business objects with underlying persistent objects from the database. The pipeline ends with an interaction node, which usually links to an ISML template. This template will be rendered by the JSP engine after successful compilation of the template. The response is returned to the web adapter, which optionally stores it in the page cache, if it is cacheable. Finally, the response is returned by the web adapter to the web client (the browser).

In addition to the request handler servlet for pipeline requests, there are some more servlets running in the application server, which are responsible for handling other request types:

  • Delivering static contents like images, style sheets, etc.

  • Handling REST requests

  • Translating SEO-optimized URLs to pipeline URLs via URL rewriting

  • Submitting information about the cluster to the web adapter

The general flow is the same.

Schedules

ICM has the capability to execute scheduled pipelines at regular intervals as background jobs. Typically, this feature is used to implement recurring tasks like scheduled order exports, import processes or database cleanup operations. When and how often a job is to be executed can be configured by the business operator in the SMC. The number of jobs that may run concurrently on a single machine can be limited via configuration to avoid overload situations with computationally intensive tasks. Some customers choose a setup with dedicated server(s) in a separate server group, which is exclusively responsible for job processing and does not get any storefront traffic. This separates the load caused by background jobs from the storefront load.

Under certain circumstances, the execution of multiple jobs must be synchronized because they depend on each other. For example, a job for rebuilding the search index should be started right after the job for the import of product data has finished. In this example, having fixed time slots is not a good solution, because the runtime of the import job may vary and there would be a gap with an invalid index before the index builder is started. To solve this, ICM supports the definition of process chains, which synchronize the execution of multiple dependent jobs.

The job framework is also used for scheduling non-repeating background tasks, like long-running actions that are started via the user interface. Such actions must be decoupled from the UI response.

ICM comes with a set of preconfigured jobs to support the administrative staff.

Cross-Cutting Concerns

Configuration

Many parts of ICM are configurable by system administrators, developers or others. The configuration can be stored in different file formats, like XML files, property files or even in the database, depending on the particular subsystem. The access to such configuration values by the implementation is simplified by the configuration framework.

Logging

ICM uses various logging systems for writing different kinds of information to logs (files or standard out). In the application server, the SLF4J library is used as the logging API. This library allows a flexible configuration of log targets, log scopes, log levels and so on. Typically, the following information is logged:

  • Request logs (written by the web adapter) containing request URLs, server response times etc.

  • Error, warning and info logs containing exception stack traces that occurred in the application server, for inspection by developers. These logs are written to standard out and can be collected by the monitoring system.

  • Database/SQL logs for analyzing database accesses and performance optimization

  • Job logs for tracking the execution of background jobs

  • Impex logs for tracking import/export processes

The scope, level of detail, and log target of the information to be logged can be specified in configuration files or, when needed, at runtime in the SMC.

The web adapter access logs are packaged and transferred to the application servers via the web adapter agent. They can be analyzed by external tooling.
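
As an illustration of the logging API used in the application server, the following snippet shows plain SLF4J usage; the class name and messages are made-up examples:

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class OrderExportJob {

    private static final Logger LOG = LoggerFactory.getLogger(OrderExportJob.class);

    public void run(int orderCount) {
        LOG.info("Starting order export for {} orders", orderCount);
        try {
            // ... export logic ...
        } catch (RuntimeException e) {
            // Stack traces logged here end up in the error log / standard out.
            LOG.error("Order export failed", e);
        }
        LOG.debug("Order export finished");
    }
}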

Locking

The ICM locking framework makes it possible to lock/unlock resources (e.g., objects, processes) that are potentially subject to concurrent modifications or executions by multiple users, in order to serialize modifications and avoid conflicts. Several business features support locking in the user interface. For example, the pages of the Commerce Management application for managing product data allow locking the product that is currently being edited. Consequently, other business users cannot edit the same product at the same time. They have to wait until the lock has been released by the current owner after the modifications are done.

There are various kinds of lock implementations with different lifespans and conflict resolution strategies.

Authentication

User authentication can be done in the application server or by external OIDC providers. For compatibility reasons, the application server supports the Java Authentication and Authorization Service (JAAS). We strongly recommend using external OIDC providers to manage user authentication for multiple applications in one place (SSO). These providers can also offer enhanced authentication mechanisms like two-factor or multi-factor authentication (MFA).

The authentication state of web users is represented by a secure cookie containing an authentication token. This cookie is only transmitted in HTTPS connections, therefore all protected views and actions must be invoked via HTTPS.

REST requests are authenticated on every request using token-based authentication.

Besides the user management within the ICM application, there are various other user accounts involved:

  • There are different user accounts for the operating system of the servers under which the individual processes are executed (like separate OS users for running web servers, application servers and the database).

  • There are database users identifying the database account.

  • There may be user accounts for connecting external services like fulfillment back ends. Managing such accounts is implementation-specific.

Authorization

As with authentication, authorization can be done on several levels. Besides file permissions for the OS users, database permissions for database users and web permissions for web server users (controlled by Apache), ICM can perform permission checks for the execution of pipelines and REST requests. The permissions that a user must have in order to execute a pipeline can be specified in an ACL file. Usually, all back office pipelines are protected, while all storefront (web shop) pipelines are open for anonymous users.

Permissions can be managed for groups of users rather than for individual users. Assigning users to groups/roles can be done in the Commerce Management application.

Auditing

Auditing is the logging of actions that are done by users in the system, i.e., who did what and when. The PA-DSS certification requires that all administrative actions (like those done by the business operators) be logged. ICM comes with an auditing framework that makes it possible to audit operations on business objects and processes.

Caching

For improving performance, caching is employed at several places and architectural levels. In the web adapter, complete HTML pages and/or fragments of HTML pages or similar documents can be cached in the page cache. This cache stores the cached text documents in the file system of the web servers. With the page cache, the load on the application servers can be reduced drastically, since only dynamic pages like the basket or user registration pages must be rendered by the application servers. Many other pages that contain relatively static content like product details or catalog pages can be cached and be directly delivered to the clients without accessing the application servers.

Within the application servers, the ORM engine caches all persistent objects that are stored in the database. This cache helps to minimize the load on the database. ORM caches of different application servers can be synchronized with each other.

For several other business features, specialized caches have been implemented to solve performance problems.

A common caching API allows to register existing caches and to trigger operations like cache clearing, which are often necessary after import processes or other changes of the underlying business data.

Monitoring

Technical application parameters like memory consumption, CPU utilization, database connections or cache hit ratios of the application servers can be monitored using JMX or the Prometheus metrics endpoint.
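
As a generic illustration of JMX-based monitoring (independent of any ICM-specific MBeans), the following snippet reads the heap usage of the running JVM via the standard MemoryMXBean:

import java.lang.management.ManagementFactory;
import java.lang.management.MemoryMXBean;
import java.lang.management.MemoryUsage;

public class HeapMonitor {
    public static void main(String[] args) {
        MemoryMXBean memory = ManagementFactory.getMemoryMXBean();
        MemoryUsage heap = memory.getHeapMemoryUsage();
        System.out.printf("Heap used: %d MB of %d MB%n",
                heap.getUsed() / (1024 * 1024),
                heap.getMax() / (1024 * 1024));
    }
}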

With performance sensors, it is also possible to measure the runtimes of pipelines, SQL statements or other processing elements. The runtime behavior can be inspected in the SMC.

The proprietary "Performance Sensors" are subject to change in to "standardized" Prometheus metrics.

Internationalization

Internationalization (i18n) is more than just the translation of texts into different languages; it also includes topics like formatting values, unit systems, regional settings, time zone handling and so on.

Most content in ICM can be localized. For the web shop application, static texts in templates can be externalized to resource bundles that map the keys to translated texts for the supported languages. For business objects that are stored in the database, texts like names, descriptions and so on can be stored in multiple languages in the database. Which locale must be presented to a user is decided using a locale fallback. Several fallback stages are supported, like session locale, application locale, system locale etc.

ICM is able to handle prices and price calculations in multiple currencies and regions.
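
As a generic Java illustration of these topics (resource bundles for static texts and locale-aware price formatting), independent of ICM's own localization framework:

import java.math.BigDecimal;
import java.text.NumberFormat;
import java.util.Currency;
import java.util.Locale;
import java.util.ResourceBundle;

public class LocalizationDemo {
    public static void main(String[] args) {
        Locale locale = Locale.GERMANY;

        // Static texts are externalized to resource bundles per language.
        // Assumes a file messages_de.properties with a key "checkout.submit".
        ResourceBundle messages = ResourceBundle.getBundle("messages", locale);
        System.out.println(messages.getString("checkout.submit"));

        // Prices are formatted according to the locale and currency.
        NumberFormat price = NumberFormat.getCurrencyInstance(locale);
        price.setCurrency(Currency.getInstance("EUR"));
        System.out.println(price.format(new BigDecimal("1299.90"))); // 1.299,90 €
    }
}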

Development Practices

Development

Development of the ICM product itself and of customer projects can be done in Intershop Studio, which is an IDE based on Eclipse. Intershop Studio provides editors and views for all implementation elements, like graphical editors for pipelines, text-based editors for ISML templates, and so on. It also comes with the code generator that can generate the Java code for persistent objects and other elements from EDL models. Running applications can be debugged with an integrated pipeline debugger or with the common Eclipse debugging facilities. Developers can use other IDEs like IntelliJ IDEA or Microsoft Visual Studio Code as well, but their editor support for pipelines and ISML is very limited.

Beside the IDE, a version control system (VCS) is needed, such as Git or similar.

For all elements that make up ICM, naming conventions and programming style guides exist. Intershop Studio can (partly) validate implementation elements for their compliance.

Build

ICM uses a Gradle-based build. Building the system is done locally by the developers to compile and test their code before submitting it to the central VCS of the project. After pushing the source to the remote project repository, a central build system compiles and tests the code and publishes artifacts and Docker images.

Deployment

The build system publishes Docker images to a repository, from which they can be deployed to the working environments using ICM Helm charts. There may be different deployment scenarios, like single-machine deployments or complex production clusters. The values.yml file (the configuration of the Helm charts) provides different options to configure the deployment.

In addition to setting up the server installations, the database must be populated with initial content, demo content or even production-ready content. To support this, the ICM startup contains an internal process that prepares this data. For development purposes, the DBPrepare tool is also available as a separate application.

Testing

Automated testing of ICM applications can be done on multiple levels:

  • Unit tests are performed locally on implementation artifacts.

  • Integration tests are performed on a combination of subsystems.

  • Web tests are performed on the web user interface and/or remote APIs of the running applications.

  • Performance tests are performed with a focus on response times and scalability, rather than testing functionality.

For all kinds of tests, supporting test frameworks are used, which are usually based on JUnit.
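
A minimal unit test could look like the following sketch (shown with JUnit 5; the class under test is made up for illustration):

import static org.junit.jupiter.api.Assertions.assertEquals;

import org.junit.jupiter.api.Test;

public class PriceCalculatorTest {

    // Hypothetical class under test.
    static class PriceCalculator {
        double gross(double net, double taxRate) {
            return net * (1 + taxRate);
        }
    }

    @Test
    void grossPriceIncludesTax() {
        PriceCalculator calculator = new PriceCalculator();
        assertEquals(119.0, calculator.gross(100.0, 0.19), 0.0001);
    }
}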
