Document Properties

Kbid: 247V26
Last Modified: 02-Feb-2023
Added to KB: 20-Jun-2013
Public Access: Everyone
Status: Online
Doc Type: Concepts
Product:
  • Gradle Tools
  • ICM 7.10
Concept - Gradle Deployment Tools

Introduction

This document describes the deployment framework that is part of the component-oriented delivery process of Intershop 7. Before reading this document, please familiarize yourself with the big picture in the Concept - Continuous Delivery Tools (valid to 7.10).

This document is primarily targeted at administrators and developers who need to create the deployment of an Intershop 7-based solution. Following the spirit of the DevOps movement in general and Gradle in particular, configuration and development strongly blend into each other. Therefore this document also provides an entry point for administrators who need to customize the deployment beyond the most basic needs. A more practical and more basic introduction to configuring and running the deployment is provided in the Cookbook - Gradle Deployment Tools (7.4 CI - ICM 7.7). The majority of its recipes are intended to be used without prior knowledge of this document.

In the component-oriented life cycle the deployment is located between the phases build/assembly and runtime. The following diagram shows what the deployment does on a low, technical level and on a high, conceptual level.

Key Features

The framework emphasizes the following features:

  • Reusable deployment steps (plugins) that are easy to customize and extend.
  • Deployment logic and configuration are assigned locally to the deployed components. This is a prerequisite for constructing custom sets of components to deploy and run, i.e., it enables their free combination.
  • Specification of a desired state that can be reached from any previously deployed state with as few actions as possible. This is a prerequisite for upgrades / downgrades between any two versions of the deployed software, no matter whether they differ largely or only slightly.
  • Unified deployment for all supported platforms, differing from platform to platform only where necessary. This is an optimization that makes build and deployment (and their customization) scale better with the number of platforms.
  • Detection and configurable treatment of modifications by non-deployment processes. As an added benefit of describing a desired state instead of providing an imperative procedure, the deployment can compare the actual state with the desired one. This way it can uncover changes in a deployed system and provide options to revert / merge them.
  • Deploy the results of a component-oriented build and assembly process. In particular, we avoid re-packaging component contents and disregard the physical form of the repository, as long as it supports some basic operations like look-up by qualified name and version, and retrieval.

Glossary

Common Continuous Delivery Glossary


Version Control System (VCS): Also known as source control, source code management system (SCM), or revision control system (RCS). A VCS is a mechanism for keeping multiple versions of your files, so that when you modify a file you can still access the previous revisions.
Artifact Repository: The place where built and packaged software components are located. It provides a common interface to a dependency management system.
Code Analysis: The process of analyzing source code to calculate metrics, find bugs, etc.
Continuous Delivery Pipeline: Sometimes called deployment pipeline; describes the stages that code artifacts run through on their way from the source to the production system.
System Component: A software package of different code artifacts and files that have to be deployed together.
System Component Set: A container for system components that need to be built and branched together.
Assembly: An assembly references one or more system components residing in the same or a configured artifact repository in order to deploy or deliver them together.
Build Process: Compiles and packages files and code artifacts from a source project into deployable artifacts.
Publish Process: The process that transfers the deployable artifacts to a configured artifact repository.
Assembly Process: This process combines several system components into an assembly.
Project Gradle Distribution: A customized Gradle distribution with preconfigured artifact repositories and Gradle plugins.
Gradle Plugin: A Gradle plugin packages up reusable pieces of build logic, which can be used across many different projects and builds.
Project Gradle Plugin: A Gradle plugin which contains corporate or project-specific settings.
Corporate Plugin: Used as a synonym for Project Gradle Plugin.
Gradle Extension Object: A Java Bean compliant class holding configurations for Gradle plugins.
Gradle Wrapper: The Gradle Wrapper is the preferred way of starting a Gradle build. The wrapper is a batch script on Windows and a shell script on other operating systems. When you start a Gradle build via the wrapper, Gradle is automatically downloaded and used to run the build. For more information see The Gradle Wrapper in the Gradle documentation (2.11, 2.7, 2.3, 2.0, 1.8).
Intershop Cluster: A number of hosts of different types serving an Intershop 7 installation.
Cluster Node: One separately deployable part of an Intershop cluster. A host can run multiple nodes of one Intershop cluster.

Deployment-specific Glossary

Desired State: A goal to be reached. In the domain of deployment this goal can consist of a file structure and certain file contents, but also of operating system configuration. The phrase is used to put emphasis on the goal rather than on the way to reach it, which can be determined automatically.
Idempotence: A process is idempotent if executing it repeatedly results in the same state as executing it once. Processes aiming to reach a desired state must be at least idempotent.
Convergence: A process is convergent if it does not change more than necessary to reach a desired state. If the current state is already the desired state, it should therefore do nothing. Convergence is stronger than idempotence.
Resource: General term for everything the deployment may create or change, like files, directories or links.


Desired State-based Deployment

Desired State

The Intershop 7 deployment process is based on a concept known as desired state or target state.

The deployment process is able to reach that desired state from any current state: a virgin system, the state after a previous deployment, or a previously deployed state that has been modified or broken. Thus, given a single description, the deployment can handle use cases like installation, upgrade, downgrade (between any two versions), repair and undeployment. It also covers re-installation in case of failed hardware, even if the system that failed has gone through some very complicated upgrade path which is hard or inefficient to repeat.

Resources

The overall desired state for a deployment consists of desired states for resources. Resources are files, directories, links and services. Resources have an identity, a type and properties (content, permissions, etc).

This is similar to what the configuration management system Puppet offers. However, Intershop does not try to reimplement full configuration management. Instead we concentrate on lower-level resources, scaling better with a large number of them than most configuration management systems do. For higher-level or more OS-centric types of resources, like users and groups, 3rd-party packages and OS configuration, you can easily wrap a configuration management system around the deployment.

When applying a desired state, the deployment not only creates resources and determines their properties (e.g., a file and its contents and permissions), but also deletes resources created by previous deployments that are no longer desired. An undeployment is then simply the application of an empty desired state.
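
As a purely conceptual sketch, assuming a heavily simplified resource model (the names and structures below are illustrative and not the framework's actual API), creating, updating and deleting resources against a desired state can be pictured like this:

// Conceptual sketch only: the resource model and names are illustrative,
// not the actual API of the deployment framework.
// A resource is identified here by its path; its properties are reduced to plain content.
Map<String, String> previouslyDeployed = [
    'local/engine/wrapper.conf': 'old content',
    'local/bin/obsolete.sh'    : 'shipped by an earlier version'
]
Map<String, String> desiredState = [
    'local/engine/wrapper.conf': 'new content'
]

// Create or update every resource that is part of the desired state ...
desiredState.each { path, content ->
    println "ensure ${path} exists with the desired content and permissions"
}

// ... and delete resources created by previous deployments that are no longer desired.
(previouslyDeployed.keySet() - desiredState.keySet()).each { path ->
    println "delete ${path}"
}

// Applying an empty desired state ([:]) would remove everything previously
// deployed, which is exactly the undeployment case described above.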

Idempotence and Convergence

In order to allow reaching a desired state a process must at least be idempotent: Executed with the same input repeatedly, it has to lead to the same output. Translated to our scenario, feeding the same desired state to the deployment process repeatedly must result in the same system state (e.g., file contents, permissions, etc).

Idempotence can also be satisfied by a simple full-remove and start-fresh approach for upgrades. The deployment process, however, tries to perform only those operations that are really necessary to reach the desired state, for a few reasons:

  • The reinstall approach scales very badly for small changes when compared to classic incremental upgrades (precalculated by the build system between two fixed versions).
  • There is always a risk of being unable to recover bits of state (changes to files that were introduced by a non-deployment process, like log files).
  • Shaking the state, i.e., running through one or more intermediate states only to reach the original state again, can unnecessarily disturb running processes. Even if the deployed process itself can cope with a certain kind of change without a restart (like the Intershop 7 application server, which can reload and recompile ISML templates), such a reinstall-style deployment would require restarting the process. Not having to restart is an important prerequisite for introducing hot deployments.

Applying only the minimum set of operations is called convergence. The concepts of desired state management and convergence originate from the DevOps movement. The deployment process is designed to play nicely with configuration management systems like Puppet or Chef, which are based on the same principle.
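
The following plain Groovy sketch illustrates the behavior of a convergent process for a single file resource. The class and method names are hypothetical and only serve to demonstrate the principle; they are not part of the deployment framework.

// Hypothetical sketch demonstrating convergence for a single file resource.
// Not part of the actual deployment framework API.
class FileResource {
    File target            // identity: the path of the file
    String desiredContent  // desired state: the content the file should have

    // Convergent apply: touch the file system only if the actual state differs
    // from the desired state. Repeated execution is idempotent, and a system
    // that is already converged is left completely untouched.
    boolean apply() {
        if (target.exists() && target.text == desiredContent) {
            return false                 // already in the desired state: do nothing
        }
        target.parentFile?.mkdirs()
        target.text = desiredContent     // create or update only when necessary
        return true                      // a change was actually performed
    }
}

def resource = new FileResource(
    target: new File('build/demo/example.properties'),
    desiredContent: 'intershop.example=value\n')
println resource.apply()   // true: the file is created on the first run
println resource.apply()   // false: nothing to do on the second run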

Declaration of Desired State

Development and Operations

The effort to declare the desired state is split between development and administration. On a high conceptual level the workflow is:

  1. During development, a template for the desired state is created, declaring everything that must be met in order to run the application successfully and that is known at development time. For pieces of information that are unknown at development time or that usefully differ between deployments, this declaration contains blanks.
  2. An administrator creates a desired state by specifying which template to use and how to fill the blanks. Additionally, they can modify or override the pre-defined declaration in some respects.

Technically, developers create components to be deployed, as well as assemblies grouping their own and third-party components, and publish them to repositories. These contain (directly, or indirectly by pointing to deployment plugins):

  • Which components to include for deployment. Generally, a single assembly covers deployment for all host-types of a distributed deployment in a cluster in all environments (developer, demo, QA, production). For this purpose an assembly declares a set of possible environments and possible host-types and which components belong to these.
  • Which component artifacts need to be deployed to which logical location (e.g., IS_HOME and IS_SHARE), whether they are archives that need to be extracted or not.
  • Templates for common configuration tasks like creating application server instances, configuring the database access, adding configuration sources for the configuration framework. (See also next section.)

The administrator creates a Gradle settings script in the local filesystem specifying:

  • repositories to pull components from
  • a single assembly to deploy
  • environment and host-type
  • physical locations to deploy to
  • values for common configuration, like host-names, ports

For details on how to bootstrap the deployment see the Cookbook - Gradle Deployment Tools (7.4 CI - ICM 7.7).
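
To give a rough impression of this split, here is a hedged sketch of what such a settings script could look like. All block and property names inside the deploymentBootstrap block, as well as the plugin coordinates and plugin id, are assumptions made for illustration only; the actual syntax and bootstrap procedure are described in the Cookbook - Gradle Deployment Tools.

// HYPOTHETICAL settings script sketch: the plugin coordinates, the plugin id and the
// deploymentBootstrap DSL below are illustrative assumptions, not the real syntax.
// See Cookbook - Gradle Deployment Tools for the actual bootstrap procedure.
buildscript {
    repositories {
        // repository to pull the deployment plugins and components from
        maven { url 'https://repository.example.com/releases' }
    }
    dependencies {
        classpath 'com.example.deploy:deployment-bootstrap:1.0.0'   // hypothetical coordinates
    }
}
apply plugin: 'example-deployment-bootstrap'                         // hypothetical plugin id

deploymentBootstrap {                                                // hypothetical extension
    assembly     = 'com.example.assemblies:myshop:2.3.0'            // the single assembly to deploy
    environment  = 'production'                                      // one of the environments declared by the assembly
    hostType     = 'appserver'                                       // host type within the cluster
    isHome       = '/opt/intershop/local'                            // physical location for IS_HOME
    isShare      = '/opt/intershop/share'                            // physical location for IS_SHARE
    databaseHost = 'db.example.com'                                   // value for common configuration
}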

Gradle-based Deployment

This section assumes that you are familiar with Concept - Continuous Delivery Tool | Gradle-based Automation.

The desired state is declared in:

  • Gradle scripts, both at development and at deployment time, and
  • Ivy-based meta-data of components and assemblies

Unifying the language used during development and administration allows for free distribution of efforts between both.

We provide different Gradle plugins for deployment, covering different levels of abstraction and granularity.

Every plugin configures a Gradle Project in three ways:

  1. It adds a DSL extension that can be used to declare a desired state.
  2. It adds tasks operating on the declaration to apply desired state or query how it differs from the current state.
  3. It applies other plugins and/or configures their DSL, passing down / distributing information from its own DSL extension.
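
The generic Gradle mechanics behind this pattern are sketched below; the plugin and extension shown are deliberately generic examples, not one of the actual deployment plugins.

// Generic illustration of the pattern described above, not one of the actual
// deployment plugins. Can be placed directly in a Gradle build/deployment script.
class ExampleDeploymentExtension {
    String targetDirectory    // a "blank" to be filled in by the declaring script
}

class ExampleDeploymentPlugin implements Plugin<Project> {
    void apply(Project project) {
        // 1. Add a DSL extension used to declare (part of) a desired state.
        def extension = project.extensions.create('exampleDeployment', ExampleDeploymentExtension)

        // 2. Add a task operating on that declaration.
        project.task('showDesiredState') {
            doLast {
                println "Desired target directory: ${extension.targetDirectory}"
            }
        }

        // 3. Apply other plugins and/or configure them with the declared information.
        project.plugins.apply('base')
    }
}

apply plugin: ExampleDeploymentPlugin

exampleDeployment {
    targetDirectory = '/opt/intershop/local'
}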

The following table shows the most important plugins from low to high level:

ResourceDeploymentPlugin: Handles the declaration of a desired state for all built-in resource types on a low level and provides tasks to apply them.

CartridgeDeploymentPlugin: Knows how to deploy a cartridge to a specified directory using the ResourceDeploymentPlugin. This plugin is automatically applied to all projects that do not provide their own deploy.gradle script.

InfrastructureDeploymentPlugin: Contains the re-usable part of deploying a low-level Intershop 7 component using the ResourceDeploymentPlugin.

AssemblyDeploymentPlugin: Knows how to deploy an assembly with all its components by:
  • applying and/or passing down information to the CartridgeDeploymentPlugin and InfrastructureDeploymentPlugin of the components
  • deploying assembly-specific artifacts using the ResourceDeploymentPlugin
It also provides tasks to trigger the deployment of all components by depending on their tasks for applying the desired state.

Deployment as Multi-Project Build

For deploying multiple components at once the deployment framework uses Gradle's multi-project build feature. The assembly is represented by the root project. All other deployed components are represented by direct child projects. Therefore the root project is typically configured using the AssemblyDeploymentPlugin and its children by the CartridgeDeploymentPlugin / InfrastructureDeploymentPlugin.

The project hierarchy must be specified during Gradle's initialization phase by configuring the Settings object. For this purpose we provide two plugins:

AssemblyDeploymentSettingsPlugin: Handles the selection of components to deploy based on an environment and a host type, creates an according project structure, and triggers the application of project plugins during the configuration phase. Components can provide component-specific deployment scripts by publishing an artifact of type deploy-gradle; the AssemblyDeploymentSettingsPlugin automatically loads these scripts if available. The assembly itself can also provide an assembly-specific deployment script in an artifact of type deploy-gradle. This script will typically apply the AssemblyDeploymentPlugin.

DeploymentBootstrapPlugin: Allows bootstrapping the deployment from a Gradle settings script in the local filesystem using a standard Gradle distribution. It must be provided with an assembly to start. The assembly must contain an artifact of type deploy-settings-gradle; the DeploymentBootstrapPlugin will apply this script to the Gradle Settings, and the script will typically apply the AssemblyDeploymentSettingsPlugin.
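
Reduced to plain Gradle API, the project structure that the AssemblyDeploymentSettingsPlugin creates during the initialization phase corresponds roughly to the following settings script; the component names are illustrative only.

// settings.gradle sketch: roughly what the generated project structure looks like,
// expressed with plain Gradle API. Component names are illustrative only.
rootProject.name = 'myshop-assembly'   // the assembly is represented by the root project

// Every deployed component becomes a direct child project of the assembly project.
include 'app_sf_responsive', 'ac_payment_demo', 'local_webserver'

// Component-specific deployment scripts (artifacts of type deploy-gradle) are later
// applied to the matching child projects during the configuration phase.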

The following diagram shows a typical distribution of build logic across plugins and scripts. It also depicts the control flow and where dependencies on components, based on Ivy meta-data, are declared.

Deployment vs. Configuration

Configuration and Upgrades

To improve support for automated upgrades (and downgrades), we encourage a strong separation of code from data and of meta-data from configuration.

To perform an upgrade automatically by applying a desired state, the deployment must be exclusively in charge of the deployed resources. This means that no other process, whether a manual action by the administrator or an automatically running process (including the deployed application itself), should change the resources' state. Changing a properties file after it has been created by the deployment can lead to loss of data, as it is overwritten by future upgrades. While it would be possible to try to merge such changes with those coming from an upgrade, this approach is rarely automatic and seldom delivers reproducible results.

Two more reliable approaches are:

  1. Preferable: physically separate the configuration provided by development from that provided by administration, and let the application consuming the configuration merge the information from the different sources.
  2. Alternatively, as the first approach does not work in a few cases: let the deployment merge the information provided by development and by administration into a single physical resource.

The next two sections cover these approaches and their applicability in more detail.

Another physical storage that must merge information from development and runtime are database tables: they must meet a schema and must contain some minimum entries. This is a much more complex use case and continues to be handled by DBInit and DBMigrate.

Configuration Framework

A prominent example for runtime merging of configuration sources is the Intershop configuration framework. Based on a single entry file, the configuration.xml, any number of property files and other physical storages, like the database, can be combined into a single source of configuration values. Prior to its introduction, the common and necessary workflow was:

  1. Create a properties-file during development, providing default values.
  2. Copy this properties-file to the host / the shared file system during deployment.
  3. Edit properties in the deployed configuration file / overwrite them by an externally prepared file.

With the introduction of the configuration framework, instead of step 3 we recommend adding an entry in the configuration.xml (an additional file / an external configuration source) and leaving the original properties file untouched. The properties files provided by development remain as a limited form of meta-data: the names of possible properties, together with a description in the form of comments and a default value, which is effective at runtime if not overridden.

Other consumers of configuration do not use the configuration framework, but still offer splitting configuration into multiple physical sources. The Intershop logging framework based on logback is an example.

Content Filters

In some physical resources it is still necessary to merge information from development and deployment / administration time. Besides legacy reasons, a few technical reasons remain.

There is a technological gap between some consumers of the configuration and the configuration framework:

  • For configuration files that do not use the Java properties format, but some more complex format like the httpd.conf or XML, it is not straightforward to merge information from multiple sources. Trying to do so at runtime can lead to hard-to-understand misconfigurations. The configuration.xml itself is a good example.

  • The WebAdapter, which is written in C++, cannot read properties through the Java-based configuration framework.
  • Shell / batch scripts contain placeholders for local paths.

To solve this problem, the deployment offers content filters that are applied when deploying files. The deployment can write a modified version of a file delivered by the build process, e.g., add properties or modify XML on the fly. Details of this modification, like which values to give to properties, are part of the desired state. To change these details, the desired state must be changed and the deployment must be rerun. (Because of convergence, the deployment scales down nicely for performing small changes.)
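
Gradle's standard copy infrastructure already offers this kind of content filtering. The following sketch, with illustrative paths and placeholder names, shows how a deployment task could expand placeholders for local paths while copying a script delivered by the build:

// Illustrative use of Gradle's built-in content filtering while deploying files.
// Paths and placeholder names are examples only.
task deployStartScripts(type: Copy) {
    from 'build/deployment/bin'                  // files as delivered by the build process
    into '/opt/intershop/local/bin'              // physical location from the desired state
    // Replace ${IS_HOME} / ${IS_SHARE} placeholders with values from the desired state.
    expand(IS_HOME: '/opt/intershop/local', IS_SHARE: '/opt/intershop/share')
}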

Modification Handling

Some files get modified after deployment, so that they no longer correspond to the indexed state. This may either be an accidental misconfiguration that should be repaired in the next deployment run, or it may represent a desired change in itself that must not be overwritten by future deployments.

This problem does not impact convergence, as the deployment process is able to distinguish between updating an unmodified file from a previous desired state to a new desired state and an external modification that was not performed by the deployment itself. In the former case the existing file can be overwritten safely, whereas the latter case cannot be handled equally for all conflicts and requires configuration on a per-file basis.
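
One common way to implement such a distinction is to record a checksum of every file when the deployment writes it and to compare the actual file against that record on the next run. Whether the framework uses exactly this mechanism is an implementation detail, so treat the following plain Groovy fragment as a conceptual sketch only:

import java.security.MessageDigest

// Conceptual sketch: detect whether a deployed file was changed by a
// non-deployment process since the deployment last wrote it.
// The file name and the stored checksum are illustrative only.
String sha1(File file) {
    MessageDigest.getInstance('SHA-1').digest(file.bytes).encodeHex().toString()
}

def deployedFile = new File('local/engine/wrapper.conf')
def checksumRecordedAtDeployTime = 'da39a3ee5e6b4b0d3255bfef95601890afd80709'

if (deployedFile.exists() && sha1(deployedFile) != checksumRecordedAtDeployTime) {
    println 'Externally modified: handle the conflict according to the per-file configuration.'
} else {
    println 'Unmodified since the last deployment: safe to update to the new desired state.'
}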

Gradle Deployment Tools vs. Previous Solutions

This section lists differences between the Gradle deployment tools and the previously used solutions for deploying Intershop 7.

  • The installation must now be executed by the application users. The deployment generates scripts for the service configuration; only these scripts must be executed by a user with root or administrator permissions. The deployment does not create users and groups.
  • There is only one deployment tool for all supported platforms. Therefore the GUI of the Windows installer was removed. Furthermore, the installer does not try to check whether the configured ports are blocked by other applications.
  • The file system structure of an Intershop installation is now identical on Windows and Linux.
    • All components of one installation are located in two target folders (IS_HOME and IS_SHARE).
    • The location of these target folders is now freely configurable.
  • The principle of the installation has changed with the Gradle-based tool set. The deployment configuration describes the final state of the installation completely, so it is possible to deploy components and application configuration to different machines with the same deployment configuration. To support this, changes must be added to the assembly build and/or the deployment configuration; otherwise manual changes are lost with the next deployment.
  • The deployment now supports an incremental installation and the update of single components. The update process is faster than before.
  • Custom Fixes are handled like other components. Therefore the information about these artifacts is visible in the monitoring of the components.
  • The deployment tool is based on Gradle, so it is possible to extend configuration scripts with Groovy, your own plugins, or plugins provided by the community.