Document Properties
Kbid
27N691
Last Modified
29-Apr-2022
Added to KB
20-Jul-2016
Public Access
Everyone
Status
Online
Doc Type
Concepts
Product
  • Gradle Tools
  • ICM 7.10
Concept - Continuous Delivery Tools (valid to 7.10)

Introduction

Intershop 7 (since IS 7.4 CI) is delivered with tools that can be used to build, assemble, upgrade, downgrade, deploy and undeploy software components. These tools are designed to implement very flexible continuous delivery processes. This document targets developers and administrators who want to get an overview of the entire tool chain. It introduces the high-level concept of the particular phases of the overall continuous software delivery process.

For more detailed concepts and practical recipes please refer to the documents mentioned in the references section.

The following diagram shows a high-level overview of the overall process, called the continuous delivery pipeline:

It starts with the teams working on software components or system components, whose source files are checked into a version control system like Subversion or Git. Each check-in triggers an automatic process on a continuous integration server, which checks the system components out, builds them and runs different tests. The report of this process is delivered to the teams to provide feedback in case of problems. The built system components are published to an artifact repository. The subsequent assembly process combines a number of system components in a specific version and publishes different assembly artifacts. The following steps in the delivery pipeline include automatic deployment to clusters with different purposes and the execution of different tests like load tests, manual tests and pre-production tests. Finally, the new components are deployed to the production system.

The goals of the continuous delivery process are:

  • Provide feedback as fast as possible to the developers
  • Continuously build, integrate, deploy and test the product to be delivered
  • Automate as much as possible to reduce the maintenance efforts of the build system

Glossary


Version Control System (VCS): Also known as source control, source code management system (SCM), or revision control system (RCS). A VCS is a mechanism for keeping multiple versions of your files, so that when you modify a file you can still access the previous revisions.
Artifact Repository: The place where built and packaged software components are stored. It provides a common interface to a dependency management system.
Code Analysis: The process of analyzing source code to calculate metrics, find bugs, etc.
Continuous Delivery Pipeline: Sometimes called deployment pipeline; describes the stages that code artifacts run through from source to the production system.
System Component: A software package of different code artifacts and files that have to be deployed together.
System Component Set: A container for system components that need to be built and branched together.
Assembly: An assembly references one or more system components residing in the same or a configured artifact repository in order to deploy or deliver them together.
Build Process: Compiles and packages files and code artifacts from a source project into deployable artifacts.
Publish Process: The process that transfers the deployable artifacts to a configured artifact repository.
Assembly Process: The process that combines several system components into an assembly.
Deployment Process: The process that extracts files and code artifacts from an artifact repository and applies the configuration.
Project Gradle Distribution: A customized Gradle distribution with preconfigured artifact repositories and Gradle plugins.
Gradle Plugin: A Gradle plugin packages up reusable pieces of build logic, which can be used across many different projects and builds.
Project Gradle Plugin: A Gradle plugin which contains corporate or project-specific settings.
Corporate Plugin: Used as a synonym for Project Gradle Plugin.
Gradle Extension Object: A Java Bean compliant class holding configurations for Gradle plugins.
Gradle Wrapper: The preferred way of starting a Gradle build. The wrapper is a batch script on Windows and a shell script on other operating systems. When you start a Gradle build via the wrapper, Gradle is downloaded automatically and used to run the build. For more information see The Gradle Wrapper in the Gradle documentation (2.11, 2.7, 2.3, 2.0, 1.8).
Intershop Cluster: A number of hosts of different types serving an Intershop 7 installation.
Cluster Node: One separately deployable part of an Intershop cluster. A host can run multiple nodes of one Intershop cluster.

References

If you are interested in an overview of all other Gradle-related documentation available on Intershop Customer Knowledge Base, please refer to Overview - Build, Assembly and Deployment.

Other infrastructure-related documents:

More in-depth concepts:

You may also have a look at the release notes of Intershop's Gradle Tools:

Third-party documentation:

Infrastructure

The Intershop continuous delivery tools support an environment with the following infrastructure, running the illustrated high-level processes:

Version Control System (VCS)

A version control system is required to keep track of:

  • Sources of all system components of the resulting product
  • Build configurations
  • Deployment configurations
  • Environment configurations
  • Build plugins and extension objects
  • Deployment plugins and extension objects
  • etc

The Intershop continuous delivery tools do not depend on any specific version control system (VCS). That is why customer projects are free to choose a suitable VCS, e.g.:

Continuous Integration Server (CI server)

The continuous integration server takes the central role to schedule, trigger, execute, monitor, queue and distribute the automated processes in the continuous delivery pipeline. The following main processes are managed:

  • Build of system components
  • Publishing the system components to the artifact repository
  • Assembly of system components
  • Deployment of assemblies to different environments
  • Execution of automated test suites

The Intershop continuous delivery tools also do not depend on the continuous integration server used, because most of them are able to execute arbitrary scripts. It may be easier to use a CI server supporting the Gradle build system, but this is not required. The following CI servers are available:

Artifact Repository

The main purpose of artifact repositories is serving a number of built components (in Apache Ivy: modules) in one or more versions for a particular dependency management system like Apache Ivy or Maven. There are different types of artifact repositories that may provide additional functionality.

Repository Management

There are several repository management servers available that are used to serve artifacts to development teams and provide a centralized approach to manage the built and downloaded software artifacts. The following artifact repository servers work with the Intershop Continuous Delivery Tools:

Local Artifact Repositories

Local artifact repositories exist on build servers to save recurring network traffic and accelerate the build processes. They are simple file structures following a format pattern, which is defined in the build configuration.

In development environments they contain only temporary software artifacts that do not need to be distributed to other persons or systems and that are used in the local development of new versions of the software components.

Intershop 7 DVD

Intershop 7 (7.4.x and 7.5.x) releases were distributed via DVD containing a file based artifact repository.

Code Quality Server

Optionally, it is possible to execute static code analysis tools on the CI server. The Intershop continuous delivery tools as well as Intershop Studio provide an integration with SonarQube, which is able to manage execution, rules and reports for different code analyzers like:

The SonarQube server serves the code quality rules to development environments as well as the CI server, which is responsible for regularly executing the code analyzers and uploading the reports, providing an overview of the current code quality.
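
The following is a minimal sketch of how a Gradle build could be pointed at a SonarQube server using the public org.sonarqube Gradle plugin. The plugin choice, version, server URL and project key are illustrative assumptions, not necessarily the way the Intershop tools wire up this integration:

// build.gradle - illustrative SonarQube integration sketch
plugins {
    id 'org.sonarqube' version '2.6.2'
}

sonarqube {
    // Hypothetical server URL and project key; replace with project-specific values.
    properties {
        property 'sonar.host.url', 'http://sonar.example.com:9000'
        property 'sonar.projectKey', 'com.corporate.project:example'
    }
}

Running "gradle sonarqube" would then analyze the project and upload the report to the configured server.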

Overall Architecture

The Intershop continuous delivery tools are a number of plugins in the Gradle build system:


  • Gradle – an increasingly popular framework primarily aiming at build automation. It tries to hit the sweet spot between Ant and Maven, explicit scripting and convention over configuration.
  • Groovy – a JVM based scripting language. Mostly compatible to Java syntax it adds productivity features like closures, optional typing and meta programming. Gradle uses a DSL based on Groovy.
  • Apache Ivy – a dependency management system comparable to, but less strict than Maven. Gradle supports Ivy's standard for artifact meta data and repositories, but uses its own implementation.
  • Intershop Build Plugins are used to build, test and publish software components of Intershop 7.
  • Intershop Assembly Plugins combine different built software components that need to be deployed, delivered or released together.
  • Intershop Deployment Plugins are used to install and configure the software components as well as the tools, libraries and servers of an Intershop cluster.

Knowledge of these is not assumed. Instead, this document gives an introduction to the most relevant concepts of these technologies. Gradle bursts with innovative concepts that may be hard to grasp for a novice, but once understood are very powerful (and also very coherent). If the following sections seem too dense or too abstract for your preferred way of learning, we recommend working through the first dozen chapters of the Gradle User Guide, or following the deep links to specific chapters scattered across the following text.

Basic knowledge and some practical experience with Ant or Maven is helpful as they are Gradle's evolutionary roots. We provide comparisons to Maven and Ant where appropriate.

Further Resources

You may also consult other resources to get a basic understanding of Gradle's concepts, which are also applicable to Intershop's continuous delivery tooling.

Multiple books cover Gradle from a variety of angles, some of which are even available as e-book for free.

There is also a free introductory course about Gradle on Udacity, covering both the generic fundamentals and Android development in particular. You may simply skip the Android-specific parts.

Intershop Training

Intershop also provides a Technical Training which deals with the DevOps tasks that may arise in the context of continuous integration. These training courses do not deal exclusively with Gradle, but also with best-practice approaches for CI project setups.

For more detailed information please refer to the course Intershop 7 - System Administration (IS7-116).

Be aware that the Intershop Technical Training department offers to book a partial training or flexible tailored training sessions.

If you have further questions do not hesitate to contact Intershop's Technical Training department (techtraining@intershop.com).

Gradle Basics

Gradle is an automation framework, primarily aimed at building enterprise software. Gradle tries to combine the flexibility of Ant with the power of conventions known by Maven and adds a lot of revised approaches to building software on top of it. While focusing on building software, it is – similar to Ant – still general purpose enough to be suitable for deployment.

Gradle scripts are written in Groovy, a JVM-based scripting language, instead of XML as in Ant or Maven. Groovy has a shallow learning curve for Java developers - most Java source files are also valid Groovy scripts. Compared to Java, Groovy loosens up the syntax and type system, allows dynamic augmentation of classes and adds closures as first-class citizens (a more powerful flavor of anonymous Java classes). Because of these features Groovy makes it easy to develop custom domain specific languages (DSLs) that can combine declarative with imperative aspects. Gradle leverages this power by introducing its own DSL. Being JVM-based, Groovy is fully compatible with other JVM-based languages like Java, and the large existing set of Java libraries can be used easily in Groovy.
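
As a small illustration (not part of any Intershop build script, all names are made up), the following Groovy snippet shows optional typing and closures in action:

// Plain Groovy, runnable in the Groovy shell
def greet = { String name -> "Hello, ${name}!" }   // a closure assigned to a variable

def tools = ['Build', 'Deploy']                     // dynamic typing, native list literal
tools.each { println greet(it) }                    // the closure is passed around like a value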

The Build Lifecycle

Gradle generally executes in three phases (see Gradle User Guide: Build Lifecycle):

  1. Initialization: Gradle supports single and multi-project builds. During the initialization phase, Gradle determines which projects are going to take part in the build, and creates a Project instance for each of these projects.
  2. Configuration: During this phase the project objects are configured. The build scripts of all projects which are part of the build are executed. Gradle 1.4 introduced an incubating opt-in feature called configuration on demand. In this mode, Gradle configures only relevant projects (see Gradle User Guide: Configuration on demand).
  3. Execution: Gradle determines the subset of the tasks, created and configured during the configuration phase, to be executed. The subset is determined by the task name arguments passed to the gradle command and the current directory. Gradle then executes each of the selected tasks.

Gradle tasks (see Gradle User Guide: Using Tasks and More about Tasks) are objects representing chunks of work to be executed. They make up the user interface of Gradle. If you come from Ant, Gradle tasks blend Ant tasks and targets into a single concept. If you know Maven, Gradle tasks replace Maven's phases and goals. Gradle tasks can have properties that determine their input and output, multiple actions that actually perform the work, and dependencies to other tasks.

When starting Gradle you typically supply a list of tasks to be executed. During the configuration phase Gradle builds a graph of these tasks and their dependencies, and in the subsequent execution phase it runs the tasks in order (see: Gradle User Guide: Gradle Command Line).
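
A minimal build.gradle sketch (task names and messages are purely illustrative) shows how tasks, actions and task dependencies fit together:

// build.gradle - two illustrative tasks linked by a dependency
task generateSources {
    description = 'Stands in for a code generation step.'
    doLast {
        println 'Generating sources ...'
    }
}

task compileAll(dependsOn: generateSources) {
    description = 'Runs only after generateSources has been executed.'
    doLast {
        println 'Compiling ...'
    }
}

Calling "gradle compileAll" builds the task graph during the configuration phase and then executes generateSources followed by compileAll.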

Gradle Build Configurations

A project and its sub-objects are typically configured by the build.gradle file (see Gradle User Guide: Writing Build Scripts). It mostly replaces property-files used extensively in Ant. You can still use property-files of course, but you should reduce their usage to a minimum. The base location for configuration is the build script. The additional gradle.properties file should only contain properties which you need to override in different environments (CI, developer).

Generally, any piece of Groovy or Java code having access to the Project object during the configuration phase can configure it. A place to store reusable configuration logic are Gradle plugins: Java/Groovy classes with a single method apply, expecting a Project as their only parameter. It is easy to write custom plugins, especially to turn any piece of Gradle build script into a plugin.

A Project is an extensible object – you may add your own sub-objects to provide a custom DSL extension or set additional properties. Gradle's extension properties on projects feel very similar to Ant's properties, but are arbitrary objects instead of just strings.
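
The following sketch, with purely hypothetical names, shows the shape of such a plugin together with an extension object that provides a small DSL block:

// build.gradle - illustrative plugin and extension object
class GreetingExtension {
    String message = 'Hello from the build'
}

class GreetingPlugin implements Plugin<Project> {
    void apply(Project project) {
        // Register the extension so build scripts can configure it declaratively.
        def extension = project.extensions.create('greeting', GreetingExtension)
        project.task('greet') {
            doLast { println extension.message }
        }
    }
}

apply plugin: GreetingPlugin

// The extension object backs the custom DSL block below.
greeting {
    message = 'Configured via the extension object'
}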

Gradle's core is slim and most of the functionality is provided in the form of plugins. Plugins are available from Gradle's developers but also from third-party vendors, following Ant's and Maven's best traditions. They cover building projects in a variety of languages and integrating different tools, like static code analysis or custom code generation. Furthermore, as mentioned above, any existing library in Java byte code can easily be included and used without wrapping it up in Gradle-specific code first (this is necessary for Ant and Maven, since both use XML as their main language).

Besides build scripts and plugins, Gradle knows two other types of scripts: settings scripts and init scripts. Settings scripts play an important role in multi-project builds (see below). Init scripts are executed before build scripts and can be added to an existing build via the command line or by placing them in special folders. They are the place to store environment-specific configuration that should not be stored within the project directly.
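
A minimal init script sketch (the repository URL is an assumption) that injects an environment-specific repository into every project could look like this:

// init.gradle - passed to a build via "gradle --init-script init.gradle <tasks>"
allprojects {
    repositories {
        ivy {
            url 'http://repo.example.com/releases'   // environment-specific repository
        }
    }
}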

Gradle offers a multi-project build feature (see: Gradle User Guide -> Build Lifecycle -> Multi-project builds) that can build multiple projects in a single execution of Gradle. In contrast to Ant's subant feature, which iterates over multiple Ant scripts and just executes them sequentially, Gradle's multi-project build truly integrates multiple builds in any imaginable way (see: Gradle User Guide -> Multi-project Builds).

The most prominent incarnation of this strong integration are dependencies between projects. When declared, they automatically turn into dependencies between (compile) tasks. This influences the ordering of builds, enables parallel builds and makes sure that changes across projects are integrated.

In a multi-project build the configuration phase is preceded by the initialization phase. In this phase all projects and their build scripts are determined by configuring a DSL object of type Settings. Corresponding to build scripts and Project objects, this is typically done by a Gradle settings script, by convention called settings.gradle. From these settings, the Project objects are created and configured.

Projects in a multi-project build can form arbitrary hierarchies. Each project has a name and a unique path, which is formed by appending all its parents and its own name. The path for the root project is simply ':' and that for each direct child of the root project is ':<project_name>'. The direct children of a parent project with path ':<parent>' have the path ':<parent>:<child>' etc. The same notation is used to identify the tasks of a specific project, e.g. ':<parent_project>:<child_project>:<task>'.
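
A small settings.gradle sketch with made-up project names illustrates the hierarchy and the resulting paths:

// settings.gradle - illustrative multi-project layout
rootProject.name = 'example-root'        // path ':'

include 'componentA'                      // path ':componentA'
include 'componentA:api'                  // path ':componentA:api'
include 'componentB'                      // path ':componentB'

A task of a specific project can then be addressed by its path, for example "gradle :componentA:api:tasks" lists the tasks of the nested project.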

Dependency Management and Artifact Repositories

The following figure describes the required properties that are used to manage the dependencies of a software project on other projects or software components:

First of all, each component defines a name, an organization and a revision (version) string, which are used to organize different built versions of this component in an artifact repository. Second, a configuration of available artifact repositories is required, which defines the lookup order. Third, the configuration section defines different build configurations. Finally, the particular dependencies on other projects or built software components are defined in the context of a given build configuration.
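
A schematic build.gradle sketch (organization, versions and the repository URL are illustrative) shows where these properties live in Gradle's DSL:

// build.gradle - illustrative dependency management configuration
group   = 'com.corporate.project'                       // organization
version = '1.0.0.0'                                     // revision; the name is taken from the project

// Artifact repositories in lookup order
repositories {
    ivy { url 'http://repo.example.com/releases' }
    mavenCentral()
}

// Build configurations
configurations {
    compile
    runtime.extendsFrom compile
}

// Dependencies in the context of a configuration
dependencies {
    compile 'com.corporate.project:componentB:1.0.+'    // external, with a version expression
    compile project(':componentC')                       // internal project dependency
}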

As described above, artifact repositories store artifacts with their Ivy module descriptors for any number of components in any number of revisions. They are used by a dependency management system like Apache Ivy to resolve dependencies between these artifacts. There are four basic operations available on a repository:

  1. Publish new module revisions, including upload of their artifacts.
  2. Find a module by name and version expression, also called 'resolving'.
  3. Download meta data of a module.
  4. Download artifacts of a module.

These operations form the main workflow when accessing a repository. The dependency management system wraps logic around these operations to resolve a module including its transitive dependencies, recursing from step 3 back to step 2. This is called transitive resolving.

A single resolve process can work with a list of artifact repositories of different types. If multiple revisions are found that match a given version expression, they are compared and the latest one is returned.

Common to all repositories is that they use patterns to locate meta-data and artifacts. A pattern is a string containing placeholders for organization, module name, version, artifact name, ext and type. The path (file or URL) where an artifact is stored during publication is determined by taking the pattern and replacing placeholders by concrete values.

Pattern

Misconfigured patterns often result in unresolved artifacts, because the pattern must match the structure of the connected repository.

See the example below. The component componentA has the artifact content.zip:

<ivy-module version="2.0">
  <info organisation="com.intershop" module="componentA" revision="1.1" />
  <publications>
    <artifact name="content" type="local" ext="zip" conf="runtime"/>
  </publications>
  <dependencies/>
</ivy-module>

To use a repository in the Intershop continuous delivery tools, you have to configure the repository using Gradle's DSL (see below), including type, root and pattern.

Artifact: content.zip

Pattern: [organisation]/[module]/[revision]/[ext]s/[artifact]-[type]-[revision].[ext]
Repository lookup path: com.intershop/componentA/1.1/zips/content-local-1.1.zip

Pattern: [organisation].[module]/[artifact]-[revision].[ext]
Repository lookup path: com.intershop.componentA/content-1.1.zip
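
A minimal sketch of such a repository declaration in a Gradle script of that era (the URL and pattern are illustrative and have to match the connected repository) could look like this:

// build.gradle - illustrative Ivy repository with an explicit pattern layout
repositories {
    ivy {
        url 'http://repo.example.com/releases'
        layout('pattern') {
            ivy      '[organisation]/[module]/[revision]/ivys/ivy-[revision].xml'
            artifact '[organisation]/[module]/[revision]/[ext]s/[artifact]-[type]-[revision].[ext]'
        }
    }
}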

System Components and System Component Sets

A system component consists of a number of artifacts with different states and meta-data within its lifecycle, which is shown in the following state diagram:

Lifecycle State source

In lifecycle state "source" the system component is available in a version control system. It contains a number of source files, data files, libraries and the build.gradle file containing the following meta data (see the sketch after this list):

  • The display name describes the system component.
  • A set of dependency declarations to other system components of two types:
    • internal: compile dependency to other system components in the same multi-project build
    • external: declaring a qualified name and a version number. The version number might also be an expression, like 'any version starting with 1.0.'
  • The build configuration defines which build plugins are used; these may require additional configuration
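
The following schematic build.gradle sketch illustrates these pieces of meta data. The way the build plugin is applied, the plugin id and all component names are assumptions for illustration only, not the exact Intershop DSL:

// build.gradle of a system component - schematic sketch only
apply plugin: 'java-cartridge'                      // build plugin named in this concept (id assumed)

description = 'Example storefront cartridge'        // display name (illustrative)

dependencies {
    // internal: compile dependency to a system component in the same multi-project build
    compile project(':app_example')
    // external: qualified name plus a version expression
    compile 'com.intershop.platform:core:7.4.+'
}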

Intershop 7 consists of more than 200 cartridges plus a number of tools, servers and libraries, which are all system components. An independent lifecycle for each of them would lead to a complex build and test infrastructure as well as to challenging dependency management with a huge compatibility matrix. That is why the system component sets were introduced.

A system component set exclusively contains one or more system components of different types. It defines a common versioning and branching strategy as well as the same release cycle. Further, the following is defined:

  • The system component set resides within a source repository (e.g. Git repository, project in Subversion). All contained system components reside in subdirectories. Branches are created on the level of the system component set. (-> same branching strategy)
  • The versions of the components are defined in the system component set. (-> same versioning strategy)
  • All components are built together in the CI server. So, they have the same build number. (-> same release cycles)

Lifecycle State compiled

The Gradle-based Intershop build process converts a system component from lifecycle state "source" to "compiled". Depending on the build configuration and the configured build plugins, different Gradle tasks are executed. For instance, the build plugin "java-cartridge" processes the following tasks:

  • generates code (JAXB, ...)
  • compiles source files
  • creates JAR archives
  • executes unit tests
  • transforms the build.gradle file into the ivy.xml file that is used by artifact repositories as well as the Intershop assembly and deployment plugins for dependency management. The resulting ivy.xml file defines the following meta-data:
    • A qualified name consisting of an organization and a name. It is customary to use reverse domain names as organizations.
    • A version number – an arbitrary string. (Also called revision number; Ivy uses both terms as synonyms.)
    • A set of dependencies to other modules, each declaring a qualified name and a version number. The version number might also be an expression, like 'any version starting with 1.0.'
    • The set of resulting artifacts, each declaring a file name, file extension and a type (an arbitrary string). Commonly (but not necessarily) artifacts are physically stored as archives, like 'jar' or 'zip' files. Additionally they can declare an optional classifier to distinguish artifacts according to a specific context (e.g. platform-specific artifacts).

The qualified name and version number form the identity of what is called a "module revision" in the Apache Ivy world. In production it is useful to treat module revisions as immutable, i.e., once meta-data and contents have been published with a certain version and build number, they should not change anymore. (Instead, each build should be published with a different build number.) This way it is possible to refer to this identity at deployment time with absolute certainty.

Artifact types are used to distinguish different kinds of contents that need to be treated differently during deployment, like 'jar' files, content for the shared file system and content for the local file system.

The build process is executed in development environments as well as on the continuous integration server.

Lifecycle State published

The build process is also responsible for publishing the system component to an artifact repository, which can be a local folder on the development environment or an artifact repository server. Depending on the purpose of the artifact repository, the version number may differ as explained below.

All version numbers follow the pattern <base version>.<build suffix>. The base version is a version declared in the source of a component (in the build.gradle or gradle.properties file). Examples are "7.4.5.0" for Intershop 7 components and "1.0.0.0" for build plugins. When being published, a suffix is added automatically by the build process depending on the context it runs in.

Version pattern: <base version>.<timestamp>
Example: 7.4.5.0.20140119162517
Term: release (candidate) version
Description: Built by the Continuous Integration server on a schedule (like once a day). Published to a shared repository server, like Nexus or Artifactory. Running the build again will result in a different timestamp, so each build is uniquely identifiable. Release builds are never overwritten in a repository (they are immutable). Asking for the contents of a release version with the same timestamp will always result in the same contents.

Version pattern: <base version>-local
Example: 7.4.5.0-local
Term: local version
Description: Built by the developer. Published to a local repository (a directory on the developer machine). Running the build locally again will result in the same version number, so old versions will be overwritten (they are mutable). Depending on when you ask for the contents of a local version, the contents may therefore differ.

Version pattern: <base version>-snapshot
Example: 7.4.5.0-snapshot
Term: snapshot version
Description: Built by the Continuous Integration server upon check-in. Published to a shared repository server, like Nexus or Artifactory. (Release versions and snapshot versions are published to different repositories on the same repository server.) Running the build again will result in the same version number, so old versions will be overwritten (they are mutable). Depending on when you ask for the contents of a snapshot version, the contents may therefore differ.

Snapshot versions are created only to speed up the Continuous Integration process and to avoid repository overhead. Since they are mutable, you should not use them in your developer or test environment.
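
As a schematic illustration only (this is not the actual logic of the Intershop plugins, and the property name is made up), a build script could compose such a version string like this:

// build.gradle - schematic version suffix handling
def baseVersion = '7.4.5.0'                          // declared in build.gradle or gradle.properties

// On the CI server a timestamp property could be passed in, e.g. -PbuildTimestamp=20140119162517;
// a developer build without that property falls back to the mutable '-local' version.
version = project.hasProperty('buildTimestamp') ?
        "${baseVersion}.${project.buildTimestamp}" :
        "${baseVersion}-local"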

Lifecycle State deployed

The lifecycle state "deployed" is the result of the deployment process extracting the static files and libraries to the configured location on the target host. Environment-specific configurations of the system component are also applied by the Intershop deployment tool.

Lifecycle State runtime

After a system component is installed, it can be started in different runtime environments on the target host. In the Intershop Application Server runtime environment the system components (especially cartridges) play a special role in the behavior of particular applications. See Concept - Application Framework (valid to 7.9) as well as Concept - Cartridges (valid to 7.4) for detailed information.

Assemblies

An assembly references a number of system components that can be deployed or delivered together. The following figure describes the example assembly "A" and its changes from version 1.1 to version 1.2, which are:

  • Component A was changed from version 1.0 to 1.1
  • The reference to Component X was removed
  • A new reference to Component Z was added

Also it is possible to copy an assembly and all contained components from one repository to another, or – for that matter – create a repository that solely contains the assembly and its contained components (see: Intershop 7 DVD).

Besides pointing to other components, an assembly may contain content in its own artifacts. This is useful for content which can only be created once all contained components are known and which aggregates information from them. Examples are database dumps, cross-linked documentation like JavaDoc, or (default) configuration that is specific to the set of contained components.

Assemblies can be created from existing assemblies. This allows forming a delivery chain between teams and vendors, each passing on one or more assemblies, which are then modified by adding, removing or replacing components.

The referenced system components can be assigned to internal assembly subsets for different purposes described in the next sections.

Assembly Subset Host Type

An Intershop 7 cluster follows the classical three-tier architecture and consists of the following cluster nodes:

The Oracle Database is not part of the Intershop 7 delivery, but is required to run an Intershop 7 cluster. The load balancer is an optional node that is required if several Intershop Web Servers are used.

The Intershop Deployment Tool is responsible for distributing the particular system components to the nodes of the Intershop cluster and configuring them. In order to reuse deployment configurations of nodes with the same subset of system components, the concept of host types is introduced.

Assemblies define one or more host types to reference infrastructure components that form the runtime environment of a cluster node. A host type is defined by:

  • A unique name within the assembly
  • References to a subset of system components of the assembly

The following sections describe the host types delivered with Intershop 7.

Host Type all

The host type "all" is used by the single host deployment to install all nodes of an Intershop cluster on one host. It is implicitly defined by the Intershop assembly plugins.

Host Type webserver

The Intershop Web Server is responsible, among other things, for caching and assembling rendered pages, distributing web requests to particular application servers and generating session identifiers. It consists of the following parts:

  • Apache Web Server
  • Intershop Web Adapter
  • Web Adapter Agent

Host Type appserver

The Intershop Application Server comprises all infrastructure components necessary to create a run-time environment (engine) for Intershop 7 cartridges. The components are grouped as:

  • Apache Tomcat: The servlet container hosting Intershop 7.
  • Node Manager: The watchdog used to control the application server process.
  • JDK (until Intershop 7.4)
  • System Tools: These tools are located in the tools directory IS_HOME/tools and include:
    • DBDelta tool
    • DBExtract tool
    • Apache Ant
    • Shell Scripts
  • Code Base: Starting with Intershop 7.5, the server's code base (see: Concept - Cartridges (valid to 7.4)) may reside on the application server instead of the shared file system.

Host Type share

The Intershop Shared File System is mounted and accessed by all application servers that are part of the cluster. It consists of three main parts:

  • Code Base: The code base is structured in cartridges (see: Concept - Cartridges (valid to 7.4))
  • Configuration: Configuration of the particular application servers as well as the entire cluster, which is necessary for running an application server instance
  • File Content: These files are required by particular applications of Intershop 7 (see: Concept - Application Framework (valid to 7.9)), including:
    • Import and Export files
    • Branding Packages
    • Combined File Bundles
    • Images
    • etc

Host Type solr

(since Intershop version 7.5)

The Intershop Solr Server comprises the infrastructure components necessary to create a run-time environment for the Apache Solr search engine. This host contains:

  • Apache Tomcat: The servlet container hosting Apache Solr.
  • Node Manager: The watchdog used to control the server process.
  • Apache Solr: The web-app of the Apache Solr server distribution.

Host Type javadoc

(until Intershop version 7.5)

In a development project it may be desired to have the complete JavaDoc in one central place for all developers to use. This host type provides:

  • JavaDoc: The extracted Java documentation for all system components that are not available as source.
  • Index page: An overview page for the individual system components' JavaDoc.

Assembly Subset Environment

A published assembly runs through different stages of the continuous delivery pipeline before it is deployed to the production system. The number and characteristics of these stages differ between projects and should be freely configurable in the assembly. The assembly subset environment is introduced to fulfill this need for an additional dimension of the deployment configuration. The following sections describe the predefined environments delivered with Intershop 7.

Development Environment

The development environment uses a single node deployment to test the assembly the developer works on. It requires special application server configurations that:

  • enable the autoload mechanisms in the application server for code artifacts like ISML, pipelines, queries and so on
  • suppress the preload of these code artifacts on startup
  • enable additional type checks during pipeline processing to avoid errors

Test Environment

The test environment configures the application server in production mode and requires all test cartridges of the assembly to run the automated test suite.

Production Environment

The production environment configures the application server in such a way that incoming requests are processed as fast as possible. Further, no test cartridges are installed.

The Minimal Continuous Delivery Process Flow

The Intershop continuous delivery tools support the following simplified process flow. This section describes the three central processes (build, assemble and deploy) that can be used to create complex continuous delivery scenarios.

Build Process

The Gradle-based Intershop build tools are used to compile and package system components and publish the result to an artifact repository. The tools expect a multi-project structure in the version control system. Depending on the purpose, there are different types of build processes, described in the following sections.

Local Build Process

The local build is used by developers to work on particular system components, which are published to a local artifact repository (e.g., a directory in the file system). As described above, the locally published system components have the version suffix -local. A special feature of the local build process is that the developer gets fine-granular control over the intermediate steps of the overall build process by calling sub-tasks of the Gradle project. Intershop Studio provides an integrated user interface to start these steps. Further, the locally deployed development environment of the Intershop cluster is able to load code artifacts from the source folders, so they do not need to be compiled.

Snapshot Build Process

Snapshot build processes run on the CI server to provide fast feedback to the developers. The built system components are published to a central artifact repository for further distribution or subsequent build processes. Normally, the process is triggered by check-ins of developers. It executes only those intermediate steps of the overall build process that are required to execute different tests like unit tests, DBInit processes or smoke tests.

Release Build Process

The release build process is used to produce potential releases of assemblies with their particular system components, which are continuously processed in the continuous delivery pipeline. These processes should execute all steps that are required to release the particular system components and assemblies. The version suffix ends with a unique identifier, so that each result of a release build process can be unambiguously identified on particular cluster installations.

Assembly Process

The Gradle based assembly tools are used to combine the built system components with some generated files and publish the result to the artifact repository.

At the beginning of the process, the ivy.xml file is generated, defining dependencies to the system components in a concrete version. Afterwards, the assembly is deployed in order to initialize the database via the DBInit tool (see: Concept - DBMigrate and DBInit (valid to 7.7)). Finally, the database dump is created and the generated files are published.


Deployment Process

The deployment process expects a published assembly located in an artifact repository. There are two types:

  • Single Node Deployment installs and configures all host types on one host.
  • Cluster Deployment consists of several sub-processes installing and configuring particular nodes of an Intershop cluster.

Independent of the deployment process type, it is possible to define the environment type, which defines a particular set of system components with the according global configuration. The following figure describes how one assembly is deployed in different environments:

Continuous Delivery Chain

It is possible to set up more complex continuous delivery scenarios with the Intershop continuous delivery tools, involving several teams in a value-added chain. Each team is part of the overall continuous delivery process with different deployment pipelines for their produced components. The artifact repositories are used to connect the particular development infrastructures and to create a continuous code flow between the teams. The figure below outlines a possible scenario that connects three continuous integration environments, which continuously integrate the work of several development teams:
