Doc Type
  • IOM 4.0
  • IOM 4.1
  • IOM 4.2
  • IOM 4.3
  • IOM 4.4
  • IOM 4.5
  • IOM 4.6
  • IOM 4.7
  • IOM 5.0
Guide - IOM Job Scheduling


This guide gives an overview of job scheduling in Intershop Order Management (IOM). It describes the concepts and components of IOM job scheduling as well as the possibilities to add custom jobs.

This guide is mainly intended to be read by developers and gives architectural insights into IOM.


JDBC - Java Database Connectivity
RAM - Random-Access Memory


Types of Jobs

The IOM uses the following two types of jobs:

  • Local jobs
  • Clustered jobs

Local Jobs

Local jobs are jobs that perform local tasks (e.g., clearing the Java cache). These jobs have to run on every application server (frontend/backend).

The IOM runs on a Java EE 7-compliant application server, so the EJB Timer can be used to invoke methods periodically. No registration or configuration is needed.


Job Class: bakery.logic.bean.caching.CheckCacheStatusJob
Job Description: Checks and processes a requested cache clear


Example: The method execute() will be invoked every 10 seconds.

import javax.ejb.Schedule;
import javax.ejb.Singleton;
import javax.inject.Inject;

@Singleton
public class TimerService {
    @Inject
    CustomizedService customService;

    @Schedule(second = "*/10", minute = "*", hour = "*", persistent = false)
    public void execute() {
        // periodic local task, e.g., trigger a cache status check
    }
}

Clustered Jobs

Clustered jobs are jobs that perform cluster-wide/system-wide tasks (e.g., jobs of the control artifact).
It must be ensured that such jobs do not run in parallel.
This is guaranteed because all IOM cluster jobs belong to the backend server, which can only have one live instance.


The default configuration uses a local RAM store to manage the jobs.
An alternative configuration using a JDBC store is commented out; it should not be required as long as all cluster jobs are located on the single backend instance.

Time-Sync for Clustered Jobs

Quartz clustering features require that the involved hosts use some form of time-synchronization service (daemon) that runs very regularly, because the clocks must be within a second of each other.

Configuration of clustered job store
# Configure JobStore  

org.quartz.jobStore.misfireThreshold = 60000

#RAM JobStore
org.quartz.jobStore.class = org.quartz.simpl.RAMJobStore

# JDBC store: example to use the database for the quartz JobStore
# org.quartz.jobStore.class = org.quartz.impl.jdbcjobstore.JobStoreTX
# org.quartz.jobStore.driverDelegateClass = org.quartz.impl.jdbcjobstore.PostgreSQLDelegate
# org.quartz.jobStore.useProperties = true
# org.quartz.jobStore.dataSource = PostgresDS
# org.quartz.jobStore.tablePrefix = system.qrtz223_
# org.quartz.jobStore.isClustered = true
# org.quartz.jobStore.clusterCheckinInterval = 20000

# Configure Datasources  
# org.quartz.dataSource.PostgresDS.jndiURL=java:/OmsDB

For further details, see Configuration Reference - Configure Clustering with JDBC-JobStore and the configuration of the job store in OMS_ETC/.


All clustered jobs with their descriptions can be found in OMS_ETC/quartz-jobs-cluster.xml.


The existing configuration in OMS_ETC/ can be used to add further jobs/triggers to the existing scheduler by adding further files to the org.quartz.plugin.jobInitializer.fileNames list of the XMLSchedulingDataProcessorPlugin (see Cookbook - Initializing Job Data With Scheduler Initialization).


Within the property file, the variable ${is.oms.dir.etc} can be used to reference further configuration files. It is replaced at startup by a SchedulerBean with the value of the system property is.oms.dir.etc.

Configuration of job / trigger loading
# Configure Job / Trigger Loading

# ${is.oms.dir.etc} can be used in the path to dynamically reference the folder where installation-specific properties are located
org.quartz.plugin.jobInitializer.fileNames = ${is.oms.dir.etc}/quartz-jobs-cluster.xml,${is.oms.dir.etc}/my-custom-clustered-quartz-jobs.xml
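Such an additional file follows the Quartz job-scheduling-data XML format. A minimal sketch of what a custom file could contain; the job name, group, class, and cron expression below are placeholders, not part of the IOM delivery:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<job-scheduling-data xmlns="http://www.quartz-scheduler.org/xml/JobSchedulingData"
                     version="2.0">
    <schedule>
        <job>
            <name>MyCustomJob</name>
            <group>CUSTOM</group>
            <job-class>com.example.oms.MyCustomJob</job-class>
        </job>
        <trigger>
            <cron>
                <name>MyCustomJobTrigger</name>
                <group>CUSTOM</group>
                <job-name>MyCustomJob</job-name>
                <job-group>CUSTOM</job-group>
                <!-- fire every 5 minutes -->
                <cron-expression>0 0/5 * * * ?</cron-expression>
            </cron>
        </trigger>
    </schedule>
</job-scheduling-data>
```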

Another possibility is to customize the existing job file OMS_ETC/quartz-jobs-cluster.xml.
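The job class referenced from such an XML file must implement the org.quartz.Job interface. A minimal sketch, assuming a hypothetical class name and task (not part of the IOM delivery):

```java
import org.quartz.Job;
import org.quartz.JobExecutionContext;
import org.quartz.JobExecutionException;

// Hypothetical clustered job; the actual task is project-specific.
public class MyCustomJob implements Job {

    @Override
    public void execute(JobExecutionContext context) throws JobExecutionException {
        // Perform the cluster-wide task here, e.g., clean up stale records.
        // In IOM, running on the single backend instance ensures that
        // such jobs do not execute in parallel.
    }
}
```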
