In a CaaS project, after the shop goes live, customers and implementation partners (DEV) have limited access to System Management (SMC) and Organization Management on the production environment; see CaaS DevOps - Access and Permissions.
Manually triggering replication tasks in Organization Management or running jobs in System Management is not recommended. Instead, these tasks should be scheduled.
This guide describes scheduling using replication, one of the most important tasks for daily business, as an example.
| Abbreviation | Meaning |
|---|---|
| DEV | Developer, implementation partner |
| ICM | Intershop Commerce Management |
| SMC | Intershop System Management |
Data replication is a method of transferring large amounts of data from a source cluster to a target cluster. In a CaaS context, it refers to the transfer of data from an edit cluster to a live cluster in the same environment. Replication between clusters of different environments is not possible because they use different databases.
The following figure shows a simplified basic architecture, see Concept - Mass Data Replication for details.
To perform the replication, the database schema (tables) must be the same on the edit and live clusters. In other words, the same migration must have been performed on both sides, see Overview - DBMigrate and DBInit.
Links to Intershop Commerce Management, Organization Management and System Management can be found in the Links section of your Customer System confluence page.
Once the replication tasks have been defined in Intershop Commerce Management (Mass Data Tasks | Data Replication Tasks), the replication process can be executed manually from Organization Management (Data Replication | Data Replication Tasks).
This requires access to Organization Management, which is always available for INT and UAT environments, but limited for the PRD environment after the go-live. The best practice is described in the next section.
See also the Video Tutorial - Data Replication.
There are several ways to create a scheduled, i.e. automatic, replication process. The simplest of these is described in more detail below; other approaches are possible as well.
The process can be summarized as follows:
In this XML file, a process ID can be freely defined. The replication tasks to be executed are then listed in the same file.
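As an illustration only, a minimal replication file might look roughly as shown below. The process ID `dailyReplication` and the task IDs are hypothetical examples; verify the exact element names and file location against the cookbooks referenced below.

```xml
<!-- Sketch of a replication.xml; the process ID and task IDs are
     hypothetical examples. The task IDs must match replication tasks
     defined in Intershop Commerce Management. -->
<replication>
    <!-- Freely defined process ID, referenced later by the scheduled job -->
    <process id="dailyReplication">
        <!-- Replication tasks to execute within this process -->
        <taskid>10465204</taskid>
        <taskid>10465207</taskid>
    </process>
</replication>
```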
For specific cases and customization of the replication.xml file, refer to Cookbook - Mass Data Replication - Administration and Cookbook - Mass Data Replication - Customization and Adaption.
This step might be sufficient to fulfill your needs.
The job defined in the last section can be started independently or be included in a process chain. In the latter case, do not define a recurring interval for the job.
To add the replication job to a process chain, perform the following steps:
Define a processchain.xml file which executes the System Management job Regular Replication Process for the provided replication process ID.
You may use the following template: my-processchain.xml
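Purely as a sketch (the my-processchain.xml template referenced above is the authoritative source), such a process chain might look roughly like this. The XML namespace, the element and attribute names, the chain name `NightlyReplicationChain`, and the domain `SLDSystem` are all assumptions and must be checked against the template:

```xml
<!-- Sketch of a processchain.xml; verify namespace, element, and
     attribute names against the my-processchain.xml template.
     The chain name and domain are hypothetical examples. -->
<chain xmlns="http://www.intershop.com/xml/ns/enfinity/6.4/core/processchain"
       name="NightlyReplicationChain">
    <sequence name="ReplicationSequence">
        <!-- Executes the System Management job "Regular Replication Process" -->
        <job name="Regular Replication Process" domain="SLDSystem"/>
    </sequence>
</chain>
```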
Create a job configuration that uses the pipeline ExecuteProcessChain-Start to be able to execute the chain.
The job Regular Replication Process defined previously, or ExecuteProcessChain in the case of a process chain, can be triggered via a manual REST API call. This requires REST privileges in Intershop System Management.
The syntax is described in REST API Jobs - Start a Job. However, that document only lists the REST resource paths, for instance:
The full syntax in our case is:
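As a rough sketch only: the call is an authenticated HTTP request against the System Management host. The HTTP method (shown here as POST), the host name, and the resource path are placeholders; take the exact method and path from REST API Jobs - Start a Job, and authenticate as a user with REST privileges in Intershop System Management.

```
POST https://<smc-host>/<resource-path-from-REST-API-Jobs-documentation>
Authorization: Basic <credentials of an SMC user with REST privileges>
Accept: application/json
```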
It returns the following JSON:
The information provided in the Knowledge Base may not be applicable to all systems and situations. Intershop Communications will not be liable to any party for any direct or indirect damages resulting from the use of the Customer Support section of the Intershop Corporate Web site, including, without limitation, any lost profits, business interruption, loss of programs or other data on your information handling system.