ICM 7.10 now supports Microsoft SQL Server as an additional database type besides Oracle.
DBMigrate for Microsoft SQL Server is supported starting with 7.10.2 GA only. This implies that customers who want to migrate must first migrate to 7.10 on Oracle before they can move over to Microsoft SQL Server.
For more information on how to configure Microsoft SQL Server and Intershop 7 as a development database, see Guide - Setup Microsoft SQL Server as Intershop Development Database.
For more information on how to configure SQL Server and Intershop 7 for a production environment, see Guide - Setup Microsoft SQL Server as Intershop Production Database.
For more details see Guide - 7.10 Database Deployment Configuration.
For development purposes it is recommended to use data created via DBInit. If only DBMigrate has been used so far, DDL preparers in DBMigrate may need to be transformed to DBInit preparers in order to create the initial table structures and indexes.
Starting with 7.10 GA, DBInit does not clear the database content by default. To clear it, an additional parameter needs to be added to the DBInit call:
```
cd %IS_HOME%\bin
dbinit.bat --clean-db=<FLAG>
```
Available values for the cleanup are yes and no only.
Furthermore, it is possible to perform only the DDL database preparation steps in order to prepare tables, indexes, stored procedures, and functions:
```
dbinit.bat --ddl=true
```
More details can be found in Cookbook - DBInit.
To restrict a preparer to a specific database system, the parameter sqldialect can be used. The preparer is only executed if the database system matches the given sqldialect. Currently, the dialects Oracle and Microsoft are available. In the following example, the script is only executed if the system is running on an Oracle database system.
```
pre.Class1 = com.intershop.beehive.core.dbmigrate.preparer.database.ExecuteSQLScriptPreparer \
    resources/bc_rating_orm/dbmigrate/scripts/RecreateRatingCommentIndexes.sql \
    sqldialect=Oracle
```
Apart from that, there is no difference compared to running DBMigrate on Oracle.
The SQL syntax of Oracle Database and Microsoft SQL Server differs slightly. It is possible to provide both SQL dialects in the same query file. For more detailed information on how to achieve this, see Reference - Query File Syntax and Cookbook - Mastering Database Independence.
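To illustrate the kind of differences involved, the following sketch shows the same query in both dialects. It is an illustration only, not an excerpt from an Intershop query file, and sometable is a hypothetical table:

```sql
-- Oracle dialect: string concatenation with || and SYSDATE for the current date/time
SELECT firstname || ' ' || lastname AS fullname, SYSDATE FROM sometable;

-- Microsoft SQL Server dialect: CONCAT() and GETDATE()
SELECT CONCAT(firstname, ' ', lastname) AS fullname, GETDATE() FROM sometable;
```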
The syntax for Stored Procedures under Microsoft SQL Server also differs from Oracle Database. See SQL Server Create Procedure for further details.
For Microsoft SQL Server, the stored procedures are stored in /staticfiles/cartridge/lib/resources/{cartridge}/dbinit/scripts/microsoft. The stored procedures need to be converted accordingly.
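As a minimal sketch of such a conversion (the procedure name, table, and body are hypothetical and not taken from an Intershop cartridge), an Oracle procedure and its Transact-SQL counterpart could look like this:

```sql
-- Oracle (PL/SQL): hypothetical cleanup procedure
CREATE OR REPLACE PROCEDURE sp_remove_expired_rows (p_days IN NUMBER)
AS
BEGIN
    DELETE FROM sometable WHERE lastmodified < SYSDATE - p_days;
END;
/

-- Microsoft SQL Server (Transact-SQL): the converted procedure
CREATE PROCEDURE sp_remove_expired_rows @p_days INT
AS
BEGIN
    DELETE FROM sometable WHERE lastmodified < DATEADD(DAY, -@p_days, GETDATE());
END
GO
```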
Sometimes it is necessary to run specific code for different databases. To distinguish which database is currently used, the enum com.intershop.beehive.core.capi.cartridge.SQLDialect can be used:
```java
...
import com.intershop.beehive.core.capi.cartridge.SQLDialect;
...
if (SQLDialect.ORACLE.isActive())
{
    // Oracle specifics
}
else if (SQLDialect.MSSQL.isActive())
{
    // Microsoft SQL Server specifics
}
else
{
    throw new IllegalStateException("Unknown SQL dialect " + SQLDialect.getCurrentSQLDialect());
}
...
```
To migrate the data from Oracle to Microsoft SQL Server or Azure SQL Managed Instance, the tool Microsoft SQL Server Migration Assistant (SSMA) for Oracle is used.
Provide the login parameters for the databases again.
Note
After analyzing both databases, a summary with failed prerequisites is displayed. Please note that warnings for tables which already contain data can be ignored.
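After the data migration, it can be useful to compare row counts between the Oracle source and the SQL Server target. The following query is only a sketch for the SQL Server side; such a check is not part of SSMA itself:

```sql
-- Row counts of all user tables in the current SQL Server database
SELECT t.name AS table_name, SUM(p.rows) AS row_count
FROM sys.tables AS t
JOIN sys.partitions AS p
    ON p.object_id = t.object_id
   AND p.index_id IN (0, 1)   -- heap or clustered index only
GROUP BY t.name
ORDER BY t.name;
```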
When following the steps described in the previous chapter, it must be taken into account that some of these steps can take a considerable amount of time.
The actual required execution duration depends on the following parameters:
The following times were measured as an example of the durations to be expected (data used: inSPIRED-b2x demo data):
| Step | Duration |
|---|---|
| 7e) Convert Schema | 61 minutes |
| 8) Migrate Data | 9 minutes |
| 12+13) Synchronize Synonyms with Database | 0 minutes |
| 14+15) Synchronize Sequences with Database | 0 minutes |
| 16) Execute script | 0 minutes |
Microsoft suggests configuring one to two connections per core.
Some customers want to reduce the downtime of their system. A two-step migration can reduce the downtime significantly: in the first step, all "non-transactional" data (e.g., products, content, configurations) is transferred; in the second step, all "transactional" data (e.g., users, baskets, orders) is transferred.
We prepared a list of tables that is suitable for most customers.
Note
In addition, some businesses need to modify non-transactional data at short notice. Therefore, it is important to inform the business users about the transfer period. Furthermore, it may require additional effort to apply such modifications a second time on the new platform.
See the attachment for each step (step_1_nontransactional.txt, step_2_transactional.txt).
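To get an idea of how much data each of the two steps will transfer, the row counts per table can be checked on the Oracle source schema beforehand. This query is only a sketch of such a check and not part of the prepared table lists:

```sql
-- Approximate row counts per table, based on optimizer statistics
-- (gather schema statistics first if the numbers look outdated)
SELECT table_name, num_rows
FROM user_tables
ORDER BY num_rows DESC NULLS LAST;
```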
A lot of time is required for updating the XML and text indexes. To speed up the migration, they can be disabled before and rebuilt after the data transfer.
The following example script disables all non-primary-key indexes on tables that have at least one XML index.
```sql
DECLARE @indexes TABLE (rowid INT IDENTITY(1,1), indexname nvarchar(128), tablename nvarchar(128));
DECLARE @i INT = 1, @count INT, @indexname nvarchar(128), @tablename nvarchar(128), @sql NVARCHAR(MAX);

INSERT INTO @indexes
SELECT name AS indexname, OBJECT_NAME(object_id) AS tablename
FROM sys.indexes
WHERE is_primary_key = 0
  AND object_id IN (SELECT object_id FROM sys.indexes WHERE type_desc = 'XML')
ORDER BY OBJECT_NAME(object_id), name

SELECT @count = COUNT(1) FROM @indexes

WHILE @i <= @count
BEGIN
    SELECT @indexname = indexname, @tablename = tablename FROM @indexes WHERE rowid = @i
    SET @sql = CONCAT('ALTER INDEX ', @indexname, ' ON ', @tablename, ' DISABLE')
    PRINT @sql
    EXEC sp_executesql @sql
    SET @i = @i + 1
END
```
The following script shows how to rebuild the indexes.
```sql
DECLARE @indexname nvarchar(128), @tablename nvarchar(128), @sql NVARCHAR(MAX);

SELECT TOP 1 @indexname = name, @tablename = OBJECT_NAME(object_id)
FROM sys.indexes
WHERE is_disabled = 1 AND OBJECT_SCHEMA_NAME(object_id) = SCHEMA_NAME()

WHILE @indexname IS NOT NULL
BEGIN
    SET @sql = CONCAT('ALTER INDEX ', @indexname, ' ON ', @tablename, ' REBUILD')
    PRINT @sql
    EXEC sp_executesql @sql

    SET @indexname = NULL
    SET @tablename = NULL
    SELECT TOP 1 @indexname = name, @tablename = OBJECT_NAME(object_id)
    FROM sys.indexes
    WHERE is_disabled = 1 AND OBJECT_SCHEMA_NAME(object_id) = SCHEMA_NAME()
END
```
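As a simple check after the rebuild, the following query lists indexes in the current schema that are still disabled; it should return no rows. This check is an addition and not part of the scripts above:

```sql
-- Indexes in the current schema that are still disabled
SELECT OBJECT_NAME(object_id) AS tablename, name AS indexname
FROM sys.indexes
WHERE is_disabled = 1
  AND OBJECT_SCHEMA_NAME(object_id) = SCHEMA_NAME();
```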
Check the following document for replication with SQL Server: Guide - Replication with Microsoft SQL Server/Azure SQL Managed Instance.
The cartridge dev_query contains several tools for database-related content that help the developer.
For a comprehensive guide, see Cookbook - Developer Tools dev_query.
To check the correctness of the queries, a new test has been added.
See Recipe: Create Test Case for a Particular Query for its usage.
All queries located in the server can be tested for syntactical correctness. To do so, execute the test tests.unit.com.intershop.dev.query.tests.QuerySyntaxCheckTest, which can be found in the cartridge dev_query.
The test also scans the queries for suspicious elements that will not run on SQL Server.
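Typical Oracle-only constructs that fail on SQL Server are sketched below; which elements the test actually flags is not listed here, so these are illustrations only (sometable is a hypothetical table):

```sql
-- Oracle-specific constructs that do not run on Microsoft SQL Server
SELECT NVL(name, 'unknown') FROM sometable WHERE ROWNUM <= 10;
SELECT SYSDATE FROM dual;

-- Possible SQL Server equivalents
SELECT TOP 10 COALESCE(name, 'unknown') FROM sometable;
SELECT GETDATE();
```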
There is a prepared launch configuration to start the test. The following attributes are required:
```
// Microsoft SQL Server configuration
-DCONNECTION_URL=jdbc:sqlserver://<host>:<port>;databaseName=<database name>

// Oracle configuration
# using sid          : jdbc:oracle:thin:@<HOST>:<PORT>:<SID>
# using service name : jdbc:oracle:thin:@//<HOST>:<PORT>/<SERVICE_NAME>

-DDB_USER_NAME=<db user name>
-DDB_USER_PASSWORD=<db password>
```
Note
This test does not check if the query is semantically correct.
Several tools have been added to help the developer analyze the queries sent to the database system.
These tools are located in the cartridge dev_query.
It is possible to record all queries sent to a database system. See Recipe: Record Query Call Log Files.
The recorded queries can be replayed against a different database system to check whether the syntax and semantics remain the same, or to measure the performance of the database system. This makes it possible to check for differences between databases.