How to Implement Near Zero Downtime for Large Cloud Data Migration?

One of my clients is moving its SAP on-premise instances to the cloud. SAP offers Near Zero Downtime Technology (NZDT) to reduce the migration downtime from approximately one week to between 6 and 60 hours (at most a single weekend, from 6 pm Friday to 6 am Monday). The purpose of applying NZDT is to secure business continuity. However, it is not cheap: the bill for that one weekend can run to a million dollars. If you know how, the cost of building your own in-house NZDT component can be estimated at less than $50K.

Please be aware that NZDT is neither a new technology nor an SAP invention. Discussions of how to implement it can be found all over the internet. Our architects have gained plenty of experience in this area over the past two decades; we have designed and developed LiveSync Automation, a sophisticated non-stop 24/7 data replication tool for near real-time database synchronization.

Let’s walk through the NZDT procedure without paying a million dollars. If you would like the design and coding details of NZDT, please reach out to us directly.

For a large database instance, the entire data migration is completed in two steps to reduce the downtime:

  • First, bulk data loading. After the cut-over point, the source instance data is exported and then loaded into the target instance. This step migrates the large volume of data from source to target. The source keeps running for business needs while the bulk data is loading (a sketch of a consistent bulk export and load follows this list).
  • Second, incremental data loading. Right after the cut-over point, we start capturing data changes (change data capture, CDC) on the source. Once the bulk load into the target is done, the captured changes are loaded into the target to complete the data migration.
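
To make the first step concrete, here is a minimal sketch of a transactionally consistent bulk export and load. It assumes Oracle Data Pump is the export tool and that the schema to migrate is called APPUSER; both are illustrative assumptions, not part of the NZDT procedure itself. The key point is to record the SCN at which the snapshot is taken, because that is exactly the point from which the CDC triggers must pick up changes.

    -- On the source: record the SCN of the snapshot point. Everything committed
    -- after this SCN must be caught by the CDC triggers described below.
    SELECT current_scn FROM v$database;

    -- Bulk export as of that SCN, so the dump is consistent while the source
    -- keeps running (run from the OS shell; APPUSER and DUMP_DIR are illustrative):
    --   expdp system schemas=APPUSER flashback_scn=<scn from the query above>
    --         directory=DUMP_DIR dumpfile=appuser_bulk.dmp logfile=appuser_exp.log

    -- Bulk load into the target database:
    --   impdp system schemas=APPUSER directory=DUMP_DIR
    --         dumpfile=appuser_bulk.dmp logfile=appuser_imp.log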

NZDT is a combination of refined migration procedures and CDC technology. Let’s have a look at the procedures:

  • Pre-migration work
    • Develop Oracle stored procedures that generate per-table triggers for insert, update and delete operations, and stored procedures that enable and disable those triggers in batch (a PL/SQL sketch of a generated trigger and of the batch enable/disable procedure follows this list).
    • Run the stored procedures above to generate triggers for all data tables whose changes need to be captured (CDC), and disable the triggers immediately.
    • Set up the target database and application servers. Ensure the application servers point to the target database server, and verify that the application launches normally on the target servers.
  • Migration time
    • Once the bulk data dump file has been produced by the export procedure, enable all triggers by executing the enable-triggers stored procedure, so that changed data starts being captured in the source instance.
    • Load bulk data into the target database.
    • The source instance keeps running and receiving inserts, updates and deletes. All of these changes are captured in buffer tables by the triggers we just enabled.
    • Once the bulk data loading is done, run tests on the target. If everything works well, export the captured changes from the buffer tables; their volume is far smaller than that of the initial bulk data (see the change-replay sketch after this list).
  • Downtime and back online
    • Before the incremental data loading, the source system is shut down and the downtime starts. Quickly export the changed data from the source and import it into the target, run data verification (row counts, checksums, etc.; a simple verification sketch follows this list), do a final test run, and bring the system back online.
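
Below is a minimal sketch of what the trigger generator could emit for one table, assuming a hypothetical ORDERS table with an ORDER_ID primary key; the buffer table layout and the naming convention are illustrative only. The trigger is created disabled, matching the pre-migration step above.

    -- Buffer table that holds the changes captured for ORDERS (12c-style
    -- identity column; use a sequence on older releases).
    CREATE TABLE cdc_orders_buffer (
        change_seq   NUMBER GENERATED ALWAYS AS IDENTITY,
        change_type  VARCHAR2(1) NOT NULL,             -- 'I', 'U' or 'D'
        change_time  TIMESTAMP DEFAULT SYSTIMESTAMP,
        order_id     NUMBER,                           -- source primary key
        customer_id  NUMBER,
        amount       NUMBER
    );

    -- Row-level trigger that records every insert, update and delete.
    -- It is created DISABLED and only enabled once the bulk export has started.
    CREATE OR REPLACE TRIGGER trg_cdc_orders
        AFTER INSERT OR UPDATE OR DELETE ON orders
        FOR EACH ROW
        DISABLE
    BEGIN
        IF INSERTING THEN
            INSERT INTO cdc_orders_buffer (change_type, order_id, customer_id, amount)
            VALUES ('I', :NEW.order_id, :NEW.customer_id, :NEW.amount);
        ELSIF UPDATING THEN
            INSERT INTO cdc_orders_buffer (change_type, order_id, customer_id, amount)
            VALUES ('U', :NEW.order_id, :NEW.customer_id, :NEW.amount);
        ELSE  -- deleting
            INSERT INTO cdc_orders_buffer (change_type, order_id)
            VALUES ('D', :OLD.order_id);
        END IF;
    END;
    /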
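
The batch enable/disable procedure mentioned in the pre-migration step can be as simple as the sketch below; the TRG_CDC_ naming convention is an assumption of this sketch, not a requirement.

    CREATE OR REPLACE PROCEDURE set_cdc_triggers (p_action IN VARCHAR2) AS
    -- p_action is 'ENABLE' or 'DISABLE'; only triggers that follow the
    -- TRG_CDC_ naming convention are touched.
    BEGIN
        FOR t IN (SELECT trigger_name
                    FROM user_triggers
                   WHERE trigger_name LIKE 'TRG_CDC_%') LOOP
            EXECUTE IMMEDIATE 'ALTER TRIGGER ' || t.trigger_name || ' ' || p_action;
        END LOOP;
    END;
    /

    -- Right after the bulk export has started at the recorded snapshot point:
    -- EXEC set_cdc_triggers('ENABLE')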
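
For the incremental step, one way to replay the captured changes on the target is sketched below, assuming the buffer tables have been exported and imported into the target (or are reachable over a database link); the table and column names follow the illustrative ORDERS example above.

    -- Replay buffered changes in the order they were captured.
    BEGIN
        FOR c IN (SELECT * FROM cdc_orders_buffer ORDER BY change_seq) LOOP
            IF c.change_type = 'D' THEN
                DELETE FROM orders WHERE order_id = c.order_id;
            ELSE
                -- Inserts and updates are both handled by MERGE, so the replay
                -- is idempotent if it has to be restarted.
                MERGE INTO orders o
                USING (SELECT c.order_id    AS order_id,
                              c.customer_id AS customer_id,
                              c.amount      AS amount
                         FROM dual) s
                   ON (o.order_id = s.order_id)
                 WHEN MATCHED THEN
                      UPDATE SET o.customer_id = s.customer_id,
                                 o.amount      = s.amount
                 WHEN NOT MATCHED THEN
                      INSERT (order_id, customer_id, amount)
                      VALUES (s.order_id, s.customer_id, s.amount);
            END IF;
        END LOOP;
        COMMIT;
    END;
    /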
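
Finally, the verification at the end of the downtime window can start with simple row counts and checksums. The sketch below assumes a database link SRC_LINK back to the source instance; the link name and the ORDERS table are illustrative.

    -- Compare row counts between target and source.
    SELECT (SELECT COUNT(*) FROM orders)          AS target_rows,
           (SELECT COUNT(*) FROM orders@src_link) AS source_rows
      FROM dual;

    -- Cheap content checksum on the key column; the SUM of ORA_HASH values is
    -- order independent, so equal sums are a quick (not conclusive) match test.
    SELECT SUM(ORA_HASH(order_id)) AS target_checksum FROM orders;
    SELECT SUM(ORA_HASH(order_id)) AS source_checksum FROM orders@src_link;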

Lionsgate Software's consultant worked for Oracle Canada for many years and has over two decades of experience in database design, development and data migration. Should you have any questions about NZDT design or implementation, please feel free to contact us.