There are two separate Global Mirror products that support Disaster Recovery and Business Continuity: Global Mirror for ESS and Global Mirror for z/Series. Although these two products are implemented quite differently, they each provide similar services, namely, asynchronous mirroring over any distance and data consistency across multiple disk storage subsystems.
To ensure data consistency across multiple subsystems, there must be a single point of control that manages the remote writes. Both the methodology for providing data consistency and the point of control differ considerably between the two products. However, both solutions offer a very low Recovery Point Objective.
Global Mirror for ESS is an asynchronous implementation of PPRC and initially may look very similar to Global Copy for ESS (PPRC/XD). Data consistency is provided by additional functionality at the primary site where one of the Enterprise Storage Servers is enabled to control the process. This ESS is designated as the Master Control Server ESS and communicates to the subordinate primary storage subsystems across local fiber links.
Global Mirror builds on Global Copy and FlashCopy in order to ensure asynchronous writes with consistency to the remote storage. This type of implementation provides both high availability and automation without requiring software or application code.
Global Mirror provides data consistency by creating a consistent set of remote volumes every few seconds. In the diagram at the right, the Master Global Mirror controller communicates with the other controllers in the Global Mirror configuration and controls the creation of the consistency groups.
To ensure data consistency, Global Mirror repeatedly cycles through the following steps:
1. Host writes to all of the primary subsystems are briefly paused so that a single, dependent-write-consistent point in time exists across every subsystem.
2. All updates up to that point are drained to the remote site using Global Copy.
3. Once the drain completes, the remote Global Copy target volumes are FlashCopied to a third set of volumes, preserving the consistent image.
These steps are then repeated.
In many cases, it is possible to create consistency groups every three to five seconds, provided sufficient network bandwidth is available. In this fashion, a Recovery Point Objective of "just a few" seconds can be achieved.
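The consistency-group cycle described above can be sketched as a toy simulation. All of the class and method names below are illustrative stand-ins; the real process runs inside the Master ESS microcode, not in host software:

```python
import time

class PrimaryESS:
    """Illustrative stand-in for a primary storage subsystem."""
    def __init__(self, name):
        self.name = name
        self.paused = False
        self.drained_to = None   # checkpoint most recently drained to the remote site
        self.flashed = False     # remote targets preserved by FlashCopy this cycle

    def pause_writes(self):
        self.paused = True

    def resume_writes(self):
        self.paused = False

    def drain_to_remote(self, checkpoint):
        # Global Copy sends all tracks changed before `checkpoint`.
        self.drained_to = checkpoint

    def flashcopy_remote(self):
        self.flashed = True

class GlobalMirrorMaster:
    """Toy model of the Master ESS driving the consistency-group cycle."""
    def __init__(self, subordinates):
        self.subordinates = subordinates
        self.last_consistent = None

    def form_consistency_group(self):
        # 1) Briefly pause host writes on every primary so that a single
        #    dependent-write-consistent point exists across all subsystems.
        for ess in self.subordinates:
            ess.pause_writes()
        checkpoint = time.time()
        for ess in self.subordinates:
            ess.resume_writes()          # the pause lasts only milliseconds
        # 2) Drain all updates up to the checkpoint via Global Copy.
        for ess in self.subordinates:
            ess.drain_to_remote(checkpoint)
        # 3) FlashCopy the remote targets to the tertiary volumes,
        #    preserving the consistent image before the next cycle.
        for ess in self.subordinates:
            ess.flashcopy_remote()
        self.last_consistent = checkpoint
        return checkpoint
```

Driving `form_consistency_group` in a loop every few seconds mimics the repeating cycle; the window between `last_consistent` and the current time corresponds to the Recovery Point Objective.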
IBM Global Mirror for ESS includes several features that benefit Enterprise System environments with large files and databases spanning multiple volumes and subsystems.
Global Mirror for z/Series, previously called eXtended Remote Copy (XRC), is also an asynchronous disk mirroring methodology, but its implementation is quite different from Global Mirror for ESS. Instead of being a controller-based solution like Global Mirror for ESS, XRC utilizes application code running on a z/Series processor to perform the data replication and maintain data consistency.
The host-based application that supports and controls the replication process is called the System Data Mover (SDM). Since the SDM is a z/OS application, it provides a greater level of control and reporting capability, with fewer constraints, than the controller-based solution.
Global Mirror for z/Series is the most mature of the offerings that provide true data consistency across any distance. The product has been available for over 10 years and offers growth potential and scalability that far outshine the other disk mirroring methodologies.
A sample physical hardware configuration diagram is shown here:
This illustrates a relatively simple yet extremely resilient Global Mirror for z/Series configuration. In this example, the local FICON channels are depicted in blue, the FICON channels extended across the network in red, and the network components in red/yellow.
Note that channel extension equipment is required in this configuration. Since the SDM is a z/OS application, the only supported channel protocols are FICON and ESCON. Shown in this example are McData USD-X channel extenders. These devices convert FICON or ESCON into various network-friendly protocols, compress the data to be transmitted, and provide additional support for Global Mirror for z/Series.
Global Mirror for z/Series operates quite differently from Global Mirror for ESS. The z/Series implementation consists of two complementary asynchronous processes: the primary storage controllers capture each write, together with a common timestamp, in a cache sidefile, and the SDM continually reads these timestamped record sets, groups them into time-consistent consistency groups, journals them, and applies them to the secondary volumes.
In this fashion, the SDM ensures that no dependent writes are applied out of sequence, and the data residing on the secondary volumes always provides a time-consistent copy of the primary volumes being mirrored. The data on the secondary volumes can be used for recovery since it is time consistent; however, best practices dictate that the data still be FlashCopied prior to recovery for two very important reasons: recovery activity itself updates the volumes, which would destroy the only time-consistent copy, and preserving an untouched secondary allows the mirroring session to be restarted without a full resynchronization.
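The timestamp-based ordering the SDM relies on can be illustrated with a short sketch. The function name and the record-set representation here are illustrative assumptions, not the actual SDM interface; the key idea is that a write may only be applied once every controller has reported past its timestamp, since an earlier dependent write might otherwise still be in flight on another controller:

```python
def form_consistency_group(sidefiles):
    """Given per-controller lists of (timestamp, write) record sets,
    return the writes that can safely be applied to the secondaries,
    in timestamp order.

    A write is included only if its timestamp is at or before the
    oldest "horizon" (newest reported timestamp) across all
    controllers -- the consistency-group boundary.
    """
    horizons = [records[-1][0] for records in sidefiles if records]
    if len(horizons) < len(sidefiles):
        return []            # some controller has reported nothing yet
    cut = min(horizons)      # consistency-group boundary
    group = [r for records in sidefiles for r in records if r[0] <= cut]
    group.sort(key=lambda r: r[0])   # apply in time order
    return group

# Hypothetical example: a debit on one controller, its matching
# credit on another, and a later log write that must wait.
ctl_a = [(1, "debit $100"), (3, "log commit")]
ctl_b = [(2, "credit $100")]
print(form_consistency_group([ctl_a, ctl_b]))
# The writes at t=1 and t=2 form the group; "log commit" at t=3
# waits until ctl_b has reported past that timestamp.
```

Because the boundary is the minimum across all controllers, a dependent write can never reach the secondaries ahead of the write it depends on, which is exactly the property that keeps the secondary volumes time consistent.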
Unlike the controller-based implementation, FlashCopy commands are issued infrequently in a Global Mirror for z/Series environment. The tertiary volumes are typically refreshed from the secondary volumes only at specific points, such as prior to a Disaster Recovery exercise or when an actual recovery is initiated.
The FlashCopy from secondary to tertiary volumes is usually controlled by automation running on the SDM processors. This is frequently GDPS running under NetView, but it can be accomplished by other means as well.
Simple GDPS automation scripts are often used to give the operations staff the ability not only to control and query the active recovery environment, but also to initiate Disaster Recovery / Business Continuity exercises with a simple command. In fact, a single GDPS command script can perform all of the functions necessary to create a tertiary copy of the volumes and IPL the recovery z/OS systems!
The Recovery Point and Recovery Time Objectives vary depending upon a number of factors, such as the available bandwidth and the amount of changed data to be mirrored. This affords various tuning opportunities that allow the Enterprise to purchase and configure only the amount of bandwidth necessary to achieve its RPO. In many environments, a Recovery Time Objective of 30-60 minutes and a Recovery Point Objective of 5-30 seconds are very achievable.
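As a back-of-the-envelope illustration of the bandwidth/RPO trade-off, consider a simple steady-state model. The formula and the figures below are illustrative assumptions, not values from the source: each mirroring cycle incurs some fixed overhead, accumulates new changes while it runs, and drains them at the link rate.

```python
def steady_state_rpo(change_rate_mb_s, bandwidth_mb_s, cycle_overhead_s=2.0):
    """Rough steady-state RPO estimate (hypothetical model).

    Each cycle of length T accumulates change_rate * T of new data,
    which takes (change_rate * T) / bandwidth to drain on top of a
    fixed per-cycle overhead.  Solving
        T = overhead + (change_rate / bandwidth) * T
    gives T = overhead * bandwidth / (bandwidth - change_rate).

    Returns None if the link cannot keep up with the change rate.
    """
    if bandwidth_mb_s <= change_rate_mb_s:
        return None  # backlog grows without bound; the mirror falls behind
    return cycle_overhead_s * bandwidth_mb_s / (bandwidth_mb_s - change_rate_mb_s)

# Hypothetical example: 20 MB/s of changed data over a 100 MB/s link
# with 2 s of per-cycle overhead yields a cycle time (and hence an
# RPO) of 2.5 seconds.
print(steady_state_rpo(20, 100))
```

The model makes the tuning point concrete: as the change rate approaches the available bandwidth, the achievable RPO grows without bound, so the Enterprise can size the link for its target RPO rather than over-provisioning.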
IBM Global Mirror for z/Series offers a number of benefits to large Enterprise System environments.
This document was printed from http://recoveryspecialties.com/