VTL – Virtual Tape Library
VTS – Virtual Tape Server
IBM's “Peer-to-Peer” (PtP) VTS, sometimes called “tape mirroring”, is a robust option for using VTLs to support your Disaster Recovery and Business Continuity plans. In a PtP implementation, the data is written directly to a local IBM VTS and then automatically mirrored to the remote VTS. This implementation tends to eliminate the distance penalty, increases the resilience of day-to-day operations, and improves recoverability.
The mirroring of data from the local VTS to the remote VTS is performed by code running within the Virtual Tape Servers themselves. Two copy modes are supported:
The first mode, Immediate Copy, is generally used for the data sets that are most critical to recovery. In this mode, the production job creating the data does not end until the data has been received by the remote VTS. Although this increases the job's run time somewhat, it is the means by which you are guaranteed that the data has reached the remote VTS before processing continues.
The second mode is Deferred Copy. Deferred is generally specified for the majority of the tape data because it avoids the processing delay incurred by Immediate Copy. Deferred allows the data to be transmitted to the remote VTS as resources and bandwidth become available, without impact to the production jobs creating the data.
The amount of delay experienced for deferred data is a function of the network bandwidth available and the amount of data queued for transmission.
Each IT installation can determine how much delay is appropriate for its environment and configure just the right amount of network bandwidth to support that requirement. Some IT environments may choose to implement sufficient network capacity that, even during peak processing periods, all tape data (both immediate and deferred) is received by the remote VTS within 10-15 minutes. Others might choose to implement less bandwidth and accept a somewhat greater transmission delay.
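As a rough sketch of the bandwidth-sizing trade-off described above, the catch-up delay for a deferred backlog can be estimated from the backlog size and the usable link capacity. The link speed, competing traffic, and compression ratio below are illustrative assumptions, not figures from the text.

```python
def deferred_delay_minutes(backlog_gb, link_mbps, other_traffic_mbps=0.0,
                           compression_ratio=1.0):
    """Estimate how long a deferred-copy backlog takes to drain.

    backlog_gb         -- uncompressed tape data queued for the remote VTS
    link_mbps          -- raw link speed in megabits per second
    other_traffic_mbps -- bandwidth consumed by competing traffic
    compression_ratio  -- e.g. 2.0 means data shrinks to half before sending
    """
    usable_mbps = link_mbps - other_traffic_mbps
    if usable_mbps <= 0:
        raise ValueError("no bandwidth available for deferred copies")
    megabits_to_send = backlog_gb * 8 * 1000 / compression_ratio  # GB -> Mb
    return megabits_to_send / usable_mbps / 60

# Assumed scenario: 50 GB of deferred data, a 622 Mb/s (OC-12) link
# carrying 100 Mb/s of other traffic, and 2:1 compression.
print(round(deferred_delay_minutes(50, 622, 100, 2.0), 1))  # -> 6.4
```

Running such a model against the installation's peak-hour backlog is one simple way to decide whether the configured bandwidth meets the 10-15 minute target mentioned above.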
The point is that a PtP tape solution provides configuration options that allow a cost-versus-benefit analysis to be performed when deciding the appropriate amount of bandwidth to implement in support of the solution.
However, before sizing the bandwidth, it is necessary to understand how the PtP copy operation differs from remote tape vaulting. Instead of sending each write across the network individually, the local VTS buffers the data until end-of-volume has been reached and then transmits the entire logical volume to the remote VTS as efficiently as possible. The data is compressed before transmission, and the I/O stream is built in such a way as to minimize network delay. In this way, the processing delays for jobs creating tape using Immediate Copy are minimized while bandwidth utilization is maximized.
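The benefit of buffering to end-of-volume can be sketched with a toy cost model. This is not the actual VTS transfer protocol; the write size, round-trip time, link speed, and compression ratio are all illustrative assumptions chosen to show why one compressed whole-volume transfer beats many individual synchronous writes.

```python
def per_write_seconds(volume_gb, write_kb, rtt_ms, link_mbps):
    """Toy model of remote-vaulting style I/O: every write waits one
    network round trip before the next can start."""
    n_writes = volume_gb * 1024 * 1024 / write_kb          # number of writes
    transfer = volume_gb * 8 * 1000 / link_mbps            # raw transfer time
    return transfer + n_writes * rtt_ms / 1000             # plus per-write RTTs

def whole_volume_seconds(volume_gb, link_mbps, compression_ratio, rtt_ms):
    """Toy model of the PtP approach: buffer to end-of-volume, compress,
    then stream the volume in a single exchange."""
    transfer = volume_gb * 8 * 1000 / compression_ratio / link_mbps
    return transfer + rtt_ms / 1000                        # one round trip

# Assumed scenario: a 4 GB logical volume, 32 KB writes, 10 ms RTT,
# a 622 Mb/s (OC-12) link, and 2:1 compression.
print(round(per_write_seconds(4, 32, 10, 622)))      # dominated by latency
print(round(whole_volume_seconds(4, 622, 2.0, 10)))  # dominated by bandwidth
```

Under these assumptions the per-write model spends over twenty minutes mostly waiting on round trips, while the batched, compressed transfer finishes in well under a minute, which is the intuition behind the design described above.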
To illustrate the economies delivered by this type of operation, let us again look at the 20GB image copy that was discussed in Remote Tape Vaulting. In that example, a DB2 image copy job wrote slightly over 20GB of data into a remote vault in about 15 minutes. Of that 15-minute total, approximately 5 minutes was attributable to writing the data at network speed instead of local channel speed.
If this same file is written into a local VTS (the local peer of a PtP VTS environment) using Deferred Copy mode, the job completes in under 10 minutes because the writes occur at local channel speed.
But if the output file is deemed critical and identified for Immediate Copy, then the execution time of the job creating the file will be increased by some amount. The size of the increase is a function of the speed and availability of network resources and the amount of data to be transmitted. Even so, the additional time will be less than if the file were written into a Remote Tape Vaulting environment, because of the processing improvements inherent in the PtP solution.
The remote tape vaulting discussion assumed that the network component consisted of an OC-12 link and that there were no network or processing constraints. Using those same assumptions, only about one minute is added to the execution time when writing into a PtP environment.
This example is shown here:
Note that the copy operation for each logical volume begins as soon as the creating task has completed writing to that volume. This allows the copy operation for the first two volumes to begin while the job is writing the subsequent volume. In the example shown above, the job only has to wait while the last logical volume is copied.
Since Immediate Copy is specified, the creating job waits one additional minute – while the final copy operation completes – before ending. (This again assumes that the environment is not constrained and that sufficient bandwidth is available.)
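The overlapped timing described above can be sketched as a small pipeline model: each volume's copy starts once that volume is written, the copies share one link and run in turn, and the job ends when the last copy completes. The five-volume split and the per-volume write and copy times below are illustrative assumptions, not figures from the text; they are chosen so the totals line up with the example's 10-minute write time and roughly one minute of added delay.

```python
def job_elapsed_minutes(volume_write_min, volume_copy_min):
    """Pipelined Immediate Copy model.

    volume_write_min -- minutes to write each logical volume locally
    volume_copy_min  -- minutes to copy each volume to the remote VTS
    A volume's copy starts as soon as it is written and the link is free;
    the job ends when the final copy completes.
    """
    write_done = 0.0   # when the job finishes writing each volume
    copy_done = 0.0    # when the link finishes copying each volume
    for w, c in zip(volume_write_min, volume_copy_min):
        write_done += w                            # writes run back to back
        copy_done = max(copy_done, write_done) + c # copy waits for volume+link
    return copy_done

# Assumed scenario: five logical volumes, 2 min to write and 1 min to
# copy each, so copying keeps pace with writing.
writes = [2.0] * 5
copies = [1.0] * 5
print(job_elapsed_minutes(writes, copies))  # -> 11.0
```

With these numbers the writes alone take 10 minutes, and because every earlier copy finished while later volumes were still being written, Immediate Copy adds only the final volume's one-minute copy, matching the behavior described above.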
This illustrates one of the important design points of PtP mirroring: the job does not wait for each copy operation to complete; instead, as much processing as possible is overlapped so that the increase in run time is minimized.
The issue of immediate vs. deferred copy is often an emotional point with application support personnel. There certainly are some instances where Immediate Copy is appropriate and absolutely required, but Deferred Copy mode works well in many installations.
This document was printed from http://recoveryspecialties.com/