The sections Performance During Steady State and Performance During Copy
and Merge Operations describe the performance impact on a shadow set
during steady state and while a copy or merge operation is in progress.
In general, steady-state performance compares with that of a nonshadowed
disk. Performance is affected while a copy or merge operation is in
progress on a shadow set. In the case of copy operations, you control
when the operations are performed.
Merge operations, however, are not started by user or program actions.
They are started automatically when a system fails, or when a shadow set
on a system with outstanding application write I/O enters mount
verification and times out. In this case, the shadowing software
throttles itself dynamically to reduce its use of system resources and
its effect on user activity. Minimerge operations consume few resources
and complete quickly, with little or no effect on user activity.
The actual resources used during a copy or merge operation depend on the
access path to the member units of the shadow set, which in turn depends
on how the shadow set is configured. By far the most heavily consumed
resources during both operations are adapter and interconnect I/O
bandwidth.
You can control resource utilization by setting the SHADOW_MAX_COPY
system parameter to a value appropriate to the type of system and the
adapters on the machine. SHADOW_MAX_COPY is a dynamic system parameter
that controls the number of concurrent copy or merge threads that can be
active on a single system. If more copy threads start on a particular
system than the value of SHADOW_MAX_COPY on that system allows, only
that number of threads is allowed to proceed. The remaining copy threads
are stalled until one of the active copy threads completes.
For example, assume that the SHADOW_MAX_COPY parameter is
set to 3. If you mount four shadow sets that all need a copy operation,
only three of the copy operations can proceed; the fourth copy operation
must wait until one of the first three operations completes. Because
copy operations use I/O bandwidth, this parameter provides a way
to limit the number of concurrent copy operations and avoid saturating interconnects
or adapters in the system. The value of SHADOW_MAX_COPY can range
from 0 to 200. The default value is OpenVMS version specific.
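Because SHADOW_MAX_COPY is dynamic, you can change it on a running
system with SYSGEN. The following sketch uses the value 3 from the
example above as an illustration, not a recommended setting:

   $ RUN SYS$SYSTEM:SYSGEN
   SYSGEN> USE ACTIVE
   SYSGEN> SET SHADOW_MAX_COPY 3
   SYSGEN> WRITE ACTIVE
   SYSGEN> EXIT

A value written to the active system in this way takes effect
immediately but does not persist across a reboot.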
Chapter 3 explains how to set the SHADOW_MAX_COPY parameter.
Keep in mind that, once you arrive at a good value for the parameter on
a node, you should also record the change in that node's MODPARAMS.DAT
file so that the value is retained the next time you run AUTOGEN.
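For example, assuming 3 remains the value you settled on, the
MODPARAMS.DAT entry and a typical AUTOGEN invocation might look like the
following (the AUTOGEN phases shown are one common choice, not the only
one):

   SHADOW_MAX_COPY = 3    ! in SYS$SYSTEM:MODPARAMS.DAT

   $ @SYS$UPDATE:AUTOGEN GETDATA SETPARAMS NOFEEDBACK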
In addition to setting the SHADOW_MAX_COPY parameter, follow the general
guidelines in the list below to control resource utilization and limit
the effects on system performance when shadow sets are in transient
states.
Create or add members to shadow sets
when your system is lightly loaded.
The amount of data that a system can transfer during copy operations
varies with the type of disks, the interconnect, the controller, the
number of units in the shadow set, and the shadow set configuration on
the system. For example, approximately 5% to 15% of the Ethernet or CI
bandwidth might be consumed by each copy operation (for disks typically
configured in CI or Ethernet environments).
When you create an unassisted, three-member shadow set consisting of one
source member and two target devices, add both target devices at the
same time in a single MOUNT command rather than in two separate MOUNT
commands. Adding all members at once optimizes the copy operations by
starting a single copy thread that reads from the source member once and
writes to both target members in parallel, as in the following example.
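The virtual unit, member device names, and volume label shown here are
illustrative only:

   $ MOUNT/SYSTEM DSA4: /SHADOW=($1$DUA10:, $1$DUA11:, $1$DUA12:) SHADOWVOL

Here $1$DUA10: is the existing source member, and $1$DUA11: and
$1$DUA12: are the two target devices that receive the parallel copy.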
For satellite nodes in a mixed-interconnect or local
area OpenVMS Cluster system, set the system parameter SHADOW_MAX_COPY
to a value of 0 for nodes that do not have local disks as shadow
set members.
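On such a satellite node, for example, a MODPARAMS.DAT entry like the
following keeps copy threads from running there:

   SHADOW_MAX_COPY = 0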
Do not use the MOUNT/CLUSTER command to mount every shadow set across
the cluster unless all nodes must access the set. Instead, use the
MOUNT/SYSTEM command to mount a shadow set on only those nodes that need
to access it. Because a shadow set enters a merge state only when a node
that has the set mounted fails, limiting the number of nodes that mount
a shadow set, particularly nodes that have no need to access it, reduces
the chances of a merge.
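For example, on each node that actually needs the volume, a systemwide
mount such as the following (device names and volume label are
illustrative) avoids a clusterwide mount:

   $ MOUNT/SYSTEM DSA7: /SHADOW=($1$DUA20:, $1$DUA21:) DATAVOL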
Because a copy operation can occur only on nodes that have the shadow
set mounted, create and mount shadow sets on the nodes that are local to
(have direct access to) the shadow set members. The copy threads then
run on those nodes, so copy operations complete faster and consume fewer
resources.
If you have shadow sets configured across nodes
that are accessed through the MSCP server, you might need to increase
the value of the MSCP_BUFFER system parameter in order to avoid
fragmentation of application I/O. Be aware that each shadow
set copy or merge operation normally consumes 127 buffers.
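For example, you might raise MSCP_BUFFER through MODPARAMS.DAT on the
serving node and then run AUTOGEN; the increment shown is only a
placeholder to be sized against your own I/O load:

   ADD_MSCP_BUFFER = 1024    ! illustrative increment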
Dual-pathed and dual-ported shadowed disks in an OpenVMS Cluster system
can provide additional coverage against the failure of nodes that are
directly connected to the shadowed disks. This type of configuration
provides higher data availability with reasonable performance
characteristics.
Use the preferred path option to ensure that dual-ported drives are
accessed through the same controller so that the shadowing software can
perform assisted copy operations.