DSSI configurations use HSD intelligent controllers to connect disk drives to an OpenVMS Cluster. HSD controllers serve the same purpose with DSSI as HSJ controllers serve with CI: they enable you to configure more storage.
Alternatively, DSSI configurations use integrated storage elements
(ISEs) connected directly to the DSSI bus. Each ISE contains either a
disk and disk controller or a tape and tape controller.
1.10.7 Multiple DSSI Adapters
Multiple DSSI adapters are supported for some systems, enabling higher throughput than with a single DSSI bus.
Table 1-8 lists the limitations for multiple DSSI adapters. You can also refer to the most recent OpenVMS Cluster SPD for the latest information about DSSI adapters.
| System | Embedded | KFPSA¹ | KFQSA² | KFESA | KFESB | KFMSA³ | KFMSB³ |
|---|---|---|---|---|---|---|---|
| AlphaServer 8400 | - | 4 | - | - | - | - | 12 |
| AlphaServer 8200, 4100 | - | 4 | - | - | - | - | - |
| AlphaServer 2100 | - | 4 | - | - | - | - | - |
| AlphaServer 2000, 1000 | - | 4 | - | - | 4 | - | - |
| DEC 4000 (embedded N710) | 2 | - | - | - | - | - | - |
| DEC 7000/10000 | - | - | - | - | - | - | 12 |
| MicroVAX II, 3500, 3600, 3800, 3900 | - | - | 2 | - | - | - | - |
| MicroVAX 3300/3400 (embedded EDA640) | 1 | - | 2 | - | - | - | - |
| VAX 4000 Model 105A (embedded SHAC⁴) | 1 + 1⁴ | - | 2⁵ | - | - | - | - |
| VAX 4000 Model 200 (embedded SHAC⁴) | 1 | - | 2 | - | - | - | - |
| VAX 4000 Model 300, 400, 500, 600 | 2 | - | 2 | - | - | - | - |
| VAX 4000 Model 505A/705A (embedded SHAC³) | 2 + 2⁶ | - | 2 | - | - | - | - |
| VAX 6000 | - | - | - | - | - | 6 | - |
| VAX 7000 | - | - | - | - | - | 12 | - |
The following configuration guidelines apply to all DSSI clusters:
Reference: For more information about DSSI, see the
DSSI OpenVMS Cluster Installation and Troubleshooting Manual.
1.11 LAN Interconnects
LAN interconnects provide:

Ethernet (including Fast Ethernet and Gigabit Ethernet), FDDI, and ATM are LAN-based interconnects.

1.11.1 Ethernet (10/100) and Gigabit Ethernet Advantages

The Ethernet (10/100) and Gigabit Ethernet interconnects offer the following advantages:

1.11.2 Ethernet (10/100) and Gigabit Ethernet Throughput
The Ethernet technology offers a range of baseband transmission speeds: 10 Mb/s (Ethernet), 100 Mb/s (Fast Ethernet), and 1 Gb/s (Gigabit Ethernet).
Ethernet adapters do not provide hardware assistance, so processor overhead is higher than for CI or DSSI.
Consider the capacity of the total network design when you configure an OpenVMS Cluster system with many Ethernet-connected nodes or when the Ethernet also supports a large number of PCs or printers. General network traffic on an Ethernet can reduce the throughput available for OpenVMS Cluster communication. Fast Ethernet and Gigabit Ethernet can significantly improve throughput. Multiple Ethernet adapters can be used to improve cluster performance by offloading general network traffic.
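As a rough illustration of this kind of capacity planning, the following Python sketch compares aggregate traffic against segment capacity. The function and all traffic figures are invented for the example; they are not measured values or Compaq recommendations.

```python
# Back-of-the-envelope segment capacity check; all figures are made-up
# illustrative numbers, not measured or recommended values.

def segment_utilization(node_count, avg_cluster_mbps_per_node,
                        general_traffic_mbps, segment_capacity_mbps):
    """Fraction of an Ethernet segment consumed by cluster plus general traffic."""
    demand = node_count * avg_cluster_mbps_per_node + general_traffic_mbps
    return demand / segment_capacity_mbps

# 20 cluster nodes at ~2 Mb/s each plus 30 Mb/s of PC/printer traffic would
# saturate a 10 Mb/s Ethernet but use 70% of a 100 Mb/s Fast Ethernet segment.
print(segment_utilization(20, 2.0, 30.0, 10.0))    # 7.0  (badly oversubscribed)
print(segment_utilization(20, 2.0, 30.0, 100.0))   # 0.7
```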
Reference: For extended LAN configuration guidelines,
see <REFERENCE>(extended_guide).
1.11.3 Multiple Ethernet Load Balancing
If only Ethernet paths are available, the OpenVMS Cluster software chooses a path based on latency (computed network delay). If the delays are equal, either path can be used; otherwise, the software chooses the channel with the least latency. The network delay across each segment is recalculated approximately every 3 seconds, and traffic is then balanced across all communication paths
between local and remote adapters.
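As a conceptual illustration only (this is not the actual OpenVMS PEDRIVER code), the following Python sketch applies the selection rule described above: recompute the delay of each channel and carry traffic over the channel with the least latency, choosing arbitrarily among ties. The channel names and the `measure_delay` callback are hypothetical.

```python
# Conceptual sketch of the latency-based selection described above.
# This is not the actual OpenVMS PEDRIVER algorithm; channel names and the
# measure_delay() callback are hypothetical placeholders.

import random

def choose_channel(delays):
    """Pick a channel from {channel_name: computed_delay_seconds}.

    The channel with the least latency is used; if several channels tie
    for the lowest delay, any one of them may carry the traffic.
    """
    best = min(delays.values())
    candidates = [name for name, delay in delays.items() if delay == best]
    return random.choice(candidates)

def rebalance(measure_delay, channel_names):
    """Recompute each channel's delay (the cluster software does this
    approximately every 3 seconds) and return the channel to use next."""
    return choose_channel({name: measure_delay(name) for name in channel_names})

# Example with made-up delay figures for two local-to-remote adapter pairs:
print(choose_channel({"EWA0->remote": 0.0004, "EWB0->remote": 0.0007}))
```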
1.11.4 Supported Adapters and Buses
The following are Ethernet adapters and the internal bus that each supports:
Reference: For complete information about each adapter's features and order numbers, access the Compaq website at:

http://www.compaq.com/
Under Products, select Servers, then AlphaServers, then the Alpha system of interest. You can then obtain detailed information about all options supported on that system.
You can use transparent Ethernet-to-FDDI translating bridges to provide an interconnect between a 10-Mb/s Ethernet segment and a 100-Mb/s FDDI ring. These Ethernet-to-FDDI bridges are also called "10/100" bridges. They perform high-speed translation of network data packets between the FDDI and Ethernet frame formats.
Reference: See <REFERENCE>(cosmic_lan) for an example of these bridges.
You can use switches to isolate traffic and to aggregate bandwidth, which can result in greater throughput.
Use the following guidelines when configuring systems in a Gigabit Ethernet cluster:
1.12 FDDI Interconnect

FDDI is an ANSI standard LAN interconnect that uses fiber-optic or copper cable. FDDI augments the 10 Mb/s Ethernet by providing a high-speed interconnect for multiple Ethernet segments in a single OpenVMS Cluster system.

1.12.1 FDDI Advantages

FDDI offers the following advantages in addition to the general LAN advantages:
1.12.2 Types of FDDI Nodes
The FDDI standards define the following two types of nodes:
1.12.3 Distance

FDDI limits the total fiber path to 200 km (125 miles). The maximum distance between adjacent FDDI devices is 40 km with single-mode fiber and 2 km with multimode fiber. To control communication delay, however, it is advisable to limit the maximum distance between any two OpenVMS Cluster nodes on an FDDI ring to 40 km.
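As a rough aid to reading these figures, here is a small Python sketch that checks a proposed set of FDDI links against the fiber-path limits quoted above. The data layout and function are invented for this illustration and are not part of any OpenVMS or FDDI management tool.

```python
# Illustrative check of the FDDI distance figures quoted above; the data
# layout and function are invented for this example.

MAX_TOTAL_KM = 200.0                         # total fiber path limit
MAX_ADJACENT_KM = {"single-mode": 40.0,      # between adjacent FDDI devices
                   "multimode": 2.0}

def check_ring(links):
    """links: list of (distance_km, fiber_type) pairs for each span.
    Returns a list of limit violations (empty if the ring fits)."""
    problems = []
    total = sum(distance for distance, _ in links)
    if total > MAX_TOTAL_KM:
        problems.append(f"total fiber path {total} km exceeds {MAX_TOTAL_KM} km")
    for distance, fiber in links:
        if distance > MAX_ADJACENT_KM[fiber]:
            problems.append(f"{distance} km span exceeds the "
                            f"{MAX_ADJACENT_KM[fiber]} km limit for {fiber} fiber")
    return problems

# Example: a 45 km single-mode span violates the adjacent-device limit.
print(check_ring([(45.0, "single-mode"), (1.5, "multimode")]))
```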
1.12.4 Throughput
The maximum throughput of the FDDI interconnect (100 Mb/s) is 10 times higher than that of Ethernet.
In addition, FDDI supports transfers using large packets (up to 4468 bytes). Only FDDI nodes connected exclusively by FDDI can make use of large packets.
Because FDDI adapters do not provide processing assistance for OpenVMS
Cluster protocols, more processing power is required than for CI or
DSSI.
1.12.5 Supported Adapters and Bus Types
Following is a list of FDDI adapters and the buses they support:
Reference: For complete information about each adapter's features and order numbers, access the Compaq website at:

http://www.compaq.com/

Under Products, select Servers, then AlphaServers, then the Alpha system of interest. You can then obtain detailed information about all options supported on that system.
1.12.6 Configuration Guidelines for FDDI-Based Clusters
FDDI-based configurations use FDDI for node-to-node communication. The following general guidelines apply to FDDI configurations: