An interconnect is a physical path that connects computers to other computers and to storage subsystems. OpenVMS Cluster systems support a variety of interconnects (also referred to as buses) so that members can communicate with each other, and with storage, using the most appropriate and effective method available.

The software that enables OpenVMS Cluster systems to communicate over an interconnect is the System Communications Services (SCS). An interconnect that supports node-to-node SCS communications is called a cluster interconnect. An interconnect that provides node-to-storage connectivity within a cluster is called a shared storage interconnect. Some interconnects, such as CI and DSSI, can serve as both cluster and storage interconnects. Table 1-2 identifies each supported interconnect and its type.
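Once a cluster is running, you can observe the SCS activity on its cluster interconnects. The following is a minimal sketch using the OpenVMS SHOW CLUSTER utility; CIRCUITS and CONNECTIONS are standard SHOW CLUSTER classes, but the fields displayed depend on your configuration.

```
$ SHOW CLUSTER/CONTINUOUS
Command> ADD CIRCUITS       ! SCS virtual circuits to other cluster members
Command> ADD CONNECTIONS    ! SCS connections carried over those circuits
Command> EXIT
```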
1.1 Characteristics
The interconnects described in this chapter share some general characteristics. Table 1-1 describes these characteristics.

| Characteristic | Description |
|---|---|
| Throughput | The quantity of data transferred across the interconnect. Some interconnects require more processor overhead than others; for example, Ethernet and FDDI interconnects require more processor overhead than CI or DSSI. Larger packet sizes allow higher data-transfer rates (throughput) than smaller packet sizes. |
| Cable length | Interconnects range in length from 3 m to 40 km. |
| Maximum number of nodes | The number of nodes that can connect to an interconnect varies among interconnect types. Be sure to consider this when configuring your OpenVMS Cluster system. |
| Supported systems and storage | Each OpenVMS Cluster node and storage subsystem requires an adapter to connect the internal system bus to the interconnect. First consider the storage and processor I/O performance, then the adapter performance, when choosing an interconnect type. |
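As a rough rule of thumb (an illustration only; actual throughput is lower because of protocol and processor overhead), divide a rated throughput in megabits per second by 8 to estimate its theoretical maximum in megabytes per second:

```
 100 Mb/s (FDDI)             / 8 bits per byte = 12.5 MB/s theoretical maximum
1000 Mb/s (Gigabit Ethernet) / 8 bits per byte = 125 MB/s theoretical maximum
```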
1.2 Comparison of Interconnect Types

Table 1-2 shows key statistics for a variety of interconnects.
| Interconnect | Maximum Throughput (Mb/s) | Hardware-assisted Data Link¹ | Storage Connection | Topology | Maximum Nodes per Cluster | Maximum Length |
|---|---|---|---|---|---|---|
| General purpose | | | | | | |
| ATM | 155/622 | No | MSCP served | Radial to a switch | 96² | 2 km³/300 m³ |
| Ethernet/Fast/Gigabit | 10/100/1000 | No | MSCP served | Linear or radial to a hub or switch | 96² | 100 m⁴/100 m⁴/550 m³ |
| FDDI | 100 | No | MSCP served | Dual ring to a tree, radial to a hub or switch | 96² | 40 km⁵ |
| CI | 140 | Yes | Direct and MSCP served | Radial to a hub | 32⁶ | 45 m |
| DSSI | 32 | Yes | Direct and MSCP served | Bus | 8⁷ | 6 m⁸ |
| Shared storage only | | | | | | |
| Fibre Channel | 1000 | No | Direct⁹ | Radial to a switch | 96² | 10 km¹⁰/100 km¹¹ |
| SCSI | 160 | No | Direct⁹ | Bus or radial to a hub | 8-16¹² | 25 m |
| Node-to-node (SCS traffic only) | | | | | | |
| MEMORY CHANNEL | 800 | No | MSCP served | Radial | 4 | 3 m |
¹A hardware-assisted data link reduces processor overhead.
²OpenVMS Cluster computers.
³Based on multimode fiber (MMF). Longer distances can be achieved by bridging between this interconnect and WAN inter-switch links (ISLs) using common carriers such as ATM, DS3, and so on.
⁴Based on unshielded twisted-pair (UTP) wiring. Longer distances can be achieved by bridging between this interconnect and WAN inter-switch links (ISLs) using common carriers such as ATM, DS3, and so on.
⁵Based on single-mode fiber, point-to-point link. Longer distances can be achieved by bridging between FDDI and WAN inter-switch links (ISLs) using common carriers such as ATM, DS3, and so on.
⁶Up to 16 OpenVMS Cluster computers; up to 31 HSJ controllers.
⁷Up to 4 OpenVMS Cluster computers; up to 7 storage devices.
⁸DSSI cabling lengths vary based on cabinet cables.
⁹Direct-attached SCSI and Fibre Channel storage can be MSCP served over any of the general-purpose cluster interconnects.
¹⁰Based on single-mode fiber, point-to-point link.
¹¹Support for longer distances (up to 100 km) based on inter-switch links (ISLs) using single-mode fiber. In addition, DRM configurations provide longer-distance ISLs using the Open Systems Gateway and Wave Division Multiplexors.
¹²Up to 3 OpenVMS Cluster computers; up to 4 with the DWZZH-05 and fair arbitration; up to 15 storage devices.
1.3 Multiple Interconnects

You can use multiple interconnects to achieve benefits such as failover (if one interconnect fails, OpenVMS Cluster software automatically moves cluster traffic to another) and better distribution of the cluster communication load, as the sketch below illustrates.
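On OpenVMS versions that provide the SCACP utility (introduced in the OpenVMS Version 7.3 time frame), you can list the individual LAN channels, one per pair of local and remote LAN adapters, that carry cluster traffic between nodes. This is a minimal sketch; available commands and output vary by version:

```
$ RUN SYS$SYSTEM:SCACP
SCACP> SHOW CHANNEL         ! one channel per local/remote LAN adapter pair
SCACP> EXIT
```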
1.4 Mixed Interconnects

A mixed interconnect is a combination of two or more different types of interconnects in an OpenVMS Cluster system. You can use mixed interconnects to combine the advantages of each type and to expand your OpenVMS Cluster system. For example, an Ethernet cluster that requires more storage can expand with the addition of Fibre Channel, SCSI, or CI connections.
1.5 Interconnects Supported by Alpha and VAX Systems

Table 1-3 shows the OpenVMS Cluster interconnects supported by Alpha and VAX systems. You can also refer to the most recent OpenVMS Cluster Software Product Description (SPD) for the latest information on supported interconnects.
| Systems | CI | DSSI | FDDI | Ethernet | ATM | MEMORY CHANNEL | SCSI | Fibre Channel |
|---|---|---|---|---|---|---|---|---|
| AlphaServer GS160, GS320 | X | X | X | X | X | X | X | X |
| AlphaServer GS60, GS80, GS140 | X | X | X¹ | X | X | X | X | X |
| AlphaServer ES40 | X | X | X | X | X | X | X | |
| AlphaServer DS20E, DS10L, DS10 | X | X | X | X | X | X | X | |
| AlphaStation ES40 | X | X | X | X | X | X² | X | |
| AlphaStation DS20E | X | X | X | X | X | X² | X | |
| AlphaStation DS10/XP900 | X | X | X | X | X | X² | X | |
| AlphaServer 8400, 8200 | X | X | X | X¹ | X | X | X | X |
| AlphaServer 4100, 2100, 2000 | X | X | X¹ | X¹ | X | X | X | X³ |
| AlphaServer 1000 | X | X | X | X¹ | | X | X | |
| AlphaServer 400 | | X | X | X¹ | | | X | |
| DEC 7000/10000 | X | X | X¹ | X | | | | |
| DEC 4000 | | X | X | X¹ | | | | |
| DEC 3000 | | | X¹ | X¹ | | | X | |
| DEC 2000 | | | X | X¹ | | | | |
| VAX 6000/7000/10000 | X | X | X | X | | | | |
| VAX 4000, MicroVAX 3100 | | X | X | X¹ | | | | |
| VAXstation 4000 | | X | | X¹ | | | | |
¹Able to boot over the interconnect as a satellite node.
²Support for MEMORY CHANNEL Version 2.0 hardware only.
³Supported on the AlphaServer 4100 only.
As Table 1-3 shows, OpenVMS Clusters support a wide range of interconnects. The most important factor to consider is how much I/O you need, as explained in the chapter on business and application requirements.
In most cases, the I/O requirements will be less than the capabilities of any one OpenVMS Cluster interconnect. Ensure that you have a reasonable surplus of I/O capacity, and then choose your interconnects based on other needed features.
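For example (with hypothetical numbers), suppose the members of a Fast Ethernet cluster generate a combined 5 MB/s of MSCP-served I/O:

```
 5 MB/s x 8 bits per byte = 40 Mb/s --> 40% of 100 Mb/s (reasonable surplus)
10 MB/s x 8 bits per byte = 80 Mb/s --> 80% of 100 Mb/s (little surplus; consider
                                        Gigabit Ethernet or a storage interconnect)
```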
Reference: For detailed information about the interconnects and adapters supported on each AlphaServer system, refer to the Compaq Web site:

www.compaq.com

Select Servers, then AlphaServers, then the server family (for example, DS, ES, or GS), then the server model. You can then select from all supported options.

You can also select the Alpha Systems Technology Technical Information Web page, from which you can access documentation, supported options, firmware updates, and the software patch service for each Alpha system model:

http://www5.compaq.com/alphaserver/technology/index.html