
Shared Memory Cluster Interconnect (SMCI)  



The Shared Memory Cluster Interconnect (SMCI) is a System Communications Services (SCS) port for communications between Galaxy instances. When an OpenVMS instance is booted as both a Galaxy instance and an OpenVMS Cluster member, the SMCI driver is loaded. This SCS port driver communicates with other cluster members in the same Galaxy through shared memory, which provides one of the major performance benefits of the OpenVMS Galaxy Software Architecture: communicating with another clustered instance through shared memory is dramatically faster than using traditional cluster interconnects.

The following sections discuss drivers and devices that are used with SMCI.

SYS$PBDRIVER Port Devices  

When booting as both a Galaxy instance and a cluster member, SYS$PBDRIVER is loaded by default. Loading this driver creates a device named PBAx, where x is the Galaxy partition ID of the local instance. As other instances boot, they create their own PBAx devices. SMCI quickly identifies the other instances and creates communications channels to them. Unlike traditional cluster interconnects, SMCI creates an additional device for each remote instance it communicates with; that device is also named PBAx, where x is the Galaxy partition ID of the remote instance.

For example, consider an OpenVMS Galaxy that consists of two instances: MILKY and WAY. MILKY is instance 0 and WAY is instance 1. When node MILKY boots, it creates device PBA0. When node WAY boots, it creates PBA1. As the two nodes find each other, MILKY creates PBA1 to talk to WAY and WAY creates PBA0 to talk to MILKY.

            MILKY                 WAY
 
            PBA0:                 PBA1:
 
            PBA1:   <------->     PBA0:
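
You can check which SMCI port devices exist on a running instance with the DCL SHOW DEVICE command. The following sketch assumes the two-instance MILKY/WAY example above; the output is abbreviated and illustrative.

$ SHOW DEVICE PB

Device                  Device           Error
 Name                   Status           Count
PBA0:                   Online               0
PBA1:                   Online               0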

Multiple Clusters in a Single Galaxy  

SYS$PBDRIVER can support multiple clusters in the same Galaxy. This is done in the same way that SYS$PEDRIVER allows support for multiple clusters on the same LAN. The cluster group number and password used by SYS$PEDRIVER are also used by SYS$PBDRIVER to distinguish different clusters in the same Galaxy community. If your Galaxy instances are also clustered with other OpenVMS instances over the LAN, the cluster group number is set appropriately by CLUSTER_CONFIG. To determine the current cluster group number, enter:

$ MCR SYSMAN
SYSMAN> CONFIGURATION SHOW CLUSTER_AUTHORIZATION
Node: MILKY   Cluster group number: 0
Multicast address: xx-xx-xx-xx-xx-xx
SYSMAN>

If you are not clustering over a LAN and you want to run multiple clusters in the same Galaxy community, you must set the cluster group number yourself. Make sure that the group number and password are the same on all Galaxy instances that you want in the same cluster:

$ MCR SYSMAN
SYSMAN> CONFIGURATION SET CLUSTER_AUTHORIZATION/GROUP_NUMBER=222/PASSWORD=xxxx
SYSMAN>

If your Galaxy instances are also clustering over the LAN, CLUSTER_CONFIG asks for a cluster group number, and the Galaxy instances use that group number. If you are not clustering over a LAN, the group number defaults to zero, which means that all instances in the Galaxy are in the same cluster.

SYSGEN Parameters for SYS$PBDRIVER  

In most cases, the default settings for SYS$PBDRIVER are appropriate; however, two SYSGEN parameters are provided to control its behavior: SMCI_PORTS and SMCI_FLAGS.

SMCI_PORTS  

The SYSGEN parameter SMCI_PORTS controls the initial loading of SYS$PBDRIVER. This parameter is a bit mask in which bits 0 through 25 each represent a controller letter. If bit 0 is set, PBAx is loaded; this is the default setting. If bit 1 is set, PBBx is loaded, and so on all the way up to bit 25, which causes PBZx to be loaded. For OpenVMS Alpha Version 7.3-1 and later, HP recommends leaving this parameter at the default value of 1.
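
If you need to examine or change SMCI_PORTS, a minimal SYSGEN sketch follows; the value 1 (bit 0 only) is the default, and a value written to CURRENT takes effect at the next reboot. In practice, such settings are usually added to MODPARAMS.DAT so that AUTOGEN preserves them.

$ MCR SYSGEN
SYSGEN> USE CURRENT           ! start from the parameter file on disk
SYSGEN> SHOW SMCI_PORTS       ! display the current bit mask
SYSGEN> SET SMCI_PORTS 1      ! bit 0 only: load PBAx (the default)
SYSGEN> WRITE CURRENT         ! takes effect at the next reboot
SYSGEN> EXIT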

Loading additional ports allows for multiple paths between Galaxy instances. For OpenVMS Alpha Version 7.3-1, having multiple communications channels does not provide any advantages because SYS$PBDRIVER does not support Fast Path. A future release of OpenVMS will provide Fast Path support for SYS$PBDRIVER. When Fast Path support is enabled, instances with multiple CPUs can achieve improved throughput by having multiple communications channels between instances.

SMCI_FLAGS  

The SYSGEN parameter SMCI_FLAGS controls operational aspects of SYS$PBDRIVER. Bit 0 controls whether the port device supports SCS communications with itself. Local SCS communication is used primarily for testing, so this bit is clear by default, which saves system resources. SMCI_FLAGS is a dynamic parameter: if you set this bit on a running system, an SCS virtual circuit to the local port should form shortly afterward (a sketch of doing this with SYSGEN follows the table below).

The following table shows the values of the bits and the bit mask in the SMCI_FLAGS parameter.

Bit   Mask   Description
0     1      0 = Do not create local communications channels (SYSGEN default). Local SCS communications are primarily used in test situations and are not needed for normal operations. Leaving this bit clear saves resources and overhead.
             1 = Create local communications channels.

1     2      0 = Load SYS$PBDRIVER only if booting into both a Galaxy and a cluster (SYSGEN default).
             1 = Load SYS$PBDRIVER whenever booting into a Galaxy.

2     4      0 = Minimal console output (SYSGEN default).
             1 = Full console output; SYS$PBDRIVER displays console messages when creating and tearing down communication channels.
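
Because SMCI_FLAGS is dynamic, bit 0 can be turned on without a reboot. The following sketch uses SYSGEN's active parameter set; the value 1 sets only bit 0 and is illustrative. To keep the setting across reboots, add it to MODPARAMS.DAT and run AUTOGEN as usual.

$ MCR SYSGEN
SYSGEN> USE ACTIVE            ! work with the in-memory (active) parameter set
SYSGEN> SET SMCI_FLAGS 1      ! set bit 0: create local communications channels
SYSGEN> WRITE ACTIVE          ! dynamic parameter, so this takes effect immediately
SYSGEN> EXIT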



 