Document revision date: 30 March 2001
In any OpenVMS Cluster environment, it is best to share resources as
much as possible. Resource sharing facilitates workload balancing
because work can be distributed across the cluster.
1.1 Shareable Resources
Most, but not all, resources can be shared across nodes in an OpenVMS Cluster. The following table describes resources that can be shared.
| Shareable Resources | Description |
|---|---|
| System disks | All members of the same architecture can share a single system disk, each member can have its own system disk, or members can use a combination of both methods. |
| Data disks | All members can share any data disks. For local disks, access is limited to the local node unless you explicitly set up the disks to be cluster accessible by means of the MSCP server. |
| Tape drives | All members can share tape drives. (Note that this does not imply that all members can have simultaneous access.) For local tape drives, access is limited to the local node unless you explicitly set up the tapes to be cluster accessible by means of the TMSCP server. Only DSA tapes can be served to all OpenVMS Cluster members. |
| Batch and print queues | Users can submit batch jobs to any queue in the OpenVMS Cluster, regardless of the processor on which the job will actually execute. Generic queues can balance the load among the available processors. |
| Applications | Most applications work in an OpenVMS Cluster just as they do on a single system. Application designers can also create applications that run simultaneously on multiple OpenVMS Cluster nodes, which share data in a file. |
| User authorization files | All nodes can use either a common user authorization file (UAF) for the same access on all systems or multiple UAFs to enable node-specific quotas. If a common UAF is used, all user passwords, directories, limits, quotas, and privileges are the same on all systems. |
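Because batch and print queues are shareable, work can be submitted from any member and routed by a generic queue to an available processor. A brief sketch (the queue and file names here are hypothetical, though SYS$BATCH and SYS$PRINT are conventional defaults):

$ SUBMIT/QUEUE=SYS$BATCH NIGHTLY_REPORT.COM   ! may execute on any member
$ PRINT/QUEUE=SYS$PRINT NIGHTLY_REPORT.LIS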
1.1.1 Local Resources
The following table lists resources that are accessible only to the
local node.
| Nonshareable Resources | Description |
|---|---|
| Memory | Each OpenVMS Cluster member maintains its own memory. |
| User processes | When a user process is created on an OpenVMS Cluster member, the process must complete on that computer, using local memory. |
| Printers | A printer that does not accept input through queues is used only by the OpenVMS Cluster member to which it is attached. A printer that accepts input through queues is accessible by any OpenVMS Cluster member. |
Figure 1-1 shows an example of an OpenVMS Cluster system that has both an Alpha system disk and a VAX system disk, and a dual-ported disk that is set up so the environmental files can be shared between the Alpha and VAX systems.
Figure 1-1 Resource Sharing in Mixed-Architecture Cluster System
Depending on your processing needs, you can prepare either an environment in which all environmental files are shared clusterwide or an environment in which some files are shared clusterwide while others are accessible only by certain computers.
The following table describes the characteristics of common- and multiple-environment clusters.
| Cluster Type | Characteristics | Advantages |
|---|---|---|
| Common environment | Operating environment is identical on all nodes in the OpenVMS Cluster. | Easier to manage because you use a common version of each system file. |
| Multiple environment | Operating environment can vary from node to node; an individual processor or a subset of processors can be set up to serve specialized needs. | Effective when you want to share some data among computers but you also want certain computers to serve specialized needs. |
The installation or upgrade procedure for your operating system
generates a common system disk, on which most
operating system and optional product files are stored in a common root
directory.
1.3.1 Directory Roots
The system disk directory structure is the same on both Alpha and VAX systems. Whether the system disk is for Alpha or VAX, the entire directory structure---that is, the common root plus each computer's local root---is stored on the same disk. After the installation or upgrade completes, you use the CLUSTER_CONFIG.COM command procedure described in <REFERENCE>(build_cluster) to create a local root for each new computer to use when booting into the cluster.
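As a sketch, invoking the command procedure to add a computer's local root might look like the following (the procedure resides in SYS$MANAGER; the prompts and menu choices vary with your configuration):

$ @SYS$MANAGER:CLUSTER_CONFIG.COM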
In addition to the usual system directories, each local root contains a
[SYSn.SYSCOMMON] directory that is a directory alias for
[VMS$COMMON], the cluster common root directory in which cluster common
files actually reside. When you add a computer to the cluster,
CLUSTER_CONFIG.COM defines the common root directory alias.
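Because [SYSn.SYSCOMMON] is an alias for [VMS$COMMON], a listing of either directory shows the same files. For example (the device and root names here are illustrative):

$ DIRECTORY SYS$SYSDEVICE:[SYS0.SYSCOMMON.SYSEXE]
$ DIRECTORY SYS$SYSDEVICE:[VMS$COMMON.SYSEXE]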
1.3.2 Directory Structure Example
Figure 1-2 illustrates the directory structure set up for computers JUPITR and SATURN, which are run from a common system disk. The disk's master file directory (MFD) contains the local roots (SYS0 for JUPITR, SYS1 for SATURN) and the cluster common root directory, [VMS$COMMON].
Figure 1-2 Directory Structure on a Common System Disk
The logical name SYS$SYSROOT is defined as a search list that points first to a local root (SYS$SYSDEVICE:[SYS0.SYSEXE]) and then to the common root (SYS$COMMON:[SYSEXE]). Thus, the logical names for the system directories (SYS$SYSTEM, SYS$LIBRARY, SYS$MANAGER, and so forth) point to two directories.
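You can display the search list with the SHOW LOGICAL command. On a system booted from root SYS0, the output typically resembles the following:

$ SHOW LOGICAL SYS$SYSROOT
   "SYS$SYSROOT" = "SYS$SYSDEVICE:[SYS0.]" (LNM$SYSTEM_TABLE)
        = "SYS$COMMON:"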
Figure 1-3 shows how directories on a common system disk are searched when the logical name SYS$SYSTEM is used in file specifications.
Figure 1-3 File Search Order on Common System Disk
Important: Keep this search order in mind when you manipulate system files on a common system disk. Computer-specific files must always reside and be updated in the appropriate computer's system subdirectory.
Examples
$ EDIT SYS$SPECIFIC:[SYSEXE]MODPARAMS.DAT
$ EDIT SYS$SYSTEM:MODPARAMS.DAT
$ EDIT SYS$SYSDEVICE:[SYS0.SYSEXE]MODPARAMS.DAT
$ SET DEFAULT SYS$COMMON:[SYSEXE]
$ RUN SYS$SYSTEM:AUTHORIZE
$ SET DEFAULT SYS$SYSDEVICE:[SYS0.SYSEXE]
$ RUN SYS$SYSTEM:AUTHORIZE
OpenVMS Version 7.2 offers clusterwide logical names on both OpenVMS Alpha and OpenVMS VAX. Clusterwide logical names extend the convenience and ease-of-use features of shareable logical names to OpenVMS Cluster systems.
Existing applications can take advantage of clusterwide logical names without any changes to the application code. Only a minor modification to the logical name tables referenced by the application (directly or indirectly) is required.
New logical names are local by default. Clusterwide is an attribute of a logical name table. In order for a new logical name to be clusterwide, it must be created in a clusterwide logical name table.
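The difference lies entirely in the table where the name is created. For example (the name and equivalence string here are hypothetical):

$ DEFINE/SYSTEM APP_DATA DISK1:[DATA]               ! local to this node only
$ DEFINE/TABLE=LNM$SYSCLUSTER APP_DATA DISK1:[DATA] ! visible clusterwide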
The sections that follow describe the most important features of clusterwide logical names.
To support clusterwide logical names, the operating system creates two clusterwide logical name tables and their logical names at system startup, as shown in Table 1-1. These logical name tables and logical names are in addition to the ones supplied for the process, job, group, and system logical name tables. The names of the clusterwide logical name tables are contained in the system logical name directory, LNM$SYSTEM_DIRECTORY.
| Name | Purpose |
|---|---|
| LNM$SYSCLUSTER_TABLE | The default table for clusterwide system logical names. It is empty when shipped. This table is provided for system managers who want to use clusterwide logical names to customize their environments. The names in this table are available to anyone translating a logical name using SHOW LOGICAL/SYSTEM, specifying a table name of LNM$SYSTEM, or LNM$DCL_LOGICAL (DCL's default table search list), or LNM$FILE_DEV (system and RMS default). |
| LNM$SYSCLUSTER | The logical name for LNM$SYSCLUSTER_TABLE. It is provided for convenience in referencing LNM$SYSCLUSTER_TABLE. It is consistent in format with LNM$SYSTEM_TABLE and its logical name, LNM$SYSTEM. |
| LNM$CLUSTER_TABLE | The parent table for all clusterwide logical name tables, including LNM$SYSCLUSTER_TABLE. When you create a new table using LNM$CLUSTER_TABLE as the parent table, the new table will be available clusterwide. |
| LNM$CLUSTER | The logical name for LNM$CLUSTER_TABLE. It is provided for convenience in referencing LNM$CLUSTER_TABLE. |
The definition of LNM$SYSTEM has been expanded to include LNM$SYSCLUSTER. When a system logical name is translated, the search order is LNM$SYSTEM_TABLE, then LNM$SYSCLUSTER_TABLE. Because the definitions for the system default table names, LNM$FILE_DEV and LNM$DCL_LOGICAL, include LNM$SYSTEM, translations using those default tables include definitions in LNM$SYSCLUSTER.
The current precedence order for resolving logical names is preserved. Clusterwide logical names that are translated against LNM$FILE_DEV are resolved last, after system logical names. The precedence order, from first to last, is process --> job --> group --> system --> cluster, as shown in Figure 1-4.
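Because process definitions are translated first, a process-private name shadows a clusterwide name with the same name. A sketch of this behavior (the names and equivalence strings here are hypothetical):

$ DEFINE/TABLE=LNM$SYSCLUSTER_TABLE WORK_DIR DISK1:[CLUSTERWIDE]
$ DEFINE/PROCESS WORK_DIR DISK1:[LOCAL]
$ SHOW LOGICAL WORK_DIR
   "WORK_DIR" = "DISK1:[LOCAL]" (LNM$PROCESS_TABLE)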
Figure 1-4 Translation Order Specified by LNM$FILE_DEV
You might want to create additional clusterwide logical name tables for site-specific or application-specific groups of names.
To create a clusterwide logical name table, you must have create (C) access to the parent table and write (W) access to LNM$SYSTEM_DIRECTORY, or the SYSPRV (system) privilege.
You can apply ownership and access controls to a clusterwide logical name table, just as you can to other shareable logical name tables.
You can create additional clusterwide logical name tables in the same way that you can create additional process, job, and group logical name tables---with the CREATE/NAME_TABLE command or with the $CRELNT system service. When creating a clusterwide logical name table, you must specify the /PARENT_TABLE qualifier and provide a value for the qualifier that is a clusterwide name. Any existing clusterwide table used as the parent table will make the new table clusterwide.
The following example shows how to create a clusterwide logical name table:
$ CREATE/NAME_TABLE/PARENT_TABLE=LNM$CLUSTER_TABLE -
_$ new-clusterwide-logical-name-table
Alias collisions involving clusterwide logical name tables are treated differently from alias collisions of other types of logical name tables. Table 1-2 describes the types of collisions and their outcomes.
| Collision Type | Outcome |
|---|---|
| Creating a local table with same name and access mode as an existing clusterwide table | New local table is not created. The condition value SS$_NORMAL is returned, which means that the service completed successfully but the logical name table already exists. The existing clusterwide table and its names on all nodes remain in effect. |
| Creating a clusterwide table with same name and access mode as an existing local table | New clusterwide table is created. The condition value SS$_LNMCREATED is returned, which means that the logical name table was created. The local table and its names are deleted. If the table was created with the DCL command DEFINE, the message DCL-I-TABSUPER, previous table table_name has been superseded, is displayed. If the table was created with the $CRELNT system service, $CRELNT returns the condition value SS$_SUPERSEDE. |
| Creating a clusterwide table with same name and access mode as an existing clusterwide table | New clusterwide table is not created. The condition value SS$_NORMAL is returned, which means that the service completed successfully but the logical name table already exists. The existing table and all its names remain in effect, regardless of the setting of the $CRELNT system service's CREATE-IF attribute. This prevents surprise implicit deletions of existing table names from other nodes. |
To create a clusterwide logical name, you must have write (W) access to the table in which the logical name is to be entered, or SYSNAM privilege if you are creating clusterwide logical names only in LNM$SYSCLUSTER. Unless you specify an access mode (user, supervisor, and so on), the access mode of the logical name you create defaults to the access mode from which the name was created. If you created the name with a DCL command, the access mode defaults to supervisor mode. If you created the name with a program, the access mode typically defaults to user mode.
When you create a clusterwide logical name, you must include the name of a clusterwide logical name table in the definition of the logical name. You can create clusterwide logical names by using DCL commands or with the $CRELNM system service.
The following example shows how to create a clusterwide logical name in the default clusterwide logical name table, LNM$CLUSTER_TABLE, using the DEFINE command:
$ DEFINE/TABLE=LNM$CLUSTER_TABLE logical-name equivalence-string
To create clusterwide logical names that will reside in a clusterwide logical name table you created, you define the new clusterwide logical name with the DEFINE command, specifying your new clusterwide table's name with the /TABLE qualifier, as shown in the following example:
$ DEFINE/TABLE=new-clusterwide-logical-name-table logical-name -
_$ equivalence-string
If you attempt to create a new clusterwide logical name with the same access mode and identical equivalence names and attributes as an existing clusterwide logical name, the existing name is not deleted, and no messages are sent to remote nodes. This behavior differs from similar attempts for other types of logical names, which delete the existing name and create the new one. For clusterwide logical names, this difference is a performance enhancement. The condition value SS$_NORMAL is returned. The service completed successfully, but the new logical name was not created.
When using clusterwide logical names, observe the following guidelines: