Application Resource Considerations  



Each application environment is different, and an application's structure may dictate which options are best for achieving the desired goals.

There are few absolute rules, but the following sections present some basic concepts and examples that usually lead to the best outcome. Localizing memory access (keeping references on the home RAD) is always the goal, but it is not always achievable, and that is where tradeoffs are most likely to be made.

Processes and Shared Data  

If you have hundreds, or perhaps thousands, of processes that access a single global section, then you probably want the default behavior of the operating system: the pages of the global section are distributed equally across the memory of all RADs, and the processes' home RAD assignments are distributed equally across the RADs and their CPUs. This is the distributed, or "uniform," effect, where over time all processes have similar performance potential given random accesses to the global section. None is optimal, but none is at a severe disadvantage compared with the others.

On the other hand, a small number of processes accessing a global section can be located in a single RAD, as long as that RAD's four CPUs can handle the processing load and its memory can hold the entire global section. This localizes most memory access, enhancing the performance of those specifically placed processes. The strategy can be applied more than once on the same system: locate one set of processes and their data in one RAD, and a second set of processes and their data in another RAD.
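The following C sketch illustrates that grouping only in outline. The routines map_section_in_rad and place_worker_in_rad are hypothetical placeholders introduced for this example, not OpenVMS services; they merely record the intended placement so the sketch compiles and runs anywhere, while a real application would perform the placement with the operating system's RAD-related process and global-section interfaces.

/* Sketch: co-locate each set of worker processes with the global
 * section they share.  The two helper routines are hypothetical
 * stand-ins for RAD-aware placement; they only print the intent.
 */
#include <stdio.h>

#define WORKERS_PER_SET 4

/* Hypothetical placement helpers (assumptions, not OpenVMS APIs). */
static void map_section_in_rad(const char *section, int rad)
{
    printf("map global section %-8s in RAD %d\n", section, rad);
}

static void place_worker_in_rad(const char *section, int worker, int rad)
{
    printf("create worker %d (uses %s) with home RAD %d\n",
           worker, section, rad);
}

int main(void)
{
    /* Set 1: these processes and the section they share stay in RAD 0. */
    map_section_in_rad("SET1_GBL", 0);
    for (int w = 0; w < WORKERS_PER_SET; w++)
        place_worker_in_rad("SET1_GBL", w, 0);

    /* Set 2: an independent group and its section are placed in RAD 1. */
    map_section_in_rad("SET2_GBL", 1);
    for (int w = 0; w < WORKERS_PER_SET; w++)
        place_worker_in_rad("SET2_GBL", w, 1);

    return 0;
}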

Memory  

AlphaServer GS Series systems can be configured with very large amounts of memory; take advantage of that capacity whenever possible. For example, consider duplicating code or data in multiple RADs. Doing so takes analysis, may seem wasteful of space, and requires coordination, but it can be worthwhile if it ultimately makes significantly more memory references local.
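As an illustration of that tradeoff, the sketch below keeps one copy of a read-mostly table per RAD and has each reader use the copy belonging to its own RAD. The current_home_rad routine and the fixed RAD count are assumptions made for the example, and ordinary heap allocations stand in for memory that would really come from each RAD.

/* Sketch: trade memory for locality by replicating a read-mostly
 * table once per RAD.  current_home_rad() is a hypothetical
 * placeholder for however the application learns its home RAD.
 */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define NUM_RADS   4
#define TABLE_SIZE 1024

static double *replica[NUM_RADS];   /* one copy of the table per RAD */

/* Hypothetical: report the caller's home RAD (assumption, not an API). */
static int current_home_rad(void)
{
    return 2;   /* pretend this process lives in RAD 2 */
}

static void build_replicas(const double *master)
{
    for (int rad = 0; rad < NUM_RADS; rad++) {
        /* Ideally each copy would be allocated from that RAD's memory. */
        replica[rad] = malloc(TABLE_SIZE * sizeof(double));
        if (replica[rad] == NULL)
            exit(EXIT_FAILURE);
        memcpy(replica[rad], master, TABLE_SIZE * sizeof(double));
    }
}

int main(void)
{
    double master[TABLE_SIZE];
    for (int i = 0; i < TABLE_SIZE; i++)
        master[i] = (double)i;

    build_replicas(master);

    /* Each reader uses the copy in its own RAD, so lookups stay local. */
    const double *local = replica[current_home_rad()];
    printf("local table entry 10 = %f\n", local[10]);

    for (int rad = 0; rad < NUM_RADS; rad++)
        free(replica[rad]);
    return 0;
}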

Consider using a RAM disk product. Even when NUMA remote-access costs are involved, in-memory references outperform real device I/O.

Sharing and Synchronization  

Sharing data usually requires synchronization. If the coordination mechanism is a single memory location (sometimes called a latch, a lock, or a semaphore), it can become the source of many remote accesses and, if contention is high enough, degrade performance. Distributing multiple levels of such locks throughout the data can reduce the amount of remote access, as the sketch below illustrates.
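This sketch uses POSIX threads mutexes and a simple modulo hash, both chosen only for illustration: instead of funneling every update through one global latch, it spreads updates over one lock per data partition, so contention (and the remote traffic it generates) is divided among several memory locations.

/* Sketch: one lock per data partition instead of a single global
 * latch.  The partition count and hash are illustrative choices,
 * not part of any OpenVMS interface.
 */
#include <pthread.h>
#include <stdio.h>

#define NUM_PARTITIONS 8            /* e.g., a couple of partitions per RAD */
#define NUM_RECORDS    64

static pthread_mutex_t part_lock[NUM_PARTITIONS];
static long counter[NUM_RECORDS];   /* the shared data being protected */

/* Map a record to the partition (and lock) that owns it. */
static int partition_of(int record)
{
    return record % NUM_PARTITIONS;
}

static void update_record(int record)
{
    pthread_mutex_t *lock = &part_lock[partition_of(record)];

    pthread_mutex_lock(lock);       /* contend only within this partition */
    counter[record]++;
    pthread_mutex_unlock(lock);
}

int main(void)
{
    for (int p = 0; p < NUM_PARTITIONS; p++)
        pthread_mutex_init(&part_lock[p], NULL);

    for (int r = 0; r < NUM_RECORDS; r++)
        update_record(r);

    printf("record 5 updated %ld time(s)\n", counter[5]);

    for (int p = 0; p < NUM_PARTITIONS; p++)
        pthread_mutex_destroy(&part_lock[p]);
    return 0;
}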

Use of OpenVMS Features  

Heavy use of certain base operating system features results in many remote accesses, because the data that supports those functions resides in the memory of RAD 0.

