The virtual I/O
cache is a clusterwide, write-through, file-oriented disk cache
that can reduce the number of disk I/O operations and improve performance.
The purpose of the virtual I/O cache is to increase system throughput
by reducing file I/O response times with minimal overhead. The virtual
I/O cache operates transparently to system management and application
software, and maintains system reliability while significantly
improving virtual disk I/O read performance.
Understanding How the Cache Works

The virtual I/O cache
can store data files and image files. For example, ODS-2 disk file
data blocks are copied to the virtual I/O cache the first time they
are accessed. Subsequent read requests for the same data blocks are
satisfied from the virtual I/O cache (hits), eliminating the physical
disk I/O operations (misses) that would otherwise have occurred.
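The hit-and-miss behavior described above can be sketched as a minimal read cache keyed by logical block number. This is a hypothetical Python illustration only; the actual virtual I/O cache runs inside the OpenVMS executive and operates on ODS-2 file data blocks.

```python
# Hypothetical sketch of read caching by logical block number (LBN).
# First access to a block is a miss (physical disk I/O); later accesses
# to the same block are hits served entirely from the cache.

class BlockCache:
    def __init__(self):
        self.blocks = {}          # LBN -> cached data
        self.hits = 0
        self.misses = 0

    def read(self, lbn, read_from_disk):
        if lbn in self.blocks:    # hit: no physical disk I/O needed
            self.hits += 1
        else:                     # miss: go to disk, then cache the block
            self.misses += 1
            self.blocks[lbn] = read_from_disk(lbn)
        return self.blocks[lbn]

cache = BlockCache()
disk = lambda lbn: b"data-%d" % lbn   # stand-in for a physical disk read
cache.read(7, disk)                   # first access: miss, read from disk
cache.read(7, disk)                   # second access: hit, served from cache
```

After the two reads, the cache has recorded one miss and one hit; only the first read touched the (simulated) disk.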
Depending on your system work load, you should see increased
application throughput, increased interactive responsiveness, and
reduced I/O load.
Applications that issue only a single read or write request to a file
do not benefit from virtual I/O caching, because the data is never reread
from the cache. Applications that rely on implicit I/O delays might abort
or yield unpredictable results.
Several policies govern how the cache manipulates data, as follows:

Write-through -- All write I/O requests are written to the cache
as well as to the disk.

Least Recently Used (LRU) -- If the cache is full, the least
recently used data in the cache is replaced.

Cached data maintained across file close -- Data remains in the
cache after a file is closed.

Allocate on read and write requests -- Cache blocks are allocated
for read and write requests.
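The policies above can be sketched together as a small write-through cache with LRU replacement. This is a hypothetical Python illustration; the capacity limit and the backing dictionary stand in for real cache memory and physical disk blocks, and the executive's actual implementation differs.

```python
from collections import OrderedDict

# Hypothetical sketch of the write-through and LRU policies:
# every write goes to both cache and disk, blocks are allocated on
# read and write, and the least recently used block is evicted
# when the cache is full.

class WriteThroughLRUCache:
    def __init__(self, capacity, disk):
        self.capacity = capacity
        self.disk = disk                     # backing store, always updated
        self.blocks = OrderedDict()          # LBN -> data, in LRU order

    def _touch(self, lbn, data):
        self.blocks[lbn] = data
        self.blocks.move_to_end(lbn)         # mark as most recently used
        if len(self.blocks) > self.capacity:
            self.blocks.popitem(last=False)  # evict least recently used

    def write(self, lbn, data):
        self.disk[lbn] = data                # write-through: disk always written
        self._touch(lbn, data)               # allocate on write

    def read(self, lbn):
        if lbn not in self.blocks:           # miss: fetch and allocate on read
            self._touch(lbn, self.disk[lbn])
        else:                                # hit: refresh LRU position
            self.blocks.move_to_end(lbn)
        return self.blocks[lbn]

disk = {}
cache = WriteThroughLRUCache(capacity=2, disk=disk)
cache.write(1, b"a")
cache.write(2, b"b")
cache.write(3, b"c")    # cache full: block 1 (least recently used) is evicted
```

Because the cache is write-through, all three blocks reach the disk even though only two fit in the cache; an evicted block can always be refetched on a later read.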