

Local Shared Resources (LSR)

With LSR buffering, several aspects of VSAM processing differ from NSR buffering. LSR buffers are tuned for random access and can dramatically reduce I/Os for random access applications.

As you have seen, NSR buffering can reduce I/Os for sequential processing by transferring multiple CIs with a single I/O operation, effectively pre-staging data for the sequential process. Because a sequential process predictably needs the read-ahead data, performance improves.
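The arithmetic behind read-ahead can be sketched in a few lines. This is an illustration of the principle, not VSAM itself; the record and CI counts below are hypothetical.

```python
import math

def sequential_ios(records, recs_per_ci, cis_per_io):
    """I/O operations needed to scan `records` sequentially when each
    I/O transfers `cis_per_io` control intervals (read-ahead)."""
    total_cis = math.ceil(records / recs_per_ci)
    return math.ceil(total_cis / cis_per_io)

# Hypothetical file: 10,000 records, 10 records per CI.
print(sequential_ios(10_000, 10, 1))   # no read-ahead: 1000 I/Os
print(sequential_ios(10_000, 10, 5))   # 5 CIs per I/O: 200 I/Os
```

Transferring five CIs per I/O cuts the sequential I/O count fivefold in this sketch, which is the effect NSR read-ahead exploits.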

There is no pre-staging of data when LSR buffers are used. Therefore, applications that perform significant amounts of sequential processing should avoid using LSR buffers. Since there is no read-ahead with LSR, sequential applications generally require more I/Os with LSR than with NSR buffers.

For random applications, LSR can dramatically improve performance over NSR buffering. With NSR, you can expect to perform at least 1 index I/O (for the sequence set) and 1 data I/O for each random access request. With LSR, you can effectively avoid all index I/O after the first I/O that makes each index CI resident, an effective I/O reduction of 50 percent.
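That 50 percent figure can be checked with a small simulation. This is a minimal sketch, not VSAM: it assumes each random request needs one sequence-set index CI plus one data CI, and that the mapping of keys to index CIs (here, a simple division) is hypothetical.

```python
def nsr_ios(requests):
    """NSR-style: one index I/O plus one data I/O for every random request."""
    return 2 * len(requests)

def lsr_ios(requests, index_ci_of):
    """LSR-style: an index CI is read once, then stays resident in the
    buffer pool; later requests touching it need only the data I/O."""
    resident = set()
    ios = 0
    for key in requests:
        ci = index_ci_of(key)
        if ci not in resident:   # first touch: one index I/O to make it resident
            resident.add(ci)
            ios += 1
        ios += 1                 # data I/O (data CIs assumed not re-referenced)
    return ios

reqs = [5, 1005, 5, 2005]                # hypothetical random key requests
ci_of = lambda key: key // 1000          # assumed: 1000 keys per index CI
print(nsr_ios(reqs))                     # 8 I/Os: 2 per request
print(lsr_ios(reqs, ci_of))              # 7 I/Os: CI 0 is read only once
```

As the request stream grows while the set of sequence-set CIs stays fixed, the LSR count approaches one I/O per request, which is where the 50 percent reduction comes from.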

Additionally, LSR lets you defer writes for CIs that are updated multiple times within a short period. With NSR, each CI is written back to disk every time data in the CI is altered, so the in-storage copy of a single CI can be written to disk several times when a transaction changes fields in several records that reside in that CI.
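The saving from deferred writes can be sketched the same way. Again this is an illustrative model, not VSAM: the CI numbers are hypothetical, and the flush is reduced to counting distinct dirty CIs.

```python
def write_through(updates):
    """NSR-style: every update writes its CI back to disk immediately."""
    return len(updates)            # one disk write per update

def deferred(updates):
    """LSR-style deferred write: each dirty CI is written once, at flush."""
    return len(set(updates))       # one write per distinct dirty CI

txn = [42, 42, 42, 7]              # a transaction: 3 updates to CI 42, 1 to CI 7
print(write_through(txn))          # 4 disk writes
print(deferred(txn))               # 2 disk writes at flush time
```

The more often a transaction revisits the same CI before the flush, the larger the write saving.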

The LSR buffers act as a main storage cache of both index and data records. Records that are referenced frequently tend to remain in storage, and the application avoids an I/O for each record found in this cache.

To further reduce I/O, relatively large quantities of LSR data can be staged in a hiperspace. Data in a hiperspace can be accessed synchronously, without paging, I/O, or interrupt overhead.