Compression is relatively inexpensive and easy to justify for data sets accessed randomly, as is typical with VSAM: the space savings apply to every record, whether or not it is ever read, while overhead is exacted only for the relatively few records actually processed. Sequential processing, by contrast, incurs overhead for every record, because in most cases all records are read whenever the data set is processed.
Moreover, programs such as SORT, which normally can optimize I/O, are forced to use the subsystem interface so that CA Compress can compress or expand each logical record, and this adds substantially to I/O overhead.
For these reasons, not all sequential data sets should be compressed. Good candidates for compression are sequential data sets that are rarely processed: for example, multivolume tape data sets written once and seldom read, or DASD data sets created at night and seldom accessed during the day.
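The trade-off above can be made concrete with a minimal sketch. Everything here is an assumption for illustration (the helper name, the per-record costs, and the record counts are invented, not CA Compress measurements): savings accrue for every record stored, while overhead accrues only for records actually touched in a run.

```python
def compression_tradeoff(records_stored, records_read,
                         overhead_per_record_read, bytes_saved_per_record):
    """Return (bytes saved in storage, total overhead units) for one run.

    Savings apply to every stored record, whether or not it is read;
    overhead is paid only for the records actually processed.
    """
    savings = records_stored * bytes_saved_per_record
    overhead = records_read * overhead_per_record_read
    return savings, overhead

# Random (VSAM-like) access: 1,000,000 records stored, only 5,000 touched per run.
random_savings, random_overhead = compression_tradeoff(1_000_000, 5_000, 1, 200)

# Sequential processing: every record is read each time the data set is processed.
seq_savings, seq_overhead = compression_tradeoff(1_000_000, 1_000_000, 1, 200)
```

With these assumed numbers, both cases save the same storage (200,000,000 bytes), but the sequential run pays overhead on all 1,000,000 records versus 5,000 for the random run, which is why seldom-read sequential data sets are the attractive candidates.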
Copyright © 2012 CA. All rights reserved.