As a Database Administrator, you can use high-speed logging in CA Datacom/DB Version 15.0 to log data maintenance for tables, including tables with long rows.
Segmented Logging
All logging in CA Datacom/DB Version 15.0 can use high-speed variable logging. Tables with rows too large to fit in 32 K of DASD previously required spanned logging, a slower LXX option. Spanned logging is still supported so that its users can migrate easily from Version 14.0 to Version 15.0 and retain the ability to fall back. However, when Version 15.0 is fully established at your site and there is no danger of needing a fallback, a best practice is to convert from spanned logging to high-speed variable logging.
Segmented log records support large log records within variable logging. Tables with rows too large to fit in 32 K of DASD are required by hardware limitations to use segmented log records. A record is segmented into one to three physical block records, depending on its size.
Segmented logging means the following:
LXX and RXX
The read functions of the LXX and RXX detect segmented log records and rebuild them into a single record before that record is presented to the reader. Recovery is unaffected by segmented logging, because segmented records are spilled with the same spill functions as other records. The method by which segmented records are written to the LXX ensures there are no errors in restart processing.
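The split-and-rebuild behavior described above can be illustrated with a short sketch. This is not CA Datacom internal code; the block size, function names, and record layout are assumptions chosen only to show the idea of writing a large record as up to three physical block records and reassembling them on read.

```python
# Illustrative sketch only (not actual CA Datacom internals): a log record
# larger than one physical block is written as up to three segments, and
# the read side rebuilds them into a single record before it is read.

BLOCK_SIZE = 32 * 1024  # hypothetical 32 K physical block limit


def segment_record(record: bytes) -> list[bytes]:
    """Split a log record into one to three physical block records."""
    segments = [record[i:i + BLOCK_SIZE]
                for i in range(0, len(record), BLOCK_SIZE)]
    if len(segments) > 3:
        raise ValueError("record exceeds the three-segment maximum")
    return segments


def reassemble(segments: list[bytes]) -> bytes:
    """Rebuild the original record before presenting it to the reader."""
    return b"".join(segments)


record = bytes(70_000)            # a row too large for one 32 K block
parts = segment_record(record)
assert len(parts) == 3            # 32 K + 32 K + remainder
assert reassemble(parts) == record
```

Because reassembly happens inside the read functions, a reader of the LXX or RXX only ever sees whole records, which is why recovery and restart are unaffected.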
Release Upgrade and Fallback
Variable logging provides a substantial performance improvement over spanned logging. If your site has tables with large rows and converts from spanned to segmented (variable) logging, make certain that any restart processing required after a MUF termination is performed with Version 15.0 code, not Version 14.0 code. If restart is not required, fallback to Version 14.0 is fully supported.
Version 14.0 has been enhanced to allow the reading of Recovery File(s) (RXX) that are built by Version 15.0.
Note: This support is required so that the RXX files built by a Version 15.0 MUF can be read and handled by the various Version 14.0 readers, including the DBUTLTY functions RECOVERY and READRXX. The restart process is different, however: Version 14.0 has not been enhanced to support an LXX with segmented log records.
The Force Area (FXX)
The Force Area (FXX) has been enhanced in Version 15.0 to fully support variable LXX areas and large log records. The FXX can be, and should be, a fixed block area initialized at half track. Alternatively, the FXX can be a spanned block area to provide compatibility with prior releases.
Index swap is a new Version 15.0 ability to ‘move’ an index definition and its set of values from one index area to another. For example, assume a table with five index definitions (keys), all present in the IXX index area. The Database Administrator wants to move one of the five from the IXX area to a new I01 index area to improve performance: by lowering the index levels to save I/O and CPU time, by allowing the new index area to be in its own buffer pool, or by allowing the new index area to be covered so that only the moved index is covered, not the entire database.
The normal process to achieve these goals involves closing the database, a new database catalog to move the particular index definition from the IXX to the I01 area, and a RETIX with MINIMAL=YES that first deletes the index information from the IXX and then scans all data to build the index information in I01.
The outage required for these steps would take a significant amount of time for a large area. The new Index Swap ability, by contrast, makes the move a small step in a process that can run in a 24x7 way with no new outage, or at most a very short one. The process is first to define a new key definition over the same fields of the table, with a different key name, a different key ID, and a different index area name. This new key is set to no key usage so it is not used by any program. When the new key is cataloged, a task is started that reads every row of the table and adds the key value to the new index. This process takes as long as needed, based on the number of records, and runs in full parallel with all user program activity. When the new key is fully populated, the Index Swap occurs in the CXX on DASD without disturbing memory or any user applications that have the table open and processing.
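The build-in-parallel-then-swap sequence can be sketched conceptually. All names here (the table rows, the index as a dictionary, the `active_index` handle) are illustrative assumptions, not Datacom structures; the point is only the ordering: populate the unused new index first, then make the swap itself a single small step.

```python
# Conceptual sketch (assumed names, not Datacom code): build a new index
# while normal activity continues, then swap it in as one small step.

table = [
    {"id": 1, "city": "NYC"},
    {"id": 2, "city": "LA"},
    {"id": 3, "city": "NYC"},
]

# Step 1: define the new index (different name and area), set to no key
# usage so no program reads it yet.
new_index: dict[str, list[int]] = {}

# Step 2: a background task reads every row and adds its key value.
# In the MUF this runs in full parallel with user program activity.
for row in table:
    new_index.setdefault(row["city"], []).append(row["id"])

# Step 3: once fully populated, the swap makes the new index the one
# programs use; the old definition can then be deleted with no outage.
active_index = new_index
assert active_index["NYC"] == [1, 3]
```

The key design point is that the long-running work (step 2) requires no outage, so the only change visible at swap time is which index definition is active.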
For sites using the MUFPLEX Shadow ability to migrate with no outage to data access, the next Shadow MUF migration works with the new key in the new index area, and the old index in the IXX is no longer used. Sites using the MUFPLEX Shadow ability where the enabled MUF is canceled and the Shadow takes over have a short interval between one MUF being enabled and the next, but the new key is fully used in the new MUF. For other sites, the new key takes effect when the table is next closed and reopened.
Once the new key is in use by any of the methods above, the old key definition in the IXX area can be deleted in the same 24x7 way, with no outage.
The details of the feature allow you to use the 1000 SWAP transaction with the -UPD header transaction for a KEY with a KEY-TYPE attribute value of "I" (Index). Under this new process, you add a KEY definition into the TABLE structure reflecting the KEY occurrence to be replaced, with the desired modifications. This KEY must be set to provide no key usage. This KEY occurrence is then applied to the CXX using the APPLYCXX function. Once the new index is built, it replaces the old index with minimal processing interruption. Once the new index is in place and you are satisfied, the old index can be deleted using the APPLYCXX function.
Note: This feature is limited to KEY structures containing the same fields.
In addition to its primary purpose of moving an index definition from one index area to another, index swapping allows two other rarely changed key definition attributes that do not affect application programs to be changed in the same 24x7 way. Because no application program changes are required, index swapping lets a Database Administrator pursue potential performance tuning benefits without risk.
The first attribute that can differ is INCLUDE-NIL-KEY, set to YES or NO. The INCLUDE-NIL-KEY attribute defaults to YES and is not often set to NO. A nil key is one whose entire key length of fields contains all binary zeros or all blanks. With INCLUDE-NIL-KEY set to YES, these index entries exist and are available to the RAAT (record-at-a-time) and SAAT (set-at-a-time) commands, which is needed if the key is used to find all records in the table. Setting INCLUDE-NIL-KEY to NO prevents those index entries from existing, so they are not available to the RAAT and SAAT commands. This is considered a performance or tuning ability. The option must be set to YES for the Master Key and the Native Sequence Key. An example of where the option is useful is a secondary key defined over fields that hold a person's previous address. If an address is known, it is present and can be used to find the row. If there is no address, the blanks (or binary zeros) in the field have no value worth adding to the index or searching for in the index, and the DASD and the time to add or delete the blank entry are saved.
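The nil-key rule can be shown with a small sketch. The function names, the 12-byte key length, and the previous-address example values are illustrative assumptions; only the rule itself (all binary zeros or all blanks, over the entire key length, skipped when INCLUDE-NIL-KEY is NO) comes from the text above.

```python
# Sketch of the INCLUDE-NIL-KEY behavior (illustrative, not internal code):
# a key value whose entire key length is all binary zeros or all blanks is
# a nil key, and with the option set to NO no index entry is created for it.

def is_nil_key(value: bytes, key_length: int) -> bool:
    """True when the entire key length is all binary zeros or all blanks."""
    return value == b"\x00" * key_length or value == b" " * key_length


def should_index(value: bytes, key_length: int, include_nil_key: bool) -> bool:
    """Decide whether an index entry is created for this key value."""
    return include_nil_key or not is_nil_key(value, key_length)


# A previous-address secondary key: blanks mean "no previous address".
assert should_index(b"123 MAIN ST ", 12, include_nil_key=False) is True
assert should_index(b" " * 12, 12, include_nil_key=False) is False
assert should_index(b"\x00" * 12, 12, include_nil_key=True) is True
```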
The second attribute that can differ is UNIQUE. This attribute defaults to NO and can be NO or YES. When set to YES, it ensures that a row cannot be added with the same key value as another row in the table.
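The UNIQUE=YES behavior amounts to a duplicate check at add time, sketched below with assumed names (a set standing in for the index):

```python
# Sketch of UNIQUE=YES (illustrative): adding a row is rejected when
# another row already carries the same key value.

unique_index: set[str] = set()


def add_row(key_value: str) -> bool:
    """Return True if the row is added; False if rejected as a duplicate."""
    if key_value in unique_index:
        return False
    unique_index.add(key_value)
    return True


assert add_row("A100") is True
assert add_row("A100") is False   # duplicate key value rejected
```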
All three attributes (the index area, INCLUDE-NIL-KEY, and UNIQUE) require changes to the index areas on DASD and are grouped in this key swap feature. The following steps are needed to complete the process:
For more information, see the 1000 SWAP batch transaction in the CA Datacom Datadictionary Batch Reference Guide.
From the CXX point of view, the swap is processed in the following simple parts.
Note: Neither key can be a relative record key.
Processing is as follows:
Copyright © 2014 CA.
All rights reserved.