

Variable Log Area (LXX) Support of Larger Row Sizes

As a Database Administrator, you can use high-speed logging in CA Datacom/DB Version 15.0 to log data maintenance for tables, including tables with long rows.

Segmented Logging

All logging in CA Datacom/DB Version 15.0 can use high-speed variable logging. Tables with rows too large to fit in 32 KB of DASD previously required spanned logging, a slower LXX option. Spanned logging remains supported so that sites can migrate easily from Version 14.0 to Version 15.0 and retain the ability to fall back. However, once Version 15.0 is fully established at your site and there is no danger of needing a fallback, the best practice is to convert from spanned logging to high-speed variable logging.

Segmented log records support large log records within variable logging. Hardware limitations require tables with rows too large to fit in 32 KB of DASD to use segmented log records. Depending on its size, a record is segmented into one to three physical block records.
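As a rough illustration of the idea (not CA Datacom's actual internal format; block size, segment flags, and function names here are assumptions), a logical record larger than one physical block can be split into tagged segments and later rebuilt into a single record, as the LXX and RXX readers do:

```python
# Illustrative sketch only: split a log record into up to three
# tagged physical blocks, and rebuild the original logical record.
BLOCK_PAYLOAD = 32 * 1024  # assumed ~32 KB physical block payload limit

def segment_record(record: bytes) -> list[tuple[str, bytes]]:
    """Split a log record into 1-3 (segment_flag, payload) blocks."""
    chunks = [record[i:i + BLOCK_PAYLOAD]
              for i in range(0, max(len(record), 1), BLOCK_PAYLOAD)]
    if len(chunks) > 3:
        raise ValueError("record exceeds the three-segment maximum")
    if len(chunks) == 1:
        return [("ONLY", chunks[0])]
    flags = ["FIRST"] + ["MIDDLE"] * (len(chunks) - 2) + ["LAST"]
    return list(zip(flags, chunks))

def rebuild(segments: list[tuple[str, bytes]]) -> bytes:
    """Rebuild the single logical record before presenting it to a reader."""
    return b"".join(payload for _, payload in segments)
```

A 40,000-byte record, for example, would occupy two physical blocks (`FIRST` and `LAST`), while a 100-byte record stays in one.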

Segmented logging means the following:

LXX and RXX

The read functions of the LXX and RXX detect segmented log records and rebuild them into a single record before that record is read. Recovery is unhindered by segmented logging, because segmented records are spilled without requiring different spill functions. The method by which segmented records are written to the LXX ensures there are no errors in restart processing.

Release Upgrade and Fallback

Variable logging produces a substantial performance improvement over spanned logging. If your site has tables with large rows and converts from spanned to segmented (variable) logging, make certain that any restart processing required after a MUF termination runs with Version 15.0 code, not Version 14.0 code. If restart is not required, fallback to Version 14.0 is fully supported.

Version 14.0 has been enhanced to allow the reading of Recovery File(s) (RXX) that are built by Version 15.0.

Note: This support is required so that a Version 15.0 MUF can have its RXX read and handled by the various readers of a Version 14.0 RXX, including the DBUTLTY functions RECOVERY and READRXX. The restart process is different, however: Version 14.0 has not been enhanced to support an LXX with segmented log records.

The Force Area (FXX)

The Force Area (FXX) has been enhanced in Version 15.0 to fully support variable LXX areas and large log records. The FXX can be, and should be, a fixed-block area initialized at half track. Alternatively, the FXX can be a spanned-block area to provide compatibility with prior releases.

Index Swap

Index swap is a new Version 15.0 ability to ‘move’ an index definition and its set of values from one index area to another. For example, assume a table with five index definitions (keys), all present in the IXX index area. The Database Administrator wants to move one of the five from the IXX area to a new I01 index area to promote better performance: by lowering the index levels to save I/O and CPU time, by allowing the new index area to be in its own buffer pool, or by allowing the new index area to be covered so that only the moved index is covered, not the entire database.

The normal process to achieve these goals involves closing the database, recataloging the database to move the particular index definition from the IXX to the I01 area, and running RETIX with MINIMAL=YES, first to delete the index information from the IXX and then to scan all the data and build the index information in I01.

The outage required for the steps just described would take a significant amount of time for a large area. With the new Index Swap ability, the move becomes a small step in a process that can be performed in a 24x7 way, with zero outage or at most a very short one. First, define a new key definition for the same fields of the table, with a different key name, a different key ID, and a different index area name. Set this new key to no key usage so that it is not used by any program. When the new key is cataloged, a task starts that reads every row of the table and adds the key value to the new index. This process takes as long as needed, based on the number of records, and runs fully in parallel with all user program activity. When the new key is fully populated, the Index Swap occurs in the CXX on DASD without affecting memory or any user applications that have the table open and processing.

For sites using the MUFPLEX Shadow ability to migrate with no outage to data access, when the Shadow MUF is next part of a migration, work there uses the new key in the new index area, and the old index in the IXX is not used. Sites using the MUFPLEX Shadow ability where the enabled MUF is canceled and the Shadow takes over experience only the short interval between one MUF being enabled and the next, after which the new key is fully used in the new MUF. For other sites, the new key takes effect when the table is next closed and reopened.

Once the new key is in use by any of the methods above, the old key definition in the IXX area can be deleted in the same 24x7 way, with no outage.

The details of the feature allow you to use the 1000 SWAP transaction with the -UPD header transaction for a KEY with a KEY-TYPE attribute value of "I" (Index). Under this process, you add a KEY definition into the TABLE structure reflecting the KEY occurrence to be replaced, with the desired modifications. This KEY must be set to provide no key usage. The KEY occurrence is then applied to the CXX using the APPLYCXX function. Once the new index is built, it replaces the old index with minimal processing interruption. When the new index is in place and you are satisfied, the old index can be deleted using the APPLYCXX function.

Note: This feature is limited to KEY structures containing the same fields.

In addition to the primary purpose of moving an index definition from one index area to another, index swapping allows two other rarely changed key definition attributes that do not affect application programs to be changed in the same 24x7 way. By supporting these changes without application program changes, index swapping allows a Database Administrator to pursue potential performance-tuning benefits without risk.

The first attribute that can be different is the INCLUDE-NIL-KEY setting (YES or NO). The INCLUDE-NIL-KEY attribute defaults to YES and is not often set to NO. A nil key is one in which the entire key length of fields contains all binary zeros or all blanks. With INCLUDE-NIL-KEY set to YES, these index entries exist and are available to the RAAT (record-at-a-time) and SAAT (set-at-a-time) commands, which need them if the key is used to find all records in the table. Setting INCLUDE-NIL-KEY to NO forces those index entries not to exist, making them unavailable to the RAAT and SAAT commands. This is a performance and tuning ability. The option must be set to YES for the Master Key and the Native Sequence Key. An example of where the option is useful is a secondary key defined on fields that hold a person's previous address. If an address is known, it is present and can be used to find the row. If there is no address, the blanks (or binary zeros) in the fields have no value to be added to the index or searched for in it; saving the DASD and the time needed to add or delete the blank entry is the benefit.
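As a rough sketch of the nil-key rule just described (the exact internal check is an assumption, not the product's code), a key value is "nil" when every byte of the full key length is binary zero or every byte is blank:

```python
def is_nil_key(key: bytes) -> bool:
    """A nil key is all binary zeros or all EBCDIC blanks (0x40)
    across the entire key length. With INCLUDE-NIL-KEY NO, such
    values are simply never added to the index."""
    EBCDIC_BLANK = 0x40  # blank in EBCDIC, the encoding used on z/OS
    return len(key) > 0 and (
        all(b == 0x00 for b in key) or all(b == EBCDIC_BLANK for b in key)
    )
```

Note that a key mixing zeros and blanks is not nil under this reading: the source describes "all binary zeroes or all blanks" across the whole key length.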

The second attribute that can be different is a key set as UNIQUE. This attribute defaults to NO and can be NO or YES. If set to YES, it ensures that a row cannot be added with the same value as another row in the table.

All three attributes require changes to the index areas on DASD and are grouped in this key swap feature. The following steps are needed to complete the process:

  1. Add a new key definition to the table, similar to the key to be changed, except with a different Datacom Name (a temporary name that is not otherwise used), a different Key ID, and a different value for whichever of the three changeable attributes above is being changed. This key definition is temporary and should be set to key usage NONE.
  2. If the index area name is to be changed and the name is not currently in use by another key in the database that is initialized and loaded, run the DBUTLTY function PREINIT against the new index area so that it is initialized and ready for use.
  3. The new key definition needs to be cataloged to the CXX using DDUPDATE. This process is explained in more detail in the CA Datacom Datadictionary documentation that discusses adding or deleting index definitions in "real time," that is, in a 24x7 manner. After the catalog of the key, time must pass to allow the MUF system to build the new index definition requirements.
  4. After the new key definition is fully in place, the key swap can occur. The two key definitions to be swapped must both be in a LOADED status in the CXX. The key in use that is to be replaced is selected, and the new key that is not in use and is to be the replacement is provided with the batch transaction 1000 SWAP. From the CXX point of view, the two keys have their key IDs swapped, as well as the three options subject to the swap. The swap occurs in the CXX on DASD, not in memory. The in-memory information continues to support existing user programs that are running, which see no effect from the swap and are therefore not impacted in any way. The change takes effect for applications once the database is closed and reopened. The close can be delayed until the next natural planned outage, including migrating to a Shadow MUF.
  5. After the database containing the table whose keys were swapped has been closed in all appropriate MUF environments, the old key definition can be deleted using the "real time" key deletion feature that promotes 24x7 operations, as described in the CA Datacom Datadictionary documentation.
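The precondition in step 4 can be sketched as a simple check (field names here are illustrative assumptions, not a product API; the real validation is performed by the 1000 SWAP transaction itself):

```python
def can_swap(old_status: str, new_status: str, new_key_usage: str) -> bool:
    """Both key definitions must be LOADED in the CXX, and the new
    (replacement) key must have been defined with no key usage (step 1)."""
    return (old_status == "LOADED"
            and new_status == "LOADED"
            and new_key_usage == "NONE")
```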

For more information, see the 1000 SWAP batch transaction in the CA Datacom Datadictionary Batch Reference Guide.

From the CXX point of view, the swap is processed in the following simple parts.

Note: Neither key can be a relative record key.

Processing is as follows:

  1. The key ID is swapped.
  2. If the existing key is neither the Master Key, the Native Sequence Key, nor a Unique Key, the key include (KEY INC) value is swapped. The swapped values need not be the same.
  3. The unique value is swapped. The swapped values need not be the same.
  4. The index area name is swapped. The swapped names need not be the same.
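The four parts above can be modeled roughly as follows. This is a sketch with assumed field names, not the actual CXX record layout: the real swap happens in the CXX on DASD, and "key include" here stands in for the INCLUDE-NIL-KEY (KEY INC) value.

```python
from dataclasses import dataclass

@dataclass
class KeyDef:
    """Simplified stand-in for a CXX key definition (assumed fields)."""
    key_id: int
    include_nil_key: bool   # KEY INC: INCLUDE-NIL-KEY YES/NO
    unique: bool
    index_area: str
    is_master_native_or_unique: bool = False
    is_relative_record: bool = False

def swap_keys(old: KeyDef, new: KeyDef) -> None:
    """Swap the CXX-level attributes of two key definitions in place."""
    if old.is_relative_record or new.is_relative_record:
        raise ValueError("neither key can be a relative record key")
    old.key_id, new.key_id = new.key_id, old.key_id                  # part 1
    if not old.is_master_native_or_unique:                           # part 2
        old.include_nil_key, new.include_nil_key = (
            new.include_nil_key, old.include_nil_key)
    old.unique, new.unique = new.unique, old.unique                  # part 3
    old.index_area, new.index_area = new.index_area, old.index_area  # part 4
```

Because the swapped values need not match, the old key definition can end up carrying the new key's index area name, uniqueness, and nil-key setting, which is what allows it to be deleted cleanly afterward.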