4.3.12.6 Incremental Update Considerations


Incremental update can significantly reduce time and resource
usage in the DAILY job by letting you split out a major
portion of daily database update processing into multiple,
smaller, incremental updates executed throughout the day.

o  Standard CA MICS database update processing involves the
   following steps:

   1.  Reading and processing raw input data to generate
       DETAIL and DAYS level CA MICS database files.

   2.  Summarization of DETAIL/DAYS level data to update
       week-to-date and month-to-date database files.

o  When you activate incremental update, the following
   happens:

   -  You can execute the first-stage processing (raw data
      input to create DETAIL/DAYS files) multiple times
      throughout the day, each time processing a subset of
      the total day's input data.

   -  During the final update of the day (in the DAILY job),
      the incremental DETAIL/DAYS files are "rolled up" to
      the database DETAIL and DAYS timespans, and then
      summarized to update the week-to-date and month-to-date
      files.

o  Incremental update is independent of your internal step
   restart or DBSPLIT specifications.  You have the option
   to perform incremental updates with or without internal
   step restart support.

o  Incremental update is activated and operates
   independently by product.  The incremental update job
   for one product, INCRccc (where ccc is the product ID),
   can execute concurrently with the incremental update job
   for another product in the same unit database.

o  The CA MICS database remains available for reporting and
   analysis during INCRccc job execution.
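The two-stage flow described above can be modeled in a few
lines of Python.  This is an illustrative sketch only, not CA
MICS code: the record layout, the single week-to-date summary
level, and the run times in the comments are assumptions.

```python
# Illustrative model of incremental update flow.  Each
# incremental run processes a subset of the day's raw records
# into a "to-date" DETAIL file; the final daily update rolls
# that file up into the database and summarizes it.

from collections import defaultdict

detail_to_date = []    # incremental DETAIL "to-date" file


def incremental_update(raw_subset):
    """First-stage processing: raw input -> DETAIL records."""
    detail_to_date.extend(raw_subset)


def daily_rollup(database):
    """Final update: roll up the increments, then summarize
    them into the week-to-date files."""
    database["DETAIL"].extend(detail_to_date)
    totals = defaultdict(float)
    for rec in detail_to_date:
        totals[rec["key"]] += rec["value"]
    for key, value in totals.items():
        database["WTD"][key] = database["WTD"].get(key, 0) + value
    detail_to_date.clear()


db = {"DETAIL": [], "WTD": {}}
incremental_update([{"key": "A", "value": 1.0}])  # 7:00 AM run
incremental_update([{"key": "A", "value": 2.0}])  # 6:00 PM run
daily_rollup(db)                                  # DAILY job
```

The point of the model is that each incremental run touches
only its own subset of the data; the week-to-date and
month-to-date work happens once, in the final daily update.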

*************************************************************
*                                                           *
*  Note:  CA MICS is a highly configurable system,          *
*         supporting up to 36 unit databases, each of which *
*         can be configured and updated independently.      *
*         Incremental update is just one of the options you *
*         can use to configure your CA MICS complex.        *
*                                                           *
*         All efforts should be made to employ CA MICS      *
*         configuration capabilities to minimize issues     *
*         prior to activating incremental update.  For      *
*         example:                                          *
*                                                           *
*         o  Splitting work to multiple units is an         *
*            effective way to enable parallel database      *
*            update processing                              *
*                                                           *
*         o  Adjusting account code definitions to ensure   *
*            adequate data granularity while minimizing     *
*            total database space and processing time       *
*                                                           *
*         o  Tailoring the database to drop measurements    *
*            and metrics of lesser value to your            *
*            site, thereby reducing database update         *
*            processing and resource consumption            *
*                                                           *
*         While incremental update is intended to reduce    *
*         DAILY job elapsed time, total resource usage of   *
*         the combined INCRccc and DAILY job steps can      *
*         increase due to the additional processing         *
*         required to maintain the incremental update       *
*         "to-date" files and for roll-up to the unit       *
*         database.  The increased total resource usage     *
*         will be more noticeable with small data volumes   *
*         where processing code compile time is a greater   *
*         percentage of total processing cost.              *
*                                                           *
*************************************************************

Incremental update processing reads and processes raw
measurement data to create and maintain DETAIL and DAYS level
"to-date" files for the current day.

o  These incremental update database files are maintained
   on unique OS/390 data sets, independent of the standard
   CA MICS database files, and independent of any other
   product's incremental update database files.  There is
   one data set each for DETAIL and DAYS level "to-date"
   data and a single incremental update checkpoint data set
   for this product in this unit.

o  The incremental update DETAIL and DAYS files can be
   permanent DASD data sets, or they can be allocated
   dynamically as needed and deleted after DAILY job
   processing completes.  Optionally, you can keep the
   incremental update DETAIL and DAYS files on tape, with
   the data being loaded onto temporary DASD space as
   needed for incremental update or DAILY job processing.

After activating incremental update, you will use four
incremental update facility jobs found in prefix.MICS.CNTL.
In the job names below, ccc is the product ID:

o  cccIUALC

   You execute this job to allocate and initialize the
   incremental update checkpoint file, and optionally the
   incremental update DETAIL and DAYS database files.
   cccIUALC is generally executed just ONE time.

o  cccIUGDG

   You execute this job to add generation data group (GDG)
   index definitions to your system catalog in support of
   the INCRDB TAPE option.  cccIUGDG is generally executed
   just ONE time.

o  INCRccc

   This is the job you execute for each incremental update.
   You will integrate this job into your database update
   procedures for execution one or more times per day
   to process portions of the total day's measurement data.

   Note:  The DAILY job is run once at the end of the day.
   It will perform the final incremental update for the day's
   data, and then roll up the incremental DETAIL/DAYS files
   to the database DETAIL and DAYS timespans and update the
   week-to-date and month-to-date files.

o  SPLITSMF

   You optionally execute this job to split the input data
   into separate data sets for one or more INCRccc jobs.  The
   SPLITSMF job is a standalone version of the DAILY job's
   DAYSMF step designed specifically for INCRccc processing.
   Each execution of the SPLITSMF job is normally followed by
   one or more INCRccc jobs.

   The SPLITSMF job is activated by specifying both the
   INCRUPDATE YES and INCRSPLIT USE parameters in the cccOPS
   parameter member for one or more products.  You would then
   integrate this job into your database update procedures
   for execution one or more times per day to preprocess
   portions of the total day's measurement data.  You would
   schedule INCRccc jobs to execute immediately after
   SPLITSMF completes.

See the product guides for more information on each product's
incremental update implementation.


Overhead

Incremental update is intended to reduce DAILY job resource
consumption and elapsed time by offloading a major portion of
database update processing to one or more executions of the
INCRccc job.  In meeting this objective, incremental update
adds processing in the INCRccc and DAILY jobs to accumulate
data from each incremental update execution into the
composite "to-date" DETAIL and DAYS incremental update files,
and also adds processing in the DAILY job to copy the
incremental update files to the unit database DETAIL and DAYS
timespans.  The amount of this overhead and the savings in
the DAILY job are site-dependent and will vary based on
input data volume and on the number of times INCRccc is
executed each day.

In addition, activating incremental update will cause
additional "compile" CPU time to be consumed in the product's
DAYnnn DAILY job step.  The increase in compile time is due
to the additional code included for each file structure in
support of the feature, so the increase is essentially fixed
by the number of files the product defines.  This "compile"
time increase does not imply an increase in elapsed or
execution time.  Incremental update allows I/O-bound,
intensive processing (raw data input, initial CA MICS
transformation, and so on) to be distributed outside of the
DAILY job, and I/O processing is the largest contributor to
elapsed time in large-volume applications.  Thus, the
expected overall impact is a decrease in the actual run time
of the product's DAYnnn job step.


Increased "Prime Time" Workload

By offloading work from the DAILY job to one or more INCRccc
executions throughout the day, you are potentially moving
system workload and DASD work space usage from the
"off-hours" (when the DAILY job is normally executed) to
periods of the day when your system resources are in highest
demand.  You should schedule INCRccc executions carefully to
avoid adverse impact to batch or online workloads.  For
example, if your site's "prime shift" is 8:00 AM to 5:00 PM,
you might choose to schedule incremental updates for 7:00 AM
(just before "prime shift") and 6:00 PM (just after "prime
shift"), with the DAILY job executing just after midnight.


Increased DASD Usage

The DASD space required for the incremental update DETAIL and
DAYS database files is in addition to the DASD space already
reserved for the CA MICS database.  By default, the
incremental update database files are permanently allocated,
making this DASD space unavailable for other applications.
In general, you can assume that the incremental update
database files will require space equivalent to two cycles of
this product's DETAIL and DAYS timespan files.
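As a sketch, the two-cycles rule of thumb translates into
simple arithmetic.  The cylinder figures below are
hypothetical placeholders, not measured values; substitute
your product's actual DETAIL and DAYS timespan allocations.

```python
# Rough estimate of the extra DASD needed for the incremental
# update DETAIL and DAYS files, using the "two cycles" rule
# of thumb.  Cylinder counts are hypothetical placeholders.

detail_cycle_cyls = 450   # one cycle of this product's DETAIL
days_cycle_cyls = 120     # one cycle of this product's DAYS

extra_cyls = 2 * (detail_cycle_cyls + days_cycle_cyls)
print(f"Additional DASD required: about {extra_cyls} cylinders")
```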

Alternatively, the incremental update database files can be
allocated in the first incremental update of the day and
deleted by the DAILY job (see the INCRDB DYNAM option in the
product guides).  This approach reduces the amount of time
that the DASD space is dedicated to incremental update, and
lets the amount of DASD space consumed increase through the
day as you execute each incremental update.

A third option is to store the incremental update database
files on tape (see the INCRDB TAPE option).  With this
approach, the DASD space is required just for the time that
each incremental update or DAILY job step is executing.
Note that while this alternative reduces the "permanent" DASD
space requirement, the total amount of DASD space required
while the incremental update or DAILY jobs are executing is
unchanged.  In addition, the TAPE option adds processing to
copy the incremental update files to tape, and to reload the
files from tape to disk.

Note:  The incremental update checkpoint file is always a
permanently allocated disk data set.  This is a small data
set and should not be an issue.


Operational Complexity

Incremental update adds to your measurement data management
and job scheduling considerations.  You must ensure that each
incremental update (or, optionally, the associated SPLITSMF
job) and the DAILY job process your measurement data
chronologically; that is, each job must see data that is
newer than the data processed by the prior job.  By updating
the database incrementally, you have more opportunities to
miss a log file or to process a log out of order.
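The chronological-order requirement can be pictured as a
simple guard over candidate log files.  This Python sketch is
a conceptual model only (CA MICS itself tracks processed data
through its checkpoint file); the timestamp fields and log
names are assumptions.

```python
# Sketch of a chronological-order guard for log processing.
# Each log is assumed to carry its earliest and latest record
# timestamps; a log entirely older than the checkpoint would
# be dropped as already-processed data.

def select_logs(candidate_logs, checkpoint_ts):
    """Return logs newer than the checkpoint, in time order,
    and reject any log that has fallen behind it."""
    ordered = sorted(candidate_logs, key=lambda log: log["start"])
    selected = []
    for log in ordered:
        if log["end"] <= checkpoint_ts:
            raise ValueError(f"{log['name']} is older than the "
                             "checkpoint and would be dropped")
        selected.append(log)
    return selected

logs = [{"name": "LOG2", "start": 1200, "end": 1800},
        {"name": "LOG1", "start": 600, "end": 1200}]
run_order = select_logs(logs, checkpoint_ts=500)
```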


Interval End Effects

Each incremental update processes a subset of the day's
measurement data, taking advantage of early availability of
some of the day's data, for example, when a measurement log
fills and switches to a new volume.  This can cause a problem
if the measurement log split occurs while the data source is
logging records for the end of a measurement interval, thus
splitting the data for a single measurement interval across
two log files.

o  When an incremental update processes the first log file,
   the checkpoint high end timestamp is set to indicate that
   this split measurement interval has been processed.

o  Then, when the rest of the measurement interval's data is
   encountered in a later update, it can be dropped as
   duplicate data (because data for this measurement
   interval end timestamp has already been processed).

Appropriate scheduling of log dumps and incremental updates
can avoid this problem.  For example, if you plan to run
incremental updates at 7:00 AM and 6:00 PM, you could force a
log dump in the middle of the measurement interval just prior
to the scheduled incremental update executions.  This is an
extension of the procedure you may already be using for
end-of-day measurement log processing.  The objective is to
ensure that all records for each monitor interval are
processed in the same incremental update.
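The duplicate-drop behavior can be illustrated with a
simplified checkpoint model.  This Python sketch is not CA
MICS internals; it only demonstrates why a checkpoint keyed
on the highest interval-end timestamp discards the second
half of a split interval.  The timestamps and record fields
are assumptions.

```python
# Simplified model of the interval-end problem: a checkpoint
# that records only the highest interval-end timestamp
# processed will treat late-arriving records for that same
# interval as duplicates and discard them.

def apply_update(records, checkpoint):
    """Keep records whose interval end is newer than the
    checkpoint; advance the checkpoint to the highest end."""
    kept = [r for r in records if r["interval_end"] > checkpoint]
    new_checkpoint = max((r["interval_end"] for r in kept),
                         default=checkpoint)
    return kept, new_checkpoint

# The interval ending at 10:15 is split across two log dumps.
first_log = [{"interval_end": 1015, "cpu": 4.0}]
second_log = [{"interval_end": 1015, "cpu": 6.0},  # rest of 10:15
              {"interval_end": 1030, "cpu": 5.0}]

kept1, ckpt = apply_update(first_log, checkpoint=0)
kept2, ckpt = apply_update(second_log, ckpt)
# The second half of the 10:15 interval is dropped as a
# duplicate; only the 10:30 record survives the second update.
```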


Dynamic Allocation

Incremental update facilities optionally employ dynamic
allocation for the incremental update DETAIL and DAYS
timespan data sets.  If your site restricts dynamic
allocation of large, cataloged data sets, you may need to
direct incremental update data set allocation to a generic
unit or storage class where dynamic allocation is allowed.

If your site automatically assigns batch job classes and/or
schedules work based on the amount of DASD space a job
requires, you may need to direct the CA MICS INCRccc and/or
DAILY jobs to a specific job class and/or processing system.
If you choose the incremental update dynamic allocation
options, your scheduling facilities will no longer be able to
determine DASD space requirements by examining the INCRccc or
DAILY job JCL.

Data set allocation parameters for IUDETAIL and IUDAYS are
specified by product in prefix.MICS.PARMS(cccOPS).  Permanent
changes to these parameters (for example, increasing the
space allocation for the IUDETAIL data set) require both
changing the cccOPS parameter and executing the corresponding
cccPGEN job.  However, in restart situations, you can
temporarily override data set allocation parameters for one
or more dynamically allocated data sets by using the
//PARMOVRD facility.  See Section 2.3.6 in this guide for
more information.