

4.3.12.6 Incremental Update Considerations

 Incremental update can significantly reduce time and resource
 usage in the DAILY job by letting you split out a major
 portion of daily database update processing into multiple,
 smaller, incremental updates executed throughout the day.
 
 o  Standard CA MICS database update processing involves the
    following steps:
 
    1.  Reading and processing raw measurement data to
        generate DETAIL and DAYS level CA MICS database files.
 
    2.  Summarization of DETAIL/DAYS level data to update
        week-to-date and month-to-date database files.
 
 o  When you activate incremental update, the following
    happens:
 
    -  The incremental update job executes one or more times
       throughout the day, each time processing a subset of
       the total day's measurement data to create DETAIL/DAYS
       level files in an incremental update database.
 
    -  During the final update of the day (in the DAILY job),
       the incremental DETAIL/DAYS files are "rolled-up" to
       the database DETAIL and DAYS timespans, and then
       summarized to update the week-to-date and month-to-date
       files.
 
 o  Incremental update is independent of your internal step
    restart or DBSPLIT specifications.  You have the option
    to perform incremental updates with or without internal
    step restart support.
 
 o  Incremental update is activated and operates independently
    by component. The incremental update job for one
    component, INCRccc (where ccc is the component ID), can
    execute concurrently with the incremental update job for
    another component in the same unit database.
 
 o  The CA MICS database remains available for reporting and
    analysis during INCRccc job execution.
 
 o  Incremental update is not available for the SPECIAL
    database processing described in appendix A.
 *************************************************************
 *                                                           *
 *  Note:  CA MICS is a highly configurable system,          *
 *         supporting up to 36 unit databases, each of which *
 *         can be configured and updated independently.      *
 *         Incremental update is just one of the options you *
 *         can use to configure your CA MICS complex.        *
 *                                                           *
 *         All efforts should be made to employ CA MICS      *
 *         configuration capabilities to minimize issues     *
 *         prior to activating incremental update.  For      *
 *         example:                                          *
 *                                                           *
 *         o  Splitting work to multiple units is an         *
 *            effective way to enable parallel database      *
 *            update processing                              *
 *                                                           *
 *         o  Adjusting account code definitions to ensure   *
 *            adequate data granularity while minimizing     *
 *            total database space and processing time       *
 *                                                           *
 *         o  Tailoring the database to drop measurements    *
 *            and metrics of lesser value to your            *
 *            site, thereby reducing database update         *
 *            processing and resource consumption            *
 *                                                           *
 *         While incremental update is intended to reduce    *
 *         DAILY job elapsed time, total resource usage of   *
 *         the combined INCRccc and DAILY job steps can      *
 *         increase due to the additional processing         *
 *         required to maintain the incremental update       *
 *         "to-date" files and for roll-up to the unit       *
 *         database.  The increased total resource usage     *
 *         will be more noticeable with small data volumes   *
 *         where processing code compile time is a greater   *
 *         percentage of total processing cost.              *
 *                                                           *
 *************************************************************
 
 Incremental update processing reads and processes raw
 measurement data to create and maintain DETAIL and DAYS level
 "to-date" files for the current day.
 
 o  These incremental update database files are maintained
    on unique OS/390 data sets, independent of the standard
    CA MICS database files, and independent of any other
    component's incremental update database files. There is
    one data set each for DETAIL and DAYS level "to-date" data
    and a single incremental update checkpoint data set for
    this component in this unit.

 o  The incremental update DETAIL and DAYS files can be
    permanent DASD data sets, or they can be allocated
    dynamically as needed and deleted after DAILY job
    processing completes.  Optionally, you can keep the
    incremental update DETAIL and DAYS files on tape, with
    the data being loaded onto temporary DASD space as
    needed for incremental update or DAILY job processing.
 
 After activating incremental update, you will use four
 incremental update facility jobs found in prefix.MICS.CNTL,
 where ccc is the component ID:
 
 o  cccIUALC
 
    You execute this job to allocate and initialize the
    incremental update checkpoint file, and optionally the
    incremental update DETAIL and DAYS database files.
    cccIUALC is generally executed just ONE time.
 
 o  cccIUGDG
 
    You execute this job to add generation data group (GDG)
    index definitions to your system catalog in support of
    the INCRDB TAPE option.  cccIUGDG is generally executed
    just ONE time.
 
 o  INCRccc
 
    This is the job you execute for each incremental update.
    You will integrate this job into your database update
    procedures for execution one or more times per day
    to process portions of the total day's measurement data.
 
    Note:  The DAILY job is run once at the end of the day.
    It will perform the final incremental update for the day's
    data, and then roll up the incremental DETAIL/DAYS files
    to the database DETAIL and DAYS timespans and update the
    week-to-date and month-to-date files.
 
 o  SPLITSMF
 
    You optionally execute this job to split the measurement
    data into separate data sets for one or more INCRccc jobs.
    The SPLITSMF job is a standalone version of the DAILY
    job's DAYSMF step designed specifically for INCRccc
    processing.  Each execution of the SPLITSMF job is
    normally followed by one or more INCRccc jobs.
 
    The SPLITSMF job is activated by specifying both the
    INCRUPDATE YES and INCRSPLIT USE parameters in the cccOPS
    parameter member for one or more components. You would then
    integrate this job into your database update procedures
    for execution one or more times per day to preprocess
    portions of the total day's measurement data.  You would
    schedule INCRccc jobs to execute immediately after
    SPLITSMF completes.
 
 See the component guides for more information on each
 component's incremental update implementation.
 
 
 Operational Complexities
 
 Since one DAILY job is being replaced by one or more INCRccc
 jobs, followed by a final DAILY update, the operational
 complexity of daily CA MICS processing increases.
 Consequently, incremental updating requires considerable
 up-front planning and strategy, discussions with systems
 programming staff, and possibly testing.
 
 Here are several items to consider:
 
 o  Incremental update expands the scope of measurement data
    management and job scheduling.  You must ensure that each
    incremental update (or the optional SPLITSMF job) and
    finally the DAILY job process your measurement data
    chronologically, that is, each subsequent job must process
    data that is newer than the data processed by the prior
    job.  Because incremental update increases the number of
    operational jobs that execute, there are more
    opportunities to miss a dumped data set, or to process a
    dumped data set out of order.
 
 o  For SMF-processing MICS components, you must manage the
    timing of SMF dumps.  Periodic and likely more frequent
    dumping of the SMF data sets is needed to coincide with
    the times you want incrementals and finally the DAILY to
    run.  This could involve time-released SWITCH SMF commands
    followed by execution of one of the SMF dump programs
    IFASMFDP or IFASMFDL.
 
    Similar data preparation actions are required for non-SMF
    processing MICS components.
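
     As an illustration only (the data set names, space
     values, and dump options shown here are assumptions for
     this sketch, not CA MICS requirements), a time-initiated
     SMF dump step might look like this:

```jcl
//* After a 'SWITCH SMF' console command closes the active
//* SMF data set, dump and clear it with IFASMFDP:
//SMFDUMP  EXEC PGM=IFASMFDP
//DUMPIN   DD DISP=SHR,DSN=SYS1.MAN1
//DUMPOUT  DD DSN=SYS1.SMF.TODAY.DUMP(+1),
//            DISP=(NEW,CATLG,DELETE),UNIT=SYSDA,
//            SPACE=(CYL,(100,100),RLSE)
//SYSPRINT DD SYSOUT=*
//SYSIN    DD *
  INDD(DUMPIN,OPTIONS(ALL))
  OUTDD(DUMPOUT,TYPE(000:255))
/*
```

     Automation would submit this job (or a variant using
     IFASMFDL for SMF logstreams) shortly before each
     scheduled incremental update.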
 
 o  Many more "dumped" data sets must be managed.  In a
    multiple LPAR configuration, some data sets may come from
    LPARs that do not run the CA MICS jobs, and those data
    sets must be made available to the LPAR where the jobs
    execute.
 
 o  If generation data group (GDG) data sets are used to hold
    the raw measurement data, you may need to increase the GDG
    limit to account for multiple data sets being created each
    day.
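
     As a sketch (the GDG base name and limit value are
     illustrative assumptions, not CA MICS requirements), the
     generation limit is established when the GDG base is
     defined with IDCAMS:

```jcl
//DEFGDG   EXEC PGM=IDCAMS
//SYSPRINT DD SYSOUT=*
//SYSIN    DD *
  DEFINE GENERATIONDATAGROUP -
         (NAME(SYS1.SMF.TODAY.DUMP) -
          LIMIT(35) -
          NOEMPTY -
          SCRATCH)
/*
```

     Size the LIMIT value to cover the number of generations
     created per day multiplied by the number of days you
     want to retain.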
 
 o  If DETAIL Tape processing is being used to create optional
    DETAIL timespan data sets (documented in MICS component
    guides) and a GDG is being used to hold these data sets,
    you may need to increase the GDG limit to account for
    multiple data sets being created each day.
 
 o  MICS DAILY and intra-day INCRccc jobs must be scheduled.
 
 o  Using the incremental DETAIL and DAYS databases for
    inquiry and reporting during the day requires planning.
    INCRccc jobs cannot run while reports are using the
    incremental databases, and reporting jobs cannot use the
    incremental databases while an INCRccc job is running.
 
 o  There is potential for data set contention for the
    "dumped" SMF data sets when the data sets are members of
    a GDG.  You
    cannot create a new generation such as
    SYS1.SMF.TODAY.DUMP(+1) while the INCRccc job is reading
    one or more existing generations by means of
    SYS1.SMF.TODAY.DUMP(0).
 
 o  If you read GDG SMF data sets into MICS using relative
    generation values, for example SYS1.SMF.TODAY.DUMP(0),
    rather than explicit values, for example
    SYS1.SMF.TODAY.DUMP.G2802V00, there is potential for
    impact on SMF dump processing.  This might occur when SMF
    dump processing attempts to create the next GDG using
    relative values, for example SYS1.SMF.TODAY.DUMP(+1).
    This consideration exists for non-incremental DAILY jobs,
    but with incremental updates, the number of jobs
    processing SMF data is increased, thus increasing the
    chance of a conflict.
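
     For illustration, the two referencing styles differ only
     in the data set name coded on the DD statement (the DD
     name SMFIN is an assumption for this sketch):

```jcl
//* Relative reference: resolved from the catalog, and can
//* conflict with SMF dump processing creating the next (+1)
//* generation while this job runs:
//SMFIN    DD DISP=SHR,DSN=SYS1.SMF.TODAY.DUMP(0)
//*
//* Explicit reference: names one fixed generation and avoids
//* relative-generation resolution:
//SMFIN    DD DISP=SHR,DSN=SYS1.SMF.TODAY.DUMP.G2802V00
```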
 
 o  Obtaining "dumped" SMF data for use by CA MICS is much
    simpler when the CA SMF DIRECTOR component is used to
    manage SMF data.  Details are documented in section 4.10.
 
 
 Processing Overhead
 
 Incremental update is intended to reduce the resource
 consumption and elapsed time required by the DAILY job by
 offloading a major portion of database update processing to
 one or more executions of the INCRccc job. This lets the CA
 MICS database become available for reporting and analysis
 earlier in the day.
 
 In meeting this objective, the overall resource consumption
 incurred by the INCRccc and DAILY jobs is somewhat greater
 than in the non-incremental update scenario.  This is due to
 the overhead involved in managing the incremental DETAIL and
 DAYS timespan database files and rolling them up during the
 DAILY update.  In addition, a minor amount of CPU time is
 required to compile the code included in the modules that
 support incremental update.
 
 The amount of overhead and the elapsed time savings in the
 DAILY job are site-dependent and will vary based on the
 measurement data volume and the number of times INCRccc is
 executed each day.
 
 
 Increased "Prime Time" Workload
 
 By offloading work from the DAILY job to one or more INCRccc
 executions throughout the day, you are potentially moving
 system workload and DASD work space usage from the
 'off-hours' (when the DAILY job normally executes) to periods
 of the day where your system resources are in highest demand.
 You should schedule INCRccc executions carefully to avoid
 adverse impact to batch or online workloads.  For example, if
 your site's prime shift is 8:00 AM to 5:00 PM, you might
 choose to schedule incremental updates for 7:00 AM (just
 before the prime shift) and 6:00 PM (just after the prime
 shift), with the DAILY job executing just after midnight.
 
 
 Increased DASD Usage
 
 The DASD space required for the incremental update DETAIL and
 DAYS database files is in addition to the DASD space already
 reserved for the CA MICS database.  By default, the
 incremental update database files are permanently allocated,
 making this DASD space unavailable for other applications.
 In general, you can assume that the incremental update
 database files will require space equivalent to two cycles of
 this component's DETAIL and DAYS timespan files.
 
 Alternatively, the incremental update database files can be
 allocated in the first incremental update of the day and
 deleted by the DAILY job (see the INCRDB DYNAM option in the
 component guides). This approach reduces the amount of time
 that the DASD space is dedicated to incremental update and
 results in an increasing amount of DASD space consumed
 throughout the day as each incremental update is executed.
 
 A third option is to store the incremental update database
 files on tape (see the INCRDB TAPE option).  With this
 approach, DASD space is required only while each incremental
 update or DAILY job is executing.  While this alternative
 eliminates the need to permanently allocate DASD space, the
 total amount of DASD space required while the incremental
 update or DAILY jobs are executing is unchanged.  This TAPE
 option adds processing time to copy the incremental update
 files to tape, and to reload the files from tape to disk.
 
 Note that the incremental update checkpoint file is always a
 permanently allocated disk data set.  This is a small data
 set and should not be a concern.
 
 
 Interval End Effects
 
 Some measurement data consists of interval records, that is,
 records that are written regularly at specific intervals.
 
 A problem might result if a dump of the measurement data
 occurs at the same time that the data source (monitor) is
 writing interval records at the end of a measurement
 interval.  This has the effect of splitting the data for a
 single measurement interval across two "dumped" data sets.
 Processing would proceed as follows:
 
 o  When an incremental update processes the first "dumped"
    data set, the checkpoint high end timestamp is set to
    indicate that this split measurement interval has been
    processed.
 
 o  Then, when the rest of the measurement interval's data is
    encountered in the next update job, it can be dropped as
    duplicate data (because data for this measurement interval
    end timestamp has already been processed).
 
 Appropriate scheduling of measurement data dumps and
 incremental update jobs can avoid this problem.  For example,
 if you plan to run incremental updates at 7:00 AM and 6:00
 PM, you could force a measurement data dump in the middle of
 the measurement interval just prior to the scheduled
 incremental update executions.  This is an extension of the
 procedure you may already be using for end-of-day measurement
 dump processing.  The objective is to ensure that all records
 for each monitor interval are processed by the same
 incremental update job.


 Dynamic Allocation
 
 Incremental update facilities optionally employ dynamic
 allocation for the incremental update DETAIL and DAYS
 timespan data sets.  If your site restricts dynamic
 allocation of large, cataloged data sets, you may need to
 direct incremental update data set allocation to a generic
 unit or storage class where dynamic allocation is allowed.
 
 If your site automatically assigns batch job classes and/or
 schedules work based on the amount of DASD space a job
 requires, you may need to direct the CA MICS INCRccc and/or
 DAILY jobs to a specific job class and/or processing system.
 If you choose the incremental update dynamic allocation
 options, your scheduling facilities will no longer be able to
 determine DASD space requirements by examining the INCRccc or
 DAILY job JCL.
 
 Data set allocation parameters for IUDETAIL and IUDAYS are
 specified by component in prefix.MICS.PARMS(cccOPS).
 Permanent changes to these parameters (for example, to
 increase the space allocation for the IUDETAIL data set)
 require both changing the cccOPS parameter and executing
 the corresponding cccPGEN job.  However, in restart
 situations, you can temporarily override data set allocation
 parameters for one or more dynamically allocated data sets by
 using the //PARMOVRD facility.  See Section 2.3.6 in this
 guide for more information.