4.1.2 Operational Processes, Jobs, and Steps


 An operational process is a series of batch jobs that make up
 a logical unit of work.  Operational processes update and
 maintain the CA MICS Database.
 
 There are four operational processes in CA MICS:
 
 o  DAILY process   - the DAILY job, followed by the BACKUP
                      job (a function of the MICS SCHEDULE
                      facility); optionally, one or more
                      SPLITSMF and INCRccc jobs precede the
                      DAILY job and the BACKUP job
 
 o  WEEKLY process  - the DAILY job, followed by the WEEKLY
                      job, followed by the BACKUP job (a
                      function of the MICS SCHEDULE
                      facility)
 
 o  MONTHLY process - the DAILY job, followed by the MONTHLY
                      job, followed by the BACKUPM job
 
 o  YEARLY process  - the DAILY job, followed by the YEARLY
                      job, followed by the BACKUP job (a
                      function of the MICS SCHEDULE
                      facility)
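 
 As an illustration only, the job order within each operational
 process can be modeled as a simple lookup.  The following is a
 hypothetical Python sketch built from the list above, not
 CA MICS code; the job names come from the text, everything else
 is invented for illustration:

```python
# Hypothetical sketch only -- not CA MICS code.  Models the job
# sequence of each operational process as listed above.  The
# DAILY process may optionally be preceded by SPLITSMF and
# INCRccc runs.
PROCESS_JOBS = {
    "DAILY":   ["DAILY", "BACKUP"],
    "WEEKLY":  ["DAILY", "WEEKLY", "BACKUP"],
    "MONTHLY": ["DAILY", "MONTHLY", "BACKUPM"],
    "YEARLY":  ["DAILY", "YEARLY", "BACKUP"],
}

def jobs_for(process):
    """Return the ordered job list for an operational process."""
    return PROCESS_JOBS[process]
```

 Note that only the MONTHLY process ends with the stand-alone
 BACKUPM job; the other three end with BACKUP.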
 
 CA MICS provides standard operational jobs for updating,
 reporting, maintaining, and recovering its database:
 
 o  DAILY    - Run each day to update the DETAIL and DAYS
               timespan files plus week-to-date and month-
               to-date files in the WEEKS and MONTHS
               timespans
 
 o  INCRccc  - (Optional) Run one or more times a day for
               each product for which you have activated the
               CA MICS incremental update facility.  The
               INCRccc jobs update the product's incremental
               update DETAIL and DAYS level files, which are
               then "rolled up" to the DETAIL and DAYS
               timespan files by the DAILY job.
 
 o  SPLITSMF - (Optional) Run one or more times a day to split
               the SMF input data into multiple files for
               processing in the INCRccc jobs.  SPLITSMF is a
               stand-alone version of the DAILY job's DAYSMF
               step and applies only to those products which
               take their input from the SMF files, and which
               are marked as INCRUPDATE YES and INCRSPLIT USE
                in prefix.MICS.PARMS(cccOPS).
 
 o  WEEKLY   - Run once each week after the DAILY job for
               WEEKS timespan cycle close-out, weekly
               archive audit, and weekly archive history
               processing
 
 o  MONTHLY  - Run once each month after the DAILY job for
               MONTHS timespan cycle close-out and monthly
               archive history processing and for updating
               year-to-date files
 
 o  YEARLY   - Run once each year after the MONTHLY job for
               YEARS timespan cycle close-out
 
 o  BACKUP   - Run daily, bi-daily, or weekly (per user-
               specified backup frequency) to generate a tape
               backup of the entire database
 
 o  BACKUPM  - Run once each month after the MONTHLY job to
                generate a tape backup of the entire database.
               The monthly backup is run as this stand-alone
               BACKUPM job when you specify
 
                    AUTOSUBMIT YES
 
               in prefix.MICS.PARMS(JCLDEF).
 
 o  SCHEDULE - Run each day to submit scheduled processing
 
 o  RESTORE  - Run whenever the database is damaged or must
               be recovered
 
 o  AUDIT    - (Optional) Run after WEEKLY to perform optional
               Archive Audit processing.  The AUDIT job is
               used when you specify
 
                    ARCHIVE AUDIT YES JOB
 
               in prefix.MICS.PARMS(JCLDEF).
 
               NOTE:  You can execute AUDIT more frequently
                      (e.g., twice a week or daily) if DASD
                      space is inadequate for retaining
                      sufficient DETAIL/DAYS cycles for weekly
                      audit tape creation.
 
                       In this case, it is necessary to add the
                       following to prefix.MICS.PARMS(EXECDEF):
 
                           USERDEF AUDITCWK YES
 
                      This parameter overrides the default
                      audit archive processing so that data
                      for the current week is retained and
                      copied to the new audit archive tape
                      cycle.
 
 o  HISTW    - (Optional) Run after WEEKLY to perform optional
               Archive Weekly History processing.  The HISTW
               job is used when you specify
 
                    ARCHIVE HISTW YES JOB
 
               in prefix.MICS.PARMS(JCLDEF).
 
 o  HISTM    - (Optional) Run after MONTHLY to perform
               optional Archive Monthly History processing.
               The HISTM job is used when you specify
 
                    ARCHIVE HISTM YES JOB
 
               in prefix.MICS.PARMS(JCLDEF).
 
 o  IUDBINIT - (Optional) Run to re-initialize in-progress
               incremental update processing after restoring
               the unit database files (i.e., after running
               the RESTORE job).  Messages in the RESTORE job
               MICSLOG will prompt you to execute IUDBINIT
               when needed.
 
 o  DAILYRPT - Run after DAILY to produce daily production
               reports
 
 o  WEEKRPT  - Run after WEEKLY to produce weekly production
               reports
 
 o  MONTHRPT - Run after MONTHLY to produce monthly
               production reports
 
 o  RSTRTBLS - Run to restore the TABLES and SCREENS data
               sets
 
 o  RSTRTLIB - Run to restore ISPF-based information
 
 o  DAYSMFR  - Run as necessary to recreate DAILY job work
               files normally populated by the DAYSMF step.
 
 o  ACTDAY1R - Run as necessary to restore the CA MICS
               Accounting and Chargeback DAY1 audit file
 
 o  RSTATUS  - Run as necessary to update Operational Status
               and Tracking control tables or to replace a
               lost Run Status Report
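 
 To make the incremental-update flow described under the
 INCRccc and DAILY jobs concrete, the following is a minimal
 hypothetical Python sketch, not CA MICS code: INCRccc runs
 accumulate records into interim files during the day, and the
 end-of-day DAILY run "rolls them up" into the main DETAIL
 file.  All names here are invented for illustration:

```python
# Hypothetical sketch only -- not CA MICS code.  Illustrates the
# incremental-update flow: INCRccc runs accumulate records into
# interim files during the day; the end-of-day DAILY run then
# rolls them up into the main DETAIL timespan file.
incremental_detail = []   # interim DETAIL-level records (INCRccc)
detail_file = []          # main DETAIL timespan file

def incr_run(records):
    """An INCRccc run: append new input records to the interim file."""
    incremental_detail.extend(records)

def daily_rollup():
    """End-of-day DAILY job: roll interim records up into DETAIL."""
    rolled = len(incremental_detail)
    detail_file.extend(incremental_detail)
    incremental_detail.clear()
    return rolled
```

 In this sketch, incr_run can be called several times a day;
 daily_rollup then moves everything collected so far into the
 DETAIL file in one pass, mirroring how the DAILY job absorbs
 the interim incremental-update files.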
 
 These jobs are generated as part of the installation process
 because each job is tailored to the installation's
 environment.  Individual CA MICS products may also generate
 additional operational jobs for unique, product-specific
 processing.
 
 The CA MICS operational jobs in each unit database are
 unique.  Job steps are defined based on the products
 installed in the unit.  For example, the DAILY job for a unit
 that contains the Hardware and SCP Analyzer (step 20,
 component identifier RMF) and the Batch and Operations
 Analyzer (step 30, component identifier SMF) would contain
 the following steps:
 
 o  DAYALL - Allocates work file space and removes expired
             checkpoint entries if CKPTLIMIT is specified
 o  DAYSMF - Selects and splits SMF input data to work files
             for processing by Database update steps
 o  DAY020 - Updates the RMF product Database files
 o  DAY030 - Updates the SMF product Database files
 o  DAY200 - Updates exceptions (EXC02nnn for RMF and
             EXC03nnn for SMF)
 o  DAY400 - Submits daily report processing
 o  DAY500 - Processes non-standard user reporting, if any
 o  DAY900 - Terminates the daily job and frees work file
             space
 o  DAYRSR - Produces the Run Status Report
 
 CA MICS operational jobs are built by unit and made up of
 independent steps for the following reasons:
 
 o  Reduced run time.  Separate jobs for each unit enable
    CA MICS Database updates to run concurrently.  For
    example, for a complex containing two units (unit A
    having the SMF, RMF, and SNT analyzers and unit B having
    the IMS and DB2 analyzers), unit A's update can be
    processed while unit B is being updated, because the
    units are independent.
 
    If the units were dependent, the time needed to update
    the database would be greater, because unit A would need
    to complete its processing before unit B's could begin.
 
    NOTE:  With the CA MICS incremental update facility,
           individual product incremental updates can execute
           concurrently with incremental updates for other
           products in the unit database.  However,
           end-of-day processing for the unit database is
           still a serial process, and if the units were
           dependent, unit A would need to complete its
           end-of-day processing before unit B's could begin.
 
 o  Input data independence.  Separate jobs for each unit
    enable Database update scheduling based on input data
    availability.  In the prior example, unit A's update can
    be processed before the IMS and DB2 input data used to
    update unit B are available.
 
    When all products are installed in a single unit
    database, no updates can occur until ALL input data is
    available.
 
    NOTE:  With the CA MICS incremental update facility,
           individual product incremental updates can occur
           when the product's data becomes available.
           Individual product incremental updates are
           independent of any other product's processing in
           the unit database.  However, the end-of-day DAILY
           job cannot execute until ALL products are ready for
           end-of-day processing.
 
 o  Flexibility.  The ability to restart a failed run at the
    step that failed saves:
 
    -  time, because the work that had been completed prior to
       the failed step is retained
 
    -  resources, because the system does not reprocess work
       it had successfully completed
 
    For example, if an operational job fails part way through
    processing due to an I/O error with the input tape, the
    work completed in prior steps is retained and the job can
    be restarted at the step that failed when the error is
    corrected.
 
    If CA MICS operational jobs were not step-restartable,
    the entire run would be lost.  In a unit database that
    contains multiple products, this loss could be very
    significant.
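 
 The step-restart behavior described above can be sketched as
 follows.  This is a hypothetical Python illustration; CA MICS
 itself provides step restart through its job and checkpoint
 structure, not through code like this:

```python
# Hypothetical sketch only -- CA MICS achieves step restart
# through its job structure, not through code like this.  Each
# completed step is checkpointed, so a rerun skips finished work
# and resumes at the step that failed.
def run_job(steps, completed, fail_at=None):
    """Run (name, action) steps in order, skipping completed ones.

    'completed' is the persistent checkpoint set; 'fail_at'
    simulates a step failure (e.g. an I/O error on input tape).
    """
    for name, action in steps:
        if name in completed:
            continue                  # prior work is retained
        if name == fail_at:
            raise RuntimeError("step " + name + " failed")
        action()
        completed.add(name)           # checkpoint the finished step
```

 A first run that fails at, say, DAY030 leaves DAY020 in the
 checkpoint set; rerunning the same job skips DAY020 and resumes
 at DAY030, so completed work is neither lost nor reprocessed.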