6.3.3.1.1 Value Change Only

The simplest case is a change to a non-key data element, or
to a key data element for which there is a one-to-one
mapping of old key values to new values.  That is, for every
old value there is one and ONLY one new value.

For key data elements, the changed key values must collate
in the same order as the original values.  For example,
assume a file's key contains a character data element whose
values are

    AA01
    AA02
    BA01

If these values are changed to

    AB01
    AB02
    BB01

then re-sorting the file observations results in no change to
the order of observations.  In this case, the retrofit job
described in the previous section performs the change
correctly.
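
As a minimal sketch of such correction logic, assume the
key values above are held in a character data element named
JOBACCT (a hypothetical name; substitute the actual data
element).  The one-to-one recode could then be written as:

    DATA NEWLIB.SMFFILE;           /* hypothetical names */
      SET  OLDLIB.SMFFILE;
      /* recode 2nd key character: AA01 -> AB01, etc. */
      IF SUBSTR(JOBACCT,2,1) = 'A' THEN
        SUBSTR(JOBACCT,2,1) = 'B';
    RUN;

Because each old value maps to one and only one new value,
and the new values collate in the original sequence, the
observation order is unchanged.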

Perform the data element content correction according to
the previous sections.  This correction must accompany the
change to CA MICS or user exit logic that corrects the data
element's value during the daily update cycle.  For example,
if the change is to sharedprefix.MICS.PARMS(ACCTRTE) to
change SMF account level data elements, make the change to
ACCTRTE and retrofit the online database without running an
intervening daily update job between the two actions.

Note that such complex-level changes require changes to
all files on which the data elements occur, in all timespans
and for all database units that have the component.  Offline
databases need not be retrofitted during the same break in
daily update processing, though they must be updated before
their next use.  The next use of the offline database may be
an ad hoc interrogation or the next WEEKLY or MONTHLY update
run that creates new generations of the archive.

Be sure to apply the change to the test database first.
Testing the retrofit ensures that the change is complete and
correct before it is implemented in production.  This is
particularly important for archive changes, given the added
complexity of applying the retrofit.

The general format of a data element recovery job is that
of a SAS DATA step.  The DATA step reads the existing file,
modifies the value, and creates a new file with observations
that have the proper data element values.  In SAS, the DATA
step follows this template:

    DATA outddname.iiifff;
      SET  inddname.iiifff;
      (correction logic)
    RUN;

where "iii" is the information area ID
      "fff" is the file ID

The DDnames referenced are the input and output tape
DDnames coded in the JCL that accesses the archive data
sets.  History tape DDnames take the form:

    outddname - Htiiifff
    inddname  - Itiiifff

where "iii" is the information area ID
      "fff" is the file ID
      "t" is W for weekly history, M for monthly history

Audit tape DDnames take the form:

    outddname - AUiiifff
    inddname  - AIiiifff

where "iii" is the information area ID
      "fff" is the file ID

The example of retrofitting a new value into a data
element of the SMF files is used again.  The composite
template for treating history files is:

    //xxx JOB (your job card)
    //STEP1 EXEC MICSSHRS     (assume proc identifier "S")
    //HMBATJOB DD DISP=(,CATLG,DELETE),
    //            DCB=(tapeprefix.MICS.MODEL,DEN=density),
    //            DSN=tapeprefix.MICS.HISTM.BATJOB(+1),
    //            UNIT=tapeunit,LABEL=(1,EXPDT=...)
    //IMBATJOB DD DISP=OLD,
    //            DSN=tapeprefix.MICS.HISTM.BATJOB(0),
    //            VOL=(PRIVATE,RETAIN),UNIT=(,,DEFER)
    //HMBATPGM DD DISP=(,CATLG,DELETE),
    //            DCB=(tapeprefix.MICS.MODEL,DEN=density),
    //            DSN=tapeprefix.MICS.HISTM.BATPGM(+1),
    //            UNIT=AFF=HMBATJOB,LABEL=(2,EXPDT=...)
    //IMBATPGM DD DISP=OLD,
    //            DSN=tapeprefix.MICS.HISTM.BATPGM(0),
    //            VOL=(PRIVATE,RETAIN),UNIT=AFF=IMBATJOB
    //SYSIN DD *
    %MACRO RETFIT(DDI,DDO,FN);
      DATA &DDO..&FN;
        SET  &DDI..&FN;
        (correction logic)
      RUN;
    %MEND RETFIT;
    %RETFIT(IMBATJOB,HMBATJOB,BATJOB);
    %RETFIT(IMBATPGM,HMBATPGM,BATPGM);
    ...

where the DD statement coding is patterned after the
associated coding in the WEEKLY or MONTHLY job 300 step.

After the history retrofit is complete, the next WEEKLY
or MONTHLY job would call for extra tape mounts.  This is
because some of the history data sets would be input from the
retrofitted tape just created, and some would still be input
from the previous history tape.
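
For example, if only the BATJOB and BATPGM files were
retrofitted, the next run would read those two files from
the newly created generation while reading the remaining
history files from the prior generation's tape, so both
tapes must be mounted.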

Audit tape treatment is significantly more complicated.
The audit files exist on one generation data group entry per
week of audit data archived.  That means every previous
generation of audit archive for the file(s) in question must
be regenerated.

Assume there were 4 generations of audit archive saved,
and that the current processing week is week 41.  The data
set names cataloged would be, for example:

    Data Set Name     Data Contained
    ---------------   -----------------------------------
    dsn(0)            detail data from week 40, untreated
    dsn(-1)           detail data from week 39, untreated
    dsn(-2)           detail data from week 38, untreated
    dsn(-3)           detail data from week 37, untreated

There are various methods to process the generations to
arrive at the treated configuration:

    Data Set Name     Data Contained
    ---------------   -----------------------------------
    dsn(0)            detail data from week 40, treated
    dsn(-1)           detail data from week 39, treated
    dsn(-2)           detail data from week 38, treated
    dsn(-3)           detail data from week 37, treated


For treating audit files, this is a sample template:

    //xxx JOB (your job card)
    //STEP1 EXEC MICSSHRS     (assume proc identifier "S")
    //AUBATJOB DD DISP=(,CATLG,DELETE),
    //            DCB=(tapeprefix.MICS.MODEL,DEN=density),
    //            DSN=tapeprefix.MICS.AUDIT.BATJOB(+1),
    //            UNIT=tapeunit,LABEL=(1,EXPDT=...)
    //AIBATJOB DD DISP=OLD,
    //            DSN=tapeprefix.MICS.AUDIT.BATJOB(-nn),
    //            VOL=(PRIVATE,RETAIN),UNIT=(,,DEFER)
    //AUBATPGM DD DISP=(,CATLG,DELETE),
    //            DCB=(tapeprefix.MICS.MODEL,DEN=density),
    //            DSN=tapeprefix.MICS.AUDIT.BATPGM(+1),
    //            UNIT=AFF=AUBATJOB,LABEL=(2,EXPDT=...)
    //AIBATPGM DD DISP=OLD,
    //            DSN=tapeprefix.MICS.AUDIT.BATPGM(-nn),
    //            VOL=(PRIVATE,RETAIN),UNIT=AFF=AIBATJOB
    //SYSIN DD *
    %MACRO RETFIT(DDI,DDO,FN);
      DATA &DDO..&FN;
        SET  &DDI..&FN;
        (correction logic)
      RUN;
    %MEND RETFIT;
    %RETFIT(AIBATJOB,AUBATJOB,BATJOB);
    %RETFIT(AIBATPGM,AUBATPGM,BATPGM);
    ...

where the DD statement coding is patterned after the
associated coding in the WEEKLY or MONTHLY job 300 step.

The "-nn" in the input data set name relates to the
number of generations of audit archive retained.  For 4
generations, for example, "-nn" equal to "-3" would get the
last retained generation.  Coding    -3 in our example would
yield the following data:

    After one run:

    Data Set Name     Data Contained
    ---------------   -----------------------------------
    dsn(0)            detail data from week 37, treated
    dsn(-1)           detail data from week 40, untreated
    dsn(-2)           detail data from week 39, untreated
    dsn(-3)           detail data from week 38, untreated

    After the second run:

    Data Set Name     Data Contained
    ---------------   -----------------------------------
    dsn(0)            detail data from week 38, treated
    dsn(-1)           detail data from week 37, treated
    dsn(-2)           detail data from week 40, untreated
    dsn(-3)           detail data from week 39, untreated

    After the third run:

    Data Set Name     Data Contained
    ---------------   -----------------------------------
    dsn(0)            detail data from week 39, treated
    dsn(-1)           detail data from week 38, treated
    dsn(-2)           detail data from week 37, treated
    dsn(-3)           detail data from week 40, untreated

    After the fourth run:

    Data Set Name     Data Contained
    ---------------   -----------------------------------
    dsn(0)            detail data from week 40, treated
    dsn(-1)           detail data from week 39, treated
    dsn(-2)           detail data from week 38, treated
    dsn(-3)           detail data from week 37, treated

This is the configuration we wanted.  Coding the JCL for a
retrofit of the audit archive files as shown requires that
the job step be run a number of times exactly equal to the
number of generations of audit archive tapes in the catalog
at the time the retrofit is done.

There are various ways to streamline the running of such
an audit retrofit.  These generally involve uncataloging and
recataloging the audit data sets for the audited files
affected.
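
As one illustration of that alternative (a sketch only; the
generation number, unit, and volume serial shown are
hypothetical), an IDCAMS step can uncatalog a superseded
generation and recatalog a replacement under the same name
without copying any data:

    //UNCAT    EXEC PGM=IDCAMS
    //SYSPRINT DD SYSOUT=*
    //SYSIN    DD *
      DELETE tapeprefix.MICS.AUDIT.BATJOB.G0040V00 NOSCRATCH
      DEFINE NONVSAM( -
             NAME(tapeprefix.MICS.AUDIT.BATJOB.G0040V00) -
             DEVICETYPES(tapeunit) -
             VOLUMES(volser) )
    /*

DELETE with NOSCRATCH removes only the catalog entry,
leaving the tape intact; DEFINE NONVSAM then catalogs the
replacement copy in its place.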