This section shows you how to specify the operational
statements that control CA MICS Space Analyzer Option
processing.
Operational statements are stored in the prefix.MICS.PARMS
cccOPS member, where ccc is the component identifier, and are
incorporated into the CA MICS system by running the
prefix.MICS.CNTL(cccPGEN) job.
*************************************************************
* *
* NOTE: CHANGES to prefix.MICS.PARMS(cccOPS) members *
* REQUIRE EXECUTION of prefix.MICS.CNTL(cccPGEN) *
* to take effect. *
* *
* In addition, any change to parameters that *
* impact the DAILY operational job JCL such as, *
* *
* o changing RESTART NO to RESTART YES, *
* *
* o WORK parameter changes when RESTART NO is in *
* effect, *
* *
* o Specifying TAPEfff (if this product supports *
* a DETAIL level TAPE option), *
* *
* o or changes to prefix.MICS.PARMS(INPUTccc), *
* *
* will require regeneration of the DAILY job by *
* executing prefix.MICS.CNTL(JCLGEND) or by *
* specifying DAILY in prefix.MICS.PARMS(JCLGENU) *
* and executing prefix.MICS.CNTL(JCLGENU). *
* *
* Refer to the checklist (if provided) for updating *
* cccOPS parameters and running required generation *
* jobs. *
*************************************************************
Review the defaults provided in VCAOPS. In general, the
defaults have been chosen to reduce the size of the CA MICS
database. If the defaults meet your data center's
requirements, you do not need to tailor VCAOPS.
General Syntax Rules
o Comments are accepted and signified by an asterisk (*) in
column 1.
o Statement names and values can be entered in either upper
case or lower case characters.
o Statements can start anywhere in columns 1-72. Only one
statement per card image is supported.
o Statements can appear in any order. If you code multiple
OPTIONS statements, they must be grouped together.
Statements
o Description of the ACCOUNTING statement
o Description of the BCSREQUIRED statement
o Description of the EXTENTDETAIL statement
o Description of the OPTIONS statement
o Description of the VCAFMT statement
o Description of the SUPDFSMS statement
o Description of the WORK file options
o Description of the Internal Step Restart options
o Description of the Incremental Update options
ACCOUNTING Timespan
--------------------
Valid values for timespan are DETAIL and DAYS. The default
value is DAYS. This parameter specifies when VCA should
invoke the accounting routines generated by the CA MICS
Accounting and Chargeback Option.
DAYS results in VCA presenting a summarized observation to
the accounting routine. The sequence/summary elements of the
VCA file at the DAYS timespan are used to create a summarized
observation that represents all DASD space occupied by that
control break.
DAYS allows reasonable flexibility in qualification pricing
in the accounting routine. For example, assume the sequence/
summary elements at the DAYS level are:
SYSID VCAACT1 VCAACT2 DAADSTYP DEVTYPE
STORSTGC STORMGTC YEAR MONTH DAY
CA MICS summarization routines will create a single
observation for data sets whose values for these fields form
a unique combination.
To build on this example, it follows that all data sets that
have the following will be summarized into one observation:
SYSID='ASYS';
VCAACT1='DIV100';
VCAACT2='DEPT22';
DAADSTYP='VS';
DEVTYPE='3390-2';
STORSTGC='NEVCACHE';
STORMGTC='TESTDATA';
YEAR=90;
MONTH=07;
DAY=24;
Accounting qualification techniques and algorithms allow
DASD space to be priced by applying rates based on the
actual values of the control or sequence elements shown in
the example above.
Note that VOLSER and DSNAME are absent from the list. If you
need exception level pricing by having the accounting
routines examine either VOLSER or DSNAME (or any other
character data element not in the SORT key of the DAYS
timespan), consider coding ACCOUNTING DETAIL here in VCAOPS.
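For example, to have the accounting routines invoked at the
DETAIL timespan, code the following statement in
prefix.MICS.PARMS(VCAOPS):
   ACCOUNTING DETAIL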
DETAIL causes VCA to invoke the accounting routine to price a
data set at the DETAIL timespan. Each data set can be
examined by the accounting cost algorithms and all data
elements carried at the DETAIL timespan are available for
inspection by the accounting code.
To continue with the example above, VOLSER could be tested
and a special rate could be applied to some DSNAMEs if they
appeared on a given VOLSER.
The choice of DETAIL versus DAYS on this parameter should be
worked out with the people responsible for the CA MICS
Accounting and Chargeback Option. Since there will naturally
be more observations at the DETAIL timespan than at the DAYS
level, it follows that DETAIL will increase the space
required for cycles of the accounting journal files
(ACTJDA01, ACTJVS01, etc.). There is a one-for-one
correspondence between the number of observations on the VCA
file and the accounting journal file that holds the charges
developed from that VCA file.
BCSREQUIRED operand
-------------------
Valid operands are YES and NO. The default is NO. The
BCSREQUIRED option affects the manner in which observations
for the VCA_VS file (VSAM data sets) are constructed. Specify
YES to require the BCS records for VSAM data sets or NO to
allow observations for VSAM data sets to be written to the
CA MICS database with BCS data missing.
Often, when excluding volumes during the Space Collector
(VCC) run, a Base Catalog Structure (BCS) (user catalog) is
not available for processing. When this happens, VCC only
collects VVDS records describing the VSAM data sets.
The BCSREQUIRED parameter lets you specify whether VSAM
data set allocation information is to be saved in the
VCA_VS file when no matching BCS records are collected.
The VCC Processing Flags data element (DAAPFLAG) is set to
indicate the presence or absence of the BCS record as
follows:
'11......'B - BCS and VVDS elements are present
'01......'B - BCS elements are not present
VCA processes the BCS to obtain a number of data elements.
When the BCS records are not present, some data elements are
not valid in the VCA_VS file. See Section 5.2.2.3 for a list
of the elements affected.
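For example, to require that BCS records be present before
observations for VSAM data sets are written to the VCA_VS
file, code:
   BCSREQUIRED YES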
EXTENTDETAIL fff record
-----------------------
The EXTENTDETAIL statement has two operands:
fff is optional and is a file identifier. Valid values are
DAA and _VS. If fff is not specified, the value for record
applies to both files.
record is required. Valid values are:
YES - record details for extents
NO - do not record details for extents
If EXTENTDETAIL is not coded, the default is as follows:
EXTENTDETAIL DAA NO
EXTENTDETAIL _VS NO
Data sets that have more than 16 extents have one or more
extra observations in the VCADAA and VCA_VS DETAIL timespans
that describe the 17th through 128th extents. Each
extra observation describes the next set of 16 extents. The
extra observations take DASD space and time to build and are
important only if you run the TRACK MAP standard report or do
very detailed extent reporting.
To save processing time and DASD space, you can specify that
the extra observations are to be deleted during the DAILY
DAY090 step. They can be deleted from either or both of the
VCADAA and VCA_VS files by coding the EXTENTDETAIL parameter
accordingly.
For example, to keep all the details for extents 17 through
128 for VSAM on the VCA_VS file but keep none on the VCADAA
file, code two statements as follows:
EXTENTDETAIL _VS YES
EXTENTDETAIL DAA NO
To eliminate all recording of extent data beyond 16 extents
in both files, code:
EXTENTDETAIL NO
Note: Saving space in the DETAIL timespan by coding
EXTENTDETAIL NO makes the Volume Track Map Report unable to
map the physical locations of the entire volume. Refer to
Section 3.6 of this guide for an example of this report.
Because the Track Map Report uses the VCADAA file, you could
save some space and still produce the report by coding:
EXTENTDETAIL _VS NO
EXTENTDETAIL DAA YES
Certain reports in the CA MICS StorageMate Option look at
extent locations of the VTOC and the VVDS and are not
affected adversely if you code EXTENTDETAIL NO.
OPTIONS sysid default_duration
---------------------------------
The OPTIONS statement is optional. If you code it, you must
code both operands.
Valid values for sysid are 4-character SMF SYSIDs of VCA
systems at your site or an asterisk, *. The value of sysid
must match an SMF SYSID coded in prefix.MICS.PARMS(SYSID).
Valid values for default_duration are integers, from 1 to
999, representing hours. This value is used to compute DASD
storage occupancy for a data set if VCC does not generate an
indication that a particular data set was processed
previously. For data sets previously processed, the value
for default_duration is computed by the VCA input format
routine from the current and previous sample times recorded
by VCC.
This operand allows you to specify a different default
DURATION for each SYSID being processed by VCA. If your site
allows VCA to accept data from any SMF SYSID and if the
duration between VCC runs is a constant, use the statement
format:
OPTIONS * nn
where * means any SYSID and nn is the duration in hours.
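For example, if the space collector runs once every 24 hours
on every system, you might code:
   OPTIONS * 24
(The value 24 is illustrative; use the actual interval
between collector runs at your site.)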
If your site restricts the data coming into VCA or if the
duration between VCC runs is not a constant, code individual
OPTIONS statements for each SMF SYSID. OPTIONS statements
must be grouped together.
The examples below illustrate valid and invalid VCAOPS
coding:
Valid Example Invalid Example
------------- ---------------
ACCOUNTING DETAIL ACCOUNTING DETAIL
BCSREQUIRED YES OPTIONS SYS1 12
OPTIONS SYS1 12 BCSREQUIRED YES
OPTIONS TST1 24 OPTIONS TST1 24
OPTIONS REMT 12 OPTIONS REMT 12
The example on the right causes improper code to be generated
during the VCAPGEN process and unpredictable results during
DAILY processing because the OPTIONS statements are not
adjacent to each other.
VCAFMT
------
There are no operands for this parameter, which is optional.
Adding VCAFMT to VCAOPS and executing VCAPGEN causes the
following to be generated:
%INCLUDE SOURCE(VCAFMT);
This causes VCAPGEN to compile the PROC FORMATs that are
distributed in sharedprefix.MICS.SOURCE(VCAFMT). You only
need to do this if directed to do so by CA MICS maintenance
instructions or if you have made a local modification to
VCAFMT.
SUPDFSMS
--------
There are no operands for this parameter, which is optional.
Adding SUPDFSMS to VCAOPS and executing VCAPGEN suppresses
the VCA00602W and VCA00603W messages from printing in the
MICSLOG. These messages are printed for data sets that are
not SMS-managed but reside on an SMS-managed volume.
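For example, to suppress the messages, add this single
statement to prefix.MICS.PARMS(VCAOPS):
   SUPDFSMS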
WORK
----
This statement is optional. It enables sites that experience
SAS WORK space allocation problems or out-of-space
conditions during DAYnnn (daily) or INCRnnn (incremental
update) processing, where nnn is the job step number, to
allocate multiple WORK files.
You can allocate multiple WORK files for use during the daily
and/or incremental update job step. The maximum number of
WORK files you can allocate varies by product. These
additional work files are used in conjunction with the single
work data set allocated by default using the JCLDEF
parameters WORKUNIT and WORKSPACE.
Because the individual space allocation requirement for each
WORK file is typically much smaller, it is more likely to be
satisfied.
To take advantage of multiple WORK files support, edit
prefix.MICS.PARMS(cccOPS) and insert a WORK statement as
shown below:
WORK n data_set_allocation_parameters
where n is the number of WORK data sets
NOTE: The default is three (3).
The maximum is nine (9).
data_set_allocation_parameters is one or more data
set allocation parameters (for example, STORCLAS or
SPACE) separated by spaces.
You can also specify the WORK parameter as follows:
WORK n XXX pppp ssss
where:
n is the number of WORK data sets
XXX is TRK or CYL
pppp is the primary allocation
ssss is the secondary allocation
Note: When allocating any number of SAS WORK data sets, be
aware that one additional SAS WORK data set is automatically
allocated to facilitate sorting. For example, if you
allocate six SAS WORK data sets, you will actually get seven.
If you omit the data_set_allocation_parameters or the WORK
parameter, the work data sets are allocated according to the
values you specified for the WORKUNIT and WORKSPACE
parameters in prefix.MICS.PARMS(JCLDEF). Use the
data_set_allocation_parameters to override this default,
either to alter the space allocation or to use System Managed
Storage (SMS) parameters to control data set placement and
characteristics.
Note: If you allocate insufficient space for the WORK data
sets, DAYnnn and/or INCRnnn processing will fail and can only
be restarted from the beginning.
Note: If internal step restart is active, you can override
the WORK data set allocation parameters at execution-time
using the //PARMOVRD facility. For more information about
execution-time override of dynamic data set allocation
parameters, see the PIOM, section 2.3.6.
Specify data set allocation parameters, separated by blanks,
according to SAS LIBNAME statement syntax. If you need
multiple lines, repeat the WORK keyword on the continuation
line.
WORK accepts the engine/host options documented in the SAS
Companion for the z/OS environment, including STORCLAS, UNIT,
SPACE, BLKSIZE, DATACLAS, MGMTCLAS, and VOLSER.
Important! Do not specify the DISP parameter.
Example 1:
WORK n STORCLAS=MICSTEMP SPACE=(XXX,(pppp,ssss),RLSE)
where:
n - is the number of WORK data sets.
STORCLAS - specifies a storage class for a new data set.
The name can have up to 8 characters.
SPACE - specifies how much disk space to provide for
a new data set being allocated.
XXX - is TRK or CYL.
pppp - is the primary allocation.
ssss - is the secondary allocation.
RLSE - specifies that free-space should be released
when the data set is closed.
Example 2:
WORK n XXX pppp ssss
where:
n - is the number of WORK data sets.
XXX - is TRK or CYL.
pppp - is the primary allocation.
ssss - is the secondary allocation.
Example 3 (multiple lines):
WORK n STORCLAS=MICSTEMP UNIT=SYSDA
WORK SPACE=(XXX,(pppp,ssss),,,ROUND)
where:
n - is the number of WORK data sets.
STORCLAS - specifies a storage class for a new data set.
The name can have up to eight characters.
UNIT - specifies the generic unit for a new data set.
The name can have up to eight characters.
SPACE - specifies how much disk space to provide for
a new data set being allocated.
XXX - is TRK or CYL.
pppp - is the primary allocation.
ssss - is the secondary allocation.
Note: Since there is some performance impact when using
multiple WORK files, you should specify the minimum number of
WORK data sets to meet your work space requirements. As a
start, try incrementing the number gradually beginning from
the default.
WORK Considerations
--------------------
How Much Space Should You Allocate?
o First Time Implementation of Multiple Work Files
If this is the first time you are implementing multiple
work files for this product in this unit, review
prefix.MICS.PARMS(JCLDEF) and find the WORKSPACE
parameter. It will resemble this sample statement:
WORKSPACE TRK 500 250
The value shows the current SAS WORK space allocation for
the unit as a single data set. It also serves as the
default value used in the unit's DAYnnn daily update
(and/or INCRnnn incremental update) step unless you
provide a WORK parameter.
To achieve the equivalent work space allocation of
WORKSPACE TRK 500 250 using multiple WORK data sets that
will collectively share the work space requirements of
the daily and/or incremental update step, you could code
either one of these:
WORK 2 SPACE=(TRK,(250,125))
WORK 5 SPACE=(TRK,(100,50))
To determine the total work space, multiply the number of
WORK files (n) by the primary (pppp) and secondary (ssss)
values specified.
Note: To simplify the example, only the SPACE parameter
is shown above. You can follow either with data set
allocation parameters like UNIT or STORCLAS as required
for your site.
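To illustrate the arithmetic, coding:
   WORK 5 SPACE=(TRK,(100,50))
gives a total primary allocation of 5 x 100 = 500 tracks
and a total secondary allocation of 5 x 50 = 250 tracks,
which is equivalent to WORKSPACE TRK 500 250.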
o Adjusting Allocation for Existing Multiple WORK Files
If you have previously implemented multiple WORK file
support for this product in this unit, and you want to
change either the number of WORK files or the space
allocations, examine prefix.MICS.PARMS(cccOPS) and find
the existing WORK statement.
- If the existing WORK statement only specifies the
number of WORK files but does not contain space
allocation information as shown below:
WORK 5
Then each of the multiple WORK files is allocated
using the values from the WORKSPACE parameter of
prefix.MICS.PARMS(JCLDEF), as described earlier under
First Time Implementation of Multiple Work Files.
To increase workspace, you can increase the number of
WORK files (for example, change WORK 5 to WORK 6, 7, 8,
or 9), increase the space allocation in the WORKSPACE
parameter, or do both.
To decrease workspace, you can decrease the number of
WORK files (for example, change WORK 5 to WORK 4, 3, 2,
or 1), decrease the space allocation in the WORKSPACE
parameter, or do both.
You can also elect to explicitly specify the multiple
WORK file space allocation by adding the space
allocation values directly to the WORK statement. This
will remove the link to the prefix.MICS.PARMS(JCLDEF)
WORKSPACE parameter for multiple WORK file space
allocation. This is recommended as it serves to
clearly document, in one place, how multiple WORK files
are allocated.
- If the existing WORK statement does include space
allocation as shown in the examples below:
WORK 5 TRK 200 100
or
WORK 5 SPACE=(TRK,(200,100)) STORCLAS=MICSTEMP
Simply change the values to meet your needs.
If you need more work space, you can increase the
number of WORK files (for example, change WORK 5 to
WORK 6, 7, 8, or 9), increase the space allocation (for
example, change TRK 200 100 to TRK 250 120), or do
both.
To decrease work space, you can decrease the number of
WORK files (for example, change WORK 5 to WORK 4, 3, 2,
or 1), decrease the space allocation (for example,
change TRK 200 100 to TRK 150 80), or do both.
Note: If internal step restart is NOT active (RESTART NO)
and you change the WORK parameter, you must:
o Run cccPGEN
o Run JCLGENU for DAILY (to regenerate DAILY) and, if
incremental update is enabled, INCRccc
When internal step restart is active (RESTART YES), changes
take effect as soon as you change WORK and run cccPGEN.
There is no need to run JCLGENU.
SASWORK
-------
This statement is optional.
The WORK DD statement in the CA MICS procedures allocates
a temporary data set where SAS keeps its temporary data
files and other items that SAS uses during processing of
the current job.
By default, the space allocated is defined in the member
prefix.MICS.PARMS(JCLDEF) with the WORKSPACE and WORKUNIT
parameters, then generated into all the JCL procedures for
a given unit.
With the SASWORK statement you have the option to override
this unit-wide definition to specify the space allocation
individually for the current step.
The format of the SASWORK statement is:
SASWORK data_set_allocation_parameters
where data_set_allocation_parameters is one or more data set
allocation parameters (for example, STORCLAS or SPACE)
separated by spaces.
You can also specify the SASWORK parameter as follows:
SASWORK XXX pppp ssss
where:
XXX is TRK or CYL
pppp is the primary allocation
ssss is the secondary allocation
If you omit the data_set_allocation_parameters or the SASWORK
statement, the WORK data set is allocated according to the
values you specified for the WORKUNIT and WORKSPACE
parameters in prefix.MICS.PARMS(JCLDEF). Use the
data_set_allocation_parameters to override this default,
either to alter the space allocation or to use System Managed
Storage (SMS) parameters to control data set placement and
characteristics.
Specify data set allocation parameters, separated by blanks,
according to SAS LIBNAME statement syntax. If you need
multiple lines, repeat the SASWORK keyword on the
continuation line.
Example:
SASWORK STORCLAS=MICSTEMP SPACE=(XXX,(pppp,ssss))
where:
STORCLAS - specifies a storage class for a new data set.
The name can have up to 8 characters.
SPACE - specifies how much disk space to provide for
a new data set being allocated.
XXX - is TRK or CYL.
pppp - is the primary allocation.
ssss - is the secondary allocation.
Note: If you change the SASWORK parameter, you must:
o Run cccPGEN
o Run JCLGENU for DAILY (to regenerate DAILY) and, if
incremental update is enabled, INCRccc
MULTWORK|NOMULT fff fff ... fff
-------------------------------
Since multiple WORK file usage impacts performance, this
product provides these optional parameters so that you can
restrict multiple WORK file usage to only those files that
have excessive space requirements.
Note: You can only use one of these optional parameters with
the WORK statement, NOT both.
The MULTWORK parameter restricts the use of multiple WORK
files to ONLY those listed after the MULTWORK keyword.
MULTWORK fff fff ... fff
where fff is the unique three character identifier
If you need multiple lines, repeat the MULTWORK keyword on
the continuation line.
The NOMULT parameter forces the use of multiple WORK files
for all files EXCEPT those specified after the NOMULT
keyword.
NOMULT fff fff ... fff
where fff is the unique three character identifier
If you need multiple lines, repeat the NOMULT keyword on the
continuation line.
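For example, to use multiple WORK files for all eligible
files except the SMS and VOA files (a hypothetical
selection), you could code:
   NOMULT SMS VOA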
The default is as follows if neither MULTWORK nor NOMULT
parameters are specified:
MULTWORK _VS _VT BCS BCT DAW EDA EVS EVT NVR SMS TAW VOA
The following files are eligible for multiple WORK support:
_VS VSAM DATA SET ALLOCATION FILE (<16 EXTENTS)
_VT Internal Work File Parallel to _VS File
BCS BCS DATA SET ALLOCATION FILE
BCT Internal Work File Parallel to BCS File
DAW DATA SET ALLOCATION FILE (<16 EXTENTS)
EDA DATA SET ALLOCATION FILE (>16 EXTENTS)
EVS VSAM DATA SET ALLOC. FILE (>16 EXTENTS)
EVT Internal Work File Parallel to EVS File
NVR NON-VSAM RECORDS FILE
SMS STORAGE MANAGEMENT SYSTEM DATASET FILE
TAW Internal Work File Parallel to DAW File
VOA VOLUME ALLOCATION FILE
DIR USS Directory Entry File
DIW USS Directory Entry Work File (Update Phase)
FIL USS File System Work File
FIW USS File System Work File (Update Phase)
RESTART YES/NO
--------------
This statement is optional. Specify this to activate
internal step restart for this product's DAILY and/or INCRccc
database update job steps:
RESTART YES
If you do not specify the RESTART parameter, it defaults to
the following, and internal step restart is disabled:
RESTART NO
*************************************************************
* *
* Note: Changing the RESTART parameter (either from NO *
* to YES or from YES to NO) requires regeneration *
* of the DAILY operational job by executing *
* prefix.MICS.CNTL(JCLGEND) or by specifying *
* DAILY in prefix.MICS.PARMS(JCLGENU) and *
* executing prefix.MICS.CNTL(JCLGENU). *
* *
* If incremental update is active for this product, *
* you must also regenerate the INCRccc job. *
* *
*************************************************************
Internal step restart can significantly reduce time and
resource usage to recover from daily and/or incremental
update processing failures. CA MICS uses a
checkpoint/restart technique.
o When internal step restart is activated, the database
update job step "checkpoints" (or saves) intermediate
results (work file contents) and the operational
environment at the end of each processing phase.
o Then, if required, the database update step can resume
execution at the beginning of the processing phase in
which the failure occurred.
o Restart is accomplished by restoring the operational
environment from the last checkpoint, bypassing completed
processing phases, and resuming execution using
intermediate results (work files) from the last
checkpoint.
Note: When you activate internal step restart (RESTART YES),
the following optional restart parameters are enabled.
These parameters have no effect if restart is disabled
(RESTART NO). For more details, see the individual
parameter descriptions later in this section.
o RESTARTCKPT data_set_allocation_parameters
o RESTARTWORK data_set_allocation_parameters
o DYNAMWAIT minutes
Processing Phases:
------------------
This product employs three database update processing phases
followed by the two common roll-up phases.
Phase Description
------------- ------------------------------------------
FORMAT Read raw input data, convert to SAS
format, and output to intermediate work
files.
SORT Sort intermediate work file contents,
eliminate duplicate input data, and
prepare for DETAIL cycle creation.
DBUPDATE Merge data across optional multiple work
files, enhance data content, and create
the new DETAIL cycle.
DYSUM Summarize DETAIL data to create new DAYS
cycles and to update current week-to-date
and month-to-date cycles.
DYAGE Cutover new database cycles to production
and "age" existing cycles.
RESTART Considerations
----------------------
o Overhead
Enabling internal step restart adds some overhead to the
database update job step -- the cost of taking
checkpoints and managing saved materials. Since this
overhead is relatively constant and independent of input
data volume, you may find that costs outweigh potential
savings when input data volume is low, for example in a
test unit. For high volume, production units, internal
step restart support overhead should be a minor portion
of total resource usage.
o Cataloged Work Files
When internal step restart is enabled, the SAS work data
set, internal step restart control data set, and multiple
work file data sets are allocated and cataloged with
permanent dataset names so they will be retained for use
in restart if the step abends. These data sets are
deleted when the step completes successfully.
Prior to enabling internal step restart support, these
data sets were probably allocated on system "scratch"
space with temporary, system-assigned data set names.
If your installation standards do not allow "permanent"
data sets on DASD volumes used for temporary work space,
you may need to use the WORK, RESTARTCKPT, and
RESTARTWORK parameters to direct the internal step
restart data sets to a generic unit or storage class that
allows cataloged data sets.
o Dynamic Allocation
When internal step restart is active, dynamic allocation
is employed for the work data sets. If your installation
restricts dynamic allocation of large, cataloged data
sets, you may need to use the WORK, RESTARTCKPT, and
RESTARTWORK parameters to direct work data set allocation
to a generic unit or storage class where dynamic
allocation is allowed.
o Data Set Names
The SAS work data set, internal step restart control data
set, and multiple work file data sets are allocated and
cataloged according to the standard CA MICS unit database
data set name conventions. The default DDNAME and data
set names are:
o SAS work data set,
//cccXWORK DD DSN=prefix.MICS.cccXWORK,.....
o Internal step restart control data set,
//cccXCKPT DD DSN=prefix.MICS.cccXCKPT,.....
o Multiple work file data sets,
//WORKnn DD DSN=prefix.MICS.cccWRKnn,.....
Since these data sets conform to the same data set name
conventions as your existing CA MICS data sets, there
should be few, if any, data set name related allocation
issues. However, it is possible to override the data set
names if required. Please contact CA MICS Product
Support for assistance if you must alter data set names.
RESTARTCKPT
-----------
This statement is optional. Specify the following to
override default data set allocation parameters for the
internal step restart checkpoint data set:
RESTARTCKPT data_set_allocation_parameters
Note: RESTARTCKPT is ignored when you specify RESTART NO.
The internal step restart checkpoint data set (or cccXCKPT
data set) contains processing status, control, and SAS
environmental information for internal step restart
processing checkpoints. This includes a copy of the SAS WORK
format and macro catalogs, current macro variable values, and
a description of work files that may be needed to restart
DAYnnn processing.
By default, the cccXCKPT data set is allocated according to
the values you specified for the WORKUNIT and WORKSPACE
parameters in prefix.MICS.PARMS(JCLDEF). Specify RESTARTCKPT
to override this default, either to alter the space
allocation or to use System Managed Storage (SMS) parameters
to control data set placement and characteristics.
Note: If you allocate insufficient space for the cccXCKPT
data set, DAYnnn processing will fail and can only be
restarted from the beginning.
Note: You can override the RESTARTCKPT data set allocation
parameters at execution-time using the //PARMOVRD facility.
For more information about execution-time override of dynamic
data set allocation parameters, see the PIOM, section 2.3.6.
Specify data set allocation parameters, separated by blanks,
according to SAS LIBNAME statement syntax. If you need
multiple lines, repeat the RESTARTCKPT keyword on the
continuation line.
RESTARTCKPT accepts the engine/host options documented in the
SAS Companion for the z/OS Environment, including STORCLAS,
UNIT, SPACE, BLKSIZE, DATACLAS, MGMTCLAS, and VOLSER.
Important! Do not specify the DISP parameter.
Example 1:
RESTARTCKPT STORCLAS=MICSTEMP SPACE=(xxxx,(pp,ss),,,ROUND)
where:
STORCLAS - specifies a storage class for a new data set.
The name can have up to eight characters.
SPACE - specifies how much disk space to provide for
a new data set being allocated, where:
xxxx is TRK, CYL, or blklen
pp is the primary allocation
ss is the secondary allocation
and ROUND specifies that the allocated space be
"rounded" to a cylinder boundary when the unit
specified was a block length. ROUND is ignored
with the TRK or CYL options.
Example 2 (multiple lines):
RESTARTCKPT STORCLAS=MICSTEMP UNIT=SYSDA
RESTARTCKPT SPACE=(xxxx,(pp,ss),,,ROUND)
where:
STORCLAS - specifies a storage class for a new data set.
The name can have up to eight characters.
UNIT - specifies the generic unit for a new data set.
The name can have up to eight characters.
SPACE - specifies how much disk space to provide for
a new data set being allocated.
RESTARTWORK
-----------
This statement is optional. Specify the following to
override default data set allocation parameters for the
internal step restart WORK data set:
RESTARTWORK data_set_allocation_parameters
Note: RESTARTWORK is ignored when you specify RESTART NO.
The internal step restart WORK data set (or cccXWORK data
set) contains the intermediate work files that are not
enabled for multiple work file support, including any files
you specified on the optional NOMULT statement.
By default, the cccXWORK data set is allocated according to
the values you specified for the WORKUNIT and WORKSPACE
parameters in prefix.MICS.PARMS(JCLDEF). Specify RESTARTWORK
to override this default, either to alter the space
allocation or to use System Managed Storage (SMS) parameters
to control data set placement and characteristics.
Note: If you allocate insufficient space for the cccXWORK
data set, DAYnnn processing will fail and can only be
restarted from the beginning.
Note: You can override the RESTARTWORK data set allocation
parameters at execution-time using the //PARMOVRD facility.
For more information about execution-time override of dynamic
data set allocation parameters, see the PIOM, section 2.3.6.
Specify data set allocation parameters, separated by blanks,
according to SAS LIBNAME statement syntax. If you need
multiple lines, repeat the RESTARTWORK keyword on the
continuation line.
RESTARTWORK accepts the engine/host options documented in
"SAS Companion for the z/OS Environment", including STORCLAS,
UNIT, SPACE, BLKSIZE, DATACLAS, MGMTCLAS, and VOLSER.
Important! Do not specify the DISP parameter.
Example 1:
RESTARTWORK STORCLAS=MICSTEMP SPACE=(xxxx,(pp,ss),,,ROUND)
where:
STORCLAS - specifies a storage class for a new data set.
The name can have up to eight characters.
SPACE - specifies how much disk space to provide for
a new data set being allocated, where:
xxxx is TRK, CYL, or blklen
pp is the primary allocation
ss is the secondary allocation
and ROUND specifies that the allocated space be
"rounded" to a cylinder boundary when the unit
specified was a block length. ROUND is ignored
with the TRK or CYL options.
Example 2 (multiple lines):
RESTARTWORK STORCLAS=MICSTEMP UNIT=SYSDA
RESTARTWORK SPACE=(xxxx,(pp,ss),,,ROUND)
where:
STORCLAS - specifies a storage class for a new data set.
The name can have up to eight characters.
UNIT - specifies the generic unit for a new data set.
The name can have up to eight characters.
SPACE - specifies how much disk space to provide for
a new data set being allocated.
INCRUPDATE
----------
This statement is optional. Specify the following to enable
incremental update for this product:
INCRUPDATE YES
If you do not specify the INCRUPDATE parameter, it defaults
to the following, and incremental update is disabled:
INCRUPDATE NO
*************************************************************
* *
* Note: Changing the INCRUPDATE parameter (either from NO *
* to YES or from YES to NO) requires regeneration *
* of the DAILY operational job by executing *
* prefix.MICS.CNTL(JCLGEND) or by specifying *
* DAILY in prefix.MICS.PARMS(JCLGENU) and *
* executing prefix.MICS.CNTL(JCLGENU). *
* *
* If you specify INCRUPDATE YES, you must also *
* generate the INCRccc, cccIUALC, and cccIUGDG jobs *
* (where ccc is the 3 character product ID). *
* Depending on the options you select, you may also *
* need to execute the cccIUALC and/or cccIUGDG *
* jobs. *
* *
*************************************************************
Incremental update can significantly reduce time and resource
usage in the DAILY job by letting you split out a major
portion of daily database update processing into multiple,
smaller, incremental updates executed throughout the day.
o Standard CA MICS database update processing involves (1)
reading and processing raw input data to generate DETAIL
and DAYS level CA MICS database files, followed by (2)
summarization of DETAIL/DAYS level data to update
week-to-date and month-to-date database files.
o When you activate incremental update:
- You can execute the first-stage processing (raw data
input to create DETAIL/DAYS files) multiple times
throughout the day, each time processing a subset of
the total day's input data.
- Then, during the final update of the day (in the
DAILY job), the incremental DETAIL/DAYS files are
"rolled-up" to the database DETAIL and DAYS
timespans, and then summarized to update the
week-to-date and month-to-date files.
o Incremental update is independent of your internal step
restart or DBSPLIT specifications. You have the option
to perform incremental updates with or without internal
step restart support.
o Incremental update is activated and operates
independently by product. The incremental update job
for this product, INCRccc (where ccc is the product ID),
can execute concurrently with the incremental update job
for another product in the same unit database.
o The CA MICS database remains available for reporting and
analysis during INCRccc job execution.
*************************************************************
* *
* Note: CA MICS is a highly configurable system *
* supporting up to 36 unit databases, each of which *
* can be configured and updated independently. *
* Incremental update is just one of the options you *
* can use to configure your CA MICS complex. *
* *
* All efforts should be made to employ CA MICS *
* configuration capabilities to minimize issues *
* prior to activating incremental update. For *
* example: *
* *
* o Splitting work to multiple units is an *
* effective way to enable parallel database *
* update processing *
* *
* o Adjusting account code definitions to ensure *
* adequate data granularity while minimizing *
* total database space and processing time *
* *
* o Tailoring the database to drop measurements *
* and metrics of lesser value to your *
* data center, thereby reducing database update *
* processing and resource consumption *
* *
* While incremental update is intended to reduce *
* DAILY job elapsed time, total resource usage of *
* the combined INCRccc and DAILY job steps can              *
* increase due to the additional processing *
* required to maintain the incremental update *
* "to-date" files and for roll-up to the unit *
* database. The increased total resource usage *
* will be more noticeable with small data volumes, *
* where processing code compile time is a greater *
* percentage of total processing cost. *
* *
*************************************************************
Note: When you activate incremental update (INCRUPDATE YES),
the following optional incremental update parameters are
enabled. These parameters have no effect if incremental
update is disabled (INCRUPDATE NO). For more details, see
the individual parameter descriptions later in this section.
o INCRDB PERM/TAPE/DYNAM
o INCRDETAIL data_set_allocation_parameters
o INCRDAYS data_set_allocation_parameters
o INCRCKPT data_set_allocation_parameters
o INCRSPLIT USE/IGNORE data_set_allocation_parameters
Incremental update processing reads and processes raw
measurement data to create and maintain DETAIL and DAYS level
"to-date" files for the current day.
o These incremental update database files are maintained on
unique z/OS data sets, independent of the standard CA MICS
database files, and independent of any other product's
incremental update database files. There is one data set
each for DETAIL and DAYS level "to-date" data and a single
incremental update checkpoint data set for this product in
this unit.
o The incremental update DETAIL and DAYS files can be
permanent DASD data sets, or they can be allocated
dynamically as needed and deleted after DAILY job
processing completes. Optionally, you can keep the
incremental update DETAIL and DAYS files on tape, with
the data being loaded onto temporary DASD space as
needed for incremental update or DAILY job processing.
See the INCRDB PERM/TAPE/DYNAM option for more
information.
After activating incremental update, you will use three
incremental update facility jobs found in prefix.MICS.CNTL
(where ccc is the product ID):
o cccIUALC
You execute this job to allocate and initialize the
incremental update checkpoint file, and optionally the
incremental update DETAIL and DAYS database files.
cccIUALC is generally executed just ONE time.
o cccIUGDG
You execute this job to add generation data group (GDG)
index definitions to your system catalog in support of
the INCRDB TAPE option. cccIUGDG is generally executed
just ONE time.
o INCRccc
This is the job you execute for each incremental update.
You will integrate this job into your database update
procedures for execution one or more times per day
to process portions of the total day's measurement data.
Note: The DAILY job is run once at the end of the day.
It will perform the final incremental update for the day's
data, and then roll-up the incremental DETAIL/DAYS files
to the database DETAIL and DAYS timespans and update the
week-to-date and month-to-date files.
INCRUPDATE Considerations
-------------------------
o Overhead
Incremental update is intended to reduce DAILY job
resource consumption and elapsed time by offloading a
major portion of database update processing to one or
more executions of the INCRccc job. In meeting this
objective, incremental update adds processing in the
INCRccc and DAILY jobs to accumulate data from each
incremental update execution into the composite "to-date"
DETAIL and DAYS incremental update files, and also adds
processing in the DAILY job to copy the incremental
update files to the unit database DETAIL and DAYS
timespans. The amount of this overhead and the savings in
the DAILY job are site-dependent, and will vary based on
input data volume and on the number of times INCRccc is
executed each day.
In addition, activating incremental update will cause
additional compile-based CPU time to be consumed in the
DAYnnn DAILY job step. The increase in compile time is
due to additional code included for each file structure in
support of the feature. This increase should be static
based on the scope of the CA MICS data integration product
in terms of files. This compile-time increase does not
imply an increase in elapsed or execution time.
Incremental update allows I/O-intensive processing (raw
data input, initial CA MICS transformation, and so on) to
be distributed outside of the DAILY job. I/O
processing is the largest contributor to elapsed time in
large volume applications. Thus, the expected overall
impact is a decrease in the actual runtime of the DAYnnn
job step.
o Increased "Prime Time" Workload
By offloading work from the DAILY job to one or more
INCRccc executions throughout the day, you are
potentially moving system workload and DASD work space
usage from the "off-hours" (when the DAILY job is
normally executed) to periods of the day when your
system resources are in highest demand. You should
schedule INCRccc executions carefully to avoid adverse
impact to batch or online workloads. For example, if your
site's "prime shift" is 8:00 AM to 5:00 PM, you might
choose to schedule incremental updates for 7:00 AM (just
before "prime shift") and 6:00 PM (just after "prime
shift"), with the DAILY job executing just after midnight.
o Increased DASD Usage
The DASD space required for the incremental update DETAIL
and DAYS database files is in addition to the DASD space
already reserved for the CA MICS database. By default,
the incremental update database files are permanently
allocated, making this DASD space unavailable for other
applications. In general, you can assume that the
incremental update database files will require space
equivalent to two cycles of this product's DETAIL and
DAYS timespan files.
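As a worked sizing example (the figures are illustrative
only; check the actual cycle sizes in your unit database),
suppose one cycle of this product's DETAIL timespan
occupies 400 cylinders and one cycle of the DAYS timespan
occupies 40 cylinders. You would then plan for
approximately
    2 x (400 + 40) = 880 cylinders
of additional DASD space for the incremental update
database files.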
Alternatively, the incremental update database files can
be allocated in the first incremental update of the day
and deleted by the DAILY job (see the INCRDB DYNAM option
later in this section). This approach reduces the amount
of time that the DASD space is dedicated to incremental
update, and lets the amount of DASD space consumed
increase through the day as you execute each incremental
update.
A third option is to store the incremental update
database files on tape (see the INCRDB TAPE option).
With this approach, the DASD space is required just for
the time that each incremental update or DAILY job step
is executing. Note that while this alternative reduces
the "permanent" DASD space requirement, the total amount
of DASD space required while the incremental update or
DAILY jobs are executing is unchanged. In addition, the
TAPE option adds processing to copy the incremental
update files to tape, and to reload the files from tape
to disk.
Note: The incremental update checkpoint file is always a
permanently allocated disk data set. This is a small data
set and should not be an issue.
o Operational Complexity
Incremental update expands your measurement data
management and job scheduling issues. You must ensure
that each incremental update and the DAILY job processes
your measurement data chronologically; that is, each job
must see data that is newer than the data processed by the
prior job. By incrementally updating the database, you
have more opportunities to miss a log file, or to process
a log out of order.
o Interval End Effects
Each incremental update processes a subset of the day's
measurement data, taking advantage of early availability
of some of the day's data, for example, when a
measurement log fills and switches to a new volume. This
can cause a problem if the measurement log split occurs
while the data source is logging records for the end of a
measurement interval, thus splitting the data for a
single measurement interval across two log files. When
an incremental update processes the first log file, the
checkpoint high end timestamp is set to indicate that
this split measurement interval has been processed.
Then, when the rest of the measurement interval's data is
encountered in a later update, it can be dropped as
duplicate data (because data for this measurement
interval end timestamp has already been processed).
Appropriate scheduling of log dumps and incremental
updates can avoid this problem. For example, if you plan
to run incremental updates at 7:00 AM and 6:00 PM, you
could force a log dump in the middle of the measurement
interval just prior to the scheduled incremental update
executions. This is an extension of the procedure you
may already be using for end-of-day measurement log
processing. The objective is to ensure that all records
for each monitor interval are processed in the same
incremental update.
o Dynamic Allocation
When you activate incremental update and specify TAPE or
DYNAM for the INCRDB parameter, dynamic allocation is
employed for the incremental update database files. If
your site restricts dynamic allocation of large, cataloged
data sets, you must use the INCRDETAIL and INCRDAYS
parameters to direct incremental update data set
allocation to a generic unit or storage class where
dynamic allocation is allowed.
o Data Set Names
The incremental update database files are allocated and
cataloged according to standard CA MICS unit database
data set name conventions. The DDNAME and default data
set names are (where ccc is the product ID):
o Incremental update checkpoint file,
//IUCKPT DD DSN=prefix.MICS.ccc.IUCKPT,.....
o Incremental update DETAIL
//IUDETAIL DD DSN=prefix.MICS.ccc.IUDETAIL,.....
o Incremental update DAYS
//IUDAYS DD DSN=prefix.MICS.ccc.IUDAYS,....
Since these data sets conform to the same data set name
conventions as your existing CA MICS data sets, there
should be few, if any, data-set-name-related allocation
issues. However, it is possible to override the data set
names if required. Contact Technical Support at
http://ca.com/support for assistance if you must change
data set names.
INCRDB
------
This statement is optional. The default is this:
INCRDB PERM
Note: INCRDB is ignored when you specify INCRUPDATE NO.
Specify this statement, or take the default, to keep the
incremental update database DETAIL and DAYS files on
permanently allocated DASD data sets:
INCRDB PERM
Execute the prefix.MICS.CNTL(cccIUALC) job to allocate the
incremental update database files.
*************************************************************
* *
* Note: The incremental update checkpoint file is always *
* a permanently allocated DASD data set. *
* *
*************************************************************
Specify this to offload the incremental update DETAIL and
DAYS files to tape between incremental update executions:
INCRDB TAPE #gdgs UNIT=name
With the TAPE option, the incremental update DETAIL and DAYS
DASD data sets are dynamically allocated at the beginning of
the incremental update job or DAILY job step, and then are
deleted after the job step completes.
o The first incremental update job of the day allocates
and initializes the incremental update database files.
At the end of the job, the DETAIL and DAYS files are
copied to a new (+1) generation of the incremental
update tape data sets. Then the DASD files are deleted.
o Subsequent incremental update jobs restore the DASD
incremental update database files from the current, (0)
generation, incremental update tape data sets before
processing the input measurement data. At the end of
the job, the DETAIL and DAYS files are copied to a new
(+1) generation of the incremental update tape data
sets. Then the DASD files are deleted.
o The DAILY job step also restores the DASD incremental
update database files from the (0) generation tape files
before processing the input data, but does NOT copy the
incremental update database files to tape. Thus, the
DAILY job actually creates a new, null (+1) generation.
o Use the #gdgs parameter to specify the maximum number of
incremental update tape generations. The minimum is 2
and the maximum is 99, with a default of 5.
Set the number of generations equal to or greater than
the number of incremental updates, including the DAILY
job you plan to execute each day. This facilitates
restart and recovery if you encounter problems requiring
you to reprocess portions of the daily measurement data.
o Use the optional UNIT=name parameter to specify a tape
unit name for the incremental update database output
tapes. The default is to use the same tape unit as the
input tapes.
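For example, a site that runs two incremental updates plus
the DAILY job each day might keep five generations and
direct the output tapes to a tape unit (the unit name CART
is illustrative only):
    INCRDB TAPE 5 UNIT=CART
The #gdgs value of 5 leaves headroom beyond the three
daily executions for restart and recovery.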
o A special index must be created in your system catalog for
each of the incremental update tape data set generation
data groups. The prefix.MICS.CNTL(cccIUGDG) job will
generate the statements to create the incremental update
GDG index definitions.
- Before each index is built, it is deleted. These
DLTX (or DELETE) statements cause an error
message if no entry exists. This is done so that you
can change the number of entries without having to
delete each of the index entries.
- DLTX and BLDG (or DELETE and DEFINE) fail if
there is a cataloged data set with the same index.
IDCAMS (or IEHPROGM) issues a message and gives
a return code of 8. This issue is not a problem for
non-GDG entries or if the GDG already has the
desired number of entries.
- If you want to change the number of entries kept in
a GDG with cataloged data sets, do the
following:
1. Uncatalog any existing entries in the GDG.
2. Delete the index with a DLTX (or DELETE).
3. Create the index with a BLDG (or DEFINE).
4. Catalog any entries that were uncataloged in step 1.
o The incremental update tape data set names are as follows,
where ccc is the product ID:
- Incremental update tape DETAIL file
tapeprefix.MICS.ccc.IUXTAPE.GnnnnV00
- Incremental update tape DAYS file
tapeprefix.MICS.ccc.IUDTAPE.GnnnnV00
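As an illustration only, the GDG index definitions
generated by the cccIUGDG job would take a form similar to
the following IDCAMS statements. The exact statements in
your generated job may differ; the LIMIT value reflects
your #gdgs specification:
    DELETE tapeprefix.MICS.ccc.IUXTAPE GDG FORCE
    DEFINE GDG (NAME(tapeprefix.MICS.ccc.IUXTAPE) -
                LIMIT(5) NOEMPTY SCRATCH)
    DELETE tapeprefix.MICS.ccc.IUDTAPE GDG FORCE
    DEFINE GDG (NAME(tapeprefix.MICS.ccc.IUDTAPE) -
                LIMIT(5) NOEMPTY SCRATCH)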
*************************************************************
* *
* Note: The INCRDETAIL and INCRDAYS parameters are *
* required when you specify INCRDB TAPE. *
* *
*************************************************************
Specify this parameter to dynamically allocate the
incremental update DETAIL and DAYS DASD data sets in the
first incremental update of the day, and then delete them
at the end of the DAILY job step:
INCRDB DYNAM
o With this option, no space is used for the incremental
update database files during the time between the end of
the DAILY job step and the beginning of the next day's
first incremental update.
o With this approach, you can set the data set allocation
parameters so that the incremental update DETAIL and DAYS
data sets start out with a minimum allocation (for
example, enough space for one incremental update) and
then grow through secondary allocations as more space is
required for subsequent incremental updates.
*************************************************************
* *
* Note: The INCRDETAIL and INCRDAYS parameters are *
* required when you specify INCRDB DYNAM. *
* *
*************************************************************
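For example, you might give each data set a small primary
allocation with larger secondary extents (the values shown
are illustrative only; size them for your own data
volumes):
    INCRDETAIL SPACE=(CYL,(10,50)) UNIT=SYSDA
    INCRDAYS   SPACE=(CYL,(2,10)) UNIT=SYSDA
With this specification, the first incremental update of
the day starts with the primary allocation, and secondary
extents are acquired as later updates add data.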
INCRDAYS
--------
This statement is required if you specify either of these:
INCRDB TAPE
INCRDB DYNAM
Otherwise, this statement is optional. There is no default.
Specify this to define data set allocation parameters for the
incremental update DAYS data set (IUDAYS):
INCRDAYS data_set_allocation_parameters
Note: INCRDAYS is ignored when you specify INCRUPDATE NO.
The incremental update DAYS data set (IUDAYS) contains the
current incremental update days-level database files, and the
DAYS "to-date" data for the current daily update cycle. You
should allocate DASD space equivalent to two cycles of this
product's DAYS timespan data.
If you specified INCRDB PERM (the default), your INCRDAYS
parameter specifications are used in generating the cccIUALC
job (where ccc is the product ID).
o You will execute the cccIUALC job to allocate and
initialize the incremental update database and checkpoint
files.
o Omit the INCRDAYS parameter if you prefer to specify
data set allocation parameters directly in the generated
prefix.MICS.CNTL(cccIUALC) job.
If you specified INCRDB TAPE or INCRDB DYNAM, your INCRDAYS
parameter specifications are used in incremental update DAYS
data set dynamic allocation during incremental update or
DAILY job step execution.
o The INCRDAYS parameter is required for the TAPE or DYNAM
option.
o Specify data set allocation parameters, separated by
blanks, according to SAS LIBNAME statement syntax. If
you need multiple lines, repeat the INCRDAYS keyword on
the continuation line.
o INCRDAYS accepts the engine/host options documented in the
SAS Companion for the z/OS Environment, including
STORCLAS, UNIT, SPACE, BLKSIZE, DATACLAS, MGMTCLAS, and
VOLSER.
Important! DO NOT SPECIFY THE DISP PARAMETER.
o You can override the INCRDAYS data set allocation
parameters at execution-time using the //PARMOVRD
facility. For more information about execution-time
override of dynamic data set allocation parameters, see
the PIOM, Section 2.3.6.
Example 1:
INCRDAYS STORCLAS=MICSTEMP SPACE=(xxxx,(pp,ss),,,ROUND)
where:
STORCLAS - specifies a storage class for a new data set.
The name can have up to eight characters.
SPACE - specifies how much disk space to provide for
a new data set being allocated, where:
xxxx is TRK, CYL, or blklen
pp is the primary allocation
ss is the secondary allocation
and ROUND specifies that the allocated space be
"rounded" to a cylinder boundary when the unit
specified was a block length. ROUND is ignored
with the TRK or CYL options.
Example 2 (multiple lines):
INCRDAYS STORCLAS=MICSTEMP UNIT=SYSDA
INCRDAYS SPACE=(xxxx,(pp,ss),,,ROUND)
where:
STORCLAS - specifies a storage class for a new data set.
The name can have up to eight characters.
UNIT - specifies the generic unit for a new data set.
The name can have up to eight characters.
SPACE - specifies how much disk space to provide for
a new data set being allocated.
INCRCKPT
--------
This statement is optional. Specify this to override default
data set allocation parameters for the incremental update
checkpoint data set:
INCRCKPT data_set_allocation_parameters
Note: INCRCKPT is ignored when you specify INCRUPDATE NO.
The incremental update checkpoint data set tracks incremental
update job status and the data that has been processed during
the current daily update cycle. The incremental update
checkpoint is used to detect and block the input of duplicate
data during incremental update processing. This data set
will be exactly the same size as prefix.MICS.CHECKPT.DATA
(the unit checkpoint data set), usually 20K to 200K depending
on the prefix.MICS.PARMS(SITE) CKPTCNT parameter (100-9999).
Your INCRCKPT parameter specifications are used in generating
the cccIUALC job (where ccc is the product ID).
o You will execute the cccIUALC job to allocate and
initialize the incremental update checkpoint file. If you
specified INCRDB PERM, then the cccIUALC job will also
allocate the incremental update DETAIL and DAYS database
files.
o By default the incremental update checkpoint data set is
allocated as SPACE=(TRK,(5,2)) using the value you
specified for the prefix.MICS.PARMS(JCLDEF) DASDUNIT
parameter.
o Omit the INCRCKPT parameter if you prefer to override
data set allocation parameters directly in the generated
prefix.MICS.CNTL(cccIUALC) job.
Specify data set allocation parameters, separated by blanks,
according to SAS LIBNAME statement syntax. If you need
multiple lines, repeat the INCRCKPT keyword on the
continuation line.
INCRCKPT accepts the engine/host options documented in the
SAS Companion for the z/OS Environment, including STORCLAS,
UNIT, SPACE, BLKSIZE, DATACLAS, MGMTCLAS, and VOLSER.
Important! DO NOT SPECIFY THE DISP PARAMETER.
Example 1:
INCRCKPT STORCLAS=MICSTEMP SPACE=(xxxx,(pp,ss),,,ROUND)
where:
STORCLAS - specifies a storage class for a new data set.
The name can have up to eight characters.
SPACE - specifies how much disk space to provide for
a new data set being allocated, where:
xxxx is TRK, CYL, or blklen
pp is the primary allocation
ss is the secondary allocation
and ROUND specifies that the allocated space be
"rounded" to a cylinder boundary when the unit
specified was a block length. ROUND is ignored
with the TRK or CYL options.
Example 2 (multiple lines):
INCRCKPT STORCLAS=MICSTEMP UNIT=SYSDA
INCRCKPT SPACE=(xxxx,(pp,ss),,,ROUND)
where:
STORCLAS - specifies a storage class for a new data set.
The name can have up to eight characters.
UNIT - specifies the generic unit for a new data set.
The name can have up to eight characters.
SPACE - specifies how much disk space to provide for
a new data set being allocated.
INCRSPLIT
---------
This statement is optional and defaults to this:
INCRSPLIT IGNORE
Specify the following if you want the incremental update
job for this product to get its input measurement data from
the output of the SPLITSMF job:
INCRSPLIT USE data_set_allocation_parameters
The optional data_set_allocation_parameters are used by the
SPLITSMF job when creating the measurement data file for
this product.
Note: INCRSPLIT is ignored when you specify INCRUPDATE NO.
Use this option when multiple products in a single unit
database are enabled for incremental update. The
SPLITSMF job performs the same function for incremental
update jobs as the DAILY job DAYSMF step performs for the
DAYnnn database update steps.
o The SPLITSMF job dynamically allocates, catalogs, and
populates prefix.MICS.ccc.IUSPLTDS data sets for each
product in the unit database for which you specified both
the INCRUPDATE YES and INCRSPLIT USE parameters. These
data sets are then deleted after processing by the
appropriate INCRccc job.
o Specify data set allocation parameters, separated by
blanks, according to SAS LIBNAME statement syntax. If you
need multiple lines, repeat the INCRSPLIT keyword on each
continuation line.
o INCRSPLIT accepts the engine/host options documented in
the SAS Companion for the z/OS Environment, including
STORCLAS, UNIT, SPACE, BLKSIZE, DATACLAS, MGMTCLAS, and
VOLSER.
Important! DO NOT SPECIFY THE DISP PARAMETER.
Specify the following or accept the default if you want the
incremental update jobs for this product to get their input
measurement data from the data sets specified in the INPUTccc
(or INPUTSMF) member of prefix.MICS.PARMS:
INCRSPLIT IGNORE
When you specify INCRSPLIT IGNORE, this product will NOT
participate in SPLITSMF job processing.
Example 1:
INCRSPLIT USE STORCLAS=MICSTEMP SPACE=(xxxx,(pp,ss),,,ROUND)
where:
STORCLAS - specifies a storage class for a new data set.
The name can have up to eight characters.
SPACE - specifies how much disk space to provide for
a new data set being allocated, where:
xxxx is TRK, CYL, or blklen
pp is the primary allocation
ss is the secondary allocation
and ROUND specifies that the allocated space be
"rounded" to a cylinder boundary when the unit
specified was a block length. ROUND is ignored
with the TRK or CYL options.
Example 2 (multiple lines):
INCRSPLIT USE STORCLAS=MICSTEMP UNIT=SYSDA
INCRSPLIT SPACE=(xxxx,(pp,ss),,,ROUND)
where:
STORCLAS - specifies a storage class for a new data set.
The name can have up to eight characters.
UNIT - specifies the generic unit for a new data set.
The name can have up to eight characters.
SPACE - specifies how much disk space to provide for
a new data set being allocated.
DYNAMWAIT
---------
This statement is optional. Specify the following to
override the default amount of time, in minutes, that the
DAILY and/or INCRccc job will wait for an unavailable data
set:
DYNAMWAIT minutes
Note: This optional parameter is not normally specified.
The system default is adequate for most data centers.
Internal Step Restart and Incremental Update facilities use
z/OS dynamic allocation services to create new data sets and
to access existing data sets. Data set naming conventions
and internal program structure are designed to minimize data
set contention. However, if data set allocation does fail
because another batch job or online user is already using a
data set, DAILY and/or INCRccc processing will wait 15
seconds and then try the allocation again. By default, the
allocation will be attempted every 15 seconds for up to 15
minutes. After 15 minutes, the DAILY or INCRccc job will
abort.
If data set contention in your data center does cause
frequent DAILY or INCRccc job failures, and you are unable to
resolve the contention through scheduling changes, you may
want to use the DYNAMWAIT parameter to increase the maximum
number of minutes the DAILY and/or INCRccc jobs will wait for
the data set to become available.
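For example, to retry the allocation every 15 seconds for
up to 30 minutes before failing the job:
    DYNAMWAIT 30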
On the other hand, if your data center standards require
that the DAILY and/or INCRccc jobs fail immediately if
required data sets are unavailable, specify the following:
DYNAMWAIT 0
Note: You can override the DYNAMWAIT parameter at
execution-time using the //PARMOVRD facility. For
more information about execution-time override of
dynamic data set allocation parameters, see the PIOM,
section 2.3.6.
+--------------------------------------------------------------------------+
| INSTALLATION PREPARATION WORKSHEET: VCA Options Definition               |
|                                                                          |
| PARMS Library Member is VCAOPS                                           |
| Reference: Section 7.3.1                                                 |
+--------------------------------------------------------------------------+
|                                                                          |
| VCA PROCESSING OPTIONS:                                                  |
|                                                                          |
| ACCOUNTING ________ (DETAIL or DAYS)                                     |
|                                                                          |
| BCSREQUIRED _______ (YES or NO)                                          |
|                                                                          |
| EXTENTDETAIL ______ (optional: DAA, _VS, or blank for both)              |
|                                                                          |
|              ______ (YES or NO)                                          |
|                                                                          |
| OPTIONS __________ (sysid)                                               |
|                                                                          |
|         __________ (default duration)                                    |
|                                                                          |
| VCAFMT              (no operand)                                         |
|                                                                          |
| WORK __________ (optional: n data_set_allocation_parameters)             |
|                                                                          |
| RESTART __________ (optional: YES or NO)                                 |
|                                                                          |
| INCRUPDATE ________ (optional: YES or NO)                                |
| INCRDB ________ (optional: PERM, TAPE, or DYNAM)                         |
| INCRDETAIL ________ (optional: data_set_allocation_parameters)           |
|                                                                          |
+--------------------------------------------------------------------------+
| ....5...10...15...20...25...30...35...40...45...50...55...60...65...70.. |
+--------------------------------------------------------------------------+
Figure 7-6. VCA Options Definition Worksheet
Copyright © 2014 CA.
All rights reserved.