Appendix D. CASE STUDIES


CASE STUDY #1

The ABC Soup Company has used Generation Data Groups (GDGs)
on tape for many years, but has only recently allowed GDGs to
be created on DASD.  The Storage Administrator has recently
been assigned to analyze the current DASD-resident GDGs, to
write some basic standards for these types of data sets, to
identify any current problems, and to notify the users
responsible for those problems.

The reports produced by the GDG Management and Retention
facility of CA MICS StorageMate can help to perform this
study.  The Storage Administrator invokes this facility by
using the following panel selection sequence:

 o Display the CA MICS StorageMate primary option panel and
   select the 'Storage Administration' option

 o Select the 'Data Set Standards' option

 o Select the 'GDG Management and Retention' facility

At this point the execution options panel for this facility
is displayed.  The Storage Administrator completes the
options on the panel so that it appears as shown in Figure
D-1.

------------------------  GDG Management and Retention  -----------------------
Command ===>

CA MICS Input Definition:
   Unit DBID(s) ===> V           Cycle ===> 01 Sources ===> D

Report Options:
   Produce prefix report          ===> N       (Y/N)
   Produce GDG base report        ===> Y       (Y/N)
   Produce detail report          ===> N       (Y/N)
   Simulate generations kept      ===> ___     (Numeric, 1-999)
   For prefix/GDG base reports:
     Sort reports by              ===> DSNAME  (DSNAME/SAVINGS)


DOWN - Filter   ENTER - Validate END - Run
-------------------------------------------------------------------------------
 Figure D-1.  Execution Options Panel

The input data for the report comes from the 01 data cycle in
the 'V' Unit of the CA MICS database.  The 'D' in the Sources
area indicates that only DASD data should be used.  The ISPF
DOWN key could be used to display the data filtering panel,
but in this case no special data filtering is done.  Only one
of the three possible reports (the GDG base report) is
produced, and the data on that report is sorted by GDG data
set name.

The Storage Administrator completes the panel as shown above,
and produces the report.  A copy of the report output appears
in Figure D-2.

GDG MANAGEMENT AND RETENTION (STGEFE)      CA MICS StorageMate - GDG LEVEL REPORT
--------------------------------- System Identifier=SYSA ---------------------------------

                               #  Min  Max Dupe # of # of   Min   Max   Avg Total +-Simulated Space Savings-+
Base GDG Name                 of  GEN  GEN GENS Gaps Vols    MB    MB    MB    MB  Data Sets       MB
                            GENS    #    #                Alloc Alloc Alloc Alloc      Moved    Saved

BACKUP.DAILY.LIST              5    4    8    0    0    1   0.0   0.0   0.0     0          0      0.0
BACKUP.DATASETS.LIST          15  994 1008    0    0    1   0.1   0.1   0.1     2          0      0.0
BACKUP.DELETED.MVS340.LIST    20  181  200    0    0    1   0.0   0.2   0.2     3          0      0.0
BUDGT.B003.OUTPUT              7   16   22    0    0    4   0.1   0.2   0.2     1          0      0.0
BUDGT.TEST.GDG                 4    1    4    0    0    3   0.1   2.2   0.6     2          0      0.0
COS.PROD.DB5.BACKUP.SDLDS      2  516  516    1    0    2   0.1   0.1   0.1     0          0      0.0
DEVPROD.REPORTS                2    1    2    0    0    2   0.1   0.2   0.1     0          0      0.0
DEVTEST.SAS.DAY.DATA           1   18   18    0    0    1   5.9   5.9   5.9     6          0      0.0
DEVTEST.SAS.MONTH.DATA         1    1    1    0    0    1   2.5   2.5   2.5     3          0      0.0
DEVTEST.SAS.WEEK.DATA          1    1    1    0    0    1   2.5   2.5   2.5     3          0      0.0
DUMMY10.TEST.GDG1              1    1    1    0    0    1   0.0   0.0   0.0     0          0      0.0
DZZ110.SMFDAILY.DATA           1    1    1    0    0    1   6.4   6.4   6.4     6          0      0.0
DZZ199.SMFDAILY.DATA          21   85  104    1    0    2   0.7  32.3   7.2   151          0      0.0
XXCKUP.DATASETS.LIST           1  918  918    0    0    1   0.5   0.5   0.5     0          0      0.0
                           -----         ---- ----                          -----    -------  -------
SYSID                         82            2    0                            178          0      0.0


 Figure D-2.  Sample Report Output

This report shows that 82 generation data sets currently
reside on DASD, belonging to slightly more than a dozen GDGs.
Several groups have only one generation, while the largest
group has 21.  The report also identifies several potential
problems that should be investigated:

o The BACKUP.DELETED.MVS340.LIST GDG consists of 20 different
  generations, but the '# OF VOLS' column indicates that all
  of these data sets reside on one volume.  This could cause
  a problem if that volume were lost.  The JCL for that GDG
  should be changed to spread the generations across several
  volumes for better disaster recovery.  All three of the
  GDGs with the BACKUP prefix have this same problem.

o The COS.PROD.DB5.BACKUP.SDLDS GDG contains two data sets,
  but the minimum and maximum generation numbers found were
  both 516, and the 'DUPE GENS' column contains a value of 1.
  This indicates that two data sets exist with the same
  generation number, 516.  Because duplicate data sets
  cannot both be cataloged, one of these is probably in error
  and could be deleted.  The Storage Administrator could run
  this report again, this time requesting the detail option
  and selecting only the COS prefix.  This would give more
  information about both data sets, including the volumes on
  which they reside.

o The DZZ199.SMFDAILY.DATA GDG also has one duplicate
  generation, and thus has the same problem described
  previously.  It could also be investigated and corrected by
  running the report again with the detail option and
  requesting only the DZZ199 prefix.

o Several of the GDGs with DEVTEST and DZZ110 prefixes
  have only one generation, and that generation
  number is 1.  It is possible these data sets were
  established but then never used, or were created as a test
  but then never deleted.

o The DZZ199.SMFDAILY.DATA group contains 21 generations with
  an average size of 7.2 megabytes per generation, and at
  least one generation that is using 32.3 megabytes.  Perhaps
  space could be saved by reducing the number of generations
  to be kept for this group.

o The entry XXCKUP.DATASETS.LIST is probably not a real GDG
  at all, but a renamed version of generation 918 of the
  BACKUP.DATASETS.LIST group.  Because the oldest generation
  in that group is generation 994, this data set is at least
  76 generations old, and can probably be deleted.
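The duplicate-generation check described for COS.PROD.DB5.BACKUP.SDLDS and
DZZ199.SMFDAILY.DATA can be expressed in a few lines.  The following is a
minimal sketch, not part of CA MICS StorageMate: the function name and input
layout are hypothetical, and it assumes the absolute generation numbers for
one GDG base have already been extracted.

```python
from collections import Counter

def duplicate_generations(gen_numbers):
    """Return the generation numbers that occur more than once in one GDG.

    gen_numbers is a plain list of absolute generation numbers for a
    single GDG base (a hypothetical input; the actual report derives its
    'DUPE GENS' column from the CA MICS database).
    """
    return sorted(g for g, n in Counter(gen_numbers).items() if n > 1)

# COS.PROD.DB5.BACKUP.SDLDS in Figure D-2: two data sets, both generation 516
print(duplicate_generations([516, 516]))   # [516]
```

Any generation number returned by such a check marks a pair of data sets that
cannot both be cataloged, which is why the report flags them for review.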

Using this report, the Storage Administrator has detected
several problems that should be corrected, and has a feel for
the standards that should be put in place for DASD-resident
Generation Data Groups.  One of these standards could be the
specification that DASD-resident GDGs should have a maximum
of only seven kept generations.  A glance at the report shows
that a new standard such as this would require changes to
only three existing GDGs.

As part of the justification for a new standard, the Storage
Administrator wants to calculate the space savings if the
standard were implemented.  This can be done by executing the
GDG Management and Retention facility again with slightly
different options.  The facility is accessed again in the
same manner, but this time the execution options panel will
be completed as in Figure D-3.

------------------------  GDG Management and Retention  -----------------------
Command ===>

CA MICS Input Definition:
   Unit DBID(s) ===> V           Cycle ===> 01 Sources ===> D

Report Options:
   Produce prefix report          ===> Y       (Y/N)
   Produce GDG base report        ===> N       (Y/N)
   Produce detail report          ===> N       (Y/N)
   Simulate generations kept      ===> 7__     (Numeric, 1-999)
   For prefix/GDG base reports:
     Sort reports by              ===> SAVINGS (DSNAME/SAVINGS)


DOWN - Filter   ENTER - Validate END - Run
-------------------------------------------------------------------------------
 Figure D-3. Execution Options Panel

For this execution, the Storage Administrator has requested
that the Prefix report be produced, rather than the GDG base
report.  This will provide the same information, but will
summarize at the data set prefix level, rather than at the
GDG level.  A simulation value of seven was also specified,
so that the report will simulate the savings of keeping a
maximum of seven generations for each GDG.  Finally, the
report should be sorted so that prefixes showing the greatest
amount of simulated savings will appear first.
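The arithmetic behind the simulation can be sketched as follows.  This is an
illustration only, not the facility's actual algorithm: it assumes the
per-generation sizes are known, ordered oldest first, and that the oldest
generations beyond the retention limit are the ones removed.

```python
def simulate_savings(gdgs, keep=7):
    """Estimate data sets moved and MB saved if only `keep` generations
    are retained per GDG.

    gdgs maps a GDG base name to a list of per-generation sizes in MB,
    ordered oldest first (hypothetical input data).
    """
    moved = 0
    saved = 0.0
    for sizes in gdgs.values():
        excess = max(len(sizes) - keep, 0)  # generations beyond the limit
        moved += excess
        saved += sum(sizes[:excess])        # the oldest ones are removed
    return moved, round(saved, 1)

# Example with made-up sizes: a 10-generation group loses its 3 oldest
# generations; a 5-generation group is untouched.
print(simulate_savings({"A.B": [1.0] * 10, "C.D": [2.0] * 5}))  # (3, 3.0)
```

This mirrors why DZZ199, with 21 generations averaging 7.2 MB, dominates the
simulated savings on the prefix report.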

The report produced by the previous panel is shown in Figure
D-4.

GDG MANAGEMENT AND RETENTION (STGEFE)        CA MICS StorageMate - PREFIX REPORT
--------------------------------- System Identifier=SYSA ---------------------------------

            #  Min  Max  Avg  GDGs  GDGs  Min  Max  Avg    Max    Avg  Total +- Simulated Space Savings -+
Prefix     Of    #    #    #  with  with    #    #    #     MB     MB     MB   Data Sets         MB
         GDGs  Gen  Gen  Gen  Dups  Gaps  Vol  Vol  Vol  Alloc  Alloc  Alloc       Moved      Saved

DZZ199      1   21   21   21     1     0    2    2    2  151.2    7.2  151.2          14       81.9
BACKUP      3    5   20   13     0     0    1    1    1    3.1    0.1    5.3          21        3.0
BUDGT       2    4    7    6     0     0    3    4    4    2.4    0.3    3.5           0        0.0
COS         1    2    2    2     1     0    2    2    2    0.2    0.1    0.2           0        0.0
DEVPROD     1    2    2    2     0     0    2    2    2    0.2    0.1    0.2           0        0.0
DEVTEST     3    1    1    1     0     0    1    1    1    5.9    3.7   11.0           0        0.0
DUMMY10     1    1    1    1     0     0    1    1    1    0.0    0.0    0.0           0        0.0
DZZ110      1    1    1    1     0     0    1    1    1    6.4    6.4    6.4           0        0.0
XXCKUP      1    1    1    1     0     0    1    1    1    0.5    0.5    0.5           0        0.0
        -----                -----  ----                               -----      ------    -------
           14                    2     0                               178.4          35       84.9


 Figure D-4.  Sample Report Output

This report shows that only two prefixes would be affected by
the proposed standard of seven kept generations -- DZZ199 and
BACKUP.  Implementation of this standard would cause 35 data
sets to be removed, resulting in space savings of 84.9
megabytes.  The Storage Administrator can now
use these values to justify the new standard, and can contact
the owners of the two identified prefixes to begin
implementation.

CASE STUDY #2

The Maple Manufacturing Company implemented DFSMS about six
months ago.  At that time, it defined separate Storage Groups
for TSO data sets and Batch data sets, as those are their two
most important applications.  The Storage Administrator wants
to monitor these groups on a regular basis, to ensure that
they deliver the desired performance and availability.  This
monitoring function can easily be done using the Volume Group
Configuration facility of CA MICS StorageMate.  This facility
can be invoked by using the following panel selection
sequence:

o Display the CA MICS StorageMate primary option panel and
  select the 'Storage Administration' option

o Select the 'Volume Grouping' option

o Select the 'Volume Group Configuration' facility

At this point the execution options panel for this facility
is displayed.  The Storage Administrator completes the
options on the panel so it appears as shown in Figure D-5.

-------------------------  Volume Group Configuration  ------------------------
Command ===>

CA MICS Input Definition:
   Unit DBID(s) ===> P           Cycles ===> 01 - 01

Report Options:
   Produce reports    ===> DETAIL   (SUMMARY/DETAIL/BOTH)
   Volume Group Table ===> ________ (Input member name)


DOWN - Filter   ENTER - Validate END - Run
-------------------------------------------------------------------------------

===============================================================================

-------------------------  StorageMate Data Filtering  ------------------------
Command ===>

Element Oper  Value(s)
DSNAME    EQ   _______________________________________________________________
GROUP     EQ   TSO  BATCH
VOLSER    EQ   _______________________________________________________________


UP - Report     ENTER - Validate
-------------------------------------------------------------------------------

 Figure D-5.  Execution Options Panel

The input data for the report comes from the 01 data cycle in
the 'P' Unit of the CA MICS database.  Data is filtered so
that only the Storage Groups named TSO and BATCH will appear
on the report.  The Detailed version of the output report has
also been requested.  The example shown above actually
represents portions of two different panels.  The DOWN key is
used to display the filtering panel, and then the UP key is
used to return to the main report panel.

The Storage Administrator completes the panels as shown
above, and produces the report.  A copy of the report output
appears in Figure D-6.

VOLUME GROUP CONFIGURATION (STGEHA)           CA MICS StorageMate - DETAIL
------------------- System Identifier=SYSB / Volume Group=BATCH -------------------

Device  C.U. Volume Device  Frag. +--- Data Sets ---+ +- Tracks Allocated -+ +--- Free ---+ +-- Average --+
Address Path Serial  Type   Index Total  Non-  VSAM   Total    Non-   VSAM  Tracks  Per-    I/O  Response
                                         VSAM                  VSAM                 cent   Rate      Time
  140   0001 STG140  3380   0.314    60    59     1    8069    8066      3    5061   38%   4.43      24.4
  141   0001 STG141  3380   0.052   117   105    12   11148    9853   1295    2096   16%   4.47      24.8
  142   0001 STG142  3380   0.099     6     1     5    9318      15   9303    3896   29%   2.82      32.0
  143   0001 STG143  3380   0.948   351   348     3   12876   12710    166     338    3%   0.73      22.4
  144   0001 STG144  3380   0.375   134   134     0   11541   10142   1399    1703   13%   0.00      26.6
  145   0001 STG145  3380   0.312     2     2     0   13245   13245      0      24    0%   4.01      24.0
  146   0001 STG146  3380   0.463   352   345     7   12963   12823    140     251    2%   0.75      20.5
  147   0001 STG147  3380   0.805   248   245     3   12679   12358    321     535    4%   0.70      21.3
  148   0001 STG148  3380   0.338   139   130     9   12933   12718    215     311    2%   0.23      22.1
  149   0001 STG149  3380   0.399    64    63     1   12286   12283      3     958    7%   1.19      21.3
  14A   0001 STG14A  3380   0.833   229   218    11   13121   12501    620      30    0%   0.74      19.9
  14B   0001 STG14B  3380   0.793   202   201     1   12850   12847      3     364    3%   0.51      22.9
  14C   0001 STG14C  3380   0.321   207   200     7   13093    4748   8345     121    1%   1.34      23.2
  14D   0001 STG14D  3380   0.689   198   197     1   12750   12735     15     464    3%   0.59      20.5
  14E   0001 STG14E  3380   0.927   236   234     2   13160   13123     37      54    0%   0.48      18.2
  14F   0001 STG14F  3380   0.803   345   339     6   12456   12327    129     758    6%   0.37      20.6
 ----                      ------ ----- -----  ----  ------  ------  -----   -----  ----  -----    ------
  016 Volumes               0.529  2890  2821    69  194488  172494  21994   16964    8%  23.38      24.3

-------------------- System Identifier=SYSB / Volume Group=TSO --------------------

Device  C.U. Volume Device  Frag. +--- Data Sets ---+ +- Tracks Allocated -+ +--- Free ---+ +-- Average --+
Address Path Serial  Type   Index Total  Non-  VSAM   Total    Non-   VSAM  Tracks  Per-    I/O  Response
                                         VSAM                  VSAM                 cent   Rate      Time
  240   0002 STG240  3380   0.337   136   131     5   12143   10744   1399    1101    8%   1.60      24.7
  241   0002 STG241  3380   0.652   159   149    10   12521   12034    487     693    5%   0.71      15.7
  242   0002 STG242  3380   0.131     6     1     5    7398      15   7383    5816   44%   2.52      41.1
  243   0002 STG243  3380   0.461   224   207    17   12965   12162    803     249    2%   0.86      22.2
  244   0002 STG244  3380   0.502   576   541    35   13111   12147    964     103    1%   0.20      33.9
  245   0002 STG245  3380   0.382   257   252     5   13055   12658    397     159    1%   0.57      23.9
  246   0002 STG246  3380   0.665   213   212     1   12291   12276     15     923    7%   0.26      32.1
  247   0002 STG247  3380   0.259   217   212     5   12678   12511    167     536    4%   0.44      21.3
  248   0002 STG248  3380   0.521    39    38     1   12745   12742      3     469    4%   0.16      30.7
  249   0002 STG249  3380   0.443   227   226     1   12908   12893     15     306    2%   0.29      21.4
  24A   0002 STG24A  3380   0.324   120   117     3   12943   12895     48     301    2%   0.50      27.9
  24B   0002 STG24B  3380   0.590   234   221    13   13051   12250    801     154    1%   0.77      25.6
  24C   0002 STG24C  3380   0.162   216   185    31   12844   11207   1637     370    3%   1.51      23.5
  24D   0002 STG24D  3380   0.674   226   225     1   12283   12268     15     931    7%   0.38      29.7
  24E   0002 STG24E  3380   0.346   146   143     3   11713   11455    258    1531   12%   0.00      31.9
  24F   0002 STG24F  3380   0.044     9     1     8    6104      15   6089    7140   54%   7.65      15.5
 ----                      ------ ----- -----  ----  ------  ------  -----   -----  ----  -----    ------
  016 Volumes               0.406  3005  2861   144  190753  170272  20481   20782   10%  18.42      22.9


 Figure D-6.  Sample Report Output

The results from the report indicate that the two groups are
similar, in that each contains 16 volumes and has similar
fragmentation, usage, free space, and activity values.

Based on projections of future activity, the Storage
Administrator expects a greater demand for space from Batch
users over the next few months, and a corresponding decrease
in the space needed by TSO users.  Because of this, the
Storage Administrator would like to shift at least two
volumes from the TSO group to the batch group.  The report
shows that volumes STG242 and STG24F would be good
candidates, as they contain a total of only 15 TSO data sets,
and have plenty of free space.
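The candidate search applied here, few resident data sets plus a large free
percentage, is easy to express in code.  The following sketch uses
illustrative thresholds that are not part of the product; the tuples are
transcribed from three TSO rows of Figure D-6.

```python
def donor_candidates(volumes, max_data_sets=20, min_free_pct=40):
    """Select volumes that would be cheap to move out of a group.

    volumes is a list of (volser, total_data_sets, free_percent) tuples
    taken from a Volume Group Configuration detail report; the threshold
    values are illustrative assumptions.
    """
    return [volser for volser, dsns, free in volumes
            if dsns <= max_data_sets and free >= min_free_pct]

tso_volumes = [("STG240", 136, 8), ("STG242", 6, 44), ("STG24F", 9, 54)]
print(donor_candidates(tso_volumes))   # ['STG242', 'STG24F']
```

The two volumes this picks out are exactly the ones the Storage Administrator
identified by eye: moving them displaces the fewest data sets.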

The Storage Administrator knows that the Volume Group
Configuration facility can also be used to simulate alternate
groupings and that performing a simulation could be valuable
before making the actual volume group changes.  Before
running the report again, the Administrator would first
create a Volume Group Table that would contain SMS volumes
(See Chapter 3.4), and then would modify it so that volumes
STG242 and STG24F would be assigned to group BATCH, rather
than to group TSO.  The Administrator would then invoke the
Volume Group Configuration facility again, using the options
shown in Figure D-7.

------------------------  Volume Group Configuration  ------------------------
Command ===>

CA MICS Input Definition:
   Unit DBID(s) ===> P           Cycles ===> 01 - 01

Report Options:
   Produce reports    ===> SUMMARY  (SUMMARY/DETAIL/BOTH)
   Volume Group Table ===> VGTEST01 (Input member name)


DOWN - Filter   ENTER - Validate END - Run
-------------------------------------------------------------------------------

===============================================================================

-------------------------  StorageMate Data Filtering  ------------------------
Command ===>

Element Oper  Value(s)
DSNAME    EQ   _______________________________________________________________
GROUP     EQ   TSO  BATCH
VOLSER    EQ   _______________________________________________________________


UP - Report     ENTER - Validate
-------------------------------------------------------------------------------

 Figure D-7.  Volume Group Configuration Facility

For this execution, a Volume Group Table named VGTEST01 has
been specified.  When this table was created, SMS volumes
were included in it, and an option was activated that would
cause it to be searched for all volumes.  This will cause the
facility to use the provided table for volume assignments,
and ignore the SMS Storage Group assignment.  Also, a Summary
report was requested this time, instead of the previously
shown Detail report.

The report produced by these options is shown in Figure D-8.

VOLUME GROUP CONFIGURATION (STGEHA)          CA MICS StorageMate - SUMMARY
----------------------------- System Identifier=SYSB -----------------------------

Volume   Number  Frag. +---- Data Sets ----+ +-- Tracks Allocated --+ +---- Free ----+ +--- Average ---+
Group        of  Index  Total   Non-  VSAM    Total     Non-   VSAM   Tracks   Per-     I/O     Resp.
Name    Volumes                 VSAM                    VSAM                   cent     Rate     Time
BATCH        18  0.480   2905   2823    82   207990   172524  35466    29920    13%    33.55     23.6
TSO          14  0.451   2990   2859   131   177251   170242   7009     7826     4%     8.24     24.2
        -------  -----  -----  -----  ----   ------   ------  -----    -----   ----    -----    -----
             32  0.468   5895   5682   213   385241   342766  42475    37746     9%    41.79     23.7


 Figure D-8.  Sample Report Output

As expected, the number of volumes in each group is no longer
16; two volumes have shifted from the TSO group to the BATCH
group.  Note that the percentage of the BATCH group that is
unallocated jumped from 8% to 13%, while the free percentage
of the TSO group dropped from 10% to 4%.  One drawback of
making this change, however, would be to badly skew the
average I/O rate to each group, as the BATCH group would
receive 33.55 I/Os per second, while the TSO group receives
only 8.24.  By making slight changes to the Volume Group
Table, the Storage Administrator could run the report several
times, using different combinations of volumes, until a
better combination was found.
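The shifts in free space and I/O rate follow directly from moving the two
volumes' figures between the group totals.  The quick check below uses only
numbers from Figure D-6; the 8.25 it produces for TSO versus the 8.24 shown
in Figure D-8 is a rounding difference in the source report.

```python
# (allocated tracks, free tracks, I/O rate) for STG242 and STG24F,
# taken from the TSO section of Figure D-6
moved = [(7398, 5816, 2.52), (6104, 7140, 7.65)]

batch = [194488, 16964, 23.38]   # BATCH group totals before the change
tso   = [190753, 20782, 18.42]   # TSO group totals before the change

for alloc, free, io in moved:    # reassign each volume's numbers
    batch[0] += alloc; batch[1] += free; batch[2] += io
    tso[0]   -= alloc; tso[1]   -= free; tso[2]   -= io

free_pct = lambda alloc, free: round(100.0 * free / (alloc + free))
print(free_pct(batch[0], batch[1]), round(batch[2], 2))   # 13 33.55
print(free_pct(tso[0], tso[1]), round(tso[2], 2))         # 4 8.25
```

The recomputed values match the simulated summary report, which is the point
of the Volume Group Table simulation: no data actually moves.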

While the I/O rate after this change would be somewhat
skewed, the Storage Administrator is satisfied with the other
results, and believes that an I/O rate of 33.55 would not
have too adverse an effect on the Batch workload.  The
Administrator decides to implement this change and to run the
Volume Group Configuration report on a regular basis to
ensure the expected results are achieved.

CASE STUDY #3

XYZ Phone Sales is a small company that just purchased its
first cache controller.  The main purpose for the new
controller is to provide optimum service to their online
order entry application.  All of the critical data sets
related to this application have been moved to volumes under
control of this new controller.

The most critical time for this application is Friday
morning from 9 a.m. until noon.  That is the time their area
sales force calls in their orders for the week, and fast
response times are a necessity.  This critical time window
has been defined to their CA MICS system as zone 1.

Now that the new controller is fully implemented, the storage
administrator wishes to monitor its activity.  This can be
done using the ASTEX / Top Resource Consumers report within
CA MICS StorageMate.  The administrator invokes this facility
with the following procedure:

o Display the CA MICS StorageMate primary option panel and
  select the 'Performance Management' option

o Select the 'ASTEX / Top Resource Consumers' option

At this point the execution options panel for this facility
is displayed.  The Storage Administrator completes the
options on the panel so it appears as shown in Figure D-9.

-----------------------  ASTEX / Top Resource Consumers  ----------------------
Command ===>

CA MICS Input Definition:
   Unit DBID(s) ===> A           Cycles ===> 01 - 03

Report Options:
   Produce reports          ===> CHART    (CHART/LIST/BOTH or DETAIL)
     Color graphics output  ===> NO       (YES/NO/MODIFY)
   Summarize by             ===> A1       (An-n/Dn-n/TABLE)
     Data Set Group Table   ===> ________ (If TABLE specified)
   Summary level            ===> ZONE     (DATE/HOUR/ZONE)


DOWN - Select   ENTER - Validate END - Run
-------------------------------------------------------------------------------

===============================================================================

-------------------------  StorageMate Data Filtering  ------------------------
Command ===>

Element Oper  Value(s)
ENDTS     EQ   05APR91
ZONE      EQ   1
1ACCT     EQ   _______________________________________________________________


UP - Select     ENTER - Validate
-------------------------------------------------------------------------------

 Figure D-9.  Execution Options Panel

Input from DBID 'A' was requested, because that unit contains
the ASTEX data.  The administrator wants to monitor activity
for Friday, April 5, 1991.  Unsure which cycle contains the
data for that date, the administrator requests cycles 01
through 03, and specifies that only date 05APR91 be selected.
This will cause CA MICS StorageMate to do the work of finding
the data.  As the critical time period for the application is
defined by zone 1, only that zone is requested.  The output
should consist of a printer graphic report, and summarization
should be done using the contents of the first accounting
field.  Summarization by zone should also be done.

The example shown above actually represents portions of two
different panels.  Movement between the main report options
panel, the data filtering panel, and the selection panel
described below is performed via the ISPF UP and DOWN keys.

Because this facility allows reporting of multiple
indicators, the administrator now uses the DOWN key to
request display of the element selection panel.  This causes
the panel shown in Figure D-10 to be displayed.

-----------------------  ASTEX / Top Resource Consumers  ----------------------
Command ===>

Enter A-Z to select elements, then press UP to return.
I/O Activity
      _ Total I/O Duration                 _ Total Number of I/O Requests
      _ Total IOS Queue Time               _ Total RPS Time
      _ Total I/O Pending Time             _ Total Seek I/O Requests
      _ Total I/O Disconnect Time          _ Average Seek Distance (Cylinders)
      _ Total I/O Connect Time             _ Average Response Time in MS

Cache Activity
      X Total Cache Effective (Hit) Count  _ Total Tracks Loaded
      _ Total Cache-Candidate I/O Requests _ Total DFW Tracks Loaded
      _ Total Non-Candidate I/O Requests   _ Total NVS Overflow Count

Exceptions/Errors
      _ Total Dispatching Exceptions       _ Total Volume Exceptions
      _ Total Dsname Exceptions            _ Total Cross-Volume Exceptions
      _ Total Path Exceptions              _ Total I/O When Volume Reserved
      _ Total Seek Exceptions


UP - Report     DOWN - Filter  ENTER - Validate
-------------------------------------------------------------------------------

 Figure D-10.  Reporting Element Selection Panel

In this case, the administrator is interested in selecting
only one element, although multiple elements could be
selected for a single execution.  The element 'Total Cache
Effective (Hit) Count' has been selected, because that
element reports the number of I/O requests that were
satisfied by the cache, rather than by accessing DASD
directly.  Once this selection is made, the administrator
presses UP to return to the previous panel, and then END to
generate the report.

The resulting report produced from these two panels appears
in Figure D-11.

ASTEX / TOP RESOURCE CONSUMERS (STGEPF)          CA MICS StorageMate
XYZ PHONE SALES             SYSID=SYSA  Date=05APR91  Zone=1

 [Printer pie chart of Total Cache Effective (Hit) Count, summarized by
  the first accounting field.  The five slices are:]

      DATA-BASE-PROD     645535    44.24%
      ORDER-ENTRY        469012    32.14%
      CUST-SUPPORT       225238    15.44%
      PAYROLL             76543     5.25%
      TECH-SUPPORT        42735     2.93%


 Figure D-11.  Sample Report Output

The results from the report are as the administrator
expected, except that 5.25% of the cache activity is shown
for data sets belonging to the Payroll department.  That is
not a critical application, and should not be cached.

To investigate this further, the administrator will run this
report again, this time making some minor changes as shown on
the panel in Figure D-12.

-----------------------  ASTEX / Top Resource Consumers  ----------------------
Command ===>

CA MICS Input Definition:
   Unit DBID(s) ===> A           Cycles ===> 01 - 03

Report Options:
   Produce reports          ===> DETAIL   (CHART/LIST/BOTH or DETAIL)
     Color graphics output  ===> NO       (YES/NO/MODIFY)
   Summarize by             ===> A1       (An-n/Dn-n/TABLE)
     Data Set Group Table   ===> ________ (If TABLE specified)
   Summary level            ===> ZONE     (DATE/HOUR/ZONE)


DOWN - Select   ENTER - Validate END - Run
-------------------------------------------------------------------------------

===============================================================================

-------------------------  StorageMate Data Filtering  ------------------------
Command ===>

Element Oper  Value(s)
ENDTS     EQ   05APR91
ZONE      EQ   1
1ACCT     EQ   PAYROLL


UP - Select     ENTER - Validate
-------------------------------------------------------------------------------

 Figure D-12.  Execution Options Panel

For this execution, only two changes have been made.  An
additional data filtering statement was coded to test the
first accounting field for a value of PAYROLL, so that only
the data from the payroll department will be selected.
Secondly, a Detail report was requested instead of a Chart.
This will give more detail as to how the values on the first
report were derived.  It is also not necessary to use the
secondary element selection panel this time, because the
elements from the last report will be remembered and used.
Thus, the administrator simply uses the END key to execute
the report.

The report produced by these options is shown in Figure D-13.

ASTEX / TOP RESOURCE CONSUMERS (STGEPF)     CA MICS StorageMate - DETAIL
XYZ PHONE SALES
--------- System Identifier=SYSA  Owner=PAYROLL  Date=05APR91  Zone=1 ---------

 Time   Total Cache  Data Set Name               Volume
          Effective                              Serial
        (Hit) Count                              Number
  9:30        11050  NWEST.PAYROLL.TST.FILE      CRIT01
  9:30            0  CENT.PNAME.FILE             TSO023
 10:00            0  AGENT5.ISPPROF              TSO055
 10:00        26465  NWEST.PAYROLL.TST.FILE      CRIT01
 10:00            0  AGENT5.ISPPROF              TSO055
 10:30        22784  NWEST.PAYROLL.TST.FILE      CRIT01
 10:30            0  CENT.PNAME.FILE             TSO023
 11:00        10345  NWEST.PAYROLL.TST.FILE      CRIT01
 11:00            0  CENT.PAYRATES.FILE          TSO061
 11:30         5899  NWEST.PAYROLL.TST.FILE      CRIT01
        -----------
 ZONE         76543
        ===========
              76543


 Figure D-13.  Sample Report Output

As expected, some of the data sets belonging to payroll have
been accessed but had no cache activity.  But one data set,
NWEST.PAYROLL.TST.FILE, seems to have been responsible for
all the cache activity during this measurement interval.  The
Storage Administrator notices immediately that this data set
resides on volume CRIT01, which is one of the critical
volumes under the cache controller.

The only task that remains is for the Storage Administrator
to move the identified data set to a non-cached volume.  This
should reduce cache activity by approximately 5%, if this was
a representative measurement interval.
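That conclusion can be reproduced from the detail rows in Figure D-13:
summing hit counts by data set name isolates the one offender, and dividing
by the total hit count implied by summing the five slices of Figure D-11
(1,459,063) gives the expected reduction.  A sketch with the rows transcribed
by hand:

```python
from collections import defaultdict

# (data set name, cache hit count) rows transcribed from Figure D-13
rows = [
    ("NWEST.PAYROLL.TST.FILE", 11050), ("CENT.PNAME.FILE", 0),
    ("AGENT5.ISPPROF", 0), ("NWEST.PAYROLL.TST.FILE", 26465),
    ("AGENT5.ISPPROF", 0), ("NWEST.PAYROLL.TST.FILE", 22784),
    ("CENT.PNAME.FILE", 0), ("NWEST.PAYROLL.TST.FILE", 10345),
    ("CENT.PAYRATES.FILE", 0), ("NWEST.PAYROLL.TST.FILE", 5899),
]

hits = defaultdict(int)
for dsn, count in rows:         # total the hit counts per data set
    hits[dsn] += count

top = max(hits, key=hits.get)   # the data set driving cache activity
total_hits = 1459063            # sum of all five slices in Figure D-11
print(top, hits[top])                            # NWEST.PAYROLL.TST.FILE 76543
print(round(100.0 * hits[top] / total_hits, 2))  # 5.25
```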