Source: ASTADS file at the DETAIL timespan
Function: This facility allows the storage administrator to
quickly identify the primary users of resources
critical to the operation of the storage I/O
subsystem. Most performance analysts are familiar
with the 80/20 rule, which states that 20% of your
users are responsible for 80% of the resource
usage on your system. Given the increasing
demands placed upon storage administrators, it is
most effective to concentrate on how the system's
primary users consume resources. This facility
allows the storage administrator to select one or
more critical resource measures for which
reporting will be done. The reports produced will
quickly identify primary resource consumers,
giving the storage administrator information about
where tuning activities will produce the best
results.
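The top-consumer approach described above can be sketched in a few lines. This is a hypothetical illustration, not CA MICS code; the group names and connect-time values are invented, and the 80% cutoff mirrors the 80/20 rule mentioned earlier.

```python
# Hypothetical sketch: rank summary groups by a selected resource
# measure (here, total I/O connect time) and report groups until
# 80% of the total usage is accounted for. Values are illustrative.
connect_time = {
    "SYS-PROGS": 645.132,
    "SALES": 460.583,
    "FINANCE": 255.068,
    "OPR-MNGT": 57.954,
}

total = sum(connect_time.values())
ranked = sorted(connect_time.items(), key=lambda kv: kv[1], reverse=True)

cumulative = 0.0
for group, value in ranked:
    cumulative += value
    print(f"{group:10s} {value:10.3f} {100 * value / total:6.2f}%")
    if cumulative / total >= 0.80:
        break  # remaining groups are minor consumers
```

With these sample numbers, the loop stops after the third group, since the top three consumers already account for more than 80% of the connect time.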
When potential problems have been detected by the
summary reports or charts in this facility, a
detail report option can be used to provide more
information about the area in question. This will
narrow specific problems down to the individual
hour and data set level, if necessary.
Features: As stated above, a wide variety of data selection
and summarization options are available. Data can
be summarized using one or more nodes of the data
set names, one or more CA MICS accounting fields,
or using a user-defined table that relates data
set prefixes with descriptions. Selection and
reporting can also be done by date, hour, and
zone. All of these options provide various views
and levels of detail, making it easy to quickly
identify and isolate problems.
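The prefix-table summarization option can be illustrated as follows. This is a minimal sketch under assumed names: the data set names, I/O counts, and label table are invented for the example, and grouping is done on the first (high-level) node of each data set name.

```python
# Hypothetical sketch of the user-defined prefix table: data set
# name prefixes map to descriptive labels, and activity is summed
# per label. All names and counts here are invented.
prefix_labels = {
    "PAYROLL": "Payroll applications",
    "SYS1": "System data sets",
    "SALES": "Sales applications",
}

io_counts = {
    "PAYROLL.MASTER.DATA": 1200,
    "PAYROLL.HISTORY.G0001V00": 300,
    "SYS1.LINKLIB": 800,
    "SALES.ORDERS.DATA": 450,
}

summary = {}
for dsname, count in io_counts.items():
    first_node = dsname.split(".")[0]          # high-level qualifier
    label = prefix_labels.get(first_node, "Other")
    summary[label] = summary.get(label, 0) + count

print(summary)  # totals per descriptive label
```

The same structure works for grouping on any node of the name or on an accounting field; only the key extraction changes.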
Because many different elements that measure
storage system activity are available, the user
selects the resource elements to be used for
reporting. Several of these may be selected in
one report run, providing a large amount of
information from a single report execution.
Output reports from this facility include tabular
reports, printer charts, and color graphic charts.
SMS Issues: Users creating or modifying their SMS storage
classes and groups must carefully monitor the I/O
load on their existing applications, making sure
that load is distributed so as to provide the
desired performance for critical groups. These
reports will help the user design storage classes
and groups, and will allow the monitoring of
resources after SMS implementation to make sure
the SMS groups are still working correctly. This
continued monitoring is essential, as even
well-tuned groups may change as I/O workloads
fluctuate.
ELEMENT DESCRIPTIONS
SYSID
This value appears at the top of each page and
identifies the System Identifier (system) to which
this data applies.
Date
The date during which the measurements shown on
this chart were taken. The measurement interval
and the number of data cycles included in the
report affect the number of dates that appear in
the output.
Hour
If summarization by hour was selected, the hour
during which the measurements on the chart were
taken will be displayed.
Zone
If summarization by zone was selected, the CA MICS
zone during which the measurements on the chart
were taken will be displayed.
Pie Chart Slices
The slices on the pie chart represent the amount
of the selected storage resource being consumed by
a particular group or user. The label on the
slice identifies the group represented by the pie
slice. Depending on the options selected, two
numeric values may also appear on the chart
associated with each slice. One represents the
actual numeric value being represented by the
slice, while the other shows the percentage of the
entire pie that slice represents. For example,
assume that Total I/O Requests was the value being
charted, and the numbers 5000 and 25% appeared for
a given slice. That would mean that 5000 I/O
requests were issued by this group, representing
25% of all I/O requests issued during this
measurement interval.
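The arithmetic behind the two slice values in that example is simple; this hypothetical snippet assumes a total of 20000 I/O requests for the interval so that a 5000-request slice works out to 25%.

```python
# Worked version of the pie-slice example: a slice value of 5000
# I/O requests out of an assumed interval total of 20000 requests
# yields a 25% slice.
slice_value = 5000
total_requests = 20000
percent = 100 * slice_value / total_requests
print(slice_value, f"{percent:.0f}%")  # 5000 25%
```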
Pie Chart Labels
The label associated with each pie slice indicates
the group to which that slice applies. This may
be assigned using one or more of the data set name
nodes, one or more of the CA MICS accounting
fields, or using a user-specified table that
associates data set prefixes with descriptive
labels.
Footnotes
The footnote at the bottom of each chart indicates
the storage measurement represented by the chart.
A separate chart will appear for each element
selected in each selected time period. The
possible values, each with a short description of
its meaning, are as follows:
Total I/O Duration
The total duration of all I/O requests issued
by this summary group during this measurement
interval. This represents the sum of IOS
queue time, pending time, disconnect time, and
connect time for all requests in this group.
Total IOS Queue Time
The total amount of time spent by I/O requests
in this summary group waiting on the IOS
request queue to be scheduled.
Total I/O Pending Time
The total amount of time spent by I/O requests
in this summary group in pending status. This
represents the time an I/O request must wait
for a clear path to the intended device.
Total I/O Disconnect Time
The total amount of time spent by I/O requests
in this summary group in disconnect status.
This represents the interval between the time
the I/O request has been passed to the device
and the time the device is ready to transfer
data.
Total I/O Connect Time
The total amount of time spent by I/O requests
in this summary group in connect status. This
represents the time that data is actually
being transferred between the device and the
processor.
Total Number of I/O Requests
The total number of I/O requests issued by all
users in this summary group during this
measurement interval.
Total RPS Time
The total amount of time spent by I/O requests
in this summary group in RPS status. This
represents the time the request is waiting
while the DASD device is being positioned to
the correct sector.
Total Seek I/O Requests
The total number of Seek I/O requests issued
by users in this summary group during this
measurement interval. A seek request causes
the read/write head of the DASD device to
position itself at a specified cylinder
address.
Average Seek Distance in Cylinders
The average of all seek distances, expressed
in cylinders, for each I/O request in this
summary group for this measurement interval.
This represents the average distance the DASD
read/write heads had to move to satisfy the
I/O requests for this group.
Average Response Time in Milliseconds
The average time, expressed in thousandths of a
second, it took for each I/O request to
complete for the I/Os in this summary group
during this measurement interval.
Total Cache Effective (Hit) Count
The total number of I/O requests for this
summary group during this measurement interval
that were processed by cache, and did not have
to access the DASD device directly.
Total Cache Candidate I/O Requests
The total number of I/O requests for this
summary group during this measurement interval
that were considered to be cache candidates.
Total Non-Candidate I/O Requests
The total number of I/O requests for this
summary group during this measurement interval
that were not considered to be candidates for
caching.
Total Tracks Loaded
The total number of tracks loaded into the
cache on behalf of I/O requests in this
summary group during this measurement
interval.
Total DASD Fast Write Tracks Loaded
The total number of tracks loaded into the
DASD Fast Write cache on behalf of I/O
requests in this summary group during this
measurement interval.
Total NVS Overflow Count
The total number of times that I/O requests by
this summary group caused the non-volatile
storage (NVS) cache area to become full and
overflow during this measurement interval.
Total Dispatching Exceptions
The total number of dispatching exceptions
that were caused by I/O requests in this
summary group during this measurement
interval. A dispatching exception is caused when
excessive delay occurs in getting an I/O request
dispatched.
Total Dsname Exceptions
The total number of data set name exceptions
that were caused by I/O requests in this
summary group during this measurement
interval. A dsname exception is caused when
excessive delay occurs at the data set level.
Total Path Exceptions
The total number of path exceptions that were
caused by I/O requests in this summary group
during this measurement interval. A path
exception is caused when excessive delay
occurs at the I/O path level.
Total Seek Exceptions
The total number of seek exceptions that were
caused by I/O requests in this summary group
during this measurement interval.
Total Volume Exceptions
The total number of volume exceptions that
were caused by I/O requests in this summary
group during this measurement interval. A
volume exception is caused when excessive
delay occurs at the volume level.
Total Cross-Volume Exceptions
The total number of cross-volume exceptions
that were caused by I/O requests in this
summary group during this measurement
interval. A cross-volume exception is caused
when excessive delay occurs at the volume
level, caused by activity from another system.
Total I/O When Volume Reserved
The total number of I/O requests issued by
this summary group during this measurement
interval when the intended device was reserved
by another user.
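Several of the elements above are related by simple arithmetic: Total I/O Duration is the sum of the four time components, average response time follows from duration and request count, and a cache hit ratio can be derived from the hit and candidate counts. The sketch below illustrates those relationships; all variable names and values are invented, not CA MICS field names, and the component times are assumed to be recorded in seconds.

```python
# Hypothetical sketch relating several elements described above.
# Values are illustrative; component times assumed in seconds.
ios_queue_time = 120.0      # Total IOS Queue Time
pending_time = 45.0         # Total I/O Pending Time
disconnect_time = 210.0     # Total I/O Disconnect Time
connect_time = 325.0        # Total I/O Connect Time
request_count = 7000        # Total Number of I/O Requests
cache_hits = 4200           # Total Cache Effective (Hit) Count
cache_candidates = 6000     # Total Cache Candidate I/O Requests

# Total I/O Duration is the sum of the four time components.
total_duration = ios_queue_time + pending_time + disconnect_time + connect_time

# Average response time per request, converted to milliseconds.
avg_response_ms = 1000 * total_duration / request_count

# Cache hit ratio over the requests eligible for caching.
hit_ratio = cache_hits / cache_candidates

print(total_duration, avg_response_ms, round(hit_ratio, 2))
```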
ASTEX / TOP RESOURCE CONSUMERS (STGEPF)  CA MICS StorageMate  ABC CORPORATION  SYSID=SYSA  Date=05APR91  Hour=9
[Sample pie chart with slices for the summary groups DATA-BASE-PROD, FINANCE, PAYROLL, OPR-MNGT, SYS-PROGS, SALES, and SUPPORT; slice values shown are 645.132 (31.23%), 460.583 (22.30%), 361.79 (17.52%), 255.068 (12.35%), 255.01 (12.35%), 57.954 (2.81%), and 29.8952 (1.45%). Footnote: Total I/O Connect Time]
Figure 4-15. ASTEX / Top Resource Consumers / Chart
Copyright © 2012 CA. All rights reserved.