Causes graphs to depict a summarization of a specified time period; the selected data is averaged over the period selected. See Chapter 4 for more information.
If you also use the /SCHEDULE or /DATES qualifiers, the DAILY and WEEKLY graphs are trimmed to show only the selected hours.
If history data with the periodicity attribute is selected, the /AVERAGE value is automatically set to that periodicity value. This is true regardless of whether the /AVERAGE qualifier is used.
Specifies the beginning date and time of data selected for graphing.
Where date represents the date and time in standard DCL format.
The date and time format is the standard DCL format, either absolute or relative. If you do not specify the /BEGINNING qualifier, the Performance Manager uses 00:00:00 on the day for which the ending date and time is specified. If you do not specify an /ENDING qualifier either, the Performance Manager uses 00:00:00 of the current day as the default beginning time.
You can also use the keywords TODAY and YESTERDAY. See HP's OpenVMS User's Manual, or access the HELP topic SPECIFY DATE_TIME for complete information on specifying time values.
/BEGINNING is incompatible with the /DATES qualifier.
Specifies the name of the Collection Definition, and hence the collected data to be used for the graph. If you omit this qualifier, daily data is obtained from the Collection Definition called “CPD.”
To view the Collection Definitions that you have available, use the DCL command ADVISE COLLECT SHOW ALL.
If you want to use history data instead of daily data, use the /HISTORY qualifier instead of the /COLLECTION_DEFINITION qualifier. /COLLECTION_DEFINITION is incompatible with the /HISTORY qualifier.
Specifies the workload family whose workload definitions are to be used for summarizing process activity. This affects the TOP_WORKLOAD graph types as well as custom graphs with WORKLOAD metrics by providing the desired metrics on an individual workload basis. The default is “other,” which averages all process activity together. The family_type of USERGROUP is required. No restrictions are made on the family name.
Combines data from all nodes into a single graph. Data from each node is either added or averaged.
The following command produces a graph of the total number of processes in the cluster.
$ ADVISE PERFORMANCE GRAPH/COMPOSITE/TYPE=PROCESSES
When the Performance Manager combines I/O data from more than one node, it is possible to double count I/O operations to a disk device if it is served. Therefore, when you specify /COMPOSITE, the Performance Manager does not count all MSCP-served I/O for individual disks.
When generating a customized graph for a single metric with /COMPOSITE, the Performance Manager graphs the metric by node.
When graphing CPU percentages with the /COMPOSITE qualifier each node's CPU time is scaled according to the VUP rating to produce a cluster average CPU utilization.
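The VUP-weighted averaging described above can be sketched as follows. This is a minimal illustration, not the Performance Manager's implementation; the function name and the node figures are hypothetical.

```python
# Sketch: combine per-node CPU utilization into a cluster average,
# weighting each node by its VUP (VAX Unit of Performance) rating.
# The numbers used below are illustrative, not real measurements.
def cluster_cpu_average(nodes):
    """nodes: list of (cpu_percent, vup_rating) tuples."""
    total_vups = sum(vup for _, vup in nodes)
    # Each node contributes CPU time in proportion to its capacity.
    return sum(pct * vup for pct, vup in nodes) / total_vups

# A fast node at 50% and a slow node at 100% average near 55%,
# because the fast node represents more of the cluster's capacity.
print(cluster_cpu_average([(50.0, 9.0), (100.0, 1.0)]))  # 55.0
```

The weighting keeps a saturated but slow node from dominating the cluster-wide utilization figure.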
For more information, see the chapter Generate Historical Graphs.
Specifies that a file containing a series of date ranges is to be used in place of the /BEGINNING and /ENDING qualifiers. Each line in the dates file should have the following format:
dd-mmm-yyyy hh:mm:ss.cc,dd-mmm-yyyy hh:mm:ss.cc
The time can be either omitted entirely or truncated. Any truncated parts of the time default to 0. The periods of time represented by each line in the file need not be contiguous, but they must be in ascending order.
/DATES is incompatible with the /BEGINNING and /ENDING qualifiers.
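The truncation rule above (missing time fields default to 0) can be sketched with a small parser. This is an illustrative reading of the format, not the product's parser, and the function names are hypothetical.

```python
# Sketch: parse one line of a /DATES file of the form
#   dd-mmm-yyyy hh:mm:ss.cc,dd-mmm-yyyy hh:mm:ss.cc
# where the time may be truncated or omitted; missing parts default to 0.
from datetime import datetime

def parse_dcl_datetime(text):
    date_part, _, time_part = text.strip().partition(" ")
    day, mon, year = date_part.split("-")
    # Pad the time out to hh:mm:ss.cc, defaulting truncated fields to 0.
    fields = time_part.split(":") if time_part else []
    hh = int(fields[0]) if len(fields) > 0 else 0
    mm = int(fields[1]) if len(fields) > 1 else 0
    ss = float(fields[2]) if len(fields) > 2 else 0.0
    months = {"JAN": 1, "FEB": 2, "MAR": 3, "APR": 4, "MAY": 5, "JUN": 6,
              "JUL": 7, "AUG": 8, "SEP": 9, "OCT": 10, "NOV": 11, "DEC": 12}
    return datetime(int(year), months[mon.upper()], int(day),
                    hh, mm, int(ss), int(round((ss % 1) * 1_000_000)))

def parse_range(line):
    start, end = line.split(",")
    return parse_dcl_datetime(start), parse_dcl_datetime(end)

# The second time is truncated to the hour; minutes and seconds become 0.
start, end = parse_range("12-JAN-2001 08:30,12-JAN-2001 17")
print(start.hour, start.minute, end.hour, end.minute)  # 8 30 17 0
```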
Specifies the ending date and time of the graph, where date represents the date and time in standard DCL format.
If you do not specify /BEGINNING, /ENDING defaults to the current time. If you do specify /BEGINNING, /ENDING defaults to 23:59 of the beginning date.
You can specify either an absolute time or a combination of absolute and delta times. You can also use the keywords TODAY, TOMORROW, and YESTERDAY. See HP's OpenVMS User's Manual, or access the HELP topic SPECIFY DATE_TIME for complete information on specifying time values.
/ENDING is incompatible with the /DATES qualifier.
The /FILTER qualifier allows you to select a subset of the daily or history data for graphing. Process data and disk data can be filtered.
Hotfile data is also filtered. When you specify filtering by process, a hotfile record is selected if it was accessed by the specified process. When you specify filtering by disk device, a hotfile record is selected if it is located on the specified device. To select only hotfile records that match both a process and a disk device, specify filtering by both process and device.
Process data can be filtered by using any of the filter keywords: USERNAMES, IMAGENAMES, PROCESSNAMES, ACCOUNTNAMES, UICS, PIDS or WORKLOADNAMES. If a process record's identification information matches any of the identification specifications that are specified, then that record is selected.
Likewise, disk data can be filtered by using either of the filter keywords, VOLUMENAMES and DEVICENAMES. If a device record's identification information matches any of the volume names or device names that are specified, then that record is selected.
The following table lists the /FILTER keyword options:
| Keyword | Description |
|---|---|
| /USERNAMES=(string,...) | Specify /FILTER=USERNAMES to graph all process records with the username matching any of the specified strings. |
| /IMAGENAMES=(string,...) | Specify /FILTER=IMAGENAMES to graph all process records with the imagename matching any of the specified strings. Do not specify any trailing ".EXE", nor the file version, device, or directory. |
| /PROCESSNAMES=(string,...) | Specify /FILTER=PROCESSNAMES to graph all process records with the processname matching any of the specified strings. The match string is case sensitive, so if the process names have any lowercase letters, spaces, or tabs, use double quotes when you enter the value (e.g., /FILTER=PROCESSNAMES="--RTserver--"). |
| /ACCOUNTNAMES=(string,...) | Specify /FILTER=ACCOUNTNAMES to graph all process records with the accountname matching any of the specified strings. |
| /WORKLOADNAMES | Specify /FILTER=WORKLOADNAMES to graph all process records associated with any of the specified workloads. This filter is valid only if the /CLASSIFY_BY qualifier is used to specify a classification scheme for your workload data. |
| /UICS=(uic,...) | Specify /FILTER=UICS to graph all process records with the UIC matching any of the specified UICs. An asterisk may be used to wildcard either the group or user field of the specified UICs. |
| /PIDS=(pid,...) | Specify /FILTER=PIDS to graph all process records with the PID matching any of the specified PIDs. |
| /VOLUMENAMES=(string,...) | Specify /FILTER=VOLUMENAMES to graph all disk records with the volumename matching any of the specified strings. Do not specify any trailing colon. |
| /DEVICENAMES=(string,...) | Specify /FILTER=DEVICENAMES to graph all disk records with the devicename matching any of the specified strings. Do not specify any trailing colon. |
Where:
| Value | Description |
|---|---|
| l | In the range of 2 to 480; a best-fit value is chosen by default. |
| m | Greater than or equal to 20 and less than or equal to 60. |
| n | Greater than or equal to 40 and less than or equal to 132. |
The Performance Manager produces a ReGIS or ANSI graph by default, depending on the device characteristics of the SYS$OUTPUT device. ANSI and ReGIS formats are not available with pie charts. You may override the default with the /FORMAT qualifier. A graph can be in one of four formats: ANSI, REGIS, TABULAR, or POSTSCRIPT.
Optionally, you may specify whether ReGIS graphs use LINE, PATTERN, or COLOR. COLOR is the default. PATTERN is incompatible with COLOR.
Use the X_POINTS keyword to specify the number of data points to plot across a ReGIS graph. The valid range for X_POINTS is 2 to 480. By default, the Performance Manager chooses a best-fit value for x-points so that the time period represented by each point is even.
As the value of X_POINTS increases, spikes and valleys become more defined and the graph has a higher resolution. A low number of X_POINTS produces a smoother graph because the graphing facility averages any additional data points within the time frame requested. Consider the time frame of a particular graph request when you determine the value of X_POINTS.
For example, over a 12-hour span, the Performance Manager records statistics 360 times (every 2 minutes). If the value of X_POINTS is 24, the graphing facility averages every 15 data records (or 30 minutes) and produces a graph with smooth flow. If the value of X_POINTS is 72, the graphing facility averages every 5 data records (or 10 minutes) and produces a graph with valleys and spikes.
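The averaging arithmetic in that example can be sketched directly. The function name is illustrative; it simply shows how span, recording interval, and X_POINTS determine how many records are averaged into each plotted point.

```python
# Sketch of the averaging behind X_POINTS: with records collected every
# 2 minutes, a 12-hour span holds 360 records, and each plotted point
# averages (total records / X_POINTS) consecutive records.
def records_per_point(span_hours, record_interval_min, x_points):
    total_records = span_hours * 60 // record_interval_min
    return total_records // x_points

print(records_per_point(12, 2, 24))  # 15 records (30 minutes) per point
print(records_per_point(12, 2, 72))  # 5 records (10 minutes) per point
```

Fewer points mean more records folded into each average, hence a smoother curve; more points preserve spikes and valleys.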
Use the WIDTH keyword to specify the column width of the ANSI graph output. Valid widths range from 40 to 132 columns. If you do not specify the WIDTH qualifier, the Performance Manager uses the terminal width setting. When you specify the /OUTPUT qualifier or generate the graph under batch, the width of the graph is 132 columns.
Use the HEIGHT keyword to specify the graph height of the ANSI graph output. Valid heights are from 20 to 60 lines. If you do not specify HEIGHT, the Performance Manager uses the terminal page length setting. When you use the /OUTPUT qualifier or generate the graph under batch, the height of the graph is 40 lines.
Allows you to select history data from the Performance Manager database. By default, daily data files are used to supply data for graphing. However, by specifying the name of a history file descriptor, you can select historical data instead.
You must define the history file descriptor in the parameters file and have archived data according to the descriptor's definition. Use the DCL command ADVISE EDIT to invoke the Performance Manager Parameter Edit Utility. From the utility, you can ADD, DELETE, MODIFY, and SHOW history file descriptors. Use the ADVISE ARCHIVE command to create the archived files.
If history data with the periodicity attribute is selected, the /AVERAGE value is automatically set to that periodicity value. This is true regardless of whether the /AVERAGE qualifier is used.
/HISTORY is incompatible with the /COLLECTION_DEFINITION qualifier.
For information on how to produce a graph of history data including a typical time period, see the chapter Generate Historical Graphs.
Note: If model data was not archived, the /CLASSIFY_BY qualifier is restricted to those workload families specified in the history file descriptor.
Identifies the nodes to graph.
The Performance Manager creates a separate graph for each node unless you specify the /COMPOSITE qualifier. If you omit the /NODE_NAMES qualifier, all the nodes in the schedule file associated with the specified collection definition (CPD by default) are used for the graph(s). If you specify only one node, the parentheses can be omitted. Do not use wildcard characters in the node-name specifications.
Creates an output file that contains the graphs. The default file extension for a ReGIS graph is .REG, the file type for ANSI and TABULAR formatted graphs is .RPT and the file extension for PostScript is .PS.
When you generate multiple graphs with a single command line, you can create a unique output file for each graph. To do this, omit the file name with the /OUTPUT qualifier. The Performance Manager generates a separate file for each graph created and uses the graph type keyword as the unique file name.
For example:
$ ADVISE PERFORMANCE GRAPH/NODE=SYSDEV/END=1/TYPE=(MEM,CPU_U,CPU_MODE) -
_$ /OUTPUT=.REG
%PSPA-I-CREAGRAPHOUT, PSPA Graph created file MUMMS$DKA300:[CORREY]SYSDEV_CPU_UTILIZATION.REG;1
%PSPA-I-CREAGRAPHOUT, PSPA Graph created file MUMMS$DKA300:[CORREY]SYSDEV_MEMORY_UTILIZATION.REG;1
%PSPA-I-CREAGRAPHOUT, PSPA Graph created file MUMMS$DKA300:[CORREY]SYSDEV_CPU_MODES.REG;1
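The file names in that output follow a simple pattern: the node name and the graph-type keyword joined into a unique name, with the extension taken from /OUTPUT. A hypothetical sketch of that naming (not the product's exact rule):

```python
# Sketch: how per-graph output file names appear to be formed when
# /OUTPUT is given only an extension. The node name and the graph-type
# keyword are combined into a unique file name. Illustrative only.
def output_filename(node, graph_type, extension=".REG"):
    return f"{node}_{graph_type}{extension}"

print(output_filename("SYSDEV", "CPU_UTILIZATION"))  # SYSDEV_CPU_UTILIZATION.REG
```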
Loads information from the rules file to establish user-defined hardware scaling factors. The file-spec must point to an auxiliary knowledge base that has previously been compiled with the ADVISE PERFORMANCE COMPILE command. The default file type is .KB. If the NORULES qualifier is specified, no augmentation of the factory rules occurs. See also the chapter "Customize the Knowledge Base."
Specifies that a subset of Performance Manager data is to be used (or not used if keyword negation is specified) to generate graphs. By default, the Performance Manager selects all data between the /BEGINNING time and the /ENDING time, or as specified with the /DATES qualifier.
Where:
| Keyword | Description |
|---|---|
| day | SUNDAY, MONDAY, TUESDAY, WEDNESDAY, THURSDAY, FRIDAY, SATURDAY, EVERYDAY, WEEKDAYS, or WEEKENDS. |
| hour-range | Specified as m-n, where m and n are numbers from 0 to 24, and m is less than n. You can specify more than one hour range for a given day. Hour-range is mutually exclusive with the NO option. |
If you omit a day keyword, the data for that day is selected. Data selection for individual days of the week can be inhibited by negating the keyword (for example, NOSUNDAY) or for all of the days of the week by specifying the NOEVERYDAY keyword. The values [NO]WEEKDAYS and [NO]WEEKENDS similarly can be used to enable or disable data selection for weekdays and weekends.
You must specify an hour range for any non-negated day keyword. Do not include an hour range if you are specifying a negated day keyword, such as NOMONDAY.
Less inclusive keyword values override more inclusive values. For example, MONDAY=10-12 overrides EVERYDAY=8-17 for Monday, but the Performance Manager selects data from 8:00 a.m. to 5:00 p.m. for all of the other days of the week.
For example:
$ ADVISE PERFORMANCE GRAPH -
_$ /SCHEDULE=(NOEVERYDAY,WEEKDAYS=(8-12,13-17))
Graphs do not depict the time periods deselected by the /SCHEDULE qualifier.
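The precedence rule (a specific day keyword overriding EVERYDAY, WEEKDAYS, or WEEKENDS) can be sketched as a small resolver. This is an illustrative model, not the product's implementation; the function name and dict representation are assumptions.

```python
# Sketch: resolve /SCHEDULE keywords into per-day hour ranges.
# Keywords are applied in the order given, so a more specific keyword
# listed after a broader one (EVERYDAY, WEEKDAYS, WEEKENDS) overwrites
# it, mirroring "less inclusive values override more inclusive values".
WEEKDAYS = ["MONDAY", "TUESDAY", "WEDNESDAY", "THURSDAY", "FRIDAY"]
WEEKENDS = ["SATURDAY", "SUNDAY"]

def resolve_schedule(keywords):
    """keywords: dict like {"EVERYDAY": (8, 17), "MONDAY": (10, 12)}.
    A value of None (as from a negated keyword such as NOMONDAY)
    disables data selection for the day."""
    schedule = {}
    for kw, hours in keywords.items():
        if kw == "EVERYDAY":
            days = WEEKDAYS + WEEKENDS
        elif kw == "WEEKDAYS":
            days = WEEKDAYS
        elif kw == "WEEKENDS":
            days = WEEKENDS
        else:
            days = [kw]
        for day in days:
            schedule[day] = hours
    return schedule

sched = resolve_schedule({"EVERYDAY": (8, 17), "MONDAY": (10, 12)})
print(sched["MONDAY"], sched["TUESDAY"])  # (10, 12) (8, 17)
```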
Use /SELECT in conjunction with the optional threshold values which may be specified on a per graph type basis.
If this qualifier is present, before a graph is produced, a check is made to see if the values to be graphed fall within the threshold values for the indicated percentage of points. If so, then the graph (or pie chart) is produced. If not, no graph is produced. For details on THRESHOLD, see the /TYPE qualifier.
| Keyword | Meaning |
|---|---|
| GREATER_THAN:percent | At least “percent” of the graph points plotted must be greater than or equal to the threshold value specified with the /TYPE qualifier. |
| LESS_THAN:percent | At least “percent” of the graph points plotted must be less than or equal to the threshold value specified with the /TYPE qualifier. |
These keywords accept a single value representing the percentage of the points plotted that must meet the threshold criteria before the graph is produced. Each graph point value is determined by the sum (STACKED) of the items depicted (up to 6). If the GREATER_THAN keyword is specified without a value, then 50 percent is assumed. If the LESS_THAN keyword is specified without a value, then 90 percent is assumed.
If the /SELECT qualifier is present without a keyword, then GREATER_THAN:50 is assumed. For example:
$ ADVISE PERFORMANCE GRAPH /BEGINNING=10/ENDING=11/NODE=YQUEM -
_$ /TYPE=(CPU_U:THRESHOLD:25,CPU_M:THRESHOLD:35,TOP_CPU_I:THRESHOLD:45) -
_$ /SELECT=GREATER/OUTPUT=.REGIS
%PSPA-I-CREAGRAPHOUT, PSPA Graph created file BADDOG:[CORREY.WORK.PSPA]YQUEM_CPU_UTILIZATION.REG;1
This command requests that three graphs be produced. The CPU Utilization graph is produced if 50 percent or more of the data points exceed 25 percent CPU utilization. The CPU_MODES graph is produced if 50 percent or more of the data points exceed 35 percent CPU utilization. The TOP_CPU_IMAGES graph is produced if 50 percent or more of the data points exceed 45 percent CPU utilization. In this case, only one graph is produced.
$ ADVISE PERFORMANCE GRAPH /BEGINNING=10/ENDING=11/NODE=YQUEM -
_$ /TYPE=(CPU_U:THRESHOLD:25,CPU_M:THRESHOLD:35,TOP_CPU_I:THRESHOLD:15) -
_$ /SELECT=GREATER/OUTPUT=.REGIS
%PSPA-I-CREAGRAPHOUT, PSPA Graph created file BADDOG:[CORREY.WORK.PSPA]YQUEM_CPU_UTILIZATION.REG;3
%PSPA-I-CREAGRAPHOUT, PSPA Graph created file BADDOG:[CORREY.WORK.PSPA]YQUEM_TOP_CPU_IMAGES.REG;1
This command produced two of the three graphs because the threshold value for the last graph was lowered.
$ ADVISE PERFORMANCE GRAPH /BEGINNING=10/ENDING=11/NODE=YQUEM -
_$ /TYPE=(CPU_U:THRESHOLD:25,CPU_M:THRESHOLD:35,TOP_CPU_I:THRESHOLD:15) -
_$ /SELECT=GREATER:90/OUTPUT=.REGIS
$
The previous command generated none of the graphs because, in each case, fewer than 90 percent of the graph points exceeded the specified thresholds.
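The selection test behind /SELECT can be sketched as a simple percentage check. The function name and arguments are illustrative; the logic mirrors the GREATER_THAN/LESS_THAN keywords described above.

```python
# Sketch of the /SELECT check: a graph is produced only if at least
# `percent` of its plotted points meet the THRESHOLD comparison
# (>= for GREATER_THAN, <= for LESS_THAN).
def graph_selected(points, threshold, percent=50, greater=True):
    if greater:
        hits = sum(1 for p in points if p >= threshold)
    else:
        hits = sum(1 for p in points if p <= threshold)
    return 100.0 * hits / len(points) >= percent

# 6 of 8 points (75%) are at or above 25, so GREATER_THAN:50 passes
# but GREATER_THAN:90 does not.
points = [10, 20, 30, 40, 50, 60, 70, 80]
print(graph_selected(points, 25, percent=50))  # True
print(graph_selected(points, 25, percent=90))  # False
```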
Stacks the values for each category on the graph. Use /NOSTACK to overlay the values on the graph. ReGIS graphs using /NOSTACK may show some occlusion unless you also specify /FORMAT=ReGIS=CHARACTERISTICS=LINE. If you are requesting a series of graphs in one command, you can override the /[NO]STACK qualifier by specifying the [NO]STACK keyword following each graph type. See Chapter 4 for an illustration of the use of the /NOSTACK qualifier and for additional information about default behavior.
Specifies which of the graphs you want generated.
Use the TITLE keyword to override the Performance Manager supplied title. The text string may be a maximum of 40 characters.
The STACK keyword for a particular graph type overrides the setting established by the /STACK qualifier.
The THRESHOLD keyword specifies a threshold value associated with the graph. The m specifier is a positive decimal value. A horizontal line is placed on the graph at the position on the Y-axis associated with the value. You can use THRESHOLD in conjunction with the /SELECT qualifier to prevent the generation of the graph or pie chart.
The Y_AXIS_MAXIMUM keyword specifies a fixed y-axis maximum to be used for the graph. The default behavior is to set up the y-axis so that the maximum data point appears near the top of the graph. This graph modifier allows you to fix the y-axis so that you can compare data from different graphs without having different scales on the y-axis. The n specifier is a positive decimal value.
You can specify multiple graphs in a single command. For example, you can specify /TYPE=(TOP_IO_DISKS,TOP_HARDFAULTING_IMAGES). Specifying /TYPE=ALL_GRAPHS generates all of the predefined graphs. To suppress a graph type, negate its keyword (for example, NODISKS).
CPU_UTILIZATION is the default graph type.
The following list contains all of the available Performance Manager graphs:
| Graph Type | Graph Type |
|---|---|
| [NO]ALL_GRAPHS | [NO]COMPUTE_QUEUE |
| [NO]CPU_MODES | [NO]CPU_UTILIZATION |
| CUSTOM | [NO]DECNET |
| [NO]DISKS | [NO]FAULTS |
| [NO]FILECACHE | [NO]JOBS |
| [NO]LOCKS | [NO]MEMORY_UTILIZATION |
| [NO]PROCESSES | [NO]RESPONSE_TIME |
| [NO]TERMINALS | [NO]TOP_BDT_W |
| [NO]TOP_BLKS_R | [NO]TOP_BLKS_S |
| [NO]TOP_BUFIO_IMAGES | [NO]TOP_BUFIO_USERS |
| [NO]TOP_BUFIO_WORKLOADS | [NO]TOP_BUSY_DISKS |
| [NO]TOP_BUSY_PROCESSOR | [NO]TOP_BUSY_VOLUMES |
| [NO]TOP_CHANNEL_IO | [NO]TOP_CHANNEL_QUELEN |
| [NO]TOP_CHANNEL_THRUPUT | [NO]TOP_CLUSTER_RULE_OCC |
| [NO]TOP_COMPAT_PROCESSOR | [NO]TOP_CPU_IMAGES |
| [NO]TOP_CPU_RULE_OCC | [NO]TOP_CPU_USERS |
| [NO]TOP_CPU_WORKLOADS | [NO]TOP_CR_W |
| [NO]TOP_DGS_D | [NO]TOP_DGS_R |
| [NO]TOP_DGS_S | [NO]TOP_DIRIO_IMAGES |
| [NO]TOP_DIRIO_USERS | [NO]TOP_DIRIO_WORKLOADS |
| [NO]TOP_DISKIO_IMAGES | [NO]TOP_DISKIO_USERS |
| [NO]TOP_DISKIO_WORKLOADS | [NO]TOP_EXEC_PROCESSOR |
| [NO]TOP_FAULTING_IMAGES | [NO]TOP_FAULTING_USERS |
| [NO]TOP_FAULTING_WORKLOADS | [NO]TOP_FREEBLK_DISKS |
| [NO]TOP_FREEBLK_VOLUMES | [NO]TOP_HARDFAULTING_IMAGES |
| [NO]TOP_HARDFAULTING_USERS | [NO]TOP_HARDFAULTING_WORKLOADS |
| [NO]TOP_HSC_DISK_IO | [NO]TOP_HSC_DISK_THRUPUT |
| [NO]TOP_HSC_IO | [NO]TOP_HSC_TAPE_IO |
| [NO]TOP_HSC_TAPE_THRUPUT | [NO]TOP_HSC_THRUPUT |
| [NO]TOP_IDLE_PROCESSOR | [NO]TOP_IMAGE_ACTIVATIONS |
| [NO]TOP_IMAGE_VOLUME_IO | [NO]TOP_INTERRUPT_PROCESSOR |
| [NO]TOP_IOSIZE_DISKS | [NO]TOP_IOSIZE_VOLUMES |
| [NO]TOP_IOSIZE_IMAGES | [NO]TOP_IOSIZE_USERS |
| [NO]TOP_IOSIZE_WORKLOADS | [NO]TOP_IO_DISKS |
| [NO]TOP_IO_FILES | [NO]TOP_IO_RULE_OCC |
| [NO]TOP_IO_VOLUMES | [NO]TOP_KB_MAP |
| [NO]TOP_KB_RC | [NO]TOP_KB_S |
| [NO]TOP_KERNEL_PROCESSOR | [NO]TOP_MEMORY_RULE_OCC |
| [NO]TOP_MGS_R | [NO]TOP_MGS_S |
| [NO]TOP_MP_SYNCH_PROCESSOR | [NO]TOP_MSCPIO_FILES |
| [NO]TOP_PAGING_DISKS | [NO]TOP_PAGING_FILES |
| [NO]TOP_PAGING_VOLUMES | [NO]TOP_POOL_RULE_OCC |
| [NO]TOP_PRCT_FREE_DISKS | [NO]TOP_PRCT_FREE_VOLUMES |
| [NO]TOP_PRCT_USED_DISKS | [NO]TOP_PRCT_USED_VOLUMES |
| [NO]TOP_QUEUE_DISKS | [NO]TOP_QUEUE_VOLUMES |
| [NO]TOP_READ_DISKS | [NO]TOP_READ_FILES |
| [NO]TOP_READ_VOLUMES | [NO]TOP_RESIDENT_IMAGES |
| [NO]TOP_RESIDENT_USERS | [NO]TOP_RESIDENT_WORKLOADS |
| [NO]TOP_RESOURCE_RULE_OCC | [NO]TOP_RESPONSE_TIME_DISKS |
| [NO]TOP_RESPONSE_TIME_FILES | [NO]TOP_RESPONSE_TIME_IMAGES |
| [NO]TOP_RESPONSE_TIME_USERS | [NO]TOP_RESPONSE_TIME_VOLUMES |
| [NO]TOP_RESPONSE_TIME_WORKLOADS | [NO]TOP_RULE_OCCURRENCES |
| [NO]TOP_SPLITIO_DISKS | [NO]TOP_SPLITIO_FILES |
| [NO]TOP_SPLITIO_VOLUMES | [NO]TOP_SUPER_PROCESSOR |
| [NO]TOP_TERMINAL_INPUT_IMAGES | [NO]TOP_TERMINAL_INPUT_USERS |
| [NO]TOP_TERMINAL_INPUT_WORKLOADS | [NO]TOP_TERMINAL_THRUPUT_ |
| [NO]TOP_TERMINAL_THRUPUT_USERS | [NO]TOP_TERMINAL_THRUPUT_ |
| [NO]TOP_THRUPUT_DISKS | [NO]TOP_THRUPUT_FILES |
| [NO]TOP_THRUPUT_IMAGES | [NO]TOP_THRUPUT_USERS |
| [NO]TOP_THRUPUT_VOLUMES | [NO]TOP_THRUPUT_WORKLOADS |
| [NO]TOP_USER_IMAGE_ACTIVATIONS | [NO]TOP_USER_PROCESSOR |
| [NO]TOP_USER_VOLUME_IO | [NO]TOP_VA_IMAGES |
| [NO]TOP_VA_USERS | [NO]TOP_VA_WORKLOADS |
| [NO]TOP_WORKLOAD_IMAGE_ACTIVATIONS | [NO]TOP_WRITE_DISKS |
| [NO]TOP_WRITE_FILES | [NO]TOP_WRITE_VOLUMES |
| [NO]TOP_WSSIZE_IMAGES | [NO]TOP_WSSIZE_USERS |
| [NO]TOP_WSSIZE_WORKLOADS | |
The following sections list the graph types and their descriptions. Included are keywords used with the /TYPE qualifier.
You must specify the items for the Performance Manager to graph. The metrics and selection objects are described below.
ADVISE PERFORMANCE
GRAPH/TYPE=CUSTOM=({SYSTEM_METRICS=(system_metrics) |
USER_METRICS=(process_metrics),SELECTION=(usernames) |
IMAGE_METRICS=(process_metrics),SELECTION=(imagenames) |
WORKLOAD_METRICS=(process_metrics),SELECTION=(workloadnames)|
DEVICE_METRICS=(disk_metrics),SELECTION=(devicenames) |
VOLUME_METRICS=(disk_metrics),SELECTION=(volumenames) |
CPU_METRICS=(cpu_modes),SELECTION=(Phy-cpu-ids) |
HSC_METRICS=(hsc_metrics),SELECTION=(HSC-nodenames) |
SCS_METRICS=(scs_metrics),SELECTION=(SCS-nodenames) |
RULE_METRICS=(rule_metrics),SELECTION=(Rule-ids) |
CHANNEL_METRICS=(channel_metrics),SELECTION=(channel-specs) |
FILE_METRICS=(file_metrics),SELECTION=(file-names) |
DISK_USER_METRICS=(disk_user_metrics),SELECTION=(username-volumename) |
DISK_IMAGE_METRICS=(disk_image_metrics),SELECTION=(imagename-volumename)}
[,[NO]STACK] [,Y_AXIS_MAXIMUM=n] [,THRESHOLD=m] [,TITLE=string])
Where:
| Value | Description |
|---|---|
| metric_class | The metrics are grouped together by metric class and described in the next table. |
| selection_string | Specify up to six strings, or only one if you specify multiple metrics. The strings are matched against Performance Manager records to select data for the CUSTOM graph. If you specify /TYPE=CUSTOM=(USER_METRICS=CPUTIME,SELECTION=WILK), the Performance Manager selects and graphs all process records whose username field is “WILK.” |
The CUSTOM graph type allows you to graph a selection of metrics for either the system, or selected users, images, workloads, disk devices, volumes, HSCs, SCS nodes, rule-ids or channels. You may graph up to six selections with a single metric, or up to six metrics with a single selection. The Performance Manager either prompts you in command mode for the data (ADVISE PERFORMANCE) or you can specify the desired metrics and selections in a single DCL command.
For example:
$ ADVISE PERFORMANCE GRAPH/TYPE = CUSTOM = SYSTEM_METRICS = -
_$ (DZROFAULTS,GVALID)
The SELECTION string must be chosen based on the metric class that you use:
To display a graph which shows active CPUs in an OpenVMS multiprocessing system, enter a command similar to the following:
$ ADVISE PERFORMANCE GRAPH/END=0:10/NODE=YQUEM -
_$ /TYPE=TOP_BUSY_PROCESSOR
Specifying a physical CPU ID allows you to isolate and analyze one CPU of a selected node in an SMP configuration.
Note: Rule Metrics are available only from history files.
The following tables identify the custom graphing metrics grouped by metric class.
| Channel | Description |
|---|---|
| CHANNEL_IO | Number of I/O operations transferred by the HSC K.SDI channel |
| CHANNEL_QUELEN | Number of I/O operations outstanding to all disks on the HSC K.SDI channel |
| CHANNEL_THRUPUT | Number of bytes per second transferred by the HSC K.SDI channel |
| CPU | Description |
|---|---|
| P_BUSY | Percentage of time that the physical CPU was busy |
| P_COMPAT | Percentage of time that the physical CPU was in compatibility mode |
| P_EXEC | Percentage of time that the physical CPU was in exec mode |
| P_IDLE | Percentage of time that the physical CPU was idle |
| P_INTERRUPT | Percentage of time that the physical CPU was in interrupt stack mode |
| P_KERNEL | Percentage of time that the physical CPU was in kernel mode |
| P_MP_SYNCH | Percentage of time that the physical CPU was in MP_synch mode |
| P_SUPER | Percentage of time that the physical CPU was in supervisor mode |
| P_USER | Percentage of time that the physical CPU was in user mode |
| Disk | Description |
|---|---|
| BUSY | Percent of time that there were one or more outstanding I/O operations to the disk |
| D_IO_SIZE | Number of 512-byte pages per I/O request |
| D_RESPONSE_TIME | Average number of milliseconds to process an I/O operation (zero if there are no I/O operations) |
| SPLITIO | Number of split I/O operations per second to the disk |
| FREEBLKS | Number of free blocks on the disk |
| MSCPIO | Number of MSCP I/O operations per second |
| PAGIO | Number of paging and swapping I/O operations per second |
| PRCT_FREE | Percentage of free disk space for a given disk |
| PRCT_USED | Percentage of used disk space for a given disk |
| QUEUE | Average number of I/O operations outstanding |
| READIO | Number of read I/O operations per second |
| THRUPUT | Number of Kbytes per second transferred to or from the disk |
| TOTIO | Number of I/O operations per second |
| WRITIO | Number of write I/O operations per second |
| Disk User | Description |
|---|---|
| USER_VOLUME_IO | Number of I/Os per second for the user's use of the disk volume. This is based on the collected top two disks' I/O rates per process. |
| HSC | Description |
|---|---|
| HSC_DISK_IO | Number of disk I/O operations performed by the HSC |
| HSC_DISK_THRUPUT | Number of bytes per second transferred to and from disks on the HSC |
| HSC_IO | Number of I/O operations transferred by the HSC |
| HSC_TAPE_IO | Number of tape I/O operations performed by the HSC |
| HSC_TAPE_THRUPUT | Number of bytes per second transferred to and from tapes on the HSC |
| HSC_THRUPUT | Number of bytes per second transferred by the HSC |
| File Metric | Description |
|---|---|
| FILE_TOTIO | Number of I/Os per second to this file |
| FILE_PAGIO | Number of paging I/Os per second to this file |
| FILE_READIO | Number of read I/Os per second to this file |
| FILE_WRITIO | Number of write I/Os per second to this file |
| FILE_THRUPUT | Number of bytes per second transferred to or from this file |
| FILE_RESPONSE_TIME | Average number of milliseconds elapsed between the start of the I/O (SIO) and its completion (EIO), for all of the I/Os to the file |
| FILE_SPLITIO | Number of split I/Os per second to this file |
| Process | Description |
|---|---|
| BUFIO | Number of process buffered I/O operations per second |
| CPUTIME | Percent of total CPU time that the process(es) consumed |
| DIRIO | Number of process direct I/O operations per second |
| DSKIO | Number of process disk I/O operations per second |
| DSKTP | Number of process bytes per second transferred to and from disks |
| FAULTS | Number of process hard and soft page faults per second |
| HARDFAULTS | Number of process page fault I/O operations per second |
| IMAGE_ACTIVATIONS | Number of process image activations per second |
| IO_SIZE | Average number of pages per process disk I/O |
| RESIDENCE | Number of resident processes with either the specified user name or image name |
| RESPONSE_TIME | Average number of seconds between the end-transaction for a terminal read and the start-transaction for the next terminal read, or an image termination |
| TAPIO | Number of process tape I/O operations per second |
| TAPTP | Number of process bytes per second transferred to and from tapes |
| TERM_INPUT | Number of process terminal read operations per second |
| TERM_THRUPUT | Number of process bytes per second transferred via terminal reads |
| VASIZE | Number of pages in the virtual address space for a given process |
| WSSIZE | Number of working set pages (X 1000) per process |
| Rule | Description (Rule Metrics available from history data only) |
|---|---|
| CLUSTER_OCCURRENCES | Number of rules prefixed with the letter “L” that fired per hour (does not include any rules in Domain Cluster) |
| CPU_OCCURRENCES | Number of rules prefixed with the letter “C” that fired per hour |
| IO_OCCURRENCES | Number of rules prefixed with the letter “I” that fired per hour |
| MEMORY_OCCURRENCES | Number of rules prefixed with the letter “M” that fired per hour |
| OCCURRENCES | Number of rules that fired per hour (including user-written rules) |
| POOL_OCCURRENCES | Number of rules in the set R0020, R0025, R0030, R0035, R0040, R0045, R0050, R0060, R0070, R0080 that fired per hour |
| RESOURCE_OCCURRENCES | Number of rules prefixed with an “R” but not in the above set that fired per hour |
| SCS | Description |
|---|---|
| BDT_W | Number of times per second that messages had to wait for buffers |
| BLKS_R | Block request rate |
| BLKS_S | Block send rate |
| CR_W | Number of times per second that messages had to wait due to insufficient credits |
| DGS_D | Datagram discard rate |
| DGS_R | Datagram receive rate |
| DGS_S | Datagram send rate |
| KB_MAP | Kbytes transferred rate |
| KB_RC | Kbytes received rate |
| KB_S | Kbytes sent rate |
| MGS_R | Message receive rate |
| MGS_S | Message send rate |
|
| System | Description |
|---|---|
| ARRLOCPK | Arriving local packets per second |
| ARRTRAPK | Transit packets per second |
| BATCH_COMQ | Number of computable batch processes |
| BATCH_PROCESSES | Number of batch processes |
| BUFIO | Buffered I/O per second |
| CEF | Average number of processes in common event flag wait state |
| COLPG | Average number of processes in collided page wait state |
| COM | Average number of processes in computable state |
| COMO | Average number of processes in computable outswapped state |
| COMPAT | Percent CPU time spent in compatibility mode |
| CPU_BATCH | Percent CPU time used by batch jobs |
| CPU_DETACHED | Percent CPU time used by detached jobs |
| CPU_INTERACTIVE | Percent CPU time used by interactive jobs |
| CPU_NETWORK | Percent CPU time used by network jobs |
| CPU_OTHER | Percent CPU time for which the Performance Manager did not capture process data |
| CPU_TOTAL | Percent CPU time not in idle mode |
| CPU_VUP_RATING | The VUP rating of the CPU |
| SWPBUSY | Percentage of CPU SWAPPER busy |
| IOBUSY | Percentage of CPU Multi I/O busy |
| ANYIOBUSY | Percentage of CPU Any I/O busy |
| PAGEWAIT | Percentage of CPU idle: page wait |
| SWAPWAIT | Percentage of CPU idle: swap wait |
| MMGWAIT | Percentage of CPU idle: page or swap wait |
| SYSIDLE | Percentage of CPU and I/O idle |
| CPUONLY | Percentage of CPU only busy |
| IOONLY | Percentage of I/O only busy |
| CPUIO | Percentage of CPU and I/O busy |
| CUR | Average number of processes in currently executing process state |
| DEADLOCK_FIND | Number of deadlocks found by OpenVMS per second |
| DEADLOCK_SEARCH | Number of deadlock searches per second |
| DEPLOCPK | Departing local packets per second |
| DETACHED_COMQ | Number of computable detached processes |
| DETACHED_PROCESSES | Number of detached processes |
| DIRIO | Direct I/O per second |
| DISK_PAGING | Number of paging I/O operations per second |
| DISK_SWAPPING | Number of swapping I/O operations per second |
| DISK_USER | Number of user disk I/O operations per second |
| DZROFAULTS | Number of demand-zero page faults per second |
| ERASE_QIO | Number of Erase QIO operations per second |
| EXEC | Percent CPU time charged to executive mode |
| FILE_OPEN | Number of files opened per second |
| FILE_SYS | Percent CPU time spent in the file system |
| FPG | Average number of processes in free page wait state |
| FREECNT | Free list page count |
| FREEFAULTS | Number of free list page faults per second |
| FREELIM | Percent of physical memory allocated to the free list by the SYSGEN parameter FREELIM |
| FREELIST | Percent of physical memory on the FREELIST, excluding the number of pages for FREELIM |
| GVALID | Global page faults per second |
| HIB | Average number of processes in hibernate wait state |
| HIBO | Average number of processes in hibernate outswapped wait state |
| IDLE | Percent CPU time that is idle time |
| IMAGE_ACTIVATIONS | Number of image activations per second |
| INCOMING_LOCKING | Number of incoming ENQs or Lock Conversions (CVTs) from remote nodes per second |
| INPROCACT | Number of active inswapped processes |
| INPROCINACT | Number of inactive inswapped processes |
| ISWPCNT | Inswaps per second |
| INTERACTIVE_PROCESSES | Number of interactive processes |
| INTERRUPT | Percent CPU time spent on the interrupt stack |
| INT_COMQ | Number of computable interactive processes |
| IRP_CNT | Count of the IRPs in use |
| IRP_MAX | Length of the IRP list |
| KERNEL | Percent CPU time charged to kernel mode |
| LAT_TERMIO | Number of LAT terminal I/O operations per second |
| LEF | Average number of processes in local event flag wait state |
| LEFO | Average number of processes in local event flag outswapped wait state |
| LG_RESPONSE | Average process terminal response time for interactions requiring greater than 1.0 CPU seconds |
| LOCAL_LOCKING | Number of local node ENQs or Lock Conversions (CVTs) per second |
| LOCK_CNT | Count of lock IDs in use |
| LOGNAM | Number of logical name translations per second |
| LRP_CNT | Count of the LRPs in use |
| LRP_MAX | Length of the LRP list |
| MBREADS | Mailbox reads per second |
| MBWRITES | Mailbox writes per second |
| MED_RESPONSE | Average process terminal response time for interactions requiring greater than or equal to 0.1 CPU seconds, and less than 1.0 CPU seconds |
| MEM_TOTAL | Percent of physical memory in use, excluding pages on the free and modified lists |
| MFYCNT | Modified list page count |
| MFYFAULTS | Number of modified list page faults per second |
| MODIFIED | Percent of physical memory on the modified list |
| MP_SYNCH | CPU time charged while waiting for a resource protected by a spin lock to be freed |
| MWAIT | Average number of processes in miscellaneous wait state |
| NETWORK_COMQ | Number of computable network processes |
| NETWORK_PROCESSES | Number of network processes |
| NP_FREE_BLOCKS | Count of free non-paged pool blocks |
| NP_FREE_BYTES | Number of free Kbytes in non-paged pool |
| NP_FREE_LEQ_32 | Number of free non-paged pool blocks less than or equal to 32 bytes in size |
| NP_MAX_BLOCK | Size, in Kbytes, of the largest free non-paged pool block |
| NP_MIN_BLOCK | Size, in bytes, of the smallest free non-paged pool block |
| NP_POOL_MAX | Size, in Kbytes, of non-paged pool |
| NV_TERMIO | Number of NV terminal I/O operations per second |
| OTHERBUFIO | Number of buffered I/O operations, less any terminal I/O operations, per second |
| OUTGOING_LOCKING | Number of outgoing ENQs or Lock Conversions (CVTs) to remote nodes per second |
| OUTPROCACT | Number of active outswapped processes (COMO) |
| OUTPROCINACT | Number of inactive outswapped processes |
| PAGEFILE_UTILIZATION | Percent of pagefile pages in use or occupied |
| PFW | Average number of processes in page fault wait state |
| PG_FREE_BLOCKS | Count of free paged pool blocks |
| PG_FREE_BYTES | Number of free Kbytes in paged pool |
| PG_FREE_LEQ_32 | Number of free paged pool blocks less than or equal to 32 bytes in size |
| PG_MAX_BLOCK | Size, in Kbytes, of the largest free paged pool block |
| PG_MIN_BLOCK | Size, in bytes, of the smallest free paged pool block |
| PG_POOL_MAX | Size, in Kbytes, of paged pool |
| PREADIO | Read operations per second from a disk due to a page fault |
| PREADS | Pages read per second from a disk due to a page fault |
| PWRITES | Pages written per second to paging files |
| PWRITEIO | Write operations per second to paging files |
| RCVBUFFL | Receiver buffer failures per second |
| RELATIVE_CPU_POWER | This node's VUP rating as a percentage of the composite of selected nodes |
| RESOURCE_CNT | Count of resources in use |
| RT_TERMIO | Number of remote (RT) terminal I/O operations per second |
| SM_RESPONSE | Average process terminal response time for interactions requiring less than 0.1 CPU seconds |
| SPLITIO | Number of split I/O transfers per second |
| SRP_CNT | Count of SRPs in use |
| SRP_MAX | Length of the SRP list |
| SUPER | Percent CPU time charged to supervisor mode |
| SUSP | Average number of processes in suspend wait state |
| SUSPO | Average number of processes in suspend outswapped wait state |
| SYSFAULTS | System page faults per second |
| SYSTEMWS | Percent of physical memory used by processes with the user name SYSTEM |
| TOTAL_PROCESSES | Total number of processes |
| TRCNGLOS | Transit congestion losses per second |
| TT_TERMIO | Number of TT terminal I/O operations per second |
| TW_TERMIO | Number of DECterm I/O operations per second |
| TX_TERMIO | Number of TX terminal I/O operations per second |
| USERWS | Percent of physical memory used by process working sets |
| USER_MODE | Percent CPU time spent in user mode |
| VMSALLOC | Percent of physical memory allocated to OpenVMS (including pool) |
| WINDOW_TURN | Number of file window turns per second |
| WRTINPROG | Transition page faults per second |
| WT_TERMIO | Number of UIS terminal operations per second |
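The system metrics in the table above are combined by the predefined graph types described next. As an illustration only (the /TYPE and /OUTPUT qualifier names, graph-type keyword, date values, and output file name shown here are assumptions; the /BEGINNING, /ENDING, and /COLLECTION_DEFINITION qualifiers follow the forms described earlier in this chapter), a one-day CPU utilization graph from the daily Collection Definition might be requested as:

```
$ ADVISE PERFORMANCE GRAPH /TYPE=CPU_UTILIZATION -
      /COLLECTION_DEFINITION=CPD -
      /BEGINNING=15-JAN-2008:00:00 /ENDING=16-JAN-2008:00:00 -
      /OUTPUT=CPU_DAY.TXT
```

CPD is the default Collection Definition for daily data, so the /COLLECTION_DEFINITION qualifier could be omitted here.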
Plots the number of computable processes categorized by:
Plots the percentage of CPU time spent in the various processor modes:
Plots 6 metrics for percent CPU utilization:
This is the default graph type.
Plots the number of DECnet operations per second in terms of:
Plots the disk operations per second categorized by:
Plots the page fault rate per second, and places the rate into these categories:
Plots the file operation attempt rate to the file system caches categorized by:
Plots the number of processes categorized by:
Plots the number of distributed lock operations per second categorized by:
Plots physical memory usage categorized by:
Plots the number of processes categorized as:
Plots the terminal response time for interactive processes categorized as:
Plots the number of terminal operations per second categorized by the type of terminal used:
Plots the five remote nodes with the highest rate of BDT waits (plus “Other”), which occur when the local node issues an I/O but the connection must wait for a buffer descriptor. The metric graphed is BDT_W.
Plots the top five nodes with the highest block transfer requests (plus “Other”) from the remote system to the local system. The metric graphed is BLKS_R.
Plots the top five nodes with the highest block transfers sent (plus “Other”) from the local system to the remote system. The metric graphed is BLKS_S.
Plots the top five (plus “Other Images”) creators of buffered I/O by image names. The metric graphed is BUFIO.
Plots the top five (plus “Other Users”) creators of buffered I/O by user names. The metric graphed is BUFIO.
Plots the top five (plus “Other Workloads”) creators of buffered I/O by workload names. The metric graphed is BUFIO.
Plots the five (plus “Other Disks”) disk devices that experienced the highest busy time percentages. The metric graphed is BUSY.
Plots the five (plus “Other”) processors that experienced the highest busy time percentages. The metric graphed is P_BUSY.
Plots the five (plus “Other Volumes”) disk volumes that experienced the highest busy time percentages. The metric graphed is BUSY.
Plots the five (plus “Other”) HSC channels that experienced the largest I/O rate, in I/Os per second. The metric graphed is CHANNEL_IO.
Plots the five (plus “Other”) HSC channels that experienced the largest queue length. The metric graphed is CHANNEL_QUELEN.
Note: The channel names are provided in the format nodename_n, where n represents the channel number (K.SDI) on the HSC node indicated by node name. If the channel cannot be identified, the character u is substituted for n. See logical name PSDC$hscname_hscunitnumber in the Performance Agent Administrator Guide.
Plots the five (plus “Other”) HSC channels that experienced the largest throughput rate, in Kilobytes per second. The metric graphed is CHANNEL_THRUPUT.
Note: The channel names are provided in the format nodename_n, where n represents the channel number (K.SDI) on the HSC node indicated by node name. If the channel cannot be identified, the character u is substituted for n. See logical name PSDC$hscname_hscunitnumber in the Performance Agent Administrator Guide.
Plots the five (plus “Other”) rule identifiers that fired, as a rate per hour. The metric graphed is CLUSTER_OCCURRENCES and is available only from history data.
Plots the five (plus “Other”) processors in terms of time spent in compatibility mode, as a percent of CPU time. The metric graphed is P_COMPAT.
Plots the top five (plus “Other Images”) consumers of CPU time by image name. The metric graphed is CPUTIME.
Plots the five (plus “Other”) CPU rule identifiers that fired, as a rate per hour. The metric graphed is CPU_OCCURRENCES and is available only from history data.
Plots the top five (plus “Other Users”) consumers of CPU time by user name. The metric graphed is CPUTIME.
Plots the top five (plus “Other”) workloads as consumers of CPU time. The metric graphed is CPUTIME.
Plots the five nodes with the highest rate of credit waits (plus “Other”), which occur when a connection must wait for a send credit. The metric graphed is CR_W.
Plots the five nodes with the most datagrams discarded (plus “Other”), counted when application datagrams are discarded by the port driver. The metric graphed is DGS_D.
Plots the five nodes with the most datagrams received (plus “Other”), counted when the local system receives datagrams over the connection from the remote system and gives them to a SYSAP. The metric graphed is DGS_R.
Plots the five nodes with the most datagrams sent (plus “Other”), counted when application datagrams are sent over the connection. The metric graphed is DGS_S.
Plots the top five (plus “Other Images”) creators of direct I/O by image name. The metric graphed is DIRIO.
Plots the top five (plus “Other Users”) creators of direct I/O by user name. The metric graphed is DIRIO.
Plots the top five (plus “Other Workloads”) creators of direct I/O by workload name. The metric graphed is DIRIO.
Plots the top five (plus “Other Images”) creators of disk I/O by image name. The metric graphed is DSKIO.
Plots the top five (plus “Other Users”) creators of disk I/O by user name. The metric graphed is DSKIO.
Plots the top five (plus “Other”) creators of disk I/O by workload name. The metric graphed is DSKIO.
Plots the five (plus “Other”) processors in terms of time spent in executive mode, as a percent of CPU time. The metric graphed is P_EXEC.
Plots the top five (plus “Other Images”) creators of page faults by image name. The metric graphed is FAULTS.
Plots the top five (plus “Other Users”) creators of page faults by user name. The metric graphed is FAULTS.
Plots the top five (plus “Other Workloads”) creators of page faults by workload name. The metric graphed is FAULTS.
Plots the top five (plus “Other”) disk devices in terms of number of free disk pages. The metric graphed is FREEBLKS.
Plots the top five (plus “Other”) disk volumes in terms of number of free disk pages. The metric graphed is FREEBLKS.
Plots the top five (plus “Other Images”) creators of hard page faults by image name. The metric graphed is HARDFAULTS.
Plots the top five (plus “Other Users”) creators of hard page faults by user name. The metric graphed is HARDFAULTS.
Plots the top five (plus “Other Workloads”) creators of hard page faults by workload name. The metric graphed is HARDFAULTS.
Plots the top five (plus “Other”) HSCs in terms of disk I/O operations per second. The metric graphed is HSC_DISK_IO.
Plots the top five (plus “Other”) HSCs in terms of disk throughput in Kilobytes per second. The metric graphed is HSC_DISK_THRUPUT.
Plots the top five (plus “Other”) HSCs in terms of I/O operations per second. The metric graphed is HSC_IO.
Plots the top five (plus “Other”) HSCs in terms of tape I/O operations per second. The metric graphed is HSC_TAPE_IO.
Plots the top five (plus “Other”) HSCs in terms of tape throughput in Kilobytes per second. The metric graphed is HSC_TAPE_THRUPUT.
Plots the top five (plus “Other”) HSCs in terms of total throughput in Kilobytes per second. The metric graphed is HSC_THRUPUT.
Plots the top five (plus “Other”) images in terms of image activations per second. The metric graphed is IMAGE_ACTIVATIONS.
Plots the top five (plus “Other”) image and volume name pairs in terms of their I/O rate. The metric graphed is IMAGE_VOLUME_IO.
Plots the five (plus “Other”) processors in terms of time spent on the interrupt stack, as a percent of CPU time. The metric graphed is P_INTERRUPT.
Plots the five (plus “Other Disks”) disk devices that incurred the highest I/O rates. The metric graphed is TOTIO.
Plots the five (plus “Other”) files that incurred the highest I/O rates. The metric graphed is FILE_TOTIO.
Plots the five (plus “Other”) IO rule identifiers that fired, as a rate per hour. The metric graphed is IO_OCCURRENCES.
Plots the five (plus “Other Volumes”) disk volumes that incurred the highest I/O rates. The metric graphed is TOTIO.
Plots five nodes (plus “Other”) in terms of the number of kilobytes of data mapped for block transfer. The metric graphed is KB_MAP.
Plots five nodes (plus “Other”) in terms of the number of kilobytes of data received by the local system from the remote system through request-data commands. The metric graphed is KB_RC.
Plots five nodes (plus “Other”) in terms of the number of kilobytes of data sent from the local system to the remote system through send-data commands. The metric graphed is KB_S.
Plots the five (plus “Other”) processors in terms of time spent in kernel mode as a percent of CPU time. The metric graphed is P_KERNEL.
Plots the five (plus “Other”) memory rule identifiers that fired, as a rate per hour. The metric graphed is MEMORY_OCCURRENCES and is available only from history data.
Plots five nodes (plus “Other”) in terms of number of application datagram messages received over the connection. The metric graphed is MGS_R.
Plots five nodes (plus “Other”) in terms of number of application datagram messages sent over the connection. The metric graphed is MGS_S.
Plots the five (plus “Other”) processors in terms of time spent in MP synchronization mode, as a percent of CPU time. The metric graphed is P_MP_SYNCH.
Plots the five (plus “Other”) files that incurred the highest MSCP I/O rates. The metric graphed is FILE_MSCPIO.
Plots the five (plus “Other Disks”) disk devices that incurred the highest I/O paging and swapping rates. The metric graphed is PAGIO.
Plots the five (plus “Other”) files that incurred the highest I/O paging and swapping rates. The metric graphed is FILE_PAGIO.
Plots the five (plus “Other Volumes”) disk volumes that incurred the highest I/O paging and swapping rates. The metric graphed is PAGIO.
Plots the five (plus “Other”) pool rule identifiers that fired, as a rate per hour. The metric graphed is POOL_OCCURRENCES and is available only from history data.
Plots the top five (plus “Other”) disk devices in terms of percentage of free disk blocks. The metric graphed is PRCT_FREE.
Plots the top five (plus “Other”) disk devices in terms of percentage of used disk blocks. The metric graphed is PRCT_USED.
Plots the top five (plus “Other”) disk volumes in terms of percentage of free disk blocks. The metric graphed is PRCT_FREE.
Plots the top five (plus “Other”) disk volumes in terms of percentage of used disk blocks. The metric graphed is PRCT_USED.
Plots the five (plus “Other Disks”) disk devices that experienced the longest queue lengths. The metric graphed is QUEUE.
Plots the five (plus “Other Volumes”) disk volumes that experienced the longest queue lengths. The metric graphed is QUEUE.
Plots the five (plus “Other Disks”) disk devices that incurred the highest read I/O rates. The metric graphed is READIO.
Plots the five (plus “Other”) files that incurred the highest read I/O rates. The metric graphed is FILE_READIO.
Plots the five (plus “Other Volumes”) disk volumes that incurred the highest read I/O rates. The metric graphed is READIO.
Plots the top five (plus “Other Images”) images most resident on the system by image name. The metric graphed is RESIDENCE.
Plots the top five (plus “Other Users”) users most resident on the system by user name. Note that each subprocess adds to the residence for the parent process's user name. The metric graphed is RESIDENCE.
Plots the top five (plus “Other Workloads”) workloads most resident on the system by workload name. The metric graphed is RESIDENCE.
Plots the five (plus “Other”) resource rule identifiers that fired, as a rate per hour. The metric graphed is RESOURCE_OCCURRENCES and is available only from history data.
Plots the five (plus “Other Disks”) disk devices that incurred the highest response times. The metric graphed is D_RESPONSETIME.
Plots the five (plus “Other”) files that incurred the highest response times. The metric graphed is FILE_RESPONSE_TIME.
Plots the five (plus “Other Images”) images with the highest terminal response time. The metric graphed is RESPONSE_TIME.
Plots the five (plus “Other Users”) users with the highest terminal response time. The metric graphed is RESPONSE_TIME.
Plots the five (plus “Other Volumes”) disk volumes that have the highest response times. The metric graphed is D_RESPONSETIME.
Plots the five (plus “Other Workloads”) workloads with the highest terminal response time. The metric graphed is RESPONSE_TIME.
Plots the five (plus “Other”) rule identifiers that fired, as a rate per hour. The metric graphed is OCCURRENCES and is available only from history data.
Plots the five (plus “Other”) disk devices that have the highest split I/O operations. The metric graphed is SPLITIO.
Plots the five (plus “Other”) files that have the highest split I/O operations. The metric graphed is FILE_SPLITIO.
Plots the five (plus “Other”) disk volumes that have the highest split I/O operations. The metric graphed is SPLITIO.
Plots the five (plus “Other”) processors in terms of time spent in Supervisor mode, as a percent of CPU time. The metric graphed is P_SUPER.
Plots the top five (plus “Other Images”) images with the highest character per second terminal input. The metric graphed is TERM_INPUT.
Plots the top five (plus “Other Users”) users with the highest character per second terminal input. The metric graphed is TERM_INPUT.
Plots the top five (plus “Other Workloads”) workloads with the highest character per second terminal input. The metric graphed is TERM_INPUT.
Plots the top five (plus “Other Images”) images with the highest character per second terminal throughput. The metric graphed is TERM_THRUPUT.
Plots the top five (plus “Other Users”) users with the highest character per second terminal throughput. The metric graphed is TERM_THRUPUT.
Plots the top five (plus “Other Workloads”) workloads with the highest character per second terminal throughput. The metric graphed is TERM_THRUPUT.
Plots the five (plus “Other Disks”) disk devices that incurred the highest throughput rates. The metric graphed is THRUPUT.
Plots the five (plus “Other”) files that incurred the highest throughput rates. The metric graphed is FILE_THRUPUT.
Plots the five (plus “Other”) images with the highest throughput rates. The metric graphed is THRUPUT.
Plots the five (plus “Other”) users with the highest throughput rates. The metric graphed is THRUPUT.
Plots the five (plus “Other”) disk volumes that incurred the highest throughput rates. The metric graphed is THRUPUT.
Plots the five (plus “Other”) workloads with the highest throughput rates. The metric graphed is THRUPUT.
Plots the top five (plus “Other”) users in terms of image activations per second. The metric graphed is IMAGE_ACTIVATIONS.
Plots the five (plus “Other”) processors in terms of time spent in User mode, as a percent of CPU time. The metric graphed is P_USER.
Plots the top five (plus “Other”) user and volume name pairs in terms of their I/O rate. The metric graphed is USER_VOLUME_IO.
Plots the top five (plus “Other”) workloads in terms of image activations per second. The metric graphed is IMAGE_ACTIVATIONS.
Plots the five (plus “Other Disks”) disk devices that incurred the highest write I/O rates. The metric graphed is WRITIO.
Plots the five (plus “Other”) files that incurred the highest write I/O rates. The metric graphed is FILE_WRITIO.
Plots the five (plus “Other Volumes”) disk volumes that incurred the highest write I/O rates. The metric graphed is WRITIO.
Plots the top five (plus “Other Images”) images that had the largest combined virtual address space by image name. The metric graphed is VASIZE.
Plots the top five (plus “Other Users”) users that had the largest combined virtual address space by user name. The metric graphed is VASIZE.
Plots the top five (plus “Other”) workloads that had the largest combined virtual address space. The metric graphed is VASIZE.
Plots the top five (plus “Other Images”) images that had the largest combined working set sizes by image name. The metric graphed is WSSIZE.
Plots the top five (plus “Other Users”) users that had the largest combined working set sizes by user name. The metric graphed is WSSIZE.
Plots the top five (plus “Other”) workloads that had the largest combined working set sizes. The metric graphed is WSSIZE.
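For the workload-oriented TOP graphs above, workload definitions must be supplied through a workload family, as described at the beginning of this section. A sketch only (the graph-type keyword TOP_CPU_WORKLOADS, the /CLASSIFY_BY qualifier form, and the family name TIMESHARING are assumptions to be checked against the qualifier descriptions in this chapter):

```
$ ADVISE PERFORMANCE GRAPH /TYPE=TOP_CPU_WORKLOADS -
      /CLASSIFY_BY=(FAMILY:TIMESHARING) -
      /BEGINNING=YESTERDAY /ENDING=TODAY
```

YESTERDAY and TODAY are the keyword date values noted in the /BEGINNING qualifier description; without a workload family, process activity is averaged together under the default “other” classification.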
Copyright © 2008 CA. All rights reserved.