

Aggregate Transaction Data

Transaction data is often collected to match it against thresholds or to calculate periodic success percentages. For example, every five minutes a virtual transaction is performed against a system and the result (response time in milliseconds) is stored, as seen below:

[…]
1/1/2004 00:00 432
1/1/2004 00:05 560
1/1/2004 00:10 329
1/1/2004 00:15 250
1/1/2004 00:20 275
1/1/2004 00:25 2860
1/1/2004 00:30 140
[…]

In other situations, instead of using virtual transactions, there may be access to actual transactions taking place in a system. In these cases, hundreds or even thousands of transactions may be performed hourly.

In both of the above cases, loading such a volume of information into CA Business Service Insight should be avoided, if at all possible.

Aggregating the data by time period is the best way to reduce the volume of data. When the threshold against which success is measured is fixed, the Adapter can aggregate by counting the number of transactions within the aggregation period that were successful. For example, if the success threshold in the previous example is set at 500 milliseconds, only five of the seven displayed transactions were successful. The problem with this approach is the fixed threshold: what if the threshold is later changed? In that case, the raw data must be reread and retested by the Adapter against the new threshold.
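The fixed-threshold counting described above can be sketched as follows (illustrative only; the sample response times and the 500 millisecond threshold come from the examples in this article):

```python
# Sample response times in ms, taken from the table above.
samples = [432, 560, 329, 250, 275, 2860, 140]

# The fixed success threshold from the example.
THRESHOLD_MS = 500

# Count transactions at or below the threshold as successful.
successful = sum(1 for rt in samples if rt <= THRESHOLD_MS)
total = len(samples)

print(successful, total)  # 5 of the 7 transactions were successful
```

The limitation is visible here: if `THRESHOLD_MS` changes, `samples` (the raw data) must be reread to recompute the count.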

Therefore, optimally the Adapter should aggregate transaction data in a flexible manner without losing significant data.

A limited solution is to allow the Adapter to test the transactions against several thresholds. There are two ways of doing this:

Both options suffer from the same problem: thresholds can later be changed only within a small set of predefined values.

Recommended Solution

Assumption: all potential thresholds are multiples of a certain number. For example, all thresholds (in milliseconds) are multiples of 250, so 0, 250, 500, 1750, and 3000 milliseconds are all potential thresholds.

Based on this assumption, the suggested solution is to aggregate transactions by rounding all values up to the common multiple and counting how many transactions fall into each rounded value:

Event Type = {RangeFrom, RangeTo, TransactionCount}

For example, the following Events will be generated in order to aggregate the data displayed above, where the common multiple is 250 milliseconds:

{1,    250,  2}
{251,  500,  3}
{501,  750,  1}
{2751, 3000, 1}

Comments:

The timestamp of these events would be the same. For example, all aggregated events may occur at 1/1/2007 00:00, and there may be another set of events for the next time sample at 1/1/2007 01:00, assuming hourly aggregation.

RangeTo is calculated by rounding a transaction's response time up to the nearest common multiple (see next).

RangeFrom is RangeTo minus the multiple, plus 1. It is included for clarity only and may be omitted.
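The bucketing rule described in these comments can be sketched as follows (illustrative only; the sample response times and the 250 millisecond multiple come from the examples above):

```python
import math
from collections import Counter

# Sample response times in ms, taken from the table above.
samples = [432, 560, 329, 250, 275, 2860, 140]

# The common multiple assumed in the example.
MULTIPLE = 250

# Count transactions per bucket, keyed by the bucket's upper bound.
buckets = Counter()
for rt in samples:
    # Round up to the nearest multiple to get RangeTo.
    range_to = math.ceil(rt / MULTIPLE) * MULTIPLE
    buckets[range_to] += 1

# Emit one event per bucket: {RangeFrom, RangeTo, TransactionCount}.
for range_to in sorted(buckets):
    range_from = range_to - MULTIPLE + 1
    print({"RangeFrom": range_from, "RangeTo": range_to,
           "TransactionCount": buckets[range_to]})
```

Note that rounding up (ceiling) rather than rounding to the nearest multiple is what keeps each transaction inside its stated range: a 329 ms transaction belongs in the 251-500 bucket, not 1-250.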

For example, aggregating by the hour would look similar to the following (replace MULTIPLE with the actual multiple value):

select  trunc (time_stamp, 'hh') "TimeStamp",
        ceil (response_time/MULTIPLE)*MULTIPLE-MULTIPLE+1 "RangeFrom",
        ceil (response_time/MULTIPLE)*MULTIPLE "RangeTo",
        count (*) "TransactionCount"
from    t_log
group by trunc (time_stamp, 'hh'),
        ceil (response_time/MULTIPLE)*MULTIPLE

In the Business Logic, the following condition may be applied to the Events:

If eventDetails("RangeTo")<=Threshold Then
	SuccessfulTransactions=SuccessfulTransactions+eventDetails("TransactionCount")
End If
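The same condition can be sketched over the aggregated events (illustrative only; the event tuples and the 500 millisecond threshold are the assumed example values from this article):

```python
# Aggregated events, each {RangeFrom, RangeTo, TransactionCount},
# as derived from the sample data with a 250 ms multiple.
events = [
    {"RangeFrom": 1,    "RangeTo": 250,  "TransactionCount": 2},
    {"RangeFrom": 251,  "RangeTo": 500,  "TransactionCount": 3},
    {"RangeFrom": 501,  "RangeTo": 750,  "TransactionCount": 1},
    {"RangeFrom": 2751, "RangeTo": 3000, "TransactionCount": 1},
]

# The success threshold being evaluated; changing it requires no
# reread of the raw data, only re-evaluation of the events.
THRESHOLD_MS = 500

# A bucket counts as successful only if its entire range is within
# the threshold -- hence the comparison against RangeTo.
successful = sum(e["TransactionCount"] for e in events
                 if e["RangeTo"] <= THRESHOLD_MS)
total = sum(e["TransactionCount"] for e in events)

print(successful, total)  # 5 successful out of 7
```

This matches the count obtained from the raw data, while letting the threshold move to any multiple of 250 without rereading the source.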

Some concluding insights: