
System Flow

The DataManager online facility builds tables that store the descriptions and definitions of input and output records. DataManager verifies that it knows where to find every value you have defined as output: it reads each ORD, checks the associated IRD, and confirms that a matching input field exists for every data field on the output record. This checking is done automatically by the online facility.
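The cross-check described above can be sketched as a simple lookup: every field an ORD asks for must resolve to a field in the associated IRD. The function and field names below are hypothetical illustrations, not actual DataManager structures.

```python
# Hypothetical sketch of the ORD-against-IRD cross-check.
# Field names and record layouts are illustrative only.

def verify_ord(ord_fields, ird_fields):
    """Return the output fields that have no matching input field."""
    available = set(ird_fields)
    return [field for field in ord_fields if field not in available]

# An IRD describing the fields available on the input record:
ird = ["JOBNAME", "CPUTIME", "START", "END"]

# An ORD that asks for one field the IRD does not supply:
ord_rec = ["JOBNAME", "CPUTIME", "PRIORITY"]

missing = verify_ord(ord_rec, ird)
print(missing)  # ['PRIORITY'] -- this definition would fail verification
```

A nonempty result corresponds to the case DataManager flags: an output field with no input field to supply its value.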

When you have finished defining records, you save this version so that daily production runs can use it. This saving step is called the commit process.

DataManager processing consolidates the performance and accounting data collected at your site. The committed IRD and ORD definitions specify what data you want collected. However, because DataManager is a table-driven system, input data must be in an acceptable format. Therefore, all input data must be:

When DataManager processes this data, it executes a construct program defined by each application. The final output is data ready for further processing by PMA applications.
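The flow above, table-driven consolidation followed by an application-defined construct step, can be sketched as follows. The table layout, application names, and construct hooks are assumptions made for illustration, not the actual DataManager implementation.

```python
# Hypothetical sketch of table-driven processing with a per-application
# construct step. All names here are illustrative only.

def process(input_records, definitions, construct_programs):
    """Consolidate input records per the committed definitions, then run
    each application's construct program on its selected data."""
    output = {}
    for app, wanted_fields in definitions.items():
        # Keep only the fields this application's definitions ask for.
        selected = [
            {f: rec[f] for f in wanted_fields}
            for rec in input_records
            if all(f in rec for f in wanted_fields)
        ]
        # The construct program shapes the data for the PMA application.
        output[app] = construct_programs[app](selected)
    return output

records = [
    {"JOBNAME": "PAYROLL", "CPUTIME": 12.5},
    {"JOBNAME": "BILLING", "CPUTIME": 3.0},
]
definitions = {"CHARGEBACK": ["JOBNAME", "CPUTIME"]}
constructs = {"CHARGEBACK": lambda recs: sum(r["CPUTIME"] for r in recs)}

print(process(records, definitions, constructs))  # {'CHARGEBACK': 15.5}
```

The point of the sketch is the division of labor: the committed tables decide which data is selected, while each application's construct program decides how that data is shaped for downstream use.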

The commit step is the boundary between two different DataManager environments:

- a test environment, in which you create and update record definitions
- a production environment, in which daily jobs use only the committed definitions

DataManager lets you work in both of these environments at the same time. For example, you may decide to include some other data in your chargeback system. To do this, you copy the definitions you are currently using for production and then update the copy. Meanwhile, production keeps running with the committed definitions. When you are ready to update the production version, run the COMMIT process. DataManager then changes the content of its tables based on the value each record stores in its Version field. For example, committing a new TEST version replaces the current production definitions with the definitions in that TEST version.
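A commit of this kind, promoting records marked TEST based on their Version field, might look like the following sketch. The record layout, version labels, and promotion rule are assumptions for illustration; the actual table updates DataManager performs are product-defined.

```python
# Hypothetical sketch of a COMMIT driven by each record's Version field.
# Record layout and version labels are illustrative only.

def commit(table):
    """Replace each production definition with its TEST counterpart,
    then drop the separate TEST copies."""
    tests = {rec["name"]: rec for rec in table if rec["version"] == "TEST"}
    committed = []
    for rec in table:
        if rec["version"] == "TEST":
            continue  # absorbed into the production copy below
        if rec["name"] in tests:
            # The TEST definition becomes the new production definition.
            committed.append(dict(tests[rec["name"]], version="PRODUCTION"))
        else:
            committed.append(rec)
    # TEST records with no production counterpart become new production records.
    prod_names = {rec["name"] for rec in committed}
    for name, rec in tests.items():
        if name not in prod_names:
            committed.append(dict(rec, version="PRODUCTION"))
    return committed

table = [
    {"name": "CPU_ORD", "version": "PRODUCTION", "fields": ["CPUTIME"]},
    {"name": "CPU_ORD", "version": "TEST", "fields": ["CPUTIME", "JOBNAME"]},
]
print(commit(table))  # one PRODUCTION record carrying the TEST definition
```

Because production reads only records whose Version field says PRODUCTION, development copies marked TEST can coexist in the same tables without disturbing the daily runs.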

Therefore, DataManager allows you to continue development work while daily production jobs continue preparing data for PMA applications.