You can help ensure the best possible performance for large-scale batch operations by confirming that the input data is organized in a meaningful sequence; that is, the data is sorted according to the clustering order of the primary table. Better yet, all of the tables accessed by the large batch process should share the same clustering order as the input. This could mean pre-sorting the data into the proper order before processing. If you code a generic transaction processor that handles both online and batch processing, you could be creating problems. If the online access is truly random, you can organize the tables and batch input files for the best batch performance with little or no impact on the online transactions.
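As a minimal sketch of this arrangement (the ACCOUNT table, the ACCT_NO column, and the key position in the input record are assumptions for illustration), the clustering index below defines the sequence into which the batch input file would be pre-sorted before the run:

    -- Hypothetical clustering index: ACCOUNT rows are clustered on ACCT_NO,
    -- so the batch input file should be pre-sorted into ACCT_NO sequence.
    CREATE INDEX ACCT_IX1
        ON ACCOUNT (ACCT_NO ASC)
        CLUSTER;

    -- The pre-sort step could use a utility such as DFSORT, for example:
    --   SORT FIELDS=(1,10,CH,A)   assuming ACCT_NO occupies bytes 1-10
    --                             of each input record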
A locally executing batch process that reads the input data in the same sequence as the clustering order of your tables, and that is bound with RELEASE(DEALLOCATE), takes advantage of several performance enhancers, especially dynamic prefetch and index lookaside, to improve the performance of these batch processes.
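For example, a package used by such a batch job might be bound as follows (the collection and member names are assumptions for illustration):

    BIND PACKAGE(BATCHCOLL) MEMBER(BATCHPGM) ACTION(REPLACE) RELEASE(DEALLOCATE) ISOLATION(CS)

With RELEASE(DEALLOCATE), resources such as table space locks and the tracking used for sequential detection and index lookaside are held until the thread is deallocated rather than being released at each commit, which is what allows these enhancers to carry across commits in a long-running, sequentially ordered batch job.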