Batch Strategies

Let’s review two general strategies for creating custom batches. Both strategies assume some knowledge of the quality of the data being placed into the batch staging tables. The first strategy assumes that the data quality of the source system used to import data into the batch is poor; we will call this the "Dirty Source Data Scenario." Here we allow the import to place the data into the batch staging table(s) and then expect the data entry staff to clean the dirty data within the batch user interface grid before it is committed to the production tables. In this scenario, we keep the number of constraints on the batch staging table(s) low compared with the production table(s). During data entry, we enforce as much data integrity as we can through the Add and Edit Data Forms that add and edit rows within the batch, including form field hints such as "required" and the save implementations within the data forms. Since we rely on data entry to clean up the data, we also utilize event handlers within the batch grid user interface; these handlers are bypassed on import.
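The dirty-source approach can be sketched in miniature with SQLite: a loosely constrained staging table that accepts whatever the import delivers, a strictly constrained production table, and a validation routine standing in for the Edit Data Form's save implementation. The table and column names here are hypothetical, chosen only for illustration.

```python
import sqlite3

conn = sqlite3.connect(":memory:")

# Staging table: deliberately few constraints, so a dirty import always lands.
conn.execute("""
    CREATE TABLE staging_constituent (
        id INTEGER PRIMARY KEY,
        last_name TEXT,          -- nullable here; required in production
        email TEXT               -- no format check at this layer
    )
""")

# Production table: the real constraints live here.
conn.execute("""
    CREATE TABLE constituent (
        id INTEGER PRIMARY KEY,
        last_name TEXT NOT NULL,
        email TEXT NOT NULL CHECK (email LIKE '%_@_%')
    )
""")

def validate_row(row):
    """Stand-in for the data form's save logic: enforce the integrity
    rules the staging table intentionally omits."""
    errors = []
    if not row.get("last_name"):
        errors.append("last_name is required")
    if "@" not in (row.get("email") or ""):
        errors.append("email looks invalid")
    return errors

# A dirty import succeeds against staging without complaint...
conn.execute("INSERT INTO staging_constituent VALUES (1, NULL, 'not-an-email')")

# ...and the data-entry layer flags the row for cleanup before commit.
raw = conn.execute("SELECT * FROM staging_constituent").fetchone()
row = dict(zip(("id", "last_name", "email"), raw))
print(validate_row(row))
```

The key point is where the rules live: the staging DDL accepts the row, and it is the form-level `validate_row` check, run during data entry rather than during import, that surfaces the problems.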

In the second scenario, we have more trust in the data in the source system from which we will import. We will call this the "Clean Source Data Scenario." In the clean scenario, we place constraints on the batch staging tables that mimic the production tables as closely as possible. As a side benefit, validation may be kept to a minimum, since any row that makes it into the staging table is virtually assured of making it into the production table.
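The clean-source approach can be sketched the same way: the staging table reuses the production table's DDL, so constraint violations surface at import time, and committing the batch becomes a straight copy. Again, the table names and columns are hypothetical.

```python
import sqlite3

conn = sqlite3.connect(":memory:")

# Staging mirrors production's constraints exactly, so a bad row is
# rejected at import time rather than discovered during data entry.
ddl = """
    CREATE TABLE {name} (
        id INTEGER PRIMARY KEY,
        last_name TEXT NOT NULL,
        email TEXT NOT NULL CHECK (email LIKE '%_@_%')
    )
"""
conn.execute(ddl.format(name="staging_constituent"))
conn.execute(ddl.format(name="constituent"))

def import_rows(rows):
    """Attempt each row; constraint failures surface immediately."""
    accepted, rejected = 0, 0
    for r in rows:
        try:
            conn.execute(
                "INSERT INTO staging_constituent VALUES (?, ?, ?)", r)
            accepted += 1
        except sqlite3.IntegrityError:
            rejected += 1
    return accepted, rejected

result = import_rows([
    (1, "Smith", "smith@example.com"),  # satisfies every constraint
    (2, None, "bad"),                   # fails NOT NULL and CHECK
])
print(result)

# Every surviving staging row already satisfies production's rules,
# so the final commit needs no further validation.
conn.execute("INSERT INTO constituent SELECT * FROM staging_constituent")
```

Because both tables share one DDL template, any row that reaches staging is, by construction, already acceptable to production, which is exactly why the scenario lets validation stay minimal.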