June 8, 2015 // By Michelle Miller
This is Part 2 of a 3-part series. To read Part 1, "What is Tested," click here. Part 3 is coming soon.
Design and code reviews are essential in the prevention of defects, while functional testing is used to uncover defects that have already been introduced. For testing the functionality of a database, the principles are the same as with any other form of testing: first, the test needs to be set up. This means initializing the environment and its data to a known state. It may also mean seeding the environment with data that has pre-defined values. The data sources can be flat files, XML files, Excel spreadsheets, or other databases. It is important that the data (or the sample of data) that will be affected by the tests is known, so that the results of testing are predictable.
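As a rough sketch of what that setup can look like, here is one way to reset a table and seed it from a flat file in Python, using sqlite3 as a stand-in for the database under test. The table, columns, and file name ("customers_seed.csv") are hypothetical placeholders.

```python
import csv
import sqlite3

conn = sqlite3.connect("test_db.sqlite")
cur = conn.cursor()

# Initialize the environment to a known state.
cur.execute("DROP TABLE IF EXISTS customers")
cur.execute("CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT, region TEXT)")

# Seed with data that has pre-defined values, so the test results are predictable.
with open("customers_seed.csv", newline="") as f:
    rows = [(r["id"], r["name"], r["region"]) for r in csv.DictReader(f)]
cur.executemany("INSERT INTO customers (id, name, region) VALUES (?, ?, ?)", rows)

conn.commit()
conn.close()
```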
Data creation or import processes may need to be built for data seeding, but it is also possible to leverage data interfaces that already exist. Test data can also be self-contained, where each test creates the data it will act upon. It is important to agree upon the method of data creation with the entire project team and to ensure that appropriate time and resources are allocated to that task, as it can really throw off project schedules if not planned in advance.
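A minimal sketch of the self-contained approach, again using an in-memory sqlite3 database and a hypothetical orders table: each test creates exactly the rows it needs, so it does not depend on a shared seed.

```python
import sqlite3
import unittest

class OrderTotalsTest(unittest.TestCase):
    def setUp(self):
        self.conn = sqlite3.connect(":memory:")
        self.conn.execute("CREATE TABLE orders (id INTEGER, amount REAL)")

    def tearDown(self):
        self.conn.close()

    def test_total_for_known_rows(self):
        # The test seeds its own data, then verifies a predictable result.
        self.conn.executemany("INSERT INTO orders VALUES (?, ?)",
                              [(1, 10.0), (2, 15.5)])
        total = self.conn.execute("SELECT SUM(amount) FROM orders").fetchone()[0]
        self.assertEqual(total, 25.5)

if __name__ == "__main__":
    unittest.main()
```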
After the environment is initialized, tests are run. In standard functional testing there are primarily two methods: black-box testing and white-box testing. These methods apply to database testing as well. Database testing also benefits from a third method called Inspection or Sampling. Each method has testing processes that work best for it.
For white-box testing, stubs are created to invoke stored procedures and triggers. These stubs can be created during unit testing and re-used or expanded upon for regression testing. They are highly effective at identifying issues early in a project.
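Here is a sketch of what such a stub might look like. sqlite3 stands in for the real database (it has triggers but no stored procedures), and the accounts table and audit trigger are hypothetical; the point is that the stub, once written for unit testing, can be re-run for regression.

```python
import sqlite3
import unittest

def make_connection():
    """Stub environment: an in-memory database with a hypothetical audit trigger.
    In a real project this would point at the database under test."""
    conn = sqlite3.connect(":memory:")
    conn.executescript("""
        CREATE TABLE accounts (id INTEGER PRIMARY KEY, balance REAL);
        CREATE TABLE audit_log (account_id INTEGER, old_balance REAL, new_balance REAL);
        CREATE TRIGGER trg_balance_audit AFTER UPDATE OF balance ON accounts
        BEGIN
            INSERT INTO audit_log VALUES (OLD.id, OLD.balance, NEW.balance);
        END;
    """)
    return conn

class BalanceAuditTriggerTest(unittest.TestCase):
    """A stub written at unit-test time that can be re-used in regression runs."""
    def test_update_writes_audit_row(self):
        conn = make_connection()
        conn.execute("INSERT INTO accounts VALUES (1, 100.0)")
        conn.execute("UPDATE accounts SET balance = 150.0 WHERE id = 1")  # fires the trigger
        row = conn.execute("SELECT * FROM audit_log").fetchone()
        self.assertEqual(row, (1, 100.0, 150.0))

if __name__ == "__main__":
    unittest.main()
```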
Direct SQL invocation of the triggers and procs is effective for unit testing, but it does not lend itself to re-use, and it usually requires more testing once more of the application is integrated.
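A direct invocation is typically a throwaway script rather than a harness. As an illustration only, this assumes an ODBC data source named TestDB and a hypothetical stored procedure usp_apply_discount; both are placeholders, not part of any real system.

```python
import pyodbc  # assumes an ODBC driver and DSN for the database under test

conn = pyodbc.connect("DSN=TestDB")
cur = conn.cursor()

# Call the proc directly with known parameter values (order id, discount rate).
cur.execute("{CALL usp_apply_discount (?, ?)}", (1001, 0.10))
conn.commit()

# Confirm the effect with an ad-hoc query; nothing here is built for re-use.
print(cur.execute("SELECT total FROM orders WHERE id = ?", (1001,)).fetchone())
conn.close()
```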
Black-box testing requires that the interfaces be integrated with the database, typically through UI integration. The triggers and procs are invoked by interacting with those interfaces directly. Results are then reviewed either by confirming expected results in the UI (which isn’t always reliable) or on the backend by querying the database (which is more reliable). The results are more reliable than white-box testing because the application is tested as a whole, but it often takes more time, and issues may not be caught as early as desired. This is especially true if the UI interfaces are also under development.
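The pattern often looks something like the sketch below: drive the UI, then verify on the backend with a query. The submit_order_via_ui helper is a hypothetical placeholder for whatever UI automation the project uses, and the application database and orders table are assumptions for illustration.

```python
import sqlite3
import unittest

def submit_order_via_ui(customer_id, amount):
    """Hypothetical placeholder for driving the real UI (e.g. browser automation).
    In black-box testing this interaction is what invokes the triggers and procs."""
    raise NotImplementedError("wire this to the application's UI automation")

class OrderSubmissionBlackBoxTest(unittest.TestCase):
    def test_ui_submission_is_persisted(self):
        submit_order_via_ui(customer_id=42, amount=99.95)
        # Verify on the backend: querying the database directly is more reliable
        # than trusting what the UI displays.
        conn = sqlite3.connect("app_db.sqlite")  # hypothetical application database
        row = conn.execute(
            "SELECT amount FROM orders WHERE customer_id = ?", (42,)
        ).fetchone()
        conn.close()
        self.assertIsNotNone(row)
        self.assertAlmostEqual(row[0], 99.95)
```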
Inspection and Sampling is the analysis of the data after it has been committed: queries are performed on a representative subset of the data, and the results are reviewed for anomalies. Using sorting methods and comparative operations within the queries is important for identifying potential bugs. Individual field values and record counts can be analyzed and compared to look for defects. Results can be output in formats that can be compared using existing comparison tools, or programs can be written to compare against a baseline result. This is often very helpful in testing migrations or data conversions. Anomalies have to be investigated individually to identify the cause, and they may not tie easily to specific test cases, but this method is very powerful at identifying severe defects that might not be caught otherwise.
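A small sketch of what a sampling pass might look like after a migration: a handful of aggregate and comparative queries whose results are written out for comparison against a baseline. The database, tables, and thresholds are hypothetical examples.

```python
import csv
import sqlite3

conn = sqlite3.connect("migrated_db.sqlite")

checks = {
    # Record counts per category, sorted so the output is stable for diffing.
    "row_counts": "SELECT region, COUNT(*) FROM customers GROUP BY region ORDER BY region",
    # Comparative operations that surface individual anomalies.
    "null_names": "SELECT COUNT(*) FROM customers WHERE name IS NULL",
    "out_of_range_balances": "SELECT COUNT(*) FROM accounts WHERE balance < 0 OR balance > 1000000",
    "duplicate_ids": "SELECT id, COUNT(*) FROM customers GROUP BY id HAVING COUNT(*) > 1",
}

with open("sampling_results.csv", "w", newline="") as f:
    writer = csv.writer(f)
    for name, sql in checks.items():
        for row in conn.execute(sql):
            writer.writerow([name, *row])

conn.close()
# sampling_results.csv can then be diffed against a baseline produced from the
# source system, or fed into an existing comparison tool.
```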
Regardless of method, the results are validated, reported, and triaged, and then the cycle repeats. Reporting metrics on test results should align with the project objectives and highlight areas of high risk or concern. Quantifying defects by where they occur, such as source data, transform logic, data mapping, or test scripts, helps to communicate project health and isolate areas that require more attention.
There are nuances within database testing that should be explored as well. You can read about these in my next blog post.
In case you missed Part 1, click here to read it. If you'd like to speak to Magenic directly, you can email us or call us at 877-277-1044.