April 22, 2015 // By Andy Tinkham
Designing tests is fundamentally an exercise in analyzing one or more models of a system. Different models yield different views of the system, much as different lenses show us different things – we wouldn’t use red/blue 3D glasses to look at Jupiter, nor would a telescope be much use when watching a 3D movie. One model focuses on the individual components of a system and how they hook together and interact. Testing driven by this model is frequently called integration testing, because traditionally each component was developed separately and testers applied the model when the components were first put together (integrated).
Looking at the integration points focuses our tests on certain risks. The bulk of these risks involve the assumptions the developers of each component made about the interface boundaries as they built out the code – assumptions about what “messages” their component would send, when those messages would be sent, what messages their component would receive (and when), and what both sets of messages would contain. Integration problems frequently occur when the components involved in an interface differ in their assumptions – sending or receiving messages the other side doesn’t expect.
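As a minimal illustration of this kind of mismatch (the component names, field names, and message shapes here are hypothetical), consider a sending component whose developer assumed extra fields would be ignored and a receiving component whose developer assumed a particular field name would always be present:

```python
import json

# Hypothetical sending component: serializes an order event for a downstream
# receiver. Its developer assumes unknown fields will simply be ignored.
def build_order_message(order_id, quantity):
    return json.dumps({
        "orderId": order_id,
        "qty": quantity,           # the sender's name for the quantity field
        "status": "BACKORDERED",   # a field the receiver never planned for
    })

# Hypothetical receiving component: its developer assumed the field would be
# named "quantity" and would always be present.
def handle_order_message(raw_message):
    message = json.loads(raw_message)
    return message["quantity"] * 2

if __name__ == "__main__":
    raw = build_order_message("A-100", 3)
    try:
        handle_order_message(raw)
    except KeyError as missing:
        # Each component can pass its own unit tests; only the integration fails.
        print(f"Integration mismatch: receiver expected field {missing}")
```

Each component looks correct in isolation; the failure only exists at the seam between them, which is exactly where this model points our testing.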
In addition, software always has three sets of functionality, as shown in Figure 1: the functionality that was planned and actually built, the functionality that was planned but not built, and the functionality that wasn’t planned but was built anyway. (There’s also functionality that wasn’t planned and wasn’t built, but that’s an infinite set, so we will continue to blithely ignore its existence in this post.)
When we combine these functionality sets with two sets of developers making assumptions about how the interface between their components should work, we end up with a high likelihood of problems when the components are put together. Testing for these problems takes several techniques, some involving the components actually working together and some simulating the interface in ways that give the tester more control over the interactions.
The first techniques to use start before the components are even developed. Testers should begin their integration testing by being involved in the design process as the interface is specified. Ideally, the same tester is involved in the discussions for all components sharing the interface, and the developers are working together to jointly specify how the integration will work. In these discussions, the testers should be watching for the places where surprises could slip in – places where data is sent when it isn’t expected, not sent when it is expected, or sent with different content than expected.
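One lightweight way to capture what the teams agree on in those discussions is an executable description of the interface that anyone can run candidate messages through. This sketch is purely illustrative – the contract contents and field names are assumptions, not a specific tool:

```python
# A hypothetical message contract, written down during the design discussions
# so both component teams (and the tester) share the same assumptions.
ORDER_EVENT_CONTRACT = {
    "required_fields": {"orderId": str, "quantity": int},
    "optional_fields": {"status": str},
}

def violations(message, contract=ORDER_EVENT_CONTRACT):
    """Return a list of ways a message breaks the agreed interface."""
    problems = []
    for field, field_type in contract["required_fields"].items():
        if field not in message:
            problems.append(f"missing required field: {field}")
        elif not isinstance(message[field], field_type):
            problems.append(f"wrong type for {field}: {type(message[field]).__name__}")
    known = set(contract["required_fields"]) | set(contract["optional_fields"])
    for field in message:
        if field not in known:
            problems.append(f"unexpected field: {field}")
    return problems

# A tester can run example messages from either team through this check long
# before the components themselves are ready to be integrated.
print(violations({"orderId": "A-100", "qty": 3}))
# ['missing required field: quantity', 'unexpected field: qty']
```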
As the components begin to be implemented, testers can start to leverage stub tools that simulate the integration points. These stubs typically let the tester send messages into a component, or verify the messages coming out of a component against what’s expected. Sender stubs let the tester check that each possible incoming message is handled correctly, including messages that signify error conditions that would be hard to create with the real component. For example, an error condition in the sending component may generate a message only very rarely; triggering it for real might require recreating that error condition, possibly circumventing other portions of the system. A stub that can create and send the message on demand lets the tester verify that the receiving component handles it correctly with far less effort. Receiver stubs can be used in conjunction with manual or automated ways of driving the component, and the messages generated during that testing can be compared against an expected set.
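Here is a minimal sketch of both ideas using Python’s built-in unittest module; the component behaviors and message shapes are hypothetical stand-ins. One test injects a rare error message directly, in place of the real sending component, and the other uses a recording stub in place of the receiver so outgoing messages can be compared against an expected set:

```python
import unittest

# Hypothetical receiving component under test: reacts to upstream messages.
def handle_payment_message(message, alerts):
    if message.get("type") == "PAYMENT_FAILED":
        alerts.append(f"alert: payment failed for {message['orderId']}")
        return "held"
    return "released"

# Hypothetical sending component under test: emits messages through whatever
# transport it is given.
def ship_order(order_id, transport):
    transport.send({"type": "ORDER_SHIPPED", "orderId": order_id})

class RecordingTransport:
    """Receiver stub: records outgoing messages instead of delivering them."""
    def __init__(self):
        self.sent = []

    def send(self, message):
        self.sent.append(message)

class IntegrationPointTests(unittest.TestCase):
    def test_rare_error_message_is_handled(self):
        # Sender stub: build the rare error message directly, with no need to
        # force the real upstream component into its failure state.
        message = {"type": "PAYMENT_FAILED", "orderId": "A-100"}
        alerts = []
        self.assertEqual(handle_payment_message(message, alerts), "held")
        self.assertEqual(alerts, ["alert: payment failed for A-100"])

    def test_outgoing_messages_match_expectations(self):
        transport = RecordingTransport()
        ship_order("A-100", transport)
        self.assertEqual(transport.sent,
                         [{"type": "ORDER_SHIPPED", "orderId": "A-100"}])

if __name__ == "__main__":
    unittest.main()
```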
Finally, while stubs are very helpful for checking that a component meets its planned functionality, the components still need to be tested together. This is particularly important when unplanned functionality exists in one or more components; that unplanned functionality may send unexpected messages and trigger problems that stubs, which only exercise the planned interface, would never show.
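Continuing with hypothetical component names, the sketch below wires a real sender straight to a real receiver with no stubs in between. Because nothing is scripted, an unplanned message the sender actually emits reaches the receiver and surfaces a gap that stub-based tests would have missed:

```python
# Hypothetical end-to-end check with no stubs: the real sender is wired
# straight to the real receiver, so unplanned messages are not filtered out.
def ship_order(order_id, send):
    send({"type": "ORDER_SHIPPED", "orderId": order_id})
    # Unplanned functionality: the developer also added a promotional message
    # that never appeared in the interface design discussions.
    send({"type": "PROMO_OFFER", "orderId": order_id})

def handle_message(message, log):
    handlers = {"ORDER_SHIPPED": "notify customer",
                "PAYMENT_FAILED": "hold order"}
    if message["type"] not in handlers:
        log.append(f"unhandled message type: {message['type']}")
        return
    log.append(handlers[message["type"]])

if __name__ == "__main__":
    log = []
    ship_order("A-100", lambda message: handle_message(message, log))
    print(log)  # ['notify customer', 'unhandled message type: PROMO_OFFER']
```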
Modern applications often involve many of these components working together. By beginning our integration testing early to catch design errors in the interfaces between components, by making liberal use of stubs to tame some of the complexity around those interfaces, and by exercising the components together, we can give our teams a better understanding of how our application will behave and make better-informed decisions about the quality of our apps.
Do you have integration problems with your system? Want to know more about how we’d use stubs or design reviews to help improve your testing and the information you have about your application? We’d love to chat! Give us a call at 877-493-9369 or email us!