I was asked a question by a tester in the office the other day that got me thinking on this topic. Her question was "Will we need to do any integration or system integration testing?" Mind you, with some products, that is a perfectly reasonable request. In this shop, given what we do, we're pretty much doing some aspects of that any time we're running a test. Many times, we're testing within the boundary of our charter and exercising only so far. To continue to the next step requires live connections with external companies. We have connections to emulators, but I don't consider that to be a "real" situation - simply checking for handshakes and responses.
So, I thought a bit about the possibility that I have a different understanding of "System Integration Testing." That led me to that all-knowing repository of knowledge, Wikipedia, where I found this:
System Integration Testing (SIT) is a testing process that exercises a software system's coexistence with others. System integration testing takes multiple integrated systems that have passed system testing as input and tests their required interactions. Following this process, the deliverable systems are passed on to acceptance testing.
Hmmmm. Well, I don't know if I'd buy that in total for our situation, or for most situations where I've worked.
So, I said, HEY! She has Foundation Level Certification from ISTQB.
ISTQB Glossary says:
System Integration Testing: Testing the integration of systems and packages; testing interfaces to external organizations.
Striking me a bit as "a painter is one who paints," I went looking at the individual terms. So:
Test: A set of one or more test cases. [IEEE 829]
Testing: The process consisting of all lifecycle activities, both static and dynamic, concerned with planning, preparation and evaluation of software products and related work products to determine that they satisfy specified requirements, to demonstrate that they are fit for purpose and to detect defects.
Integration Testing: Testing performed to expose defects in the interfaces and in the interactions between integrated components or systems.
System: A collection of components organized to accomplish a specific function or set of functions. [IEEE 610]
System Testing: The process of testing an integrated system to verify that it meets specified requirements. [from W. Hetzel (1988), The complete guide to software testing – 2nd edition, QED Information Sciences, ISBN 0-89435-242-3.]
This made me think of the very first time I encountered "integration testing" when I was testing something I had not programmed.
Some folks remember the days when programmers talked to business users/clients to discuss what they needed, worked on a design for the solution, then went back to the client to walk through the design and verify it matched what they had asked for. We worked up sample reports and screen shots and went over those as well. Talk about time-consuming, no?
So here I was, a reasonably senior programmer, being pulled into a project that was over a year late and had to be delivered - positively had to be delivered - in 2 months. The first task I was given was to "simply validate" a series of batch jobs (remember JCL on IBM mainframes?) and confirm that all the components worked correctly together and finished within the required timeframe, nightly.
No problem, I thought. I gathered what I needed to learn the systems it touched, met with the more senior programmers who wrote the programs and the JCL and what not, and made sure I knew the exact sequences and limitations they knew of. I set up a test run on a weekend. The idea was to take over the entire test system - a clone of the production system - fire the process up, monitor the logs while it ran, then check the summary reports. If they all looked correct, run some SQL scripts to validate the DB was correct.
The only problem was that after 18 hours of running, draining the system of all available resources, the first step had not completed. That constitutes a problem in any book.
I spent literally all of the next day identifying the cause. When the team walked into the main conference room Monday morning, I had the entire data flow for the system on the wall - all the way around the room. Bottlenecks were circled in red, pages from DB schemas were taped to the wall, and a first draft of a solution, scribbled on data flow diagrams, was on the table at my "usual" seat.
After rebuilding the various PROCs that were needed (ummm, execution sequences, for those who don't remember when green-bar ruled the computer world), the second version was tested the next weekend. This one took only 4 hours to run to completion. Better, but well outside the target window of 2 to 3 hours.
Next idea? Pull in a couple of other mainframe jockeys for ideas, grab a senior DBA and say "I need a process that will run in a variable number of initiators, up to at least 5 and ideally up to 7." They said, "Can we do that? Not sure, but it might be fun to try." And we did. It worked. It ran in 45 minutes.
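The mainframe specifics are long gone, but the core idea - taking a serial batch step and spreading its work across a fixed number of parallel job slots - translates to almost any platform. Here is a minimal modern sketch of that pattern in Python; the `process_record` function and the record counts are invented for illustration, and the worker pool's `max_workers` cap plays the role the initiators did, limiting how many units of work run concurrently:

```python
from concurrent.futures import ThreadPoolExecutor


def process_record(record: int) -> int:
    # Stand-in for one unit of batch work (hypothetical -
    # in the original story this was a step in a JCL job stream).
    return record * 2


def run_batch(records, max_workers: int = 5) -> list:
    """Run the batch across a bounded pool of workers.

    max_workers caps concurrency the way a fixed number of
    initiators capped how many jobs could run at once.
    """
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        # map preserves input order, so results line up with records.
        return list(pool.map(process_record, records))


if __name__ == "__main__":
    print(run_batch(range(10)))
```

The key design point is the same one we argued over then: the degree of parallelism is a tunable parameter, not something baked into the job structure, so you can test at 5 workers and push toward 7 without rewriting anything.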
What I learned then was that nothing "worked" until you had proven that all the parts worked in concert with all the other parts or components or systems it needed to work with, in the timeframe it needed to work.
I also learned something else that day. Testing rocks. From that point, I studied all I could, even though I was a "programmer." After changing jobs, I found myself in a position to branch out beyond what was then my "career" and learn various flavours of Unix and languages that did not exist when I went to college. Then, there was a reorganization at the company I worked for.
Part of it was rolling out a previously non-existent group. From scratch. I took the chance and have not looked back. Without running those integration tests, I am not sure I would have chosen this path.