Many winters ago I was a tester, and then a maintenance developer doing regression testing. While maintaining a printer driver that produced binary printer data, we used a crafty technique (at least it seemed so then) to detect when some fix or enhancement we made to the driver caused adverse effects in the output. That technique I'll call difference testing. Here's how it worked (a rough sketch in code follows the list):
- Run a bunch of documents through the printer driver and capture the output
- Make a change to the driver
- Run the same bunch of documents through the printer driver and test each output against the previous output from the same document
- When a difference is detected, visually compare the printed output to determine if it’s a good change or a bad change
- And, (this is important) update the baseline output when the difference is good
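That loop is simple enough to sketch in a few lines of Python. The driver itself is out of scope here, so `run_driver` is a hypothetical stand-in for whatever invokes it, and the directory layout is an assumption; the point is the compare-then-bless shape of the loop:

```python
import shutil
from pathlib import Path

# Hypothetical stand-in for the real driver: run one document through it
# and return the path of the binary output it produced in out_dir.
def run_driver(document: Path, out_dir: Path) -> Path:
    raise NotImplementedError("invoke the real printer driver here")

def difference_test(documents, baseline_dir: Path, out_dir: Path, bless: bool = False):
    """Compare each document's fresh output against its captured baseline.

    Returns the documents whose output changed. Re-run with bless=True once a
    change has been inspected and judged good, to update the baselines.
    """
    changed = []
    for doc in documents:
        output = run_driver(doc, out_dir)
        baseline = baseline_dir / output.name
        if not baseline.exists() or baseline.read_bytes() != output.read_bytes():
            changed.append(doc)
            if bless:
                shutil.copy(output, baseline)  # the difference was good: new baseline
    return changed
```

The `bless` flag is just one way to cover that last, important step: after you've eyeballed a difference and decided it's good, a second run promotes the new output to baseline.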
I've recently had the opportunity to use this technique again, regaining an appreciation for its effectiveness. I'm just wrapping up development of a pricing engine that accepts product orders in an XML format and outputs the same format with prices attached to each line item and sub-item. The testing challenge here is checking the dazzling number of option combinations, and the interplay of the business rules applied to those combinations, via the prices they produce. So, here's what we did (again, a sketch in code follows the list):
- Order a bunch of products with specific combinations of options designed to test edge cases of pricing rules, capturing the XML containing product orders and prices as baseline data
- Make a change to the pricing engine
- Iterate over the previously captured XML product orders and generate pricing again, comparing each file output to the corresponding baseline output
- When a difference is detected, visually or programmatically compare the XML elements
- And, (this is important) update the baseline output when the difference is good
- Also, as with any regression suite, add new test cases to the suite when a previously untested error condition is discovered
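Here is a minimal sketch of the XML version of the loop, assuming the baseline files are the previously priced orders, that the engine will happily re-price them, and that `price_order` stands in for the real engine call. Canonicalizing both documents before comparing keeps attribute order and insignificant whitespace from showing up as spurious differences:

```python
import xml.etree.ElementTree as ET
from pathlib import Path

# Hypothetical stand-in for the pricing engine: take an order document as an
# XML string and return the same format with prices on each line item.
def price_order(order_xml: str) -> str:
    raise NotImplementedError("call the real pricing engine here")

def pricing_regression(baseline_dir: Path):
    """Re-price every captured order and report the files whose output changed."""
    changed = []
    for baseline_file in sorted(baseline_dir.glob("*.xml")):
        baseline_xml = baseline_file.read_text()
        fresh_xml = price_order(baseline_xml)
        # Canonical form ignores attribute order and whitespace-only text nodes.
        if ET.canonicalize(baseline_xml, strip_text=True) != ET.canonicalize(fresh_xml, strip_text=True):
            changed.append(baseline_file)
    return changed
```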
I originally planned to parse the XML elements and spit out specific error statements about the differences. But my client suggested just comparing the files with a visual diff, tossing the data in question up in front of the user with all the context required to determine the real error. Way to leverage the readability of XML!
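If you don't have a graphical diff tool handy, Python's standard `difflib` gives you the same effect in plain text; the file names here are just placeholders:

```python
import difflib

def visual_diff(baseline_xml: str, fresh_xml: str) -> str:
    """Build a unified diff a person can scan to judge good vs. bad changes."""
    return "".join(difflib.unified_diff(
        baseline_xml.splitlines(keepends=True),
        fresh_xml.splitlines(keepends=True),
        fromfile="baseline.xml",
        tofile="fresh.xml",
    ))
```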
Feel free to use this technique anytime – no charge. Oh, and make sure the XML is generated with newlines, or your diff will just show one long line in error.
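If the engine emits everything on one line, re-serializing before the comparison fixes that; one way, using the standard library (the helper name is mine):

```python
import xml.dom.minidom

def with_newlines(xml_text: str) -> str:
    """Re-serialize XML one element per line so a line-based diff stays readable."""
    return xml.dom.minidom.parseString(xml_text).toprettyxml(indent="  ")
```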