Competing for Data Quality

By Tina McCoppin

Show them the money! How better data quality lets everyone win.

How do you like this for an IT mantra:

  • Intra-company combativeness as a good thing?
  • Development vying with QA as a means for cost avoidance?
  • Literally, paying for your mistakes?

In the right context and used appropriately, yes – you should not only condone but actively encourage “mano a mano” competition. Here’s how I have seen it work:

I was part of a relatively young, five-year-old software company at the time – full of enthusiastic, innovative and imaginative folks. The development team was nearing completion of a major release of the product, with significant new functionality and dramatic overhauling of existing logic. Now, some software companies have been known to take a “If it compiles, ship it” mentality. That is, the testing phase is cursory, with the attitude that customers will tolerate and “participate in uncovering” defects and deficiencies. But our CEO made it clear from the outset: We had no intention of making our clients act as our Quality Assurance (QA) function.

But how could we increase the level of product quality?

With a twinkle in his eye, the Director of IT posted a challenge to the ENTIRE company. He said he had established a considerable pool of cold, hard cash – special bonus money for the development team. I’m talking five figures. He told the development team it was theirs to lose. He gave them three weeks to hammer out as many bugs, defects, and issues on the product and supporting documentation as they could. At the end of the three weeks he would have the product and documentation opened up to the entire company for one week. If a non-development individual found a defect, they were paid accordingly:

  • $1,000 for Severity 1 (Critical product failure)
  • $250 for Severity 2 (High severity)
  • $25 for Severity 3+

So, for each issue found post-development, money came out of the development team’s pockets.
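The payout schedule above amounts to a simple severity-to-dollars mapping. As a minimal sketch (the dollar amounts are from the article; the function names and data structures are my own illustrative assumptions), the deduction from the development team's pool could be tallied like this:

```python
# Bounty amounts per the article's schedule; Severity 3 and below pay $25.
PAYOUT_BY_SEVERITY = {1: 1000, 2: 250}
DEFAULT_PAYOUT = 25

def bounty(severity: int) -> int:
    """Dollar payout for a single defect of the given severity."""
    return PAYOUT_BY_SEVERITY.get(severity, DEFAULT_PAYOUT)

def total_deduction(found_defect_severities: list[int]) -> int:
    """Total amount deducted from the development team's bonus pool."""
    return sum(bounty(s) for s in found_defect_severities)

# Example: one critical, two high-severity, and five lower-severity defects
print(total_deduction([1, 2, 2, 3, 3, 3, 4, 5]))  # 1000 + 2*250 + 5*25 = 1625
```

The flat schedule keeps the incentive transparent: every participant can see exactly what a find is worth before they go hunting.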

I can assure you, those final three weeks of development the lights and laptops were on 24×7. It wasn’t just the money (well, yes, money served as a big motivator). It was also the spirit of friendly competition. Heck, not just your peers, but also the HR Department and the folks in the cafeteria were going after your pot of gold.

And when they got the chance, teams outside of development logged into the testing site and hammered away. Sales and Consulting – the folks on the front lines – went for the big dollars and tried to find the critical breaks. The Helpdesk team ran typical scenarios they encountered when on the phone with customers. The left-brain folks in Accounting and Legal gave long, hard looks at the documentation, on the lookout for discrepancies between what was written and how the product worked.

Everyone was attuned to product quality. And with this technique, everyone could participate in improving the company’s key asset.

The end result? The development team was more thorough in its testing. The “open enrollment” testing period still yielded defects at every severity level. And the special bonus was doled out to the deserving participants.

But the big winners were the company’s clients – and thus, in the long run, the company, because the delivered product was more fundamentally sound and bullet-proofed.

How do you calculate the amount of the special bonus pool?

In our case, the Director of IT had estimated the cost of fixing defects post-General Availability (GA) of the product, based on previous releases. Any bonus pool smaller than that cost is money well spent. He instituted the competition only once, but the lesson was one none of us ever forgot.

What is the cost of fixing defects after release? Google or Bing that question and you’ll find, among other articles:

  • Johanna Rothman’s cost scenarios date from 2000 – and the cost of labor has gone up since then
  • In another article on StickyMinds.com, Rothman proposes the following formula:
    Average cost to fix a defect = (Number of people × Number of days × Cost per person-day) / (Number of fixed defects)
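Rothman's formula is straightforward to put to work. A minimal sketch (the function name and the example numbers are hypothetical, not from her article):

```python
def average_cost_to_fix(num_people: int, num_days: float,
                        cost_per_person_day: float,
                        num_fixed_defects: int) -> float:
    """Average cost per fixed defect, per Rothman's formula:
    (people * days * cost per person-day) / fixed defects."""
    return (num_people * num_days * cost_per_person_day) / num_fixed_defects

# Example with made-up numbers: a 4-person team spends 10 days at
# $500 per person-day and fixes 20 defects.
print(average_cost_to_fix(4, 10, 500, 20))  # 1000.0 dollars per defect
```

Plugging in your own team's numbers gives a defensible baseline for sizing a bonus pool like the one described above.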
  • Jon Strickler presents a chart showing cost per defect by development phase:
    • Requirements = $139
    • Design = $455
    • Coding = $977
    • Testing = $7,136
    • Maintenance = $14,103
    • OUCH!
  • And finally, Capers Jones argues from the vantage point of value of quality based on Function Point analysis.

There’s no argument that identifying and fixing defects earlier rather than later in the development life cycle is less costly – not only in direct cost, but potentially also in reputation (think of the cost of automotive recalls to Toyota’s pocketbook and brand).

Friendly competition can serve as one avenue for improving release quality – and save you money in the long run.
