Rethinking How to System Test Your BI Project, Part 2: Building a Functional Canary Test Data Set

By Steve Knutson

System testing should prove out your earlier efforts, not invalidate the entire system. If the latter happens, you’ve failed on requirements, environment, design, or all of the above. Like the proverbial canary that dies when a coal mine fills with gas, a functional canary test data set can prove whether your code base withers or withstands its intended long-term use.

Central to a successful system testing effort is the construction of a tiny data set that exercises all the business rules embedded in the code. I call this a Functional Canary Test Data Set. Let’s break down the terms to understand the goals.

Functional behavior is logic that uses or generates business data to create outputs. For example, logic that generates a count of insurance plan members is a functional behavior: business rules are followed to generate the desired result. Note the difference between functional, performance, and operational behaviors. A requirement that the process run in less than twenty minutes is a performance behavior, and the ability to restart the process when it fails is an operational behavior. We’re interested solely in functional behavior.

The term canary signals the abbreviated nature of the data. We borrow the term from the coal mining industry, which used canaries in the mines to determine whether the air was safe to breathe. In the IT industry today, the typical “canary test” is the minimal set of data that verifies the code and data setup components required for a successful code migration. Our “functional canary test” takes this one step further by ensuring that every business rule is validated during the test runs.

Why not use enhanced production data instead of creating a canary data set? Typical production data will execute the same test condition thousands, perhaps even millions, of times. For test purposes, we need not gas our fine feathered friend repeatedly to prove out a business rule. Once will do. Given that a typical BI project has multiple environments, you may run these tests dozens, if not hundreds, of times. Rapidity is crucial. The fastest way to complete this large number of test runs will be using tiny data sets in rapid-fire succession.

The test must execute all functional logic paths found in the business rules. To prove out the solution, the test data set must fire all the test conditions, including many that probably don’t exist in your current production data. In our experience, production data exercises an average of just 65% of the functional logic specified in a given BI effort! This is a big problem if you are relying solely on production data for testing: it is highly unlikely that a given production data run will demonstrate every data condition. Another concern is that roughly fifty percent of the logic executed in a BI solution supports data handling rules such as data quality and data integrity validations. Your ETL team specifies these rules, not the business users, so much of the logic that goes beyond user-defined business rules may never be fired by production data. No matter how large the sample, your test data must still represent every test condition. A good BI solution specifies how the logic will behave in every scenario.

System testing does not need to understand the logic embedded at every inflection point in the code. Instead, testing must balance efficiency of effort with the need to demonstrate high confidence that the solution satisfies the business rules. The best way for your testing to stay above the fray of the data flow design is to test only at the last landing point. For example, a typical data flow might land data at multiple points, including collection, staging, the atomic data store, and the data mart/extract. Testing at each of these steps would constitute ‘white box’ testing. The so-called ‘black box’ principle advises us that only the test outcome at the end of the data flow is important, not the intermediate steps. This is a deceptively simple concept to grasp but often difficult to execute. To support ‘black box’ testing, ask yourself whether the business rules are written in a ‘black box’ manner. Resist the urge to create system tests for the intermediate stages.

In coal mines at the turn of the century, the presence of live canaries assured the miners that the air was safe to breathe. Survival was the primary objective. Our modern take on this uses functional canary testing to ensure that your project survives to deployment and meets its objectives. Many projects spend too much time and energy system testing each step. Don’t do it. Choose your test points wisely. Focus on the outputs. Only the end result matters.

Future blogs will discuss the use of large data sets and creating initial and incremental data sets for successful system testing.

Examples follow for the more technically inclined.

A business requirement for a data mart states the following: provide the ICD-9 code for a given claim. The logic might be as follows: direct map from the claim system diagnosis_code field. If the code is all blanks or spaces, then provide the value “UNK”. For all other cases, trim leading and trailing blanks and provide the resulting value.
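To make the rule concrete, here is a minimal sketch of that logic in Python. The function name is hypothetical, and treating a null the same as blanks is an assumption; the stated rule only addresses blank values.

    def map_diagnosis_code(raw_code):
        # Sketch of the stated rule: all-blank values become "UNK";
        # everything else is trimmed of leading and trailing blanks.
        # Treating a null like blanks is an assumption; the rule as written
        # does not say, which is exactly why test case 5 below matters.
        if raw_code is None or raw_code.strip() == "":
            return "UNK"
        return raw_code.strip()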

Bare-bones canary testing requires just six test conditions with six test records. Let’s map the required test cases against anticipated production data sample counts (a runnable sketch of the resulting data set follows the list):

1. A valid diagnosis code with no leading or trailing spaces -> millions of cases

2. A valid diagnosis code with leading and trailing spaces -> 2 cases

3. An invalid diagnosis code with no leading or trailing spaces -> no cases

4. An invalid diagnosis code with leading and trailing spaces -> 1,000 cases

5. A null diagnosis code -> 50,000 cases

6. A diagnosis code with blanks -> no cases
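Using the map_diagnosis_code sketch from above, the canary data set for this rule is just six records paired with expected outputs. The specific code values are hypothetical placeholders, and the expectations for the two “invalid” cases simply follow the stated rule, which passes them through trimmed.

    def map_diagnosis_code(raw_code):
        # Repeated from the sketch above so this snippet runs on its own.
        return "UNK" if raw_code is None or raw_code.strip() == "" else raw_code.strip()

    canary_cases = [
        ("250.00",    "250.00"),  # 1. valid code, no padding
        ("  401.9 ",  "401.9"),   # 2. valid code, leading/trailing spaces
        ("ZZZZZ",     "ZZZZZ"),   # 3. invalid code, no padding
        (" ZZZZZ  ",  "ZZZZZ"),   # 4. invalid code, padded
        (None,        "UNK"),     # 5. null code (assumed to map to UNK)
        ("     ",     "UNK"),     # 6. all blanks
    ]

    for raw, expected in canary_cases:
        actual = map_diagnosis_code(raw)
        assert actual == expected, f"{raw!r}: expected {expected!r}, got {actual!r}"
    print("All six canary conditions passed.")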

The tests are agnostic as to the step at which the logic occurs (collection, staging, atomic store, or data mart). We want to ensure successful execution of our black box test through each logic path.

There is a challenge to building a test data set that exercises all the rules. A best practice is to start with production data record sets, use key dimensional subject areas first, and apply some business analysis to hone each source data set down and eliminate commonly repeating data patterns. Here’s a health insurance example:

There are three dimensional sources (membership, product, provider) and one fact source (claim). The health plan organization covers 5 million members and typically processes 10 million claims at a time. Business rules generate different results in the data marts based on:

– Type of company the member works for

– Type of product the member uses

– Type of service performed

– Size and type of member’s family

– Type of provider and type of provider contract

The initial effort defines a data set that includes all companies, products, family types and sizes, service types, provider types, and types of provider contracts. A strong business/data analyst who understands the nature of the business rules is indispensable to culling out the required data. (SME expertise is a key component of project success that often eludes the project manager’s radar.) He or she can query the source data in such a way as to pare it down to 5,000 to 10,000 members and 10,000 to 20,000 claims.
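As a rough illustration of that paring-down step, here is a minimal sketch in Python with pandas. The file and column names are assumptions; the idea is simply to keep one member per distinct combination of the attributes the business rules branch on, then keep only the claims for those members.

    import pandas as pd

    # Hypothetical extracts and column names; substitute your own sources.
    members = pd.read_csv("member_extract.csv")
    claims = pd.read_csv("claim_extract.csv")

    # Keep one member per distinct combination of the attributes that drive
    # the business rules, collapsing millions of rows that all exercise the
    # same logic path into a handful of representatives.
    rule_drivers = ["company_type", "product_type", "family_type", "family_size"]
    canary_members = members.drop_duplicates(subset=rule_drivers)

    # Keep only the claims belonging to the retained members.
    canary_claims = claims[claims["member_id"].isin(canary_members["member_id"])]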

The test team then creates an initial record set from production data. The team identifies which records are suitable for direct use in a given functional test, then creates or modifies records to cover data conditions missing from the initial production data. The end result might be a set of approximately 5,000 functional tests contained in small data sets that can run through a BI process in minutes.
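One way to find the conditions that still need hand-crafted records is a simple coverage check: express each required condition as a named predicate and report any condition that no sampled record satisfies. The sketch below assumes claim records are plain dicts with a diagnosis_code field; the predicate names and sample records are illustrative only.

    def code_of(claim):
        # Normalize a possibly null diagnosis_code to a string for the checks below.
        return claim.get("diagnosis_code") or ""

    required_conditions = {
        "valid_no_padding":   lambda c: code_of(c) == code_of(c).strip() != "",
        "valid_with_padding": lambda c: code_of(c) != code_of(c).strip() != "",
        "null_or_blank":      lambda c: code_of(c).strip() == "",
    }

    sampled_claims = [  # stand-in for the pared-down production extract
        {"claim_id": 1, "diagnosis_code": "250.00"},
        {"claim_id": 2, "diagnosis_code": None},
    ]

    covered = {name for name, test in required_conditions.items()
               if any(test(c) for c in sampled_claims)}
    missing = sorted(set(required_conditions) - covered)
    print("Conditions needing hand-crafted records:", missing)  # ['valid_with_padding']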

— Jim Van de Water contributed to this blog.