Rethinking How to System Test Your BI Project, Part 6: Design Testing to be Evaluated Automatically

By Steve Knutson

Best practice tells us that complex solutions require comprehensive system testing. The idea that one or two system test cycles will validate the solution is incorrect, worth no more than a view into an opaque crystal ball.

In other words, here comes the painful part. Well, then again, maybe not.

Here is what you might expect to see for a well-vetted solution:

  • 20 to 100 system test executions during development
  • 3 to 5 system test executions during testing
  • 2 to 3 system test executions during deployment to production

Let’s rephrase that: a BI project should plan for 25 to 100+ system test cycles.

I know what you’re thinking – that’s a LOT of tests.

This is not only possible, it will also help ensure your longevity at the company. You need to design system testing to be executed and evaluated quickly – in fact, very quickly. That’s why functional canary testing is so vital – it’s quick and thorough and it needs to occur in every system test cycle. That’s also why we’re going to build automation into our system testing routine.

Test counts might look something like this:

  • 100 Functional canary
  • 20   Large data sets
  • 60   Incremental
      • 50 Incremental functional canary
      • 10 Incremental large data

You may also have noticed that system testing is heavily skewed toward development. Is this your concept of system testing? Read on.

Most system testing cycles should occur while the code is in the development environment. System testing executed in the development environment mitigates against the conditions we discussed in the last blog, “The Case for Thorough System Testing”. When system testing occurs only after development, your code will be subject to a vicious cycle of unexpected new rules, recoding, and retesting.

The concept of system testing is often tightly coupled with Quality Assurance. QA generally starts their efforts when development is completed. Old school! Your project must find a way to have QA teams develop and/or approve system test data sets and methods to execute during development. This is a critical concept that project leadership typically struggles with or simply does not understand.

Target functional canary data set runs to complete in thirty minutes or less. A test cycle includes setup, execution, and validation of test results. The big challenge is completing the result validations in that timeframe. Traditional system tests are time consuming because they require humans to compare results against expectations; manual test validations can take upwards of two days to complete. Sorry, that simply won’t meet our timeline when we have dozens of tests to execute. Efficient, fast system testing and validation require automation.

One test concept automates system test setup and cleanup using a “test harness.” A test harness includes logic that differentiates between all of our test conditions: functional canary and large data set runs, as well as initial and incremental data set runs. A good test harness lets the tester select the type of test run and the locations of the source and target data, as sketched below.
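Here is a minimal sketch of what such a harness driver might look like. The names and parameters (run_type, data_scope, the path arguments, and the stubbed setup/execute/validate steps) are illustrative assumptions, not any specific tool’s API.

```python
# Minimal sketch of a test harness driver. Names, parameters, and the stubbed
# steps are assumptions made for illustration only.
from dataclasses import dataclass

@dataclass
class TestRunConfig:
    run_type: str     # "functional_canary" or "large_data"
    data_scope: str   # "initial" or "incremental"
    source_path: str  # where the harness stages input data
    target_path: str  # where processed results land for validation

def run_system_test(config: TestRunConfig) -> None:
    """Set up, execute, and clean up one system test cycle."""
    if config.run_type not in ("functional_canary", "large_data"):
        raise ValueError(f"unknown run type: {config.run_type}")
    if config.data_scope not in ("initial", "incremental"):
        raise ValueError(f"unknown data scope: {config.data_scope}")

    # Each step below is a stub standing in for real staging/ETL/validation logic.
    print(f"Staging {config.data_scope} {config.run_type} data from {config.source_path}")
    print(f"Executing data processing run into {config.target_path}")
    print("Validating results against expected values")
    print("Cleaning up test environment for the next cycle")

if __name__ == "__main__":
    run_system_test(TestRunConfig(
        run_type="functional_canary",
        data_scope="incremental",
        source_path="/test_data/canary_incremental",
        target_path="/warehouse/test_target",
    ))
```

The point of the harness is that every one of those dozens of test cycles starts from the same parameterized entry point, so setup and cleanup never depend on someone remembering the manual steps.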

Test validation is driven by a preconfigured set of expected target results. These could be the actual data processing results (e.g., a data mart schema) or a test view generated on top of them (e.g., summary calculations derived from the data mart). A validation data set defines the test, the table/view, the row/column, the expected value, when the test executes, the test run, the actual data values, and the test results. Manual reads are replaced with an algorithm that reads and writes that data set, finds the comparison value, records the test cycle and results, and calls out variances. It’s an elegant system that captures all the required metadata and results so our developers can analyze just the unexpected values.
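As a hedged illustration of the comparison algorithm, the sketch below assumes a simple expected-results layout (table, row key, column, expected value) and an actual_lookup callable standing in for the data mart or test view; none of these names come from a particular product.

```python
# Sketch of an automated result-validation pass. The expected-results layout
# and the lookup callable are assumptions for illustration; a real project
# would read expected and actual values from the data mart or a test view.
from datetime import datetime, timezone

def compare_expected_to_actual(expected_rows, actual_lookup, test_run_id):
    """Compare each expected value against the actual value and record results.

    expected_rows: iterable of dicts with keys
        test_id, table_name, row_key, column_name, expected_value
    actual_lookup: callable(table_name, row_key, column_name) -> actual value
    Returns the full result log plus the list of variances for developers.
    """
    results, variances = [], []
    run_time = datetime.now(timezone.utc).isoformat()

    for row in expected_rows:
        actual = actual_lookup(row["table_name"], row["row_key"], row["column_name"])
        passed = actual == row["expected_value"]
        record = {**row, "test_run_id": test_run_id, "executed_at": run_time,
                  "actual_value": actual, "passed": passed}
        results.append(record)
        if not passed:
            variances.append(record)  # only variances need human attention

    return results, variances

# Example usage with an in-memory stand-in for the data mart:
actual_data = {("sales_fact", "2024-01", "total_revenue"): 125000}
expected = [{"test_id": 1, "table_name": "sales_fact", "row_key": "2024-01",
             "column_name": "total_revenue", "expected_value": 125000}]
results, variances = compare_expected_to_actual(
    expected, lambda t, r, c: actual_data.get((t, r, c)), test_run_id="run_001")
print(f"{len(results)} checks, {len(variances)} variances")
```

Because every cycle writes its run metadata and variances back to the same data set, the thirty-minute target becomes realistic: developers open the variance list instead of eyeballing thousands of rows.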

Your team has a lot of work ahead planning and executing a series of system test cycles in development, QA and production. How do you plan to get all of this done in a reasonable timeframe?

My next blog will discuss additional guidelines for successful system test execution.

– Jim Van de Water contributed to this blog.