Testing Times
Testing time rolls around for every release, but before I get started I should make it clear which kinds of tests I'm talking about. I'm not discussing the unit and regression tests that run as part of every developer check-in. Those tests are important, because you know how developers can be - mindful of their specific omelette but careless with everyone else's eggs. But if a developer breaks those tests, they've broken the build process, and then the siren kicks off, along with the pointing and the cries of "shame!" It's an almighty hullabaloo that goes on until they've fixed things. Since a completely broken build is never released to testing, I only receive things when they are in better shape.
The tests we're talking about here are the ones that decide the fate of any production release, and they comprise two separate exercises. The first involves longer-running automated test suites that are impractical to run with every check-in - specifically, our automated Selenium web-tests, which simulate an application user at a web browser. These can be used for various things: for example, we have suites of tests that simulate sections from our Longevitas training materials and papers, and others that initiate long-running operations such as Projections Toolkit VaR runs. Tests that precisely simulate a user's activity at a web browser can be fragile, however. New releases tend to adjust interface and output elements in various ways, and web-tests are very sensitive to such changes, so they often need more technical management, including developer adjustment after each release. That's why the second exercise is the one I'm most involved in: coordinating the manual (i.e. real human) testing of our applications.
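For readers who haven't seen one, a Selenium web-test is just a script that drives a real browser and passes or fails on what it finds there. Here is a minimal sketch using Selenium's Python bindings; the URL, element locators and expected text are all hypothetical stand-ins, not taken from our actual test suites:

```python
# Minimal sketch of a Selenium web-test (Python bindings).
# The URL, element locators and expected text are hypothetical
# stand-ins, not our actual test code.
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

driver = webdriver.Firefox()
try:
    # Log in as a simulated application user
    driver.get("https://example.com/login")  # hypothetical URL
    driver.find_element(By.NAME, "username").send_keys("test.user")
    driver.find_element(By.NAME, "password").send_keys("secret")
    driver.find_element(By.ID, "login-button").click()

    # Kick off a long-running operation and wait for it to finish
    driver.find_element(By.LINK_TEXT, "Run projection").click()
    status = WebDriverWait(driver, 600).until(
        EC.presence_of_element_located((By.CSS_SELECTOR, "div.status"))
    )

    # The test passes or fails on this one specific check
    assert "Run complete" in status.text
finally:
    driver.quit()
```

The sketch also illustrates the fragility mentioned above: the test depends on specific element names and on-screen text, so even a cosmetic interface change in a new release can break it.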
It would be easy to think humans are becoming superfluous in the world of testing, but that is far from the case. You know the phrase "there are no dumb questions"? Well, with automated testing there are no questions at all. Automated tests are written to look at very specific things, and they succeed or fail on that basis. If some application output those tests aren't focused on seems unexpectedly different or sub-optimal, only a human tester can ask the non-dumb question. And if we don't like the answer, we'll log a bug that will usually need to be addressed before the release sees the light of day.
Human testers are also the first non-developers to see new features, and that is where this series of blogs comes in. From this release on, you can expect me to pop up occasionally to tell you what we're seeing in the testing lab. Testing is an inexact science, in that we never know how long the process will take (not because of us testers, you understand, but because the amount of developer change shifts with every release). These blogs will let you know about some of the features we're playing with a little before we know exactly when those features will be released into the wild.
I'll be back shortly to talk about our next release for Longevitas and the Projections Toolkit - version 2.8.7.