Yesterday my company (Intel Corp) held a conference for
Software Professionals in Israel. One of the activities, along with
the technical lectures, was a day-long coding contest, in which
contenders submitted code to solve given problems and data sets. I
was one of the 1st place winners (it was a tie between several
competitors), perhaps the only tester on the list.
During the preparation of the conference, there was talk of
promoting a testing contest, and similar comments were heard during
the conference itself: if there's a coding contest, why not hold
a testing contest? In fact, I volunteered to help if such a
contest was carried out. I don't particularly like testing contests,
because you can't pick a real testing winner by looking at definite
criteria (see my post on evaluation).
Then throughout the day, as I noticed how I worked during the
coding contest, I started to understand more about why a testing
contest wouldn't work.
It's not only because testing is hard :). It's more because testing
is hard to define.
A coding contest has some very definite outputs. You write code,
and it either compiles or it doesn't. You run the code over the given
test data sets, and it either computes a correct answer or it doesn't
(the answer is compared automatically against an oracle
solution; that software suffers from halting-problem
undecidability just like any other). You either get these
right, or you get these wrong. Software, with all its abstraction,
has still got tangible characteristics.
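That automatic judging can be sketched in a few lines. This is a minimal, hypothetical illustration (the function names and the toy "sum the numbers" problem are mine, not from any real contest system): run each submission over the data sets and count matches against the oracle solution's answers.

```python
# A minimal sketch of automated contest judging. All names here are
# hypothetical; real contest judges also enforce time/memory limits.

def judge(submission, oracle, data_sets):
    """Run a submitted solver over the data sets and count how many
    answers match the oracle solution's answers."""
    passed = 0
    for data in data_sets:
        if submission(data) == oracle(data):
            passed += 1
    return passed  # a tangible, countable score

# Toy problem: "sum the numbers".
oracle = sum
good_submission = lambda xs: sum(xs)
buggy_submission = lambda xs: sum(xs) + 1  # off-by-one bug

data_sets = [[1, 2, 3], [10], [0, 0]]
print(judge(good_submission, oracle, data_sets))   # 3
print(judge(buggy_submission, oracle, data_sets))  # 0
```

The score is discrete and indisputable, which is exactly the property a contest needs and testing lacks.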
But testing stands in a more intangible space.
On my way back home I told a programmer (who got 1st place
as well) about the testing contest idea, and he asked:
"A testing contest? How? You can't rate testing."
I was very happy that he said that, because I was half expecting
the same trivial answer we are used to receiving:
"count bugs and pick the winner with the most bugs". This
programmer understands that testing is not about finding bugs
(although bugs are part of testing, as one of the information
types we uncover). Michael Bolton once wrote that the best
programmers he's ever seen have been great critical thinkers.
Back to the intangibility. Far from having clear-cut results (a
source code file that compiles and answers a question with the
required answer), testing is a service. You can't measure a
service by counting the discrete actions taken to provide it.
Testing happens at so many levels and dimensions at once, that you
can't keep track of score in any fair way.
It would be funny if, at a "Psychology Professionals"
conference, there were a "Psychologist Contest" in which doctors
had to provide mental health care to patients by asking as many
questions as they could. The doctor who asks the most questions wins.
Or a "People Managing Contest", where managers have to
quickly hand out as many compliments as they can while a
"Morale Boost Gauge" measures which manager has the
most engaged team?
So why do we still want testing contests?
I think part of the problem is that we still
think software testing is closely related to computers, or software,
or computer science, or programming. But testing has very little to
do with these (surprise!). Testing is about studying
value, and value is related to people.
So instead of thinking that "testing is like computer
programming, but from another point of view", we need to
think that "testing is like studying people, but from a
computer software point of view". That will help us measure
testing in a more relevant way.
But what about the contests that exist out there? Every company
holds a testing contest once in a while. uTest holds public Bug
Battles, for example.
These contests should not be called "Testing Contests",
in my new humble opinion. They are
"Bug Reporting Contests", but there's limited
testing happening in them. My experience with in-company contests
(I've participated in some) and with uTest Bug Battles
(I participated too, and got a Best Feedback
award) confirms that.
"Bug Reporting Contests" are possible. But there's
a lot of intangibility around what counts as a bug and how much
exposure it gets, which makes these contests less fair and gets
people angry (that's from experience too).
So if we do want to make contests that somehow relate to bugs or
testing, we have to pick testing activities that are clear cut:
- Maybe a "Crashing Contest", where testers have to
find as many ways as they can to crash a piece of software, would
work well -- a crash is not open to discussion (the software either
crashes or it doesn't), crashes can be counted, and they are great
fun to search for.
- "Security Penetration Test" contests are definite
as well: testers try to crash, halt, or extract internal secret data
from a system. This can be counted and scored.
- "Misspelling Contests" would work as well, as misspellings
can be counted and there's little discussion about them (we can
use the Oxford dictionary and a style manual as the oracle).
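The misspelling case makes the "countable oracle" idea concrete. Here is a hypothetical sketch (the tiny word list stands in for a real dictionary, which would obviously be far larger): any word not in the dictionary counts as a misspelling, giving a discrete, arguable-free score.

```python
# A minimal sketch of scoring a misspelling hunt against a word list
# used as the oracle. The tiny dictionary below is illustrative only;
# a real contest would use a full dictionary and style manual.

DICTIONARY = {"the", "quick", "brown", "fox", "jumps", "over", "lazy", "dog"}

def count_misspellings(text, dictionary=DICTIONARY):
    """Count words not found in the dictionary -- a clear-cut,
    countable result, unlike 'testing' in general."""
    words = text.lower().split()
    return sum(1 for w in words if w.strip(".,!?") not in dictionary)

print(count_misspellings("the quikc brown fox jmups over the lazy dog"))  # 2
print(count_misspellings("the quick brown fox"))                          # 0
```

Two judges running this over the same text will always agree on the score, which is what makes such an activity contest-friendly.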
But a "Testing Contest"? Testing cannot be weighed
or measured in a one-day, context-detached situation.
An alternative to contests could be collaborative (or at least
interactive) testing sessions, where people can approach
others who can tell them about value and give them feedback about
the information they come up with.
Dojos, like the ones that happened recently at Agile Testing, are one example.
But in these everybody wins, so there's no one big prize. Maybe
that's what people don't like?
What do you think? What value have you derived from testing
contests? How do you think we can build a contest with substance?
Ps1> Coding contests also can't measure very important
traits of programming, like maintainability, testability, elegance,
and efficiency... So they've got limited value as well.