Re: Using Unit Tests for bug tracking

From: Andrus (andru..bjectstyle.org)
Date: Sat Dec 21 2002 - 16:35:27 EST


    Well I was thinking more about adding a 3rd failure state:

    1. Error - something that failed unexpectedly (an exception that did not come from a test assertion)
    2. Failure - something that failed in the assertion code
    3. "Soft Failure" - a special assertion *for bug tracking only*.

    (2) and (3) are similar, but (2) will generate a JVM exit status of "1",
    so that the nightly build Perl script fails, while (3) will generate an
    exit status of "0".
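
    Roughly what the build side would look like - just a sketch, the
    NightlyTestRunner class and AllTests.suite() below are stand-ins for
    whatever the nightly Perl script actually runs, not existing code:

    import junit.framework.TestResult;
    import junit.textui.TestRunner;

    public class NightlyTestRunner
    {
        public static void main(String[] args)
        {
            // AllTests.suite() is only a placeholder for the real suite
            TestResult result = TestRunner.run(AllTests.suite());

            // hard failures (2) and errors (1) make the run unsuccessful, so
            // the nightly build Perl script sees exit status "1"; soft
            // failures (3) would only be logged and never show up here
            System.exit(result.wasSuccessful() ? 0 : 1);
        }
    }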

    (3) must only be used for officially reported bugs, not for normal XP
    day-to-day coding. It is there to confirm an initial bug report while
    deferring the fix until later - and only for this purpose.
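
    Something like this is what I would hack into the test superclass - only
    a sketch, the class name and softAssertTrue() method are made up and
    don't exist anywhere yet:

    import junit.framework.TestCase;

    public abstract class BugTrackingTestCase extends TestCase
    {
        // (3) "soft failure": record an officially reported bug without
        // throwing AssertionFailedError, so the suite stays green and the
        // JVM still exits with status "0"
        protected void softAssertTrue(String bugId, String message, boolean condition)
        {
            if (!condition)
            {
                System.err.println("[KNOWN BUG " + bugId + "] " + message);
            }
        }

        // (2) remains the normal JUnit assert*/fail() behaviour
    }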

    Am I making sense at all?

    Andrus

    At 10:21 PM 12/21/2002 +0100, you wrote:

    >Andrus wrote:
    > > [JUnit for bug tracking]
    >
    >The original idea is to have a red/green distinction and really nothing in
    >between; all tests must succeed all the time, and you fix a problem as
    >soon as a test fails. This is one of the things that many people -
    >understandably! - don't like about JUnit: it doesn't allow for 'slack',
    >even when it's necessary or meant well, like in your example. I'm not
    >aware of any solution to this problem, since exactly this kind of slack
    >usually leads projects into the code-'n'-fix death spiral when the
    >participants don't have the necessary discipline. It's really difficult.
    >
    > > I can probably hack something like that into CayenneTestCase superclass. I
    > > was just wondering if there is a JUnit solution to it. Or is this against
    > > their philosophy?
    >
    >The JUnit FAQ explains something along these lines (I think) in item 4.9
    >("What's the difference between a failure and an error?"), but I'm not
    >sure if this is what you're after. I've used expected failures quite
    >often myself, though never for bug tracking - more as a kind of safety
    >net for unreachable code or to prevent regressions. But we could
    >certainly try to use failures for "soft" and errors for "hard" problems,
    >like in the example given in the FAQ. This would also mean that all
    >tests would have to conform to this failure/error raising behaviour.
    >Hm..thinking of an example. Let's try!
    >
    >QueryHelper.selectObjectForId(ObjectId) is supposed to return a
    >SelectQuery but will fail with an NPE if the passed ObjectId is null
    >(sorry for the obvious example ;). I stumble over this and write a test
    >which is expected to fail:
    >
    >// I never declare my tests to throw anything
    >public void testSelectObjectForIdNullArg()
    >{
    >    try
    >    {
    >        SelectQuery q = QueryHelper.selectObjectForId(null);
    >
    >        // if we get here, the null ObjectId was silently accepted - also wrong
    >        fail("expected an exception for a null ObjectId");
    >    }
    >    // bad: the current behaviour - the bug this test documents
    >    catch (NullPointerException npe)
    >    {
    >        fail("bad NPE! fix me!");
    >    }
    >    // better: the expected behaviour
    >    catch (IllegalArgumentException iae)
    >    {
    >        // all OK - nothing to be done
    >    }
    >}
    >
    >Is this about what you had in mind?
    >
    >Holger
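
    P.S. For reference, the failure/error distinction from FAQ item 4.9 that
    Holger mentions boils down to something like this (the class below is
    made up, purely for illustration):

    import junit.framework.TestCase;

    public class FailureVsErrorExample extends TestCase
    {
        // reported by JUnit as a *failure*: an assertion raised
        // junit.framework.AssertionFailedError
        public void testReportedAsFailure()
        {
            assertTrue("this condition does not hold", false);
        }

        // reported by JUnit as an *error*: some other exception escaped the test
        public void testReportedAsError()
        {
            throw new IllegalStateException("something broke unexpectedly");
        }
    }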


