Thursday, February 18, 2016

Denial 104: "But we spent a lot of money on the license ... and customising it!"

Previously we looked at the psychological effect of denial, and what drives it and makes it tick.  Today we look at an example of denial at work within testing - especially others' perspectives of it.  Ladies and gentlemen, I give you exhibit C ...

"But we spent a lot of money on the tool license ... as well as customising it!"

Again here there is a base assumption that spending a lot of money equals "this has to be really suitable for us".  When we looked at peer pressure previously, I tackled some of this - especially the myth that a test management tool means "reporting for free" ... I of course automatically get nervous when people offer me anything for free, mainly because of this guy ...

Chitty Chitty Bang Bang's Child Catcher - he lured kids with the promise of lollipops "for free".  I'm sure he now makes a living trapping unwary IT projects in enterprise license agreements.

As we discussed, test tools generally trap you into working in a particular way.  If that matches how you want to work, that's great.  However, if it doesn't align with how you need to operate on your project, then you're stuck with a difficult and clunky methodology that constrains the way you work, and so has an effect on your team which is difficult to measure.  Except for the fact that your team hate using the system.

But wait ... there's something worse than an "out of the box" product which partly addresses your needs.  And that's a product that's been "tailored".

Naturally, I have a ridiculous (yet true) example from my past.  Many years ago we had to use Microsoft Team Foundation Server (TFS), and some industrious manager had decided that to get the most use out of the reporting from the system, they would replace the out-of-the-box defect lifecycle below with something that would be more granular, and would allow for better in-detail status reporting.

So out went the simplicity of ...

Difficulty setting: Easy

In came the following hierarchy of states before a defect could be closed ...
  • Proposed
  • Defect confirmed
  • Assigned to developer team leader
  • Assigned to developer
  • Developer working on
  • Code fixed
  • Ready for unit testing
  • Assigned to unit tester
  • Completed unit testing
  • Ready for system testing
  • Assigned to system tester
  • Completed system testing
  • Ready for pre-prod testing
  • Assigned to pre-prod tester
  • Completed pre-prod testing
  • Ready for deployment
  • Deployed
  • Checked in deployment
  • Verified
  • Closed

Each state above was mandatory, by the way (no cheating and skipping states, you naughty tester).  It wasn't too bad for tracking a production incident, but for a defect in a project which wasn't yet in production, it had way too many states - most of which made no sense for what you were doing.  Just to make it worse, every time you moved between states you had to save, and a comment was mandatory (so people could capture even more detail).

It meant it took ten minutes just to close a defect you'd already tested.  No wonder there was a bit of a rebellion - testers wouldn't use TFS to track defects because it wasn't suitable.  Yes, no-one could use it easily, but heck, it gave us great audit trails!  [We eventually applied enough pressure to considerably simplify that lifecycle, by the way]
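To see why closing an already-tested defect took so long, here's a minimal sketch (my own illustration in Python, not the actual TFS workflow configuration) of a lifecycle where every state is mandatory and every transition requires a save plus a comment:

```python
# Hypothetical model of the mandatory defect lifecycle described above.
# Not real TFS code - just an illustration of the mechanics.

LIFECYCLE = [
    "Proposed",
    "Defect confirmed",
    "Assigned to developer team leader",
    "Assigned to developer",
    "Developer working on",
    "Code fixed",
    "Ready for unit testing",
    "Assigned to unit tester",
    "Completed unit testing",
    "Ready for system testing",
    "Assigned to system tester",
    "Completed system testing",
    "Ready for pre-prod testing",
    "Assigned to pre-prod tester",
    "Completed pre-prod testing",
    "Ready for deployment",
    "Deployed",
    "Checked in deployment",
    "Verified",
    "Closed",
]

class Defect:
    def __init__(self):
        self.state = LIFECYCLE[0]
        self.audit = []  # great audit trail, terrible usability

    def advance(self, comment):
        # No skipping states, and a comment is mandatory on every save.
        if not comment:
            raise ValueError("A comment is mandatory on every transition")
        self.state = LIFECYCLE[LIFECYCLE.index(self.state) + 1]
        self.audit.append((self.state, comment))

# Closing an already-tested defect still means stepping through every
# remaining state, one save-plus-comment at a time:
d = Defect()
while d.state != "Closed":
    d.advance("yes, really, still fine")
print(len(d.audit))  # 19 separate transitions just to close one defect
```

Twenty states means nineteen mandatory save-and-comment round trips per defect, whether or not the intermediate states mean anything for your project.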

I have seen a similar comedy of errors played out when:
  • an industrious manager introduces more states into the test/defect tracking system
  • the same manager then finds they have to chase/bribe everyone to update the defect tracking system.  Why?  Because they're finding it just too awkward to use.  It's not helping them at all.
The denial in this case comes in two places ...
  • The test team has to slave their process to the tool, rather than the tool making things easier for the tester.  The more money spent on the tool, the more justification that the tool's approach is right, over the testers'.  Throw in a few choice phrases like "enterprise standard" and "best practice" to add to the smokescreen of why the team has to bend to the tool, and not vice versa.
  • More customisation = better.  Our tailored defect flow above fit one purpose fairly well, but was lousy for everything else.  I see a lot of projects which try to customise their product to the nth degree - usually powered by "but what if ..." scenarios.  The problem is that the process which typically comes out looks more like a map of the London Underground.  And whilst that might cover a few choice scenarios really well, what typically happens is even the simple stuff becomes nightmarish.  The bottom line is no-one wants to read a manual to understand how defects flow.
Difficulty setting: "Is Government involved?"

Ironically, I see the same temptation with sprint task boards - to "overcomplicate" them because they seem too simple.  Believe me, simple can be beautiful!  Needless complexity is almost always unusable.

My rule of thumb is if someone is pressuring you to have a state "just in case", resist at all costs.  And if you find a state which you frequently skip on the way to another state, think about removing it - that state is not telling you anything!
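That second check can even be mechanised.  Here's a small sketch (hypothetical data and state names, not from any real tracker) that flags states most defects skip on their way to closure:

```python
# Illustration of the rule of thumb above: a state that most defects
# skip on the way to another state probably isn't telling you anything.
from collections import Counter

# Each entry is the sequence of states one defect actually passed through
# (made-up example data).
histories = [
    ["Open", "Triaged", "Fixed", "Verified", "Closed"],
    ["Open", "Fixed", "Verified", "Closed"],  # "Triaged" skipped
    ["Open", "Fixed", "Closed"],              # "Triaged", "Verified" skipped
    ["Open", "Fixed", "Verified", "Closed"],
]

ALL_STATES = ["Open", "Triaged", "Fixed", "Verified", "Closed"]

# Count how many defects ever visited each state.
visits = Counter(state for h in histories for state in set(h))

# Candidates for removal: states used by fewer than half of all defects.
candidates = [s for s in ALL_STATES if visits[s] < len(histories) / 2]
print(candidates)  # ['Triaged']
```

The half-of-all-defects threshold is arbitrary - the point is simply that transition history already tells you which states are earning their keep.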

