alacrity in action

Enter the User Story. A lightweight expression of a feature or user requirement, often referred to as a “placeholder for a conversation”. The bread and butter of every single “agile” team I have seen, and possibly of many not-so-agile teams too.

A card with one sentence on the front and usually a list of “acceptance criteria” on the back… or a card with a sentence on the front and… a number. A subtle pointer to an electronic card-tracking system where conversations are substituted with mini-specifications, given-when-then wish lists or even detailed test cases.

So what should these “acceptance criteria” be, and, more importantly, who should write them? In principle it doesn’t matter. There is a cross-functional, title-free team of people who self-organise and work it out.

In practice, more often than not, there is a BA, a QA and some DEVs. Each fairly attached to their role and rather shy of stepping across the boundary.

Why do we need “acceptance criteria”? We want to know what needs doing, when the work on a story is complete and, naturally, what to test. So we ask the BA to write the acceptance criteria as part of specifying the story. Developers use these criteria to do just enough, and finally QA will use them to validate whether the work has been done correctly (rant about inspection deferred to another post). In a world of rational, predictable and certain software development this works.

The real world of software development is not rational, is not predictable and is far from certain. Most of the acceptance criteria written up-front will not survive contact with the implementation. Most of the software written will not fully comply with the set of pre-imposed constraints, and good QAs, having got their hands on the real thing, will find corner cases that couldn’t even have been thought of up-front.

This can go in one of two ways.

The team sees what’s going on, makes the mental shift from the utopian world into the real one and starts collaborating on the stories. They shift to more conversations. The list of criteria becomes a fluid, temporary reflection of the current understanding and gets encoded in examples and automated tests rather than in the tracking system or even on paper. Problem solved.
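
To make that last point concrete, here is a minimal sketch of what one acceptance criterion can look like when it lives as an automated test rather than as a bullet in a tracking system. The story and every name in it (Basket, apply_discount, ExpiredCode) are made up purely for illustration, assuming Python with pytest:

```python
# A hypothetical acceptance criterion ("an expired discount code is
# rejected and the basket total stays unchanged") expressed as an
# executable test instead of a bullet on a card. All names here are
# illustrative assumptions, not from any real codebase.
from dataclasses import dataclass
from datetime import date

import pytest


class ExpiredCode(Exception):
    """Raised when a discount code is past its expiry date."""


@dataclass(frozen=True)
class DiscountCode:
    code: str
    percent_off: int
    expires: date


@dataclass(frozen=True)
class Basket:
    total: float


def apply_discount(basket: Basket, discount: DiscountCode, today: date) -> Basket:
    """Return a new basket with the discount applied, or raise ExpiredCode."""
    if today > discount.expires:
        raise ExpiredCode(discount.code)
    return Basket(total=basket.total * (1 - discount.percent_off / 100))


def test_expired_code_is_rejected_and_total_unchanged():
    basket = Basket(total=100.0)
    expired = DiscountCode(code="SPRING10", percent_off=10, expires=date(2015, 3, 31))

    with pytest.raises(ExpiredCode):
        apply_discount(basket, expired, today=date(2015, 6, 1))

    # The original basket is untouched; the criterion is verified by
    # running the code, not by re-reading a card.
    assert basket.total == 100.0
```

When the understanding changes, the test changes with it; criteria that are executed rather than archived cannot quietly drift out of date.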

You can keep writing acceptance criteria.

There is also the other way, perhaps more common (judge by your own experience). One where we refuse to accept that we might be engaging in a fuzzy, unpredictable, uncertain endeavour. The topic of acceptance criteria increasingly becomes the subject of many lengthy conversations that lead to nothing. Eventually the BA claims ownership and insists on writing them down (or is told to do so, or can’t see an alternative). Developers mostly ignore what’s written and code what they think is right (which is either wrong or right; welcome, confirmation bias). Testers go back to the list and diligently try to validate behaviours. And they raise defects. A written list, instead of being a helpful tool in understanding and development, becomes a divisive artefact that isolates the roles and moves people further apart. Silos are created and reinforced because now there is something we can use instead of talking.

This is when I think you should stop writing acceptance criteria.

For my hope is that the lack of a static and seemingly sensible reference point (a written-up list) will serve as an encouragement. That the void will be filled with more collaboration. That people will, perhaps desperate, with no other option, start to ask questions, offer opinions and suggestions, and start helping each other out.

Try it out. It might not be enough and many people will not like it. But it may lead you to a better place. You won’t know unless you try.

Right?


4 Comments

  1. Hi Floryan,

    I see this kind of behaviour happening.
    Yet it’s not because people abuse the tool; it’s the wrong tool.

    For me it’s better to say everyone has the right and the moral duty to write and change acceptance criteria.

    For example, testers are really good at writing acceptance criteria.
    And a PO is the one deciding what is needed.

    Another example is when you have a dependent team. (I prefer to avoid these, yet sometimes an organisation is not ready for this.)
    In that case the client team should write the acceptance criteria, preferably in a test format.

    And yes, I have already seen this go wrong with teams that insist on having a strict definition of ready for a story, one that blocks the team from starting risky stories. I prefer teams to start on risky stories, with a lot of unknowns, early. A hard definition of ready blocks them from doing that.
    It’s great for the team as they fail less; it’s bad for the company, as they keep the unknowns till the end of the project.

  2. I totally agree with everything you are saying Marcin. However, the success of such an endeavour assumes and relies on everyone in the team operating at a level of maturity that supports this. One ‘weak link’ in the team can have a dramatically detrimental effect. This grows exponentially with each team member not operating at that required level.
    Even in teams where this is working well, I have found that changes to that team (i.e. new members rolling on/off) can impact the effectiveness of such working practices.
    As with everything agile, I think it requires regular re-evaluation, perhaps a rebasing. And above all, never operate under the assumption that everyone is doing what you think they should be…

  3. Nice article! I enjoyed it.

    Every time I have witnessed teams fall back into “it must be in the acceptance criteria upfront”, it has been in response to a culture of mistrust. Someone in the team feels attacked and uses the written-down detail as their defence. Typically, poor leadership, or inexperienced product managers/owners who really need an Agile Coach, hear the defence as some sort of root-cause analysis and through naivety decide the story and its acceptance criteria are at fault. This leads teams down a dangerous path and migrates agility into mini waterfall!!

    The actual issue I typically see is always communication and/or culture. Not forgetting that software development is a constantly moving target with an insane volume of variables that always change! It’s simply not stable enough to spec and build – normally it’s not scientific enough for such an approach to work! There are exceptions!

    Is the product owner accessible to answer queries or clarify assumptions? Too frequently the product owner/manager seems to prioritise all other tasks above supporting the team. They are stuck inaccessible in meetings, reading their email, at lunch, too busy, etc.!

    Culture issues are tough and plentiful – the worst I have witnessed is where there is an expectation to analyse everything (in isolation) to “learn from our mistakes”, which nearly always turns into a blame culture.
    In these situations the issue is trust – trust the team to do their best and trust the retrospective to uncover the true issues. In a blame culture the retro needs a really skilled mediator!

  4. Marcin,

    I agree that many acceptance criteria are write-only. At least in my experience, if you check who reads them, it’s never the devs. So the open question is how to ensure that the software does what is expected. The only way I have seen this work is, as you say, through conversation.

    I accept that to help the conversation it may be useful to make some notes, such as acceptance criteria, but please, please, please not in Given/When/Then boilerplate; too much noise. It is just as bad with “As a… so that…” user stories.

    It turns out that English is quite flexible; a few reminder notes on the card to cover non-obvious cases or edges are worth it. Pointing out the blindingly obvious for ‘full’ acceptance criteria is typing for the sake of it.

    Adam
