It can help to think of BDD in its simplest form: conversations around specific scenarios.
You have your use cases. You have your requirements. Now you want to make sure everyone understands them the same way. So someone, maybe a developer, maybe a tester, says: "Okay, just to make sure I understand ... given that we start with <this>, when the user does <this>, then <this> happens. Is that correct?"
And the tester says: "Yes, that's right."
Then the UX designer or analyst says: "Well, that's right, unless that other context exists."
Talking through scenarios like this dramatically reduces the time it takes to reach a shared understanding. Usually we skim over the obvious scenarios and focus on the edges: new domain terms, concepts that differ between departments, complex interactions with legacy systems.
Note that the developers are not really working from test cases. They work from the requirements and acceptance criteria, just as they always have. The scenarios simply become examples of what to expect; scenarios in which users get value from, or deliver value to, the system. Automating these scenarios can be useful, since the regression-testing effort grows with the size of the code base.
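One way such a scenario can be automated is as an ordinary test whose body follows the Given/When/Then structure of the conversation. A minimal sketch in Python; the shopping-cart domain and all names here are hypothetical, chosen only to illustrate the shape:

```python
# Hypothetical domain object for the sketch.
class Cart:
    def __init__(self):
        self.items = []

    def add(self, item, price):
        self.items.append((item, price))

    @property
    def total(self):
        return sum(price for _, price in self.items)


def test_adding_an_item_updates_the_total():
    # Given an empty cart
    cart = Cart()
    assert cart.total == 0

    # When the user adds a book priced at 10
    cart.add("book", 10)

    # Then the cart total is 10
    assert cart.total == 10


test_adding_an_item_updates_the_total()
```

Tools like Cucumber let you write the Given/When/Then text itself and bind each step to code, but the structure is the same either way: the scenario from the conversation becomes the skeleton of the test.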
BDD works best in projects where there is a lot of uncertainty around the requirements or the domain, and where people's understandings differ widely. If your process already works, stick with it. Or look at where the biggest gap is between ideas and their implementation, and if BDD helps in that space, use it; otherwise pick something else. Chances are that what you are doing already looks a lot like a BDD process anyway.
Lunivore