In the past, when planning a two-week iteration, I would take a user story and break it into tasks, which were then estimated in hours:
- Story: Rename file
- Task: create a Rename command (2 hours)
- Task: save the list of selected files (3 hours)
- Task: bind it to the F2 key (1 hour)
- Task: add a context-menu option (1 hour)
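To make the breakdown concrete, here is a minimal sketch of what those tasks might look like in code, assuming a command-pattern design; the `Selection` and `RenameCommand` names and the binding calls in the comments are hypothetical, not anything from the original plan.

```python
# Hypothetical sketch of the tasks above, assuming a command-pattern design.
import os


class Selection:
    """Task: save the list of selected files."""

    def __init__(self):
        self.files = []

    def set(self, files):
        self.files = list(files)


class RenameCommand:
    """Task: create a Rename command that acts on the current selection."""

    def __init__(self, selection):
        self.selection = selection

    def execute(self, new_name):
        # Rename the first selected file, as F2 in a file manager typically does.
        old_path = self.selection.files[0]
        os.rename(old_path, os.path.join(os.path.dirname(old_path), new_name))


# The remaining tasks bind the same command to the F2 key and to a
# context-menu entry in whatever UI toolkit the application uses, e.g.:
#   key_map.bind("F2", command)
#   context_menu.add_option("Rename", command)
```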
I would then pick a task, work on it, and track the time spent, repeating the process with the next task. At the end of the iteration, I could look at the time spent on each task, compare it against the estimate, and use that information to improve future estimates.
In fully test-driven work, the only work that is clearly defined up front is the acceptance tests with which development begins, and for a user story covering a large amount of work, the scope of an acceptance test may be too broad to yield a good estimate.
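For example, an up-front acceptance test for the whole story might look like the sketch below (reusing the hypothetical `Selection` and `RenameCommand` from earlier); it nails down the end result, but it spans every task at once, so it offers little basis for a per-task estimate.

```python
# A hypothetical acceptance test for the whole "Rename file" story, written
# before development starts. It exercises every task in the breakdown at
# once, which is exactly why it is too broad to estimate from.
import os
import tempfile
import unittest


class RenameStoryAcceptanceTest(unittest.TestCase):
    def test_user_can_rename_a_selected_file(self):
        with tempfile.TemporaryDirectory() as tmp:
            old_path = os.path.join(tmp, "draft.txt")
            open(old_path, "w").close()

            selection = Selection()                        # saving the selection
            selection.set([old_path])
            RenameCommand(selection).execute("final.txt")  # the command itself

            self.assertTrue(os.path.exists(os.path.join(tmp, "final.txt")))
            self.assertFalse(os.path.exists(old_path))


if __name__ == "__main__":
    unittest.main()
```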
So I can still guess at the tasks that will eventually get done (as before), but the time spent on them is much harder to track, because the tests push you to work in tiny vertical slices, often touching bits of several tasks at the same time.
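To see why, consider two micro-tests from the first hour of such work, again using the hypothetical classes above: the first sits cleanly inside one task, but the second cuts across two tasks in the same few minutes of work, so there is no single task to book that time against.

```python
# Hypothetical TDD micro-tests. The first belongs to the "save the list of
# selected files" task alone; the second touches both that task and the
# "create a Rename command" task at once.
import os
import tempfile
import unittest


class FirstSlicesTest(unittest.TestCase):
    def test_selection_remembers_files(self):
        # Entirely inside the "save the list of selected files" task.
        selection = Selection()
        selection.set(["a.txt", "b.txt"])
        self.assertEqual(selection.files, ["a.txt", "b.txt"])

    def test_command_renames_the_first_selected_file(self):
        # A vertical slice: needs bits of the selection task AND the command task.
        with tempfile.TemporaryDirectory() as tmp:
            old_path = os.path.join(tmp, "a.txt")
            open(old_path, "w").close()
            selection = Selection()
            selection.set([old_path])

            RenameCommand(selection).execute("b.txt")

            self.assertTrue(os.path.exists(os.path.join(tmp, "b.txt")))


if __name__ == "__main__":
    unittest.main()
```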