how many (percentage) have we done a root cause analysis for?
Pawel just did a wide-scoped one
feeling that most have had at least an informal one
how many of those led to new automated tests?
for most we've added tests, though just for the latest bugs
how effective have those new tests been in preventing regressions?
unclear - most of the tests were added recently, so it's too early to tell if they're preventing regressions
feeling that tests won't cover perceptual issues like misaligned elements
testing for Angular directives is lacking
directives are where the perceptual issues come from
showing the logic in tests takes more effort
we're not doing TDD
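One hedged sketch of how directive logic could be made easier to test (function and class names here are illustrative, not from our codebase): pull the display logic out of the directive's link function into a pure function, so it can be asserted on directly without bootstrapping Angular or a DOM.

```javascript
// Hypothetical example: display logic extracted from an Angular directive
// into a pure function. formatStockStatus and the CSS class names are
// illustrative assumptions, not actual project code.
function formatStockStatus(quantity, threshold) {
  if (quantity === 0) return 'stocked-out';
  if (quantity < threshold) return 'low-stock';
  return 'ok';
}

// The directive's link function would then only bind the result, e.g.:
//   element.addClass(formatStockStatus(scope.quantity, scope.threshold));

// Plain assertions, no test framework or angular-mocks needed:
console.assert(formatStockStatus(0, 10) === 'stocked-out');
console.assert(formatStockStatus(3, 10) === 'low-stock');
console.assert(formatStockStatus(25, 10) === 'ok');

module.exports = { formatStockStatus };
```

This doesn't catch the perceptual/alignment problems, but it does lower the effort of "showing logic" in tests, which was the complaint above.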
how many were regressions (meaning the same root cause was to blame)?
28 / 67 were regressions, about 41% (on the call we said about half) (Pawel has created a Regression label and taken a stab at labelling)
25/27 were marked UI - Pawel feels this is about right; not much was backend code, except for the Malawi addition of the Rejected status and the refactor to Orderables
the other half, the non-regressions, looks like missed edge cases (user configuration among the culprits)
We’ll also want participants to weigh in on:
is the quality increasing? Put another way: will bugs in mature parts of the system decrease over time?
are we on the right track to increasing quality?
how can we improve?
We didn't get to these questions as much, though we do have ideas for improvement and action items
Two areas
Regressions:
Pawel feels the high number of UI refactors largely contributes to the half of the bugs that are regressions.
a more functional testing approach in the UI might help
product testing in the middle of a sprint is a bad idea as the changes are in-flight
refactors are exacerbating this
Edge cases:
add edge cases to ticket (who?)
Ideas:
How do we refactor (which has caused bugs), and do product level testing in a sprint?
sprint shuffle - one on refactors, the next not
label tickets as refactors so that QA testing can see which ones are in-flight
increase the release cycle (momentarily decreases bug finding priority)
freeze the ui-components docker image - use versioning to take it off the CD server (test). Need to solve for how QA tests the snapshot?
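A minimal sketch of what freezing could look like, assuming a compose-style deployment; the image name and tag below are illustrative, not our actual coordinates:

```yaml
# Hypothetical compose fragment: pin the ui-components image to a fixed tag
# instead of tracking latest, so the CD/test server stops picking up
# in-flight refactors. QA then tests against the frozen tag.
services:
  ui-components:
    image: openlmis/ui-components:5.0.1   # frozen snapshot for QA
    # was: image: openlmis/ui-components:latest
```

The open question from the call remains: how QA gets a second environment (or a tag bump process) to test the unfrozen snapshot.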
how do we get edge cases in tickets? i.e. how do we get developers to test the right edge cases in-flight?
VR could be more specific
Developer could be more specific
having a set time to add edge cases forces the developer to first think about what the edge cases are (4 - the whole group)
we haven't been holding estimation meetings as much recently - we could hold these and have the group brainstorm edge cases
Action items
Nick Reid (Deactivated) to try the UI approach (decorator) on Product Grid Cell in his private repo, share in tech forum, and re-share next committee call
Paweł Gesek to hold some meetings to groom out the edge cases