2017-07-11 Meeting notes
Date
Jul 11, 2017
7am PDT
Meeting Link
Number: 196-716-795
Optional dial-in (USA toll): +1-415-655-0001
Attendees
@Josh Zamor (Deactivated)
@Elias Muluneh
@Jeff Xiong (Unlicensed)
@Paweł Gesek
@Paweł Albecki (Deactivated)
@Nikodem Graczewski (Unlicensed)
@Mateusz Kwiatkowski
@Nick Reid (Deactivated)
@Ben Leibert
@Darius Jazayeri (Unlicensed)
Goals
Continue discussion on UI extensibility w/ maintainability
Bugs, root cause analysis
Discussion items
| Time | Item | Who | Notes |
|---|---|---|---|
| 5m | Agenda review | Josh | |
| 30m? | UI Extensibility | @Nick Reid (Deactivated) | Continuation from: https://openlmis.atlassian.net/wiki/x/rgzOBg |
| 25m | Bugs and root cause analysis | @Josh Zamor (Deactivated) @Paweł Gesek @Nikodem Graczewski (Unlicensed) | Dive into the number of bugs the project is finding and closing. Has root cause analysis been translated into effective tests? Do we expect to find fewer bugs each release in the future: short term? mid? long? |
UI Extensions and maintainability
The current approach is a bit blunt; Angular has decorators, which might allow finer-grained extension.
Trialing this in Malawi could give some very useful feedback. The goal is to achieve better maintainability with the same functionality.
Three potential targets for trialing this new approach:
Requisition Service in the Req UI
500 line file
customization was about 20 lines
Requisition Controller View
700 lines
small handful of changes
Product cell grid directive
200 line file
2 line change
For all three above, the file and the unit tests had to be forked!
We think trialing it with the product cell grid directive might show the best ROI (so few changes required forking so much code).
Nick is going to fork the repo, try it out, and share the results in the dev forum. We'll discuss it again at the next tech committee meeting.
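The decorator approach discussed above can be sketched in plain JavaScript. In AngularJS this kind of wrapping is typically done with `$provide.decorator`, which hands the original service to the decorator as `$delegate`; the sketch below is framework-free so it stands alone, and the service name and methods are hypothetical, not taken from the actual OpenLMIS-UI code.

```javascript
// Sketch of the decorator pattern: the core service stays untouched, and an
// implementation-specific wrapper overrides only the behavior it needs,
// instead of forking the whole file plus its unit tests.

// "Core" requisition service (hypothetical stand-in for the 500-line file).
function createRequisitionService() {
  return {
    getRequisitions: function () {
      return ['req-1', 'req-2'];
    },
    submit: function (id) {
      return 'submitted ' + id;
    }
  };
}

// Implementation-specific decorator: wraps the original (what AngularJS
// would pass in as $delegate) and overrides a single method.
function decorateRequisitionService(delegate) {
  var originalGet = delegate.getRequisitions;
  delegate.getRequisitions = function () {
    // Custom filtering layered on top of the core behavior.
    return originalGet.call(delegate).filter(function (id) {
      return id !== 'req-2';
    });
  };
  return delegate;
}

var service = decorateRequisitionService(createRequisitionService());
console.log(service.getRequisitions()); // ['req-1'] - overridden behavior
console.log(service.submit('req-1'));   // 'submitted req-1' - untouched
```

The appeal for the three targets listed above is that a 2- or 20-line customization becomes a 2- or 20-line decorator, while the core file and its tests remain shared.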
Bugs & Root Cause analysis
We've found that over the past few sprints, a lot of our ticket load has been bugs. It's not clear to everyone whether these bugs are:
preventable
recurring
concentrated anywhere in particular (frontend, backend, a particular service, etc.)
being addressed with effective tests - tests that improve quality, shore up contracts, and prevent the same bug from occurring again
Let's discuss...
Root cause analysis wiki: https://openlmis.atlassian.net/wiki/x/FgfABg
what have we learned from this wiki page?
Of the bugs fixed in the last 3-4 sprints:
how many (percentage) have we done a root cause analysis for?
Paweł just did a wide-scoped one
feeling that most have had at least an informal one
how many of those led to new automated tests?
for most of the recent bugs we've added tests
how effective have those new tests been in preventing regressions?
unclear - we've only been adding these tests recently, and it's too early to tell whether they're preventing regressions
feeling that tests won't cover perceptual issues (e.g. misaligned elements)
testing around Angular directives is lacking:
causes perceptual bugs
takes more effort to express as testable logic
isn't developed TDD-style
how many were regressions (meaning the same root cause was to blame)?
28 of 67 were regressions, about 42% (on the call we called it about half). Paweł has created a Regression label and taken a first pass at labelling.
25/27 were marked UI. Paweł feels this is about right: not much was backend code, except for the Malawi addition of the Rejected status and the refactor to Orderables.
the other half that's not regressions looks like missed edge cases (user configuration among the culprits)
We’ll also want participants to weigh in on:
is quality increasing? Put another way: will bugs in mature parts of the system decrease over time?
are we on the right track to increasing quality?
how can we improve?
We didn't get to these questions as much, though we do have ideas for improvement and action items, in two areas:
Regressions:
Paweł feels the high number of UI refactors largely accounts for the half of the bugs that are regressions.
a more functional testing approach in the UI might help
product testing in the middle of a sprint is a bad idea because the changes are in-flight
refactors exacerbate this
Edge cases:
add edge cases to ticket (who?)
Ideas:
How do we refactor (which has caused bugs) and do product-level testing in the same sprint?
sprint shuffle - one on refactors, the next not
label tickets as refactors so that QA can see which areas are in-flight
lengthen the release cycle (temporarily lowers the priority of bug finding)
freeze the UI components Docker image - use versioning to take it off the CD server (test). We still need to solve how QA tests the snapshot.
hit list of UI refactor targets (we have: UI UX backlog in confluence)
how do we get edge cases into tickets? i.e., how do we get developers to test the right edge cases while work is in-flight?
VR could be more specific
Developer could be more specific
setting aside time to add edge cases pushes the developer to first think about what the edge cases are (4 - the whole group)
we haven't been holding estimation meetings as much recently - we could resume them and have the group brainstorm edge cases there