2017-07-11 Meeting notes

Date

2017-07-11, 7am PDT


Webex:  https://meetings.webex.com/collabs/#/meetings/detail?uuid=M10E38B98CJWX1KWAM824BO6PB-3O29&rnd=709350.36331

Number:  196-716-795

Optional dial-in (USA toll):  +1-415-655-0001

Attendees

Goals

  • Continue discussion on UI extensibility with maintainability
  • Bugs, root cause analysis

Discussion items

Time | Item | Who | Notes
5m | Agenda review | Josh | anything we should prioritize?
30m? | UI Extensibility | | Continuation from: https://openlmis.atlassian.net/wiki/x/rgzOBg
25m | Bugs and root cause analysis | |

Dive into the number of bugs the project is finding and closing.

Has root cause analysis been translated into effective tests?

Do we expect that we'll be finding fewer bugs each release in the future?  Short term?  Mid-term?  Long term?

UI Extensions and maintainability


The current approach is a bit blunt; Angular has decorators, which might be finer-grained.

Trying this in Malawi could give some very useful feedback; the goal is to achieve better maintainability with the same functionality.
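
As a rough illustration of the decorator idea (a minimal sketch only; the module, service, and method names below are hypothetical, not actual OpenLMIS-UI identifiers), AngularJS's $provide.decorator lets an implementation override a single method of a core service without forking the whole file:

    // Hypothetical implementation-specific module that decorates a core service.
    angular.module('malawi-ui-extensions', ['requisition'])
        .config(function($provide) {
            $provide.decorator('requisitionService', function($delegate) {
                // Keep a reference to the original behavior.
                var originalSubmit = $delegate.submit;

                // Override only the method that needs to change;
                // everything else on $delegate stays untouched.
                $delegate.submit = function(requisition) {
                    requisition.customFlag = true;  // hypothetical tweak
                    return originalSubmit.call($delegate, requisition);
                };

                return $delegate;
            });
        });

If something like this works, only the overridden behavior (and its tests) would live in the implementation's code, instead of forking whole files and their unit tests as in the three cases below.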


Three potential targets for trialing this new approach:

  • Requisition Service in the Req UI
    • 500-line file
    • customization was about 20 lines
  • Requisition Controller View
    • 700 lines
    • small handful of changes
  • Product cell grid directive
    • 200-line file
    • 2 line change

For all three above, the file and the unit tests had to be forked!


We think that trialing it with the product cell grid directive might show the best ROI (so few changes relative to how much code had to be forked).


Nick is going to fork the repo, try it out, and share it in the dev forum.  At the next tech committee meeting we'll talk about it again.


Bugs & Root Cause analysis

We've found that over the past few sprints a lot of our ticket load has been bugs.  It's not clear to everyone:

  • whether these bugs are preventable
  • whether the same bugs are being seen again
  • where they're occurring (frontend, backend, a particular service, etc.)
  • whether we're writing effective tests as a result: tests that improve quality, shore up contracts, and prevent the same bug from occurring again.


Let's discuss...

Root cause analysis wiki:  https://openlmis.atlassian.net/wiki/x/FgfABg

  • what have we learned from this wiki page?


Of the bugs fixed in the last 3-4 sprints:

  1. how many (percentage) have we done a root cause analysis for?
    1. Pawel just did a wide-scoped one
    2. the feeling is that most have had at least an informal one

  2. how many of those led to new automated tests?
    1. for most we've added tests, at least for the latest bugs
    2. how effective have those new tests been in preventing regressions?
      1. unclear; most of these tests are recent additions, and it's too early to tell if they're preventing regressions
      2. the feeling is that tests won't catch perceptual issues such as misaligned elements
      3. testing of Angular directives is lacking (see the sketch after this list)
        1. directives are where the perceptual issues show up
        2. exposing their logic for testing takes more effort
        3. they haven't been written test-first (not TDD)
  3. how many were regressions (meaning the same root cause was to blame)?
    1. 28 of 67 were regressions, about 41% (on the call we're saying about half).  Pawel has created a Regression label and taken a stab at labelling.
    2. 25/27 were marked UI; Pawel feels this is about right, as not much was backend code except for the Malawi addition of the Rejected status and the refactor to Orderables
    3. the other half, the ones that aren't regressions, look like missed edge cases (user configuration is among the culprits)
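
Since directive testing came up above, here is a minimal sketch of what a directive unit test could look like using angular-mocks and $compile in a Jasmine spec; the module name, directive name, and bindings are hypothetical, not actual OpenLMIS-UI identifiers:

    describe('productGridCell directive', function() {

        var $compile, $rootScope;

        beforeEach(module('requisition'));  // hypothetical module name

        beforeEach(inject(function(_$compile_, _$rootScope_) {
            $compile = _$compile_;
            $rootScope = _$rootScope_;
        }));

        it('renders the approved quantity into the cell', function() {
            var scope = $rootScope.$new();
            scope.lineItem = { approvedQuantity: 42 };

            // Compile the directive against the test scope and flush the bindings.
            var element = $compile('<div product-grid-cell line-item="lineItem"></div>')(scope);
            scope.$digest();

            // Assert on the rendered output rather than internal state.
            expect(element.text()).toContain('42');
        });
    });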


We’ll also want participants to weigh in on:

  1. is the quality increasing?  Put another way:  will bugs in mature parts of the system be decreasing with time?
  2. are we on the right track to increasing quality? 
    1. how can we improve?


We didn't get to these questions as much, though we do have ideas for improvement and action items.


Two areas


Regressions:

  • Pawel feels the high number of UI refactors is contributing heavily to the half of the bugs that are regressions.
  • a more functional testing approach in the UI might help (see the sketch after this list)
  • product testing in the middle of a sprint is a bad idea, as the changes are in-flight
    • refactors are exacerbating this
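
One possible shape for the more functional testing mentioned above, sketched in Protractor-style end-to-end syntax; the route, selectors, and expected behavior are hypothetical, not taken from the OpenLMIS-UI:

    describe('requisition initiation', function() {

        it('shows a validation error for a negative requested quantity', function() {
            // Hypothetical route and selectors, for illustration only.
            browser.get('#!/requisitions/initiate');

            element(by.css('input[name="requestedQuantity"]')).sendKeys('-5');
            element(by.buttonText('Submit')).click();

            // Asserting on user-visible behavior means internal refactors
            // shouldn't break the test as long as the page still behaves the same.
            expect(element(by.css('.alert-error')).isDisplayed()).toBe(true);
        });
    });

Because such a test exercises the page the way a user would, it should keep passing across the kinds of UI refactors that have been causing regressions.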


Edge cases:

  • add edge cases to the ticket (who?)


Ideas:

  • How do we refactor (which has caused bugs) and do product-level testing in the same sprint?
    • sprint shuffle: one sprint on refactors, the next not
    • label tickets as refactors so that QA testing can see which ones are in-flight
    • lengthen the release cycle (momentarily decreases the priority of bug finding)
    • freeze the ui-components Docker image: use versioning to take it off the CD server (test).  Need to solve how QA tests the snapshot.
    • hit list of UI refactor targets (we have the UI UX backlog in Confluence)
  • how do we get edge cases into tickets?  i.e. how do we get developers to test the right edge cases in-flight?
    • VR could be more specific
    • Developer could be more specific
      • setting aside time to add edge cases pushes the developer to first think about what the edge cases are (4 - the whole group)
      • we haven't been doing estimation meetings as much recently; we could hold these and have the group brainstorm

Action items
