Date


7am PDT

Meeting Link


Webex:  https://meetings.webex.com/collabs/#/meetings/detail?uuid=M10E38B98CJWX1KWAM824BO6PB-3O29&rnd=709350.36331

Number:  196-716-795

Optional dial-in (USA toll):  +1-415-655-0001

Attendees

Goals

Discussion items

Time | Item | Who | Notes
5m   | Agenda review | Josh | anything we should prioritize?
30m? | UI Extensibility | | Continuation from: https://openlmis.atlassian.net/wiki/x/rgzOBg
25m  | Bugs and Root cause analysis | |

Dive into the number of bugs the project is finding and closing.

Has root cause analysis been translated into effective tests?

Do we expect to find fewer bugs each release going forward?  In the short term?  Mid-term?  Long-term?

UI Extensions and maintainability


The current approach is a bit blunt; Angular has decorators, which might allow finer-grained extension.

Trying this in Malawi could give some very useful feedback. The goal is to achieve better maintainability with the same functionality.


Three potential targets for trialing this new approach:

For all three above, the file and the unit tests had to be forked!


We think trialing it with the product cell grid directive might show the best ROI (so few changes would replace so much forked code). A rough sketch of the idea follows.
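As a minimal sketch of the decorator approach (module and directive names here are hypothetical, not actual OpenLMIS-UI code), an implementation could wrap just the behavior it needs via AngularJS's $provide.decorator instead of forking the whole file:

```js
// A minimal sketch, not actual OpenLMIS-UI code: module and directive
// names ('openlmis-product-grid', 'productGridCell') are hypothetical.
// Assumes the original directive uses a link function rather than compile.
angular.module('malawi-ui-extensions', ['openlmis-product-grid'])
    .config(function($provide) {
        // AngularJS registers a directive 'productGridCell' under the
        // injectable name 'productGridCellDirective'
        $provide.decorator('productGridCellDirective', function($delegate) {
            var directive = $delegate[0],
                originalLink = directive.link;

            // Replace compile so we can wrap the original link function;
            // the rest of the directive definition stays untouched
            directive.compile = function() {
                return function(scope, element, attrs) {
                    originalLink.apply(this, arguments);
                    // implementation-specific tweak goes here, e.g.:
                    element.addClass('malawi-cell');
                };
            };

            return $delegate;
        });
    });
```

Because the decorator wraps the registered directive, the core file and its unit tests could stay unforked; only the wrapper would live in the implementation's repo.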


Nick is going to fork the repo, try it out, and share the results in the dev forum.  We'll talk about it again at the next tech committee.


Bugs & Root Cause analysis

We've found that over the past few sprints, a lot of our ticket load has been bugs.  It's not clear to everyone whether these bugs are:


Let's discuss...

Root cause analysis wiki:  https://openlmis.atlassian.net/wiki/x/FgfABg


Of the bugs fixed in the last 3-4 sprints:

  1. how many (as a percentage) have we done a root cause analysis for?
    1. Pawel just did a wide-scoped one
    2. the feeling is that most have had at least an informal one

  2. how many of those led to new automated tests?
    1. we've added tests for most of them, though just for the latest bugs
    2. how effective have those new tests been in preventing regressions?
      1. unclear; most of the tests are recent, so it's too early to tell whether they're preventing regressions
      2. the feeling is that tests won't catch perceptual issues like misalignment
      3. testing of Angular directives is lacking (see the sketch after this list)
        1. directives are where the perceptual issues come from
        2. directive tests take more effort for the logic they demonstrate
        3. they aren't written test-first (TDD)
  3. how many were regressions (meaning the same root cause was to blame)?
    1. 28 of 67 were regressions, about 41% (on the call we said about half); Pawel has created a Regression label and taken a stab at labelling
    2. 25 of 27 were marked UI; Pawel feels this is about right, since not much backend code changed except the Malawi addition of the Rejected status and the refactor to Orderables
    3. the other half, which aren't regressions, look like missed edge cases (user configuration is among the culprits)
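On the directive-testing gap flagged above, a minimal sketch of what a Jasmine/Karma regression test for a directive could look like (module, directive, and attribute names are hypothetical, not actual OpenLMIS-UI code):

```js
describe('productGridCell directive', function() {
    var $compile, $rootScope;

    // Hypothetical module name; module() and inject() come from angular-mocks
    beforeEach(module('openlmis-product-grid'));

    beforeEach(inject(function(_$compile_, _$rootScope_) {
        $compile = _$compile_;
        $rootScope = _$rootScope_;
    }));

    it('renders the Rejected status added for Malawi', function() {
        var scope = $rootScope.$new();
        scope.lineItem = { status: 'REJECTED' };

        // Compile the directive against the scope and flush bindings
        var element = $compile(
            '<div product-grid-cell line-item="lineItem"></div>')(scope);
        scope.$digest();

        expect(element.text()).toContain('REJECTED');
    });
});
```

A test like this locks in the rendered logic from a root cause, though as noted above it still wouldn't catch purely perceptual issues like misalignment.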


We’ll also want participants to weigh in on:

  1. is the quality increasing?  Put another way:  will bugs in mature parts of the system be decreasing with time?
  2. are we on the right track to increasing quality? 
    1. how can we improve?


We didn't get to these questions in much depth, though we do have ideas for improvement and action items.


Two areas:


Regressions:


Edge cases:


Ideas:

Action items