2020 June 24 Tech Committee Meeting notes

Date

7am PST / 4pm CEST

https://meet.google.com/cdi-fwzv-byh


Attendees

Discussion items

Time | Item | Who | Notes
5m | Agenda and action item review


Performance testing
  • No progress at the moment; focus is on COVID. This should change back to 3.10 work in July.
  • Will figure out more on resources in 2 weeks.
  • If it's chosen in 2 weeks, we want to know:
    • What are the biggest risks?
      • The results are shaky: they vary, which skews results and causes passes to fail.
        • Put profile timings in the Java code to see how stable individual sections are.
      • The results are narrow: what does an individual endpoint result mean to the user?
        • The goal of manual perf testing was to test whether performance was better or worse than a specific release and dataset. The current setup in automated testing tests an endpoint against a convention; we'd rather it be a pass-gate with respect to a specific release.
        • The goal in stock management is to see it from an end-user perspective.
      • Demo data is used in automated perf testing, while manual perf testing is done on a country snapshot from 2 years ago. How do we make a fair comparison between automated-testing results and this old dataset?
      • What do you do with the result? Reports are a bit cryptic...
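The idea of putting profile timings in the Java code could look like the following minimal sketch (a hypothetical helper, not OpenLMIS code): run a section repeatedly with System.nanoTime() and report mean and spread, so unstable sections stand out.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical helper for checking how stable a code section's timing is.
public class SectionTimer {

    // Runs the section `runs` times; returns {mean, stddev} in nanoseconds.
    public static double[] profile(Runnable section, int runs) {
        List<Long> samples = new ArrayList<>();
        for (int i = 0; i < runs; i++) {
            long start = System.nanoTime();
            section.run();
            samples.add(System.nanoTime() - start);
        }
        double mean = samples.stream()
                .mapToLong(Long::longValue).average().orElse(0);
        double variance = samples.stream()
                .mapToDouble(s -> (s - mean) * (s - mean))
                .average().orElse(0);
        return new double[] {mean, Math.sqrt(variance)};
    }

    public static void main(String[] args) {
        // Time a trivial section 50 times; a large stddev relative to the
        // mean suggests the section is too noisy to gate a build on.
        double[] stats = profile(() -> {
            long acc = 0;
            for (int i = 0; i < 100_000; i++) acc += i;
        }, 50);
        System.out.printf("mean=%.0fns stddev=%.0fns%n", stats[0], stats[1]);
    }
}
```

A ratio of stddev to mean tracked per section over time would show which sections are stable enough to use as a pass-gate.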

Reporting stack progress

Moving away from "master" branch (forum post / next time) | Josh Zamor
  • Master → Main for v3
  • With the number of v3 repositories we have, we'll need tools
    • doc links
    • build scripts

AOB

Notes


SIGLUS recording: https://openlmis.atlassian.net/wiki/x/EgAUNg

Action Items

  • Josh Zamor: You have an issue assigned to you: COV-110
  • Josh Zamor: create a forum post on what "Would like to lay out the dev-to-deployment cycle (for upgrading)" means and what done looks like


OpenLMIS: the global initiative for powerful LMIS software