2020 February 5 Tech Committee Meeting notes

Date

 

7am PST / 4pm CET

https://zoom.us/j/428462410

Attendees


Discussion items

Time | Item | Who | Notes
5m | Agenda and action item review | Josh Zamor
? | Performance Goals
  • What should they be?
  • How to get to them?
20m (next time) | Managing the infrastructure/tech stack | Sebastian Brudziński
  • What can we do to reduce the time spent fixing and maintaining the infrastructure and our tech stack in general?
    • Jenkins upgrades
    • Jenkins problems - random errors/running out of disk space
    • Jenkins goes down sometimes and needs a manual restart
    • Performance tests with MW dataset (quite some manual work to get it loaded to perftest and then restore the original one)
    • Deployment problems - Docker certs expiring (the need to re-provision)
    • Functional tests unstable
    • Performance tests unstable
    • more?
20m | New batch endpoints | Mateusz Kwiatkowski
10m | Technical Committee call schedule | Josh Zamor
  • Poll results: https://forum.openlmis.org/t/technical-committee-meetings-in-2020/5431
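The expiring Docker cert problem mentioned above could be caught before it forces a re-provision. A minimal sketch, assuming the cert's `notAfter` field is available as text (e.g. from `openssl x509 -enddate -noout -in cert.pem`) and using a 30-day warning window, which is an assumption rather than an OpenLMIS convention:

```python
# Sketch: warn when a Docker TLS cert is close to expiry so the host can be
# re-provisioned before deployments start failing.
# The 30-day window is an assumption, not an OpenLMIS convention.
import ssl
import time
from typing import Optional

WARN_WINDOW_DAYS = 30

def expires_soon(not_after: str, days: int = WARN_WINDOW_DAYS,
                 now: Optional[float] = None) -> bool:
    """not_after is the cert's notAfter field, e.g. 'Jun 26 21:41:46 2025 GMT'."""
    expiry = ssl.cert_time_to_seconds(not_after)  # parses the OpenSSL date format
    now = time.time() if now is None else now
    return expiry - now < days * 86400
```

A cron job running this (or the equivalent `openssl` one-liner) against each host's certs would turn a surprise deployment failure into a scheduled maintenance task.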

Notes


Performance Goals


  • The last release (3.7) started the performance discussions; improvements were wanted in 3.8 (which did focus on performance, with 3 dev sprints)
  • 3.8 summary:
    • Did improve results with a huge number of products (10k products)
    • Not as much improvement with fewer products (1k products)
    • Overall, stakeholders were unhappy with the improvement
  • What should they be?
    • how big?
    • times?
      • A 500 ms target is set for endpoints; however, there's doubt that it's achievable for some.
      • We've spent a lot of time on endpoint performance; we should focus more on the apparent performance of pages
      • We should get more realistic about performance characteristics using other measures such as first display, Time to Interactive (TTI), and Time to First Byte (TTFB)
        • We've done some of this in the manual testing
  • How to get to them?
    • Caching in the UI (with versioning) has helped
    • Some of the UI caching improvements were lost because additional data was needed
    • We've perhaps achieved all the low-hanging fruit; the next step for big improvements would be an infrastructure change (caching at the microservice level, or JSON normalization, i.e. de-duplicating data repeated in a response): we would need experimentation to find out whether the approach helps
      • Have a shared dictionary of products - we partly achieved this in that a requisition no longer includes full product data, but rather references it
      • One idea: we could pre-load the dictionary; however, a user may not need all products, so we would need a pattern for loading a slice of the dictionary
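The JSON-normalization idea above (a shared product dictionary, with line items holding only references) can be sketched as follows; the field names (`lineItems`, `orderable`, `id`) are illustrative assumptions, not the actual OpenLMIS schema:

```python
# Sketch: normalize a requisition-style response so each product's full data
# appears once in a shared dictionary and line items only hold product ids.
# Field names here are illustrative assumptions, not the OpenLMIS schema.

def normalize(response: dict) -> dict:
    products = {}
    line_items = []
    for item in response["lineItems"]:
        product = item["orderable"]
        products[product["id"]] = product   # repeated products collapse into one entry
        slim = dict(item)
        slim["orderable"] = product["id"]   # keep only the reference
        line_items.append(slim)
    return {"lineItems": line_items, "products": products}
```

With 10k products repeated across hundreds of line items, the payload savings from this kind of de-duplication could be large, which is why the notes call for experimentation to confirm it.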
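For the 500 ms endpoint target discussed above, one lightweight way to make violations visible is a timing wrapper around handlers. This is a sketch only; the in-memory `violations` list stands in for whatever reporting hook an implementation would actually use, and is not an existing OpenLMIS mechanism:

```python
# Sketch: measure a handler against the 500 ms endpoint budget from the notes
# and record violations. The `violations` list is a hypothetical stand-in for
# a real metrics/reporting hook.
import time
from functools import wraps

BUDGET_MS = 500
violations = []

def within_budget(endpoint_name):
    def decorator(fn):
        @wraps(fn)
        def wrapper(*args, **kwargs):
            start = time.perf_counter()
            try:
                return fn(*args, **kwargs)
            finally:
                elapsed_ms = (time.perf_counter() - start) * 1000
                if elapsed_ms > BUDGET_MS:
                    violations.append((endpoint_name, elapsed_ms))
        return wrapper
    return decorator
```

Wiring something like this into the performance tests would turn "doubt that it's achievable for some endpoints" into a concrete list of which endpoints miss the budget and by how much.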


Forum topic:

  • Want something actionable (achievable with current resources in a reasonable timeframe)
  • Room for big ideas
  • Revisit in next call - hopefully with something we can do or are doing


New Batch Endpoints

  • See notes in agenda


Action Items

  • Klaudia Pałkowska: write a one-page design document for how core supports batch requisitions (the endpoints Requisitions has now, the endpoints being proposed, and how they fit and work together). The goal is to support a reasonable base in OpenLMIS that many implementations will reasonably use and build on.
  • Josh Zamor to start the performance discussion in the forum: summary, themes, and a call for next steps