The last release (3.7) started the performance discussions; improvements were wanted in 3.8, which did focus on performance with 3 dev sprints
Results did improve with a huge number of products (10k products)
There was not as much improvement with fewer products (1k products)
Overall, stakeholders are unhappy with the improvement
What should the performance targets be?
A 500 ms target is set for endpoints; however, there's doubt that's achievable for some of them.
We've spent a lot of time on endpoint performance; however, we should focus more on the apparent (perceived) performance of pages
We should get more realistic about performance characteristics on other measures, such as first display, Time to Interactive (TTI), and Time to First Byte (TTFB)
We've done some of this in the manual testing
How do we get these measures?
Caching in the UI (with versioning) has helped
Some of the UI caching improvement was lost due to additional data being needed
We've maybe achieved all the low-hanging fruit; the next step for big improvements would be an infrastructure change (caching at the microservice level, or JSON normalization, i.e. de-duplicating data that is repeated in a response). Experimentation would be needed to find out whether the approach helps.
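To make the normalization idea concrete, here is a minimal sketch of de-duplicating repeated product data in a response: embedded product objects are collected once into a shared dictionary, and each line item keeps only an id reference. The field names (`lineItems`, `orderable`, `orderableId`) are illustrative assumptions, not the actual OpenLMIS response schema.

```python
def normalize_response(response):
    """Replace embedded product objects with id references plus a shared dict."""
    products = {}
    normalized_items = []
    for item in response["lineItems"]:
        product = item["orderable"]
        products[product["id"]] = product     # stored once, even if repeated
        slim = dict(item)
        slim["orderableId"] = product["id"]   # keep only the reference
        del slim["orderable"]
        normalized_items.append(slim)
    return {"lineItems": normalized_items, "products": products}

raw = {
    "lineItems": [
        {"quantity": 10, "orderable": {"id": "p1", "name": "Paracetamol"}},
        {"quantity": 5,  "orderable": {"id": "p1", "name": "Paracetamol"}},
        {"quantity": 2,  "orderable": {"id": "p2", "name": "Ibuprofen"}},
    ]
}
slim = normalize_response(raw)
# The "p1" product, repeated twice in the raw response, appears once in slim["products"].
```

Whether the smaller payload outweighs the extra client-side lookup step is exactly the kind of question the experimentation above would need to answer.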
Have a shared dictionary of products; we did achieve this in that a requisition now doesn't include full product data, but rather a reference to it
One idea: we could pre-load the dictionary; however, a user may not need all products, so we would need a pattern for loading a slice of the dictionary
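The slice-loading pattern above could be sketched as follows, assuming a local cache and a hypothetical batch lookup function (`fetch_products` is not a real OpenLMIS API); only the ids a page actually needs, and that are missing from the cache, are fetched:

```python
def load_slice(needed_ids, cache, fetch_products):
    """Return the requested products, fetching only those missing from the cache."""
    missing = [pid for pid in needed_ids if pid not in cache]
    if missing:
        cache.update(fetch_products(missing))  # one batch request for the gaps
    return {pid: cache[pid] for pid in needed_ids}

cache = {"p1": {"id": "p1", "name": "Paracetamol"}}
calls = []

def fetch_products(ids):
    calls.append(list(ids))  # record what was actually requested
    return {pid: {"id": pid, "name": "stub"} for pid in ids}

page_products = load_slice(["p1", "p2"], cache, fetch_products)
# Only "p2" was fetched over the wire; "p1" came from the cache.
```

Combined with versioned caching, this avoids both pre-loading the whole dictionary and re-fetching products the UI already holds.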
We want something actionable (achievable with current resources in a reasonable timeframe)
Room for big ideas
Revisit in the next call, hopefully with something we can do or are already doing
New Batch Endpoints
See notes in agenda
Klaudia Pałkowska: write a one-page design document for how core supports batch requisitions (the endpoints Requisitions has now, the endpoints being proposed, and how they fit and work together). The goal is to support a reasonable base in OpenLMIS that many implementations will reasonably use and build off of.
Josh Zamor to start the performance discussion in the forum: summary, themes, and a call for next steps