September 12 2017

Call Information

  • 9:00 AM PDT - Seattle
  • 12:00 PM EDT - New York, DC
  • 6:00 PM UTC+2 - Zambia
  • 7:00 PM EAT - Tanzania

  • Webex Link
  • Meeting Number: 192 173 465
  • Host Key: 352864
  • Audio Connection: +1-415-655-0001

Last Meeting Notes: August 29, 2017


Item | Lead (Time) | Notes

Software Development Update

  • Roadmap: The vaccine timeline is aggressive for November; the team will get an updated timeline soon. Gap Analysis features will be slotted into the roadmap if the proposal with Digital Square is accepted.
  • Current Sprint: Should finish Wednesday. A good chunk of beta features is in; the team is still working on some and hopes to have them done by the end of the month.

OpenHIE Supply Chain Sub-committee Meeting

  • Draft Agenda:
  • Details:
    • "We will be having the first OpenHIE Supply Chain Subcommunity call on Friday, September 15 at 11:00 a.m. EDT. (Click here for additional time zones)

      For additional information about this new group, including instructions on joining the mailing list as well as additional toll free country numbers, please click here. The wiki page is under construction, so please feel free to make or suggest changes.

      On the day of the call, please go to the OpenHIE Supply Chain Subcommunity and select today’s meeting to sign yourself into the meeting."

Mary Jo Kochendorfer (5 minutes)

OpenHIE is pulling together the agenda. Participants attending may include Chai, JSI, VR, UNICEF, USAID (?), and Gates (?).

Tenly talked with Paul and Jennifer yesterday.

Gap Analysis Update

  • Priorities are in (sheet); the suggested roadmap will be circulated via email (prior to the governance committee)

Thanks to all who gave priorities - almost all are there.

Offline, Mary Jo and Carl will work on weighting priorities (global vs. in-country) and bring this back to the PC via email. These will be presented to the governance committee next week.

Performance Deep Dive

  • Tracking performance
  • Approach to improving performance
Josh Zamor (20 minutes)

Any specific topics? Performance has been an issue in Malawi; the team would like to see approaches/options to (1) track performance from the server and client end and (2) improve performance.

  • The docs include a section on performance with three sub-areas (performance testing, performance data, performance tips)
  • Examples of metrics of interest are listed (e.g. calls to the server)
  • scaling is less of a focus - we are not Amazon
  • How to do testing - Taurus is the tool used (note the difference between performance testing and load testing; we focus more on performance, i.e. how things are working)
  • Taurus supports at least a dozen tools 
  • Run on JMeter (industry standard)
  • Very focused on CI - visible in Jenkins (see performance plugins; high-level trends). The performance-trend view is a popular one
  • Demo of test done - follow scenario in GitHub docs
  • Do some load testing (stress testing) - generally not a focus
  • How are jobs used to pass or fail things we are interested in? Criteria are in GitHub
  • Testing the actual server - want to be sure the hardware supports end users in the time specified; all runs on AWS. The build pipeline kicks off from the reference data service (performance tests, requisition performance tests)
  • Q. How do you mimic a scenario with network connectivity issues?
  • A. We don't have those mimics yet; it is a "where do we want to go next". We want to use metrics coming back from tests to infer what the network will experience. Other things can be set up (e.g. Network Link Conditioner on OS X)
  • Another "where to go next" - performance data. It doesn't mimic what clients see, but replicates the number of requisitions, etc. The team is still building this out; there are next steps at the end of the document. Will do "performance-oriented datasets"; some performance data is available in the current directory
  • Another "where to go" - everything now mimics RESTful endpoints. We also want to test what users see on the screen - drive Selenium tests through Taurus
  • Another "to do" - make our pipelines smarter
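The Taurus-driven workflow above (JMeter under the hood, pass/fail criteria gating CI) can be sketched as a minimal config file, run with `bzt perf.yml`. The endpoint, load figures, and thresholds below are illustrative assumptions; the project's actual criteria live in the OpenLMIS GitHub repositories.

```yaml
# Hypothetical Taurus config sketch - not the actual OpenLMIS test definition.
execution:
- executor: jmeter        # Taurus generates and runs a JMeter test plan
  concurrency: 10         # simulated users
  ramp-up: 1m
  hold-for: 5m
  scenario: requisitions

scenarios:
  requisitions:
    requests:
    - url: http://localhost:8080/api/requisitions   # placeholder endpoint

reporting:
- module: passfail        # fail the CI job when a criterion is breached
  criteria:
  - avg-rt>500ms for 30s, stop as failed
```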

Touch base on how offline is done and how this contributes to performance improvement

Parambir: Is there a plan that we can have a tool or methodology to monitor what’s going on at the client end to gather metrics? Testing extensively does help, but we need to gather data on what went wrong on the client end.

Josh: Nick and I have had discussions about how to track what's happening in the browser. One thing we do have (Malawi has it) is Google Analytics. You should be able to go to your Google Analytics page to see what the user has experienced.

Parambir: Does that capture network interruptions, or if there was something wrong in the browser?

Josh: We have to figure out what kind of data we send, and how the UI is going to collect that data while it's running. We have been talking about that, and we are going to have to be mindful of how we build it, so we don't send so much information that it can't go over the very network we are trying to report on.

Nick: We want to identify pages that are taking the most network time to load. We don’t have the details to parse what is happening yet.

Parambir: I’m trying to understand where we are at. As part of the testing, we should include something that we can gather data and review. It should be about gathering data in a real-time environment. Getting the telemetry data, at least that would really help us. If we need a small app at the client end, that would help. As long as we know which parameters we need to collect – not collecting personal data.

Josh: That would be a great place for the PC to think about which metrics you need to see. This helps us analyze whether we buy vs build. From the network perspective we have Scalyr. We can see how the server is doing. You can look at the web server to see the responses, response times from the server’s perspective. If you want to find out how specific java code is working, you can look at the log messages from the perspective of that function. So for a user request, you can see how long it took.
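As a starting point for the client-side metrics Parambir asks about, the browser's standard Navigation Timing API already exposes page-load phases. The sketch below is one illustrative way to reduce a timing entry to a few reportable numbers; the field and metric names are assumptions for illustration, not an OpenLMIS API.

```typescript
// Subset of the W3C navigation-timing fields used in this sketch.
interface NavTiming {
  startTime: number;               // navigation start
  responseStart: number;           // first byte received
  domContentLoadedEventEnd: number;
  loadEventEnd: number;            // full page load complete
}

// Hypothetical shape for metrics we might report to analytics.
interface PageMetrics {
  ttfbMs: number;      // time to first byte
  domReadyMs: number;  // DOM parsed and ready
  totalLoadMs: number; // full page load
}

function summarizeNavigation(t: NavTiming): PageMetrics {
  return {
    ttfbMs: t.responseStart - t.startTime,
    domReadyMs: t.domContentLoadedEventEnd - t.startTime,
    totalLoadMs: t.loadEventEnd - t.startTime,
  };
}

// In a browser this would read the real entry, e.g.:
//   const [nav] = performance.getEntriesByType("navigation");
//   report(summarizeNavigation(nav as PerformanceNavigationTiming));
```

A payload this small is cheap to send even over the poor networks being measured, which speaks to Josh's concern about reporting overhead.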

Mary Jo: Great next steps for the PC: define what metrics we want to see, to support the team in prioritizing what to focus on next for performance testing.

Josh: We could put together a couple of basic questions to start collaborating. Computer hardware, viruses, networks, and getting into the guts of OpenLMIS performance.

Parambir: Collecting the platform specs, browser version, etc. is important. If we can identify specific end users, it helps us know when the pain happens.

Mary Jo: I'll chat with the team and determine when to send out questions. Please do respond to email requests to help us think this through. Come prepared for the discussion so that we are putting our best foot forward.

Questions on upcoming work/features:

View Requisitions page enhancements: OLMIS-2700

  • What are the primary use cases for this page?
  • Are users interested in viewing historical requisitions?
  • Are there any comments on the filters and sorts?
Sam Im (10 min)
Member updates? Upcoming travel or opportunities? (10 minutes)



Attendees: Amanda BenDor, Ashraf, Parambir S Gill, Chris Opit, Christine Lenihan, Brandon Bowersox-Johnson, Mary Jo Kochendorfer, Josh Zamor, Nick Reid, Sam Im, Tenly Snow, Dércio Duvane, Alfred Mchau


Video: OpenLMIS Product Committee Meeting-20170912 1559-1.arf (download the WebEx video player here)


Does anyone have concerns with the suggested new design for batch approval view?

We are proposing a scroll bar to improve performance. The Malawi team and others familiar with our UI should review this improvement. Send us an email or ping us on Slack if you see any concerns with this design.
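The usual reason a scrollable region improves performance is that it lets the UI render only the rows currently in view rather than the whole batch. The sketch below shows the windowing arithmetic behind that idea; it is an illustration of the general technique, not the actual OpenLMIS-UI implementation.

```typescript
// Given the scroll position and fixed row height, compute which rows
// need to be rendered. Everything outside [first, last] can be skipped,
// so DOM size stays constant no matter how many requisitions exist.
function visibleRange(
  scrollTop: number,      // pixels scrolled from the top
  viewportHeight: number, // height of the scrollable area, in pixels
  rowHeight: number,      // fixed height of one row, in pixels
  totalRows: number
): [number, number] {
  const first = Math.floor(scrollTop / rowHeight);
  const last = Math.min(
    totalRows - 1,
    Math.ceil((scrollTop + viewportHeight) / rowHeight)
  );
  return [first, last];
}
```

For example, a 300px viewport with 30px rows renders about 11 rows at a time, whether the batch holds 50 requisitions or 5,000.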

Original look and feel:

Proposed new look and feel: 

The details of the proposed new design are in the ticket: OLMIS-3097

OpenLMIS: the global initiative for powerful LMIS software