Non-Functional Requirements - Performance
Please see http://docs.openlmis.org/en/latest/conventions/nfr.html# for the current status of defining the non-functional requirements for OpenLMIS.
Profiles and Working Data Definitions
OpenLMIS is designed to be used in environments with differing levels of connectivity and by users who work with different sizes of working data, i.e. the data that is stored locally for offline or in-process use and/or the data a user works with at any given time.
The proposed working data is broken up into a proposal for the current system functionality and a long-term proposal that incorporates what is needed to adequately test the stock management features. The long-term working data will likely require significant architectural changes to some of the services, particularly in the UI and Offline areas, which is why it is not being considered for the current working data.
See External User Profiles for more details | System Specs | Mozambique | Malawi | Tanzania | Zambia | Proposed | Long-Term Proposal | OLMIS UAT | Malawi DEV |
---|---|---|---|---|---|---|---|---|---|
Administrator (Implementer or overall system admin) | Same as below | | | | | | | Facilities: 2729 | Facilities: 877 |
N Program Supervisor (interested in national or regional view) | | | | ?? | | | | Facilities: 336 | |
District Storeroom Manager (submits requisitions on behalf of facilities) | | | | | All facilities in the district: an average of 25 | | | Facilities: 673 | |
Storeroom Manager (or Store Manager who submits requisitions on behalf of their facility) | | | | | 4 Programs (ARVs, Lab Products, Essential Medicines, HIV test kits, | | | Facilities: 673 | |
Warehouse clerk | <need to define> | | | | | | | Facilities: 17 | |
Notes:
- Malawi would like to see 6,000 products at the district level (slightly better connectivity: 3G). Malawi has 400 products for full supply and will find out about HF.
- Mozambique: up to 1,600 products; an HF would manage fewer than 500.
- Zambia: 1,100 products at the central level, not FTAP.
- Tanzania: 1,400 products.
- The warehouses carry 5-6k product lists.
Performance
Application Performance Goals
We use the word 'goal' rather than 'requirement' here to denote that these performance goals, at least at the current time, should be considered aspirational rather than hard requirements. The current system has a long way to go to achieve these goals but we believe that we will reach them (or get closer to reaching them) more quickly by setting more aggressive targets rather than more conservative ones.
Rather than defining a goal for each use case/scenario, we define an overall baseline and call out only the deviations from it. Measuring the Time to First Byte (TTFB) of the server response is a good initial requirement: it limits the external factors that could influence the result, and tests already exist in the OpenLMIS build pipelines to gather this data. Other metric requirements that could be added later include:
- Page Upload Time (PUT): request
- Page Download Time (PDT): request → response → download
- Page Load Time (PLT): request → response → download → client rendering → page ready
These additional metrics would give a more representative measure of how the system feels to actual users; a TTFB under 200 ms means little to users if the download and rendering take another 59 seconds. Each of these metrics would provide an indicator for a specific part of the overall performance:
Metric | Indicator |
---|---|
Page Upload Time (PUT) | Client-Side Processing Time and Data Volume |
Time to First Byte (TTFB) | Server-Side Processing Time |
Page Download Time (PDT) | Page Data Volume and Overall Request Timeline |
Page Load Time (PLT) | Client-Side Page and Data Processing Time |
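The relationships between these metrics can be sketched as simple arithmetic over timestamps captured at each stage of the request timeline. The timing marks below are hypothetical illustrations of the definitions above, not an OpenLMIS API:

```python
from dataclasses import dataclass

@dataclass
class TimingMarks:
    """Hypothetical timestamps (in ms, relative to a shared clock)."""
    request_start: float  # client begins sending the request
    request_sent: float   # last byte of the request uploaded
    first_byte: float     # first byte of the response received
    download_end: float   # last byte of the response received
    page_ready: float     # client-side rendering complete

def metrics(t: TimingMarks) -> dict[str, float]:
    # Each metric spans a different slice of the request timeline.
    return {
        "PUT":  t.request_sent - t.request_start,  # upload time
        "TTFB": t.first_byte - t.request_start,    # server-side processing
        "PDT":  t.download_end - t.request_start,  # request → response → download
        "PLT":  t.page_ready - t.request_start,    # full timeline incl. rendering
    }

# Example: a fast server response on a slow page.
marks = TimingMarks(0.0, 50.0, 180.0, 900.0, 2400.0)
print(metrics(marks))  # TTFB is 180 ms, but the page is not ready until 2400 ms
```

This also illustrates the point above: a healthy TTFB can coexist with a poor PLT, which is why TTFB alone is only an initial requirement.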
A common understanding of the context for these goals will also be helpful, especially:
- When a use case requires multiple actions, which one or ones should be used?
- When an action results in multiple requests, which one or ones should be used?
- What type and quality of network connection should be simulated? From discussions with Team MtG: a basic 3G network and a limited CPU/RAM hardware profile.
- What baseline latency should be simulated?
- What options for the above will allow us to most easily standardize and automate the gathering of this performance data?
Note that for the purposes of these goals we may want to keep the overall measurement (PLT) broad and not too closely tied to the underlying requests being made while still being easily reproducible.
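As one illustration of how a reproducible TTFB measurement could be automated, the sketch below times a request against a throwaway local server. This is an assumption about how such a check might be scripted, not OpenLMIS pipeline code; the 150 ms delay is an arbitrary stand-in for server-side processing:

```python
import http.server
import threading
import time
import urllib.request

# Stand-in server simulating ~150 ms of server-side processing per request.
class SlowHandler(http.server.BaseHTTPRequestHandler):
    def do_GET(self):
        time.sleep(0.15)  # simulated processing delay before any bytes are sent
        body = b"ok"
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # silence per-request logging
        pass

server = http.server.HTTPServer(("127.0.0.1", 0), SlowHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

url = f"http://127.0.0.1:{server.server_address[1]}/"
start = time.monotonic()
with urllib.request.urlopen(url) as resp:
    resp.read(1)  # first byte of the body has arrived
    ttfb_ms = (time.monotonic() - start) * 1000
server.shutdown()

print(f"TTFB: {ttfb_ms:.0f} ms")
```

In a real pipeline the request would target an OpenLMIS endpoint under the simulated 3G conditions discussed above, and the measured value would be compared against the baseline goal.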
Server Hardware Profile
The server hardware profile is assumed to be in line with the recommendations here.