Info |
---|
Please see http://docs.openlmis.org/en/latest/conventions/nfr.html# for the current status of defining the non-functional requirements for OpenLMIS. |
...
See External User Profiles for more details | System Specs | Mozambique | Malawi | Tanzania | Zambia | Proposed | Long-Term Proposal | OLMIS UAT | Malawi DEV
---|---|---|---|---|---|---|---|---|---
Administrator (Implementer or overall system admin) | Same as below | | | | | | | Facilities: 2729, Programs: 4, Full-supply: 1040, Non-full supply: 2, Products: 10061, Users: 1204, Facility types: 5 | Facilities: 877, Programs: 6, Full-supply: 271, Non-full supply: 1065, Products: 1495, Users: 513, Facility types: 13
N Program Supervisor (interested in national or regional view) | | | ?? | | | | Facilities: 336, Programs: 1, Full-supply: 24, Non-full supply: 2 | |
District Storeroom Manager (submits requisitions on behalf of facilities) | | | | All facilities in the district: an average of 25 | | Facilities: 673, Programs: 2, Full-supply: 1027, Non-full supply: 2 | | |
Storeroom Manager (or Store Manager who submits requisitions on behalf of its facility) | | | | 4 programs (ARVs, Lab Products, Essential Medicines, HIV test kits) | | Facilities: 673, Programs: 2, Full-supply: 1027, Non-full supply: 2 | | |
Warehouse clerk | <need to define> | | Facilities: 17, Programs: 1, Full-supply: 13, Non-full supply: 0 | | | | | |
...
- Malawi would like to see 6,000 products at the district level (slightly better connectivity: 3G). Malawi has 400 full-supply products and will find out the figure for health facilities (HF).
- Mozambique: up to 1,600 products; a health facility would manage fewer than 500.
- Zambia: 1,100 products at the central level, not FTAP.
- Tanzania: 1,400 products.
- The warehouses carry product lists of 5,000-6,000 items.
Performance
...
Application Performance Goals
We use the word 'goal' rather than 'requirement' here to denote that these performance goals, at least at the current time, should be considered aspirational rather than hard requirements. The current system has a long way to go to achieve these goals, but we believe that we will reach them (or get closer to reaching them) more quickly by setting aggressive targets rather than conservative ones.
Rather than defining a goal for each use case/scenario, we are defining an overall baseline and then calling out only the deviations from that baseline. We need to decide on an initial metric (or metrics) to use for measuring use case performance. Measuring the Time to First Byte (TTFB) for the server response is a good initial requirement: it limits the external factors that could influence the result, and it should be possible to extend tests that already exist in the OpenLMIS build pipelines (perhaps the contract tests) to gather this data. Other metric requirements that could be added later include:
- Page Upload Time (PUT): request
- Page Download Time (PDT): request → response → download
- Page Load Time (PLT): request → response → download → client rendering → page ready
These additional metrics would give a more representative measure of how the system feels to actual users; a TTFB under 200 ms means little if the download and rendering take another 59 seconds. Each of these metrics provides an indicator for a specific part of the overall performance:
Metric | Indicator |
---|---|
Page Upload Time (PUT) | Client-Side Processing Time and Data Volume |
Time to First Byte (TTFB) | Server-Side Processing Time |
Page Download Time (PDT) | Page Data Volume and Overall Request Timeline |
Page Load Time (PLT) | Client-Side Page and Data Processing Time |
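As a sketch of how TTFB data could be gathered in an automated test, the snippet below times the interval from sending a request to the arrival of the response's first byte. The local stand-in server and its 50 ms delay are illustrative assumptions, not part of OpenLMIS; in a pipeline the same timing wrapper would target real API endpoints.

```python
# Sketch: measuring Time to First Byte (TTFB) for a server response.
# The local server below is a hypothetical stand-in, not an OpenLMIS API.
import http.client
import http.server
import threading
import time

class SlowHandler(http.server.BaseHTTPRequestHandler):
    def do_GET(self):
        time.sleep(0.05)  # simulate 50 ms of server-side processing
        self.send_response(200)
        self.send_header("Content-Length", "2")
        self.end_headers()
        self.wfile.write(b"ok")

    def log_message(self, *args):
        pass  # silence per-request logging

def measure_ttfb_ms(host, port, path="/"):
    """Return milliseconds from request start to first byte of the response."""
    conn = http.client.HTTPConnection(host, port)
    start = time.monotonic()
    conn.request("GET", path)
    resp = conn.getresponse()  # blocks until status line and headers arrive
    resp.read(1)               # first byte of the body
    ttfb = (time.monotonic() - start) * 1000
    resp.read()                # drain the rest of the body
    conn.close()
    return ttfb

server = http.server.HTTPServer(("127.0.0.1", 0), SlowHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()
ttfb_ms = measure_ttfb_ms("127.0.0.1", server.server_address[1])
server.shutdown()
print(f"TTFB: {ttfb_ms:.0f} ms")
```

Outside of test code, `curl --write-out '%{time_starttransfer}'` reports the same quantity and may be easier to drop into an existing shell-based pipeline.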
A common understanding of the context for these goals will also be helpful, especially:
- When a use-case requires multiple actions, which one or ones should be used?
- When an action results in multiple requests, which one or ones should be used?
- What type and quality of network connection should be simulated?
  - From discussions with Team MtG: a basic 3G network and a limited CPU/RAM hardware profile
- What baseline latency should be simulated?
- What options for the above will allow us to most easily standardize and automate the gathering of this performance data?
Note that for the purposes of these goals we may want to keep the overall measurement (PLT) broad and not too closely tied to the underlying requests being made, while still being easily reproducible.
Info |
---|
The server hardware profile is assumed to be in line with the recommendations here. |
Use Case/Scenario | Profile | Relevant Working Data | Baseline 3.3.1: PUT (sec) | Baseline 3.3.1: TTFB (ms) | Baseline 3.3.1: PDT (sec) | Baseline 3.3.1: PLT (sec) | Proposed: PUT (sec) | Proposed: TTFB (ms) | Proposed: PDT (sec) | Proposed: PLT (sec)
---|---|---|---|---|---|---|---|---|---|---
(Default) | (Any) | (Profile Working Data) | | | | | | 500 | | 5
Login | Storeroom manager | N/A | | | | 36 | | (500) | | (5)
Initiate Requisition | District Storeroom Manager (see above) | # Processing Periods: 12 (plus general working data above) | | 5480 | 22 | 32 | | (500) | | (5)
Save (sync) Requisition | (same) | (same) | 2.7 | 7150 | 4.86 | 24 | | (500) | | (5)
Submit Requisition | (same) | (same) | 2.71 | 7590 | 2.43 | 34 | | (500) | | (5)
Authorize Requisition | (same) | | 2.71 | 7270 | 2.43 | 22 | | (500) | | (5)
Approve Requisition | N Program Supervisor | How many Requisitions are waiting for approval? Malawi: average 40 (per Malawi's processes, the districts prefer to approve the forms collectively, so ~40 forms are approved at a time; one outlier would like to approve 80 forms at one go) | 2.6 | 2430 | 2.43 | 25 | | (500) | | (5)
Batch Requisition Approval | N Program Supervisor | How many Requisitions? Malawi: average 40 | | | | (3.3.0) 104 | | 1000 | | 30
Convert to Order (one) | Warehouse clerk | Max number of approved requisitions waiting for conversion; Malawi: average 80 | | 1440 | | 5 | | (500) | | (5)
Convert to Order (multiple) | Warehouse clerk | How many approved requisitions? Malawi: 30-40 | | | | (8) 20 | | (8) 1000 | | (8) 10
Filter performance on the convert to order page | | Max number of approved requisitions waiting for conversion; define the # of variables for the filter | | | | | | | |
View Requisition (filter performance) | | | | | | | | 250 | | 0.5
Fulfill Order | | | | | | | | | |
...
Scalability
Requirement from CdI: The application should support at least 1700 users. The system should support at least 1000 concurrent users. The system should support 1500 health centers, with one user per health center. The system should also support an additional 50 facilities with an average of 3 users each.
This requirement may need more detail: what does it mean to "support" 1700 users? Likewise, what does usage by 1000 concurrent users actually look like? How many concurrent requests would that entail, and what types of concurrent operations are likely to be occurring?
- Concurrent requests per second
- Concurrency requirements for specific use cases
- How many concurrent Requisition submissions can the system handle?
- How many concurrent Stock Management changes?
- Number of active sessions
- How much memory does each active session require on the server?
- Reliability under load (processor, requests, etc.)
  - Does the server become unresponsive when resource usage is maxed out?
  - E.g., how would the system handle a DDoS attack?
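As a sanity check on the CdI requirement above, the per-facility user counts can be tallied. Note that they sum to 1,650 rather than 1,700, so the headline figure appears to include headroom for other user types; the source does not explain the gap.

```python
# Tallying the user counts stated in the CdI scalability requirement.
health_centers = 1500
users_per_health_center = 1
additional_facilities = 50
users_per_additional_facility = 3

total_users = (health_centers * users_per_health_center
               + additional_facilities * users_per_additional_facility)
total_facilities = health_centers + additional_facilities
print(total_users, total_facilities)  # 1650 users across 1550 facilities
```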
Availability
- Requirements for system uptime
- Timeliness of reporting data
- Expected maintenance tasks
  - Windows to perform these tasks
Network Usage
Client Side:
- Expected number of retries (note the possible interaction with server reliability)
- Timeouts for network connections
- Data Compression
Server Side:
- Network throughput under load
- Concurrent network connections
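The client-side retry and timeout items above can be sketched as a simple retry policy with exponential backoff. The retry count and delay values are illustrative assumptions, not figures defined by OpenLMIS, and the policy directly interacts with server reliability: every retry adds load, so an overloaded server can be made worse by aggressive retrying.

```python
# Sketch: client-side retry policy with exponential backoff.
# max_retries and base_delay are illustrative placeholders.
import time

def with_retries(operation, max_retries=3, base_delay=0.1):
    """Run `operation`, retrying on network-style failures with backoff."""
    for attempt in range(max_retries + 1):
        try:
            return operation()
        except OSError:
            if attempt == max_retries:
                raise  # retry budget exhausted; surface the error
            time.sleep(base_delay * (2 ** attempt))  # 0.1 s, 0.2 s, 0.4 s, ...

# Example: an operation that fails twice before succeeding.
calls = {"n": 0}
def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise OSError("simulated network error")
    return "response"

result = with_retries(flaky)
print(result, calls["n"])  # response 3
```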
Browser
This section outlines general performance metrics for the OpenLMIS-UI running in a web browser.
...