Introduction
User Acceptance Testing (UAT) for SELV is intended as an opportunity for HSG to ensure the system (including both the website and reporting) is ready to be used by VillageReach and PAV staff. Prior to UAT, ISG has conducted functional testing of the individual components of SELV, as well as basic end-to-end tests from data entry through reporting. UAT moves beyond basic functional validation to focus on actual user behavior in the field, ensuring that the overall system meets user needs and works as intended.
Roles & Responsibilities
HSG & ISG are jointly responsible for UAT testing; however, HSG is ultimately responsible for approving the results of UAT and deciding to roll out to production. Roles & responsibilities are as follows:
Task / Deliverable | Responsible | Approver | Consulted | Informed
---|---|---|---|---
UAT Plan | Sarah | Timoteo | Wendy, Vidya, Ron, Josh | n/a
UAT Execution | Timoteo, Vidya, Sarah | Timoteo | n/a | Wendy, Ron, Josh, Allen
Defect Triage | Sarah | Wendy | Timoteo, Vidya, Ron, Josh, Allen | n/a
UAT Coordination | Sarah | n/a | n/a | Wendy, Ron, Josh, Vidya, Allen
UAT Approval | Timoteo | Wendy | Vidya | Ron, Josh, Sarah, Allen
RACI Definitions:
Responsible: Those who do the work to achieve the task. There is at least one role with a participation type of responsible, although others can be delegated to assist in the work required.
Accountable (also approver or final approving authority): The one ultimately answerable for the correct and thorough completion of the deliverable or task, and the one who delegates the work to those responsible. In other words, the accountable must sign off (approve) on the work that the responsible provides. There must be only one accountable specified for each task or deliverable.
Consulted (sometimes counsel): Those whose opinions are sought, typically subject matter experts, and with whom there is two-way communication.
Informed: Those who are kept up-to-date on progress, often only on completion of the task or deliverable, and with whom there is just one-way communication.
Location
All UAT testing will be done in the VillageReach Mozambique office. A test may also be executed in Seattle, but it should not be considered "passed" until it has been verified locally in Mozambique.
UAT Testing Approach
Testing can take many forms. One approach is to write specific test cases (scripted tests) in advance of test sessions. An alternative is exploratory testing, where general areas of testing are defined in advance but actual test execution is left up to the tester. Exploratory testing is more fluid and varied, exercising less predictable paths through the system; because a wider range of scenarios is tried, it is more likely to uncover bugs. Because SELV has already been tested using a scripted approach (by ThoughtWorks), UAT will be primarily exploratory:
- Scripted testing will be used for a basic set of SELV "smoke tests" that can be executed repeatedly whenever there is a new build (see the attached SELV Smoke Test Tracking Sheet.xlsx); a sketch of how the most basic of these checks might be automated follows this list
- Exploratory testing against the test strategies defined in the UAT Testing Strategy section below will be used for the majority of system testing
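Where smoke tests will be repeated on every build, the most basic availability checks can be scripted so they run in seconds. The sketch below is illustrative only and is not part of the tracking sheet; the base URL and page paths are assumptions, not the real SELV addresses.

```python
import sys
import requests

# Assumed test-environment address and pages; replace with the real SELV URLs.
BASE_URL = "https://selv-test.example.org"
SMOKE_PATHS = ["/", "/public/pages/login.html"]

def run_smoke_checks() -> bool:
    """Fetch each page and report whether it responds with HTTP 200."""
    all_passed = True
    for path in SMOKE_PATHS:
        try:
            status = requests.get(BASE_URL + path, timeout=10).status_code
        except requests.RequestException as exc:
            print(f"FAIL {path}: {exc}")
            all_passed = False
            continue
        passed = status == 200
        print(f"{'PASS' if passed else 'FAIL'} {path}: HTTP {status}")
        all_passed = all_passed and passed
    return all_passed

if __name__ == "__main__":
    sys.exit(0 if run_smoke_checks() else 1)
```

A check like this only confirms the site is up and serving pages; the scripted smoke tests in the tracking sheet still need to be executed by hand.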
UAT Testing Strategy
Validations fall into six major categories. The testing strategy for each category is listed below.
1 - SELV Website for Field Coordinators
- A. Volume - Argentina to enter Maputo Jan 2014 data
- B. "Real-World" Scenarios - Mimic actual field coordinator data entry conditions and scenarios, including online & offline usage
- C. UI Coverage - Exercise each option available in the UI to verify functionality and translation
- D. Facility Special Cases - Enter distribution data into SELV for special cases such as No Visit / Not in DLS, routine data collection only, or a visit where all fields are NRd
- E. Browser Compatibility - Switch between the most frequently used browsers/versions while testing. Focus testing on the target browser/version for Field Coordinators.
2 - SELV Website for Administrators
- A. System Configuration - Configure the system for production readiness, with the byproduct of testing admin functionality
- B. UI Coverage - Exercise each option available in the UI to verify functionality and translations
- C. "Real-World" Scenarios - Execute typical administrator functions as defined by Timoteo. This will include the Data Edit Tool.
- D. Monthly Reporting Scenario - Work through the business process required to verify data completeness prior to distributing monthly reports (a scripted sketch of this check follows this list)
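Activity 2D lends itself to a scripted completeness check alongside the manual walkthrough. The sketch below is a minimal illustration, assuming hypothetical CSV exports with facility_code and period columns; the file and column names are assumptions, not the actual SELV export format.

```python
import pandas as pd

# Hypothetical exports; file and column names are assumptions.
records = pd.read_csv("selv_distributions.csv")    # one row per facility/period
facilities = pd.read_csv("facility_list.csv")      # master list of facilities

# Build the full facility x period grid and flag combinations with no data.
periods = records["period"].unique()
expected = pd.MultiIndex.from_product(
    [facilities["facility_code"], periods], names=["facility_code", "period"]
)
actual = pd.MultiIndex.from_frame(
    records[["facility_code", "period"]].drop_duplicates()
)
missing = expected.difference(actual)

print(f"{len(missing)} facility/period combinations have no data:")
print(missing.to_frame(index=False).to_string(index=False))
```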
3 - vrMIS Data Migration
- A. Compare vrMIS and SELV Reports - Randomly select a period and confirm vrMIS reports and SELV reports match (allowing for known exceptions); a sketch of an automated comparison follows this list
- B. Spot Check - Two-hour time box to explore comparing vrMIS and SELV data
- C. Data Set Completeness - Check that historical data for all periods and facilities has been migrated as expected (check for newly added facilities, vaccines, etc.)
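For strategy 3A, once the same report has been exported from both systems, the comparison itself can be scripted so testers only review the differences. A minimal sketch follows; the file names, key columns, and the assumption that both exports cover the same rows and columns are all illustrative, not the actual export formats.

```python
import pandas as pd

KEYS = ["facility_code", "period", "vaccine"]

# Hypothetical exports of the same report from each system.
vrmis = pd.read_csv("vrmis_report.csv").set_index(KEYS).sort_index()
selv = pd.read_csv("selv_report.csv").set_index(KEYS).sort_index()

# compare() requires identically labeled frames (same rows and columns in
# both exports); it returns only the cells whose values differ.
diff = vrmis.compare(selv)
print(f"{len(diff)} rows differ between vrMIS and SELV:")
print(diff)
```

Any rows listed can then be checked against the known exceptions before being logged as defects.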
4 - VillageReach Analytical (English) Report Workbook
- A. Facility Special Cases - For cases entered in 1D, confirm that facility-level reports include or exclude the data as appropriate.
- B. Provincial-Level Data Check - For data entered by Argentina in activity 1A, confirm Analytical Workbook reports match vrMIS for the same period (with known exceptions)
- C. HSG Review - Check that information presented on reports meets HSG expectations and needs
5 - Provincial (Portuguese) Report Workbooks
- A. Compare Provincial Reports with Analytical Report - For a selected month, spot check to verify that data in each provincial workbook matches data in the analytical report
- B. Translation - Select a particular provincial workbook and verify all translations are correct
- C. HSG Review - Check that information presented on reports and PDF meets HSG expectations and needs. Confirm HSG understands how to use Tableau.
6 - SELV on Tablet
- A. Connectivity in Mozambique - Verify the connectivity solution works in-country
- B. UI Coverage - Exercise each option available in the UI to verify functionality and translation
- C. "Real-World" Scenarios - Mimic actual field coordinator data entry conditions and scenarios, including online & offline usage
- D. Break In - Attempt to access functionality other than OpenLMIS
- E. End-to-End Field Test - Field Coordinators to bring the tablet into the field and confirm usage of hardware, software, and connectivity in an actual vaccine distribution setting
- F. Version Updates - Verify version updates can be deployed to the tablet (a version-check sketch follows this list)
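For activity 6F, the installed app version can be read directly off a connected tablet, which makes it easy to confirm that an update actually landed. A sketch using adb follows; the package id is a hypothetical placeholder, not the real SELV app id.

```python
import subprocess

# Hypothetical package id; replace with the real SELV/OpenLMIS app id.
PACKAGE = "org.openlmis.selv"

# Query the installed package info over adb (tablet connected, USB debugging
# enabled) and print the versionName lines from the dumpsys output.
output = subprocess.run(
    ["adb", "shell", "dumpsys", "package", PACKAGE],
    capture_output=True, text=True, check=True,
).stdout

for line in output.splitlines():
    if "versionName" in line:
        print(line.strip())
```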
Test Strategy Details
Some test strategies, particularly those that aim to mimic real-world field conditions or that exercise specific data and reporting scenarios, require further definition before UAT. These are outlined below:
1B - SELV Website for Field Coordinators Real World Scenarios
<need input from Timoteo>
1D - Facility Special Cases
- Visited / Reporting
- Visited / Not Reporting
- Not Visited / Reporting
- Not Visited / Not Reporting
- Not Visited / Not in Program
- Not Visited / No Fuel or Per Diem
2A - Administrator System Configuration
- Remove Pull Roles from Admin Accounts
- Add Wendy and Vidya as Users
2C - Administrator Real World Scenarios
<need input from Timoteo>
4A - Report Special Cases
<need to complete this table>
Case | Reporting? | Health Units Reporting | Child Coverage Rate | Full Delivery
---|---|---|---|---
Visited / Reporting | Y | 1 | |
Visited / EPI Use: All N/A, Child Coverage: All N/A | N | 0 | |
Not Visited / Reporting | Y | 1 | |
Not Visited / Not Reporting | | 0 | |
Not Visited / Reason: Not in Program | | n/a | |
Not Visited / Reason: No Fuel or Per Diem | | n/a | |
Measles Doses on Child Coverage: All Null | | | |
Measles Doses on Child Coverage | | | |
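Once the table above is completed, its expected values can double as a checklist during testing. The sketch below encodes the cells that are already filled in as data and compares them against observed report values; the field names and the shape of the observed data are assumptions for illustration, and None marks cells still to be completed.

```python
# Expected indicator values from the table above; None marks cells that are
# not yet filled in and are skipped by the check. Field names are assumptions.
EXPECTED = {
    "Visited / Reporting": {"reporting": "Y", "health_units_reporting": 1},
    "Visited / EPI Use and Child Coverage all N/A": {"reporting": "N", "health_units_reporting": 0},
    "Not Visited / Reporting": {"reporting": "Y", "health_units_reporting": 1},
    "Not Visited / Not Reporting": {"reporting": None, "health_units_reporting": 0},
    "Not Visited / Reason: Not in Program": {"reporting": None, "health_units_reporting": "n/a"},
}

def check_case(case: str, observed: dict) -> list:
    """Return a list of mismatches between observed indicators and the table."""
    failures = []
    for field, expected in EXPECTED[case].items():
        if expected is not None and observed.get(field) != expected:
            failures.append(f"{case}: {field}={observed.get(field)!r}, expected {expected!r}")
    return failures

# Example usage with made-up observed values read off a report:
print(check_case("Visited / Reporting",
                 {"reporting": "Y", "health_units_reporting": 1}))  # -> []
```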