3.2.1 Regression Test Plan
The purpose of this test plan is to outline regression testing for the 3.2.1 release candidate.
Roles & Responsibilities
QA Team Leads
Sam and Joanna will be the QA leads for each team.
Owner | Responsibilities | Questions |
---|---|---|
Sam (for Team ILL), Joanna (for Team Parrot) | | |
Team ILL & Team Parrot | | |
Bug Triage team
Members | Responsibilities | Questions |
---|---|---|
 | | |
Process Overview
Test Cycles
Test Cycles will be created and managed by Sam and Joanna for their respective teams. Each QA lead will assign test cases to team members. Each day, testing will start in the morning; at the end of the day, Sam and Joanna will triage results, report their team's testing status, and prepare for the next day of testing.
Guidelines for executing test cases within the correct Test Cycle are listed below:
- The component leads will identify any missing test cases and detail them in the Test Case Coverage section below.
- The QA leads will then assign the test case to the Test Cycle and to the team members who will execute it. The process for creating Test Cycles is documented here: Testing Process & Test Plans#CreatingaTestCycle
- The Test Cycle will include tests for all components. Sam and Joanna will create a minimum of three Test Cycles per team for the 3.2.1 regression testing (listed below); more may be needed depending on the bug-fix cycles.
- Regression Phase 1 - Parrot, Regression Phase 1 - ILL
- Bug Fix Phase 1 - Parrot, Bug Fix Phase 1 - ILL
- Regression Phase 2 - Parrot, Regression Phase 2 - ILL
- If a test case has been executed and ends in a status that requires retesting, Sam or Joanna must create a new Test Cycle and assign the test case before testing can begin. Do not run a test case using the Ad hoc test cycle.
- Sam and Joanna will determine, for their respective teams, when a new Test Cycle is needed and will assign the test executions to team members.
Estimated daily schedule
Time of Day | 11/1 | 11/2 (start of test run) | 11/3 (bug review day) | 11/6 | 11/7 |
---|---|---|---|---|---|
Morning | At end of Showcase, determine if we are ready to publish the Release Candidate. Team ILL releases each component and the Ref-Distro Release Candidate. Team ILL deploys RC1 to the UAT server (so we are ready for Test Cycle Phase 1). | Team Parrot executes Test Cycle Phase 1 (Joanna Bebak & Nikodem Graczewski). Joanna triages bugs as needed. | Bug fix day in test. | Test Cycle Bug Fix in test. Malawi should start testing by Monday 11/6. | Execute smoke tests (TBD). Execute Final Test Cycle Phase 2. |
Midday | Joanna reviews bugs and developers' proposed solutions. Team ILL executes Test Cycle Phase 1. Sam & Joanna triage bugs after the test cycle is completed. | Team ILL may execute manual UI Performance Testing with 3.2.1-RC1. | Team ILL may execute manual UI Performance Testing with 3.2.0. | Team ILL may execute manual UI Performance Testing with older versions or newer RCs (TBD). | |
End of Day | Prep for next day test executions | Review of bugs and presenting that to the team | Prep for next day test executions | | Final testing status and Go/No-go |
Test Case Coverage
Each component owner is responsible for ensuring there is complete test coverage in Zephyr before testing begins.
- Review test cases for your component: search in Zephyr for all test cases by component. Instructions are here: Testing Process & Test Plans#SearchforTestCasesbyComponentorbyLabel
- Compare the test cases to the feature for your component (links to the features are in the table below).
- Missing test scenarios must be listed in the table below.
- Once the test cases are created and labeled with the component, add the test case number to the table below so that Joanna or Sam can add them to the correct Test Cycle.
10/23 decision: We will not include CCE test cases in the 3.2.1 Regression testing because CCE will not be included in the 3.2.1 release candidate.
Component | Component Label | Owner | Missing Test Scenarios | Zephyr Test Case |
---|---|---|---|---|
Requisitions (this testing includes Manage POD and Orders) | Requisition | | | |
Cold Chain Equipment | CCE | | OLMIS-3430, OLMIS-3431, OLMIS-3432, OLMIS-3433 | |
Stock Management | StockManagement | | | |
Administration UI | Administration | | Joanna Bebak: when we have testers completing administrative functions, we will need to make sure they don't change any users that are part of the test cases in the Testing Data section below. Sam Im removed test cases OLMIS-3132 and OLMIS-3133 because we have not implemented these yet. | |
Manual UI Performance Testing | | Brandon Bowersox-Johnson | | (not in Zephyr currently) |
Users and Environment
We will use both the test and UAT environments to complete this regression testing. The first phase of regression testing will be done in the UAT environment. If bugs are found, the teams will work on the bug fixes and execute tests in the test environment. Once the bugs have been resolved, the teams will coordinate deployment to the UAT environment for the final regression testing phase.
Test Cycle | Test environment |
---|---|
Regression Phase 1 - Parrot, Regression Phase 1 - ILL | https://uat.openlmis.org |
Bug fix testing | https://test.openlmis.org |
Regression Phase 2 - Parrot, Regression Phase 2 - ILL | https://uat.openlmis.org |
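As a sanity check before each test day, the environments in the table above can be verified programmatically. Below is a minimal sketch using Python and the requests library; it only assumes that each server returns HTTP 200 on its landing page.

```python
# Quick reachability check for the regression environments before a test day
# starts. Expects each base URL from the table above to answer with HTTP 200.
import requests

ENVIRONMENTS = {
    "UAT (Regression Phase 1 & 2)": "https://uat.openlmis.org",
    "Test (bug fix testing)": "https://test.openlmis.org",
}

def check_environments():
    for name, url in ENVIRONMENTS.items():
        try:
            response = requests.get(url, timeout=15)
            status = "UP" if response.status_code == 200 else f"HTTP {response.status_code}"
        except requests.RequestException as error:
            status = f"DOWN ({error})"
        print(f"{name}: {url} -> {status}")

if __name__ == "__main__":
    check_environments()
```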
UI testing
This section lists the types of devices and browsers that are supported and which are prioritized for manual testing.
Past versions of OpenLMIS have officially supported Firefox. For OpenLMIS 3.2.1, we are prioritizing support of Chrome because of global trends (e.g., see Mozambique Stats) along with its developer tools and its auto-updating nature.
For QA testing of OpenLMIS our browser version priorities are:
- Chrome 52+ (test on Chrome 52 and Chrome latest)
- Firefox 48+ (test on Firefox 48 and Firefox latest)
The next most widely used browser version is IE 11, but we do not recommend testing or bug fixes specifically for Internet Explorer compatibility in OpenLMIS.
The operating systems on which we should test are:
- Windows 7 (by far the most widely used according to Mozambique, Zambia, and Benin data, and globally)
- Windows 10
Note: The QA team is doing some testing using Linux (Ubuntu) workstations. That is fine for testing the API, but Linux is not a priority environment for UI testing or for final testing of OpenLMIS. It is important to test the UI using Chrome and Firefox on Windows 7 and Windows 10. We are utilizing BrowserStack to assist with testing on Windows.
In other words, OpenLMIS developers and team members may be using Mac and Linux environments. It is fine to report bugs happening in supported browsers (Chrome and Firefox) on those platforms, but we won't invest QA time in extensive manual testing on Mac or Linux.
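To tie together the browser and operating-system priorities above, the sketch below drives that matrix through a remote Selenium hub such as BrowserStack. The hub URL, the BROWSERSTACK_* environment variables, the `bstack:options` capability keys, and the specific browser/OS pairings are assumptions for illustration; check them against BrowserStack's current documentation and your own account before relying on this.

```python
# Hedged sketch: run the plan's browser/OS priorities through a remote
# Selenium hub (e.g. BrowserStack). Capability keys and pairings are assumed.
import os
from selenium import webdriver
from selenium.webdriver.chrome.options import Options as ChromeOptions
from selenium.webdriver.firefox.options import Options as FirefoxOptions

HUB_URL = (
    f"https://{os.environ['BROWSERSTACK_USERNAME']}:"
    f"{os.environ['BROWSERSTACK_ACCESS_KEY']}@hub-cloud.browserstack.com/wd/hub"
)

# Browser and OS priorities from this test plan; the pairings are illustrative.
MATRIX = [
    ("chrome", "52", "Windows", "7"),
    ("chrome", "latest", "Windows", "10"),
    ("firefox", "48", "Windows", "7"),
    ("firefox", "latest", "Windows", "10"),
]

def make_options(browser, version, os_name, os_version):
    options = ChromeOptions() if browser == "chrome" else FirefoxOptions()
    options.browser_version = version
    options.set_capability("bstack:options", {"os": os_name, "osVersion": os_version})
    return options

for browser, version, os_name, os_version in MATRIX:
    driver = webdriver.Remote(
        command_executor=HUB_URL,
        options=make_options(browser, version, os_name, os_version),
    )
    driver.get("https://uat.openlmis.org")
    print(browser, version, os_name, os_version, "->", driver.title)
    driver.quit()
```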
We have asked different OpenLMIS implementations to share their Google Analytics data to better inform how we prioritize and invest in browser and device support going forward.
Supported Devices
OpenLMIS 3.2.1 is only officially supporting desktop browsers with use of a pointer (mouse, trackpad, etc). The UI will not necessarily support touch interfaces without a mouse pointer, such as iPad or other tablets. For now, we do not need to conduct testing or file bugs for tablets, smart watches, or other devices.
Screen Size
We suggest testing with the most popular screen sizes:
- 1000 x 600 (this is a popular resolution for older desktop screens; it is roughly a widescreen equivalent of the age-old 1024x768 size)
- 1300 x 975 (this is a popular resolution for newer laptop or desktop screens)
The UI should work on screens within that range of sizes. Screen size can be simulated in any browser by changing the size of the browser window or using Chrome developer tools.
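For testers who prefer to script this check, the following sketch resizes a local Chrome window to the two suggested sizes and captures a screenshot at each; the same effect can be achieved manually with Chrome DevTools device emulation. The screenshot file names are illustrative.

```python
# Minimal sketch: exercise the UI at the two suggested screen sizes using a
# local Chrome instance driven by Selenium, saving a screenshot at each size.
from selenium import webdriver

SCREEN_SIZES = [(1000, 600), (1300, 975)]

driver = webdriver.Chrome()
try:
    for width, height in SCREEN_SIZES:
        driver.set_window_size(width, height)
        driver.get("https://uat.openlmis.org")
        # Review the layout manually or inspect the saved screenshot.
        driver.save_screenshot(f"openlmis_{width}x{height}.png")
finally:
    driver.quit()
```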
Bandwidth
OpenLMIS version 3.2.1 is tested using a bandwidth of 384 Kbps, which is equivalent to a 3G (WCDMA standard) connection. We recommend that end users have this speed or higher for optimal usability.
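A hedged sketch of emulating that bandwidth with Selenium's Chrome-only network conditions call is shown below. The 100 ms latency value is an assumption, not part of this plan; 384 Kbps is converted to bytes per second as the API expects.

```python
# Sketch: throttle Chrome to roughly the 384 Kbps test bandwidth via
# Selenium's Chrome-only set_network_conditions call.
from selenium import webdriver

KBPS_384_IN_BYTES_PER_SEC = 384 * 1000 // 8  # 48,000 bytes/second

driver = webdriver.Chrome()
driver.set_network_conditions(
    offline=False,
    latency=100,  # milliseconds of added latency (assumed value, not from the plan)
    download_throughput=KBPS_384_IN_BYTES_PER_SEC,
    upload_throughput=KBPS_384_IN_BYTES_PER_SEC,
)
driver.get("https://uat.openlmis.org")
```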
Testing Data
Component | Username | Program | Team Parrot testers | Team ILL testers | Concerns |
---|---|---|---|---|---|
Requisitions | srmanager1, smanager1, psupervisor, wclerk1; srmanager2, smanager2, psupervisor, wclerk1; srmanager4 (for second approval), smanager4, dsrmanager, psupervisor; administrator (testing requisition template updates/changes or program settings changes) | Family Planning; Essential Meds; Essential Meds and Family Planning | Joanna Bebak: we should list the tester's name next to the login they will be using here. | | |
Stock Management | srmanager2, divo1 | | | | |
Administration (for all admin test cases) | administrator, admin | | | | |
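Before a test cycle starts, it can be worth confirming that the logins above authenticate against the UAT server. The sketch below is based on the OpenLMIS OAuth token endpoint; the `user-client`/`changeme` client credentials and the demo password are assumptions taken from the standard demo setup and must be verified against the actual deployment before use.

```python
# Hedged sketch: confirm the demo logins from the Testing Data table can obtain
# an OAuth token from the UAT server. Endpoint, client credentials, and the
# demo password are assumptions -- verify them for your deployment.
import requests

BASE_URL = "https://uat.openlmis.org"
CLIENT = ("user-client", "changeme")  # assumed demo OAuth client; verify locally
DEMO_PASSWORD = "password"            # assumed demo-data password; verify locally

USERS = [
    "srmanager1", "smanager1", "psupervisor", "wclerk1",
    "srmanager2", "smanager2", "srmanager4", "smanager4",
    "dsrmanager", "divo1", "administrator",
]

for username in USERS:
    response = requests.post(
        f"{BASE_URL}/api/oauth/token",
        params={"grant_type": "password", "username": username, "password": DEMO_PASSWORD},
        auth=CLIENT,
        timeout=30,
    )
    ok = response.status_code == 200 and "access_token" in response.json()
    print(f"{username}: {'OK' if ok else 'FAILED (HTTP ' + str(response.status_code) + ')'}")
```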
Executing Test Cases
All testers must follow the test case execution process detailed here: Testing Process & Test Plans#ExecutingTestCase
If there are any questions about how to execute a test case, or questions about the test case steps, please contact your QA team lead.
Creating bugs and assigning priorities
Step-by-step instructions on how to create a bug/defect are located here: Testing Process & Test Plans#EnteringDefectsduringRegressiontesting
For Regression testing we will follow this bug prioritization (also outlined on docs.openlmis.org):
Priority Level | Example |
---|---|
Blocker | |
Critical | |
Major | |
Minor | |
Trivial | |
Malawi Bug tracking and triage
Malawi will complete testing in their own environment with their own components. Testing will be ad hoc and tracked manually. The testing does not include test cases assigned in Zephyr or on this page. As the Malawi team is testing, any questions should be posted in the QA Slack channel so the core team can respond. It is the Malawi team's responsibility to determine whether a bug is specific to Malawi components or is a core bug.
Bug Triage Process:
- When a bug is entered by the Malawi team, it should be assigned to the epic OLMIS-3427.
- Instructions on how to enter bugs/defects are located here: Testing Process & Test Plans#EnteringDefectsduringRegressiontesting