The purpose of this test plan is to outline regression testing for the 3.2.1 release candidate.
Roles & Responsibilities
QA Team Leads
Sam and Joanna will be the QA leads for each team.
Sam (for Team ILL)
Joanna (for Team Parrot)
- Create Test Cycles
- Create missing test cases and assign to test cycles
- Assign Test Cases to team members
- Point person for any questions about test cases from the team
- Review execution of the test cycles; there should be positive progress during the day
- Prioritize bugs per test cycle; check that developers have detailed proposed solutions (as time or the developer's experience allows)
- Report the status of each test cycle, including defects reported, before end of day
- Review automated testing and provide status before end of day (go to http://build.openlmis.org/view/all/builds)
- Sam Im (Deactivated) One question concerning the next phase of regression tests: in this phase, some test cases were added to more than one test cycle and thus executed by more than one person (frequently by members of both teams, and in several cases even by three people, when they were added to Team Parrot's cycle and two of Team ILL's cycles). Will this also be the case in the next phase? I ask because during standard regression test cycle executions, the test cases were divided between testers and each test case was executed only once, by one person.
|Team ILL & Team Parrot|
- Execute test cases assigned to you
- Record the test execution by following these steps: Testing Process & Test Plans#ExecutingTestCase
- Enter Defects as needed (by following the steps detailed in the "creating bugs" section below)
- If there are any Blocker bugs, try to spend time completing a root cause analysis and detail in the bug ticket
- When a defect is found, research and provide proposals in the ticket for review by Sam & Joanna (as time allows)
- Assist other testers as needed
- Josh Zamor to set up testing server environment(s)
- Josh Zamor to provide updates on automated performance testing results
- Brandon Bowersox-Johnson to coordinate manual UI Performance Testing
- For the test run in Sprint 38, testers will be Sam Im, Visvapriya Kandasamy, and possibly one dev from Team Parrot if they are not working on critical tickets
- Brandon Bowersox-Johnson will work with Paweł Gesek to assign a member of Team Parrot to help test UI performance during release candidate testing
Bug Triage team
Mary Jo Kochendorfer (Deactivated)
Sam Im (Deactivated)
- Review list of bugs provided by Sam & Joanna
- Prioritize bugs
- Provide priority to Sam & Joanna
- Sam & Joanna create test cycles for retesting
- Sam & Joanna provide status update on bug fixes
- Brandon Bowersox-Johnson to create a new board for the bugs and for tracking resolution, and to communicate in Slack which bugs have been added to the board each day
- Do we need to discuss LOE (level of effort) for the proposed solutions for each bug? What if a bug fix takes longer than a day? Yes, LOE will be included as part of the triage discussion.
- When should we meet every day? 10:30am daily
- Should we include Malawi bug triage during this meeting? Yes, starting on Monday 11/6
Test Cycles will be created and managed by Sam and Joanna for each team. Each QA lead will assign test cases to team members. Each day the team will start testing in the morning; Sam and Joanna will then triage, report end-of-day testing status for their team, and prepare for the next day of testing.
Guidelines for executing test cases within the correct Test Cycle are listed below:
- The component leads will identify any missing test cases and detail them in the Test Case Coverage section below.
- Then the QA leads will assign the test case to the test cycle and to the team members who will execute it. The process for creating Test Cycles is located here: Testing Process & Test Plans#CreatingaTestCycle
- The Test Cycle will include tests for all components. Sam and Joanna will create a minimum of three Test Cycles per team for the 3.2.1 Regression Testing; we may need more depending on the bug cycles.
- Regression Phase 1 - Parrot, Regression Phase 1 - ILL
- Bug Fix Phase 1 - Parrot, Bug Fix Phase 1 - ILL
- Regression Phase 2 - Parrot, Regression Phase 2 - ILL
- If a test case has been executed and is in a status that needs to be retested, Sam or Joanna must create the new Test Cycle and assign the test case before testing can begin. Do not run a test case using the Ad hoc test cycle.
- Sam and Joanna will determine per their team when a new test cycle is created and assign the test executions to team members.
Estimated daily schedule
|Time of Day||11/1||11/2 (start of test run)||11/3 (bug review day)||11/6||11/7|
At end of Showcase, determine if we are ready to publish Release Candidate
Team ILL releases each component and Ref-Distro Release Candidate
Team ILL deploys RC1 to UAT server (so we are ready for Test Cycle Phase 1)
Team Parrot executes Test Cycle Phase 1 (Joanna Bebak (Deactivated) & Nikodem Graczewski (Unlicensed))
Joanna triage bugs as needed
|Bug fix day in test|
Test Cycle Bug Fix in test
Malawi should start testing by Monday 11/6
Execute smoke tests (TBD)
Execute Final Test Cycle Phase 2
Joanna review bugs and developer's proposed solutions
Team ILL executes Test Cycle Phase 1
Sam & Joanna triage bugs after test cycle is completed
|Team ILL may execute manual UI Performance Testing with 3.2.1-RC1||Team ILL may execute manual UI Performance Testing with 3.2.0||Team ILL may execute manual UI Performance Testing with older versions or newer RCs TBD|
|End of Day||Prep for next day test executions||Review of bugs and presenting that to the team||Prep for next day test executions||Final testing status and Go/No-go|
Test Case Coverage
Each component owner is responsible for ensuring there is complete test coverage in Zephyr before testing begins.
- Review test cases for your component: search in Zephyr for all test cases by component. Instructions are here: Testing Process & Test Plans#SearchforTestCasesbyComponentorbyLabel
- Compare the test cases to the feature for your component (links to the features are in the table below).
- Missing test scenarios must be listed in the table below.
- Once the test cases are created and labeled with the component, add the test case number to the table below so that Joanna or Sam can add them to the correct Test Cycle.
10/23 decision: We will not include CCE test cases in the 3.2.1 Regression testing because CCE will not be included in the 3.2.1 release candidate.
|Component||Component Label||Owner||Missing Test Scenarios||Zephyr Test Case|
|Requisitions (this testing includes Manage POD and Orders)||Requisition|
- Product Grid edge cases (still in progress: OLMIS-3365)
- Any test cases missing label
|Cold Chain Equipment||CCE|
- Role-based access control testing (examples of Requisitions test cases are linked in this ticket: OLMIS-2787)
- Any test cases missing the label (notification-related test cases?)
- Any missing edge cases? (OLMIS-3192 needs test steps for error handling)
- OLMIS-3430, OLMIS-3431, OLMIS-3432, OLMIS-3433, OLMIS-3434, OLMIS-3435
Connecting Stock Management and Requisition Services
- Any missing edge cases for connecting stock and requisitions?
- Any test cases missing label
- Find and label test cases for assigning roles to a user (should include error scenarios), and reset password test cases
- Any missing edge cases
- Any test cases missing label
Joanna Bebak (Deactivated) - when we have testers completing administrative functions, we will need to make sure they don't change any users that are part of the test cases in the Testing Data section below.
Sam Im (Deactivated) removed test cases OLMIS-3132 and OLMIS-3133 because we have not implemented these yet.
|Manual UI Performance Testing||Brandon Bowersox-Johnson|
- Instructions for how to run these manual tests
|(not in Zephyr currently)|
Users and Environment
We will use both the test and UAT environments to complete this regression testing. The first phase of regression testing will be done in the UAT environment. If bugs are found, the team will work on the bug fixes and execute tests in the test environment. Once the bugs have been resolved, the teams will coordinate deployment into the UAT environment for the final regression testing phase.
This section lists the types of devices/browsers that are supported and which are prioritized for manual testing.
Past versions of OpenLMIS have officially supported Firefox. For OpenLMIS 3.2.1, we are prioritizing support of Chrome because of global trends (e.g., see Mozambique Stats), along with its developer tools and its auto-updating nature.
For QA testing of OpenLMIS our browser version priorities are:
- Chrome 52+ (test on Chrome 52 and Chrome latest)
- Firefox 48+ (test on Firefox 48 and Firefox latest)
The next most widely-used browser version is IE 11, but we don't recommend testing and bug fixes specifically for any Internet Explorer compatibility in OpenLMIS.
The operating systems on which we should test are:
- Windows 7 (by far the most widely used, both per Mozambique, Zambia, and Benin data and globally)
- Windows 10
Note: The QA team is doing some testing using Linux (Ubuntu) workstations. That is fine for testing the API, but Linux is not a priority environment for testing the UI and for final testing of OpenLMIS. It's important to test the UI using Chrome and Firefox in Windows 7 and Windows 10. We are utilizing Browserstack to assist testing in Windows.
In other words, OpenLMIS developers and team members may be using Mac and Linux environments. It is fine to report bugs happening in supported browsers (Chrome and Firefox) on those platforms, but we won't invest QA time in extensive manual testing on Mac or Linux.
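The support rules above can be summarized as a small helper for judging whether a reported environment is in manual-QA scope. This is an illustrative sketch only; the function and data structure names are hypothetical, not part of the official process:

```python
# Sketch: encode the QA support matrix described above.
# Supported for manual QA: Chrome 52+ and Firefox 48+ on Windows 7 and 10.
# Names here are hypothetical, for illustration only.

SUPPORTED_BROWSERS = {"chrome": 52, "firefox": 48}  # minimum versions
SUPPORTED_OS = {"windows 7", "windows 10"}

def in_qa_scope(browser, version, os_name):
    """Return True if an environment is in scope for manual QA testing."""
    min_version = SUPPORTED_BROWSERS.get(browser.lower())
    if min_version is None or version < min_version:
        return False
    # Bugs seen in supported browsers on Mac/Linux may still be reported,
    # but manual QA time targets Windows 7 and Windows 10 only.
    return os_name.lower() in SUPPORTED_OS

print(in_qa_scope("Chrome", 62, "Windows 10"))  # True
print(in_qa_scope("IE", 11, "Windows 7"))       # False: IE is out of scope
print(in_qa_scope("Firefox", 48, "Ubuntu"))     # False for manual QA scope
```

A check like this could, for example, sit in a triage script that flags incoming bug reports from unsupported environments.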
We have asked different OpenLMIS implementations to share their Google Analytics data to better inform how we prioritize and invest in browser and device support going forward.
OpenLMIS 3.2.1 is only officially supporting desktop browsers with use of a pointer (mouse, trackpad, etc). The UI will not necessarily support touch interfaces without a mouse pointer, such as iPad or other tablets. For now, we do not need to conduct testing or file bugs for tablets, smart watches, or other devices.
We suggest testing with the most popular screen sizes:
- 1000 x 600 (a popular resolution for older desktop screens; a widescreen counterpart of the age-old 1024x768 size)
- 1300 x 975 (a popular resolution for newer laptop or desktop screens)
The UI should work on screens within that range of sizes. Screen size can be simulated in any browser by changing the size of the browser window or using Chrome developer tools.
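As a quick sanity check when sizing a browser window for a test run, a tester can confirm the viewport falls within the suggested range. A minimal sketch, with a hypothetical helper name:

```python
# Sketch: check whether a viewport falls within the suggested testing range
# (1000x600 up to 1300x975, per the guidance above). Helper name is
# hypothetical, for illustration only.

MIN_W, MIN_H = 1000, 600
MAX_W, MAX_H = 1300, 975

def in_suggested_range(width, height):
    """True if the viewport is within the screen-size range the UI targets."""
    return MIN_W <= width <= MAX_W and MIN_H <= height <= MAX_H

print(in_suggested_range(1024, 768))   # True
print(in_suggested_range(1920, 1080))  # False (larger than the tested range)
```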
Testing should assume a minimum connection speed of 384 Kbps, which is equivalent to a 3G (WCDMA standard) connection. We recommend that end users use this speed or higher for optimal usability.
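For context on that figure: 384 Kbps works out to 48 KB/s, so a 1 MB page payload takes roughly 21 seconds to transfer at the minimum speed. A minimal sketch of the arithmetic, with a hypothetical helper name:

```python
# Sketch: estimate transfer time at the 384 Kbps minimum connection speed.
# Helper name is hypothetical, for illustration only.
KBPS = 384                       # kilobits per second (3G / WCDMA baseline)
BYTES_PER_SEC = KBPS * 1000 / 8  # = 48,000 bytes/s

def transfer_seconds(payload_bytes, kbps=KBPS):
    """Seconds to transfer payload_bytes at the given kilobits-per-second rate."""
    return payload_bytes / (kbps * 1000 / 8)

print(round(transfer_seconds(1_000_000), 1))  # ~20.8 s for a 1 MB payload
```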
|Component||Username||Program||Team Parrot testers||Team ILL testers||Concerns|
srmanager1, smanager1, psupervisor, wclerk1
srmanager2, smanager2, psupervisor, wclerk1
srmanager4 (for second approval), smanager4, dsrmanager, psupervisor
administrator (testing requisition template updates/changes or program settings changes)
Essential Meds and Family Planning
|Joanna Bebak (Deactivated) - we should list the tester's name next to the login they will be using here.|
- Demo data restriction: May need to refresh environment if all current period reqs are processed (request and post status in QA slack channel)
|Administration (for all admin test cases)|
Executing Test Cases
All testers must follow the test case execution process detailed here: Testing Process & Test Plans#ExecutingTestCase
If there are any questions about how to execute a test case, or questions about the test case steps, please contact your QA team lead.
Creating bugs and assigning priorities
Step-by-step instructions on how to create a bug/defect are located here: Testing Process & Test Plans#EnteringDefectsduringRegressiontesting
For regression testing we will follow this bug prioritization (also outlined on docs.openlmis.org):
We will not release with these bugs:
- Cannot execute a function (cannot click a button, the button doesn't exist, or the action cannot be completed when the button is clicked)
- Cannot complete the expected action (does not match the expected results for the test case)
- No error message when there is an error
We should not release with these bugs:
- Error message is unactionable by the user, and the user cannot complete the next action (e.g., a 500 server error message)
- Search results do not match the expected results based on the data
- Poor UI performance or accessibility (user cannot tab to a column or use the keyboard to complete an action)
Lower-priority bugs (will not block the release):
- Performance related (slow response time)
- Major aesthetic issue (see the UI Styleguide for reference)
- Incorrect filtering that does not block users from completing tasks and executing functionality
- Wrong user error message (the user does not know how to proceed based on the error message provided)
- Aesthetics (spacing or alignment is wrong; see the UI Styleguide)
- Message key is wrong
- Console errors
- A service returns the wrong error to another service
Malawi Bug tracking and triage
Malawi will complete testing in their own environment with their own components. Testing will be ad hoc and tracked manually; it does not include the test cases assigned in Zephyr or on this page. As the Malawi team tests, any questions should be posted in the QA Slack channel so the core team can respond. It is the Malawi team's responsibility to determine whether a bug is specific to Malawi components or is a core bug.
Bug Triage Process:
- When a bug is entered by the Malawi team, it should be assigned to the epic OLMIS-3427
- Instructions on how to enter bugs/defects are located here: Testing Process & Test Plans#EnteringDefectsduringRegressiontesting