3.2.1 Regression Test Plan


The purpose of this test plan is to outline regression testing for the 3.2.1 release candidate.

Roles & Responsibilities

QA Team Leads

Sam and Joanna will be the QA leads for their respective teams.

Owner: Sam (Team ILL) and Joanna (Team Parrot)

Responsibilities:

  • Create Test Cycles
  • Create missing test cases and assign them to test cycles
  • Assign test cases to team members
  • Act as the point person for any questions about test cases from the team
  • Review execution of the test cycles; there should be steady progress during the day
  • Prioritize bugs per test cycle and check that developers have detailed proposed solutions (if time and the developer's experience allow)
  • Report the status of each test cycle, including defects reported, before end of day
  • Review automated testing and provide status before end of day (go to http://build.openlmis.org/view/all/builds; a scripted status check is sketched after this section)

Questions:
  • For Sam Im: One question concerning the next phase of regression tests. In this phase, some test cases were added to more than one test cycle and were therefore executed by more than one person (frequently by members of both teams, and in several cases even by three people, when they were added to Team Parrot's cycle and two of Team ILL's cycles). Will that also be the case in the next phase? During standard regression test cycle executions, the test cases were divided between testers and each test case was always executed only once, by one person.
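
For the automated-testing status check, a QA lead could poll Jenkins programmatically rather than scanning the builds page by hand. The following is a minimal sketch, assuming the standard Jenkins JSON API is exposed at build.openlmis.org; the job names are hypothetical placeholders.

```python
# Minimal sketch: report the result of each job's last completed build
# via the standard Jenkins JSON API. Job names below are hypothetical.
import requests

JENKINS = "http://build.openlmis.org"
JOBS = ["OpenLMIS-requisition", "OpenLMIS-stockmanagement"]  # placeholders

for job in JOBS:
    url = f"{JENKINS}/job/{job}/lastCompletedBuild/api/json"
    data = requests.get(url, timeout=30).json()
    # "result" is SUCCESS, FAILURE, UNSTABLE, or ABORTED for completed builds
    print(f"{job}: {data.get('result', 'UNKNOWN')}")
```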
Team ILL & Team Parrot
  • Execute test cases assigned to you
  • Record the test execution by following these steps: Testing Process & Test Plans#ExecutingTestCase
  • Enter defects as needed (following the steps detailed in the "Creating bugs and assigning priorities" section below)
  • If there are any Blocker bugs, try to spend time completing a root cause analysis and detail it in the bug ticket
  • When a defect is found, research and provide proposals in the ticket for review by Sam & Joanna (as time allows)
  • Assist other testers as needed
  • Josh Zamor to set up the testing server environment(s)
  • Josh Zamor to provide updates on automated performance testing results
  • Brandon Bowersox-Johnson to coordinate manual UI Performance Testing
  • For the test run in Sprint 38, the testers will be Sam Im and Visvapriya Kandasamy, plus possibly one developer from Team Parrot if they are not working on critical tickets
  • Brandon Bowersox-Johnson will work with Paweł Gesek to assign a Team Parrot member to help test UI performance during the release candidate testing

Bug Triage team

Members: Mary Jo Kochendorfer, Brandon Bowersox-Johnson, and Sam Im

Responsibilities:

  • Review the list of bugs provided by Sam & Joanna
  • Prioritize bugs
  • Provide priorities to Sam & Joanna
  • Sam & Joanna create test cycles for retesting
  • Sam & Joanna provide status updates on bug fixes
  • Brandon Bowersox-Johnson to create a new board for the bugs and tracking resolution, and to communicate in Slack which bugs have been added to the board each day

Questions:
  • Do we need to discuss the level of effort (LOE) for the proposed solutions for each bug? What if a bug fix takes longer than a day? Yes, LOE will be included as part of the triage discussion.
  • When should we meet every day? 10:30am daily.
  • Should we include Malawi bug triage during this meeting? Yes, starting on Monday 11/6.

Process Overview

Test Cycles

Test Cycles will be created and managed by Sam and Joanna for each team. Each QA lead will assign test cases to team members. Each day the teams will begin testing in the morning; Sam and Joanna will then triage bugs, report end-of-day testing status for their teams, and prepare for the next day of testing.

Guidelines for executing test cases within the correct Test Cycle are listed below:

  1. The component leads will identify any missing test cases and detail them in the Test Case Coverage section below.
  2. Then the QA leads will assign the test cases to the test cycle and to the team members who will execute them. The process for creating Test Cycles is located here: Testing Process & Test Plans#CreatingaTestCycle
  3. Each Test Cycle will include tests for all components. Sam and Joanna will create a minimum of three Test Cycles per team for the 3.2.1 Regression Testing; we may need more test cycles depending on the bug-fix cycles.
    1. Regression Phase 1 - Parrot, Regression Phase 1 - ILL
    2. Bug Fix Phase 1 - Parrot, Bug Fix Phase 1 - ILL
    3. Regression Phase 2 - Parrot, Regression Phase 2 - ILL
  4. If a test case has been executed and is in a status that needs to be retested, Sam or Joanna must create a new Test Cycle and assign the test case before testing can begin. Do not run test cases using the Ad hoc test cycle.
  5. Sam and Joanna will determine per their team when a new test cycle is created and assign the test executions to team members.

Estimated daily schedule 

11/1
  • Morning: At the end of the Showcase, determine if we are ready to publish the Release Candidate. Team ILL releases each component and the Ref-Distro Release Candidate, then deploys RC1 to the UAT server (so we are ready for Test Cycle Phase 1).

11/2 (start of test run)
  • Morning: Team Parrot executes Test Cycle Phase 1 (Joanna Bebak and Nikodem Graczewski). Joanna triages bugs as needed.
  • Midday: Team ILL executes Test Cycle Phase 1. Joanna reviews bugs and developers' proposed solutions. Team ILL may execute manual UI Performance Testing with 3.2.1-RC1.
  • End of day: Prep for the next day's test executions.

11/3 (bug review day)
  • Morning: Bug fix day in the test environment; execute the Bug Fix Test Cycle in test.
  • Midday: Sam & Joanna triage bugs after the test cycle is completed. Team ILL may execute manual UI Performance Testing with 3.2.0.
  • End of day: Review bugs and present them to the team.

11/6
  • Morning: Malawi should start testing by Monday 11/6. Execute smoke tests (TBD).
  • Midday: Team ILL may execute manual UI Performance Testing with older versions or newer RCs (TBD).
  • End of day: Prep for the next day's test executions.

11/7
  • Morning: Execute Final Test Cycle Phase 2.
  • End of day: Final testing status and Go/No-go decision.

Test Case Coverage

Each component owner is responsible for ensuring there is complete test coverage in Zephyr before testing begins. 

  1. Review test cases for your component: search in Zephyr for all test cases by component. Instructions are here: Testing Process & Test Plans#SearchforTestCasesbyComponentorbyLabel (a scripted search is sketched after this list).
  2. Compare the test cases to the feature for your component (links to the features are in the table below).
  3. Missing test scenarios must be listed in the table below.
  4. Once the test cases are created and labeled with the component, add the test case number to the table below so that Joanna or Sam can add it to the correct Test Cycle.
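
For reference, the same component search can be scripted against Jira's standard REST search endpoint with a JQL query. This is a minimal sketch under stated assumptions: the base URL and credentials are placeholders, and it assumes Zephyr test cases are stored as Jira issues with issue type "Test".

```python
# Minimal sketch: list Zephyr test cases for a component via Jira's REST
# search API. Base URL, credentials, and issue type are assumptions.
import requests

JIRA = "https://openlmis.atlassian.net"      # placeholder base URL
AUTH = ("qa.user@example.org", "api-token")  # placeholder credentials

def test_cases_for_component(component):
    jql = f'project = OLMIS AND issuetype = Test AND component = "{component}"'
    resp = requests.get(
        f"{JIRA}/rest/api/2/search",
        params={"jql": jql, "fields": "summary", "maxResults": 200},
        auth=AUTH,
        timeout=30,
    )
    resp.raise_for_status()
    return [issue["key"] for issue in resp.json()["issues"]]

print(test_cases_for_component("Requisition"))
```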

10/23 decision: We will not include CCE test cases in the 3.2.1 Regression testing because CCE will not be included in the 3.2.1 release candidate.

Component: Requisitions (this testing includes Manage POD and Orders)
Component Label: Requisition
Missing Test Scenarios:
  • Product Grid edge cases (still in progress: OLMIS-3365)
  • Any test cases missing the label

Component: Cold Chain Equipment
Component Label: CCE
Missing Test Scenarios:
  • Role-based access control testing (examples of Requisitions test cases are linked in OLMIS-2787)
  • Any test cases missing the label (notification-related test cases?)
  • Any missing edge cases? (OLMIS-3192 needs test steps for error handling)
Zephyr Test Cases: OLMIS-3430, OLMIS-3431, OLMIS-3432, OLMIS-3433, OLMIS-3434, OLMIS-3435

Component: Stock Management, and Connecting Stock Management and Requisition Services
Component Label: StockManagement
Missing Test Scenarios:
  • Any missing edge cases for connecting stock and requisitions?
  • Any test cases missing the label

Component: Administration UI
Component Label: Administration
Missing Test Scenarios:
  • Find and label test cases for assigning roles to a user (should include error scenarios) and reset-password test cases
  • Any missing edge cases
  • Any test cases missing the label

Notes:
  • Joanna Bebak: when we have testers completing administrative functions, we will need to make sure they do not change any users that are part of the test cases in the Testing Data section below.
  • Sam Im removed test cases OLMIS-3132 and OLMIS-3133 because these features have not been implemented yet.

Component: Manual UI Performance Testing
Owner: Brandon Bowersox-Johnson
  • Instructions for how to run these manual tests (not currently in Zephyr)

Users and Environment

We will use both the test and UAT environments to complete this regression testing. The first phase of regression testing will be done in the UAT environment. If bugs are found, the team will work on the bug fixes and execute tests in the test environment. Once the bugs have been resolved, the teams will coordinate deployment to the UAT environment for the final regression testing phase.


Test environments by Test Cycle:

  • Regression Phase 1 - Parrot and Regression Phase 1 - ILL: https://uat.openlmis.org
  • Bug fix testing: https://test.openlmis.org
  • Regression Phase 2 - Parrot and Regression Phase 2 - ILL: https://uat.openlmis.org

UI testing

This section lists the types of devices and browsers that are supported and which are prioritized for manual testing.

Past versions of OpenLMIS have officially supported Firefox. For OpenLMIS 3.2.1, we are prioritizing support for Chrome because of global usage trends (e.g., see Mozambique Stats), along with its developer tools and its auto-updating nature.

For QA testing of OpenLMIS our browser version priorities are:

  1. Chrome 52+ (test on Chrome 52 and Chrome latest)
  2. Firefox 48+ (test on Firefox 48 and Firefox latest)

The next most widely used browser version is IE 11, but we do not recommend testing or bug fixes specifically for Internet Explorer compatibility in OpenLMIS.

The operating systems on which we should test are:

  1. Windows 7 (by far the most widely used, both in the Mozambique, Zambia, and Benin data and globally)
  2. Windows 10

Note: The QA team is doing some testing using Linux (Ubuntu) workstations. That is fine for testing the API, but Linux is not a priority environment for testing the UI or for final testing of OpenLMIS. It is important to test the UI using Chrome and Firefox on Windows 7 and Windows 10. We are utilizing BrowserStack to assist with testing on Windows.
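
To make that Windows coverage repeatable, a tester could drive the UAT site through BrowserStack's Selenium grid. Below is a minimal sketch, assuming a Selenium 3-style Python client; the credentials are placeholders, and the capability values mirror the browser and OS priorities above.

```python
# Minimal sketch: open the UAT site on Windows 7 + Chrome 52 through
# BrowserStack's Selenium hub. USERNAME/ACCESS_KEY are placeholders.
from selenium import webdriver

caps = {
    "browserName": "Chrome",
    "browser_version": "52.0",
    "os": "Windows",
    "os_version": "7",
}
driver = webdriver.Remote(
    command_executor="https://USERNAME:ACCESS_KEY@hub-cloud.browserstack.com/wd/hub",
    desired_capabilities=caps,
)
driver.get("https://uat.openlmis.org")
print(driver.title)  # quick sanity check that the page loaded
driver.quit()
```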

In other words, OpenLMIS developers and team members may be using Mac and Linux environments. It is fine to report bugs happening in supported browsers (Chrome and Firefox) on those platforms, but we won't invest QA time in extensive manual testing on Mac or Linux.

We have asked the different OpenLMIS implementations to share their Google Analytics data to better inform how we prioritize and invest in browser and device support going forward.

Supported Devices

OpenLMIS 3.2.1 officially supports only desktop browsers used with a pointer (mouse, trackpad, etc.). The UI will not necessarily support touch interfaces without a mouse pointer, such as the iPad or other tablets. For now, we do not need to conduct testing or file bugs for tablets, smart watches, or other devices.

Screen Size

We suggest testing with the most popular screen sizes:

  1. 1000 x 600 (a popular resolution for older desktop screens; it is roughly the widescreen equivalent of the age-old 1024x768 size)
  2. 1300 x 975 (this is a popular resolution for newer laptop or desktop screens)

The UI should work on screens within that range of sizes. Screen size can be simulated in any browser by changing the size of the browser window or using Chrome developer tools.
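
If testers want to automate this check, a short script can cycle through both target resolutions. A minimal sketch, assuming local Chrome and Selenium's Python bindings; the smoke check itself is left as a placeholder.

```python
# Minimal sketch: load the UAT site at both target resolutions using
# Selenium's window-size API (local Chrome assumed).
from selenium import webdriver

SIZES = [(1000, 600), (1300, 975)]

driver = webdriver.Chrome()
for width, height in SIZES:
    driver.set_window_size(width, height)
    driver.get("https://uat.openlmis.org")
    # ... exercise the screens under test at this resolution ...
    print(f"{width}x{height}: loaded '{driver.title}'")
driver.quit()
```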

Bandwidth

OpenLMIS version 3.2.1 is tested using a bandwidth of 384 Kbps, which is equivalent to a 3G (WCDMA standard) connection. We recommend that end users have this speed or higher for optimal usability.
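
Chrome's network-conditions command lets a tester approximate this link speed in an automated run. A minimal sketch, assuming local Chrome and Selenium's Python bindings; the 100 ms latency value is an assumption, not part of this test plan.

```python
# Minimal sketch: throttle Chrome to roughly 384 Kbps before loading the
# UAT site. Throughput values are bytes per second; latency is assumed.
from selenium import webdriver

KBPS_384_IN_BYTES = 384 * 1000 // 8  # 384 Kbps expressed as bytes/sec

driver = webdriver.Chrome()
driver.set_network_conditions(
    offline=False,
    latency=100,  # assumed round-trip latency in milliseconds
    download_throughput=KBPS_384_IN_BYTES,
    upload_throughput=KBPS_384_IN_BYTES,
)
driver.get("https://uat.openlmis.org")
driver.quit()
```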

Testing Data

Component: Requisitions
Usernames and programs:
  • srmanager1, smanager1, psupervisor, wclerk1 (Family Planning)
  • srmanager2, smanager2, psupervisor, wclerk1 (Essential Meds)
  • srmanager4 (for second approval), smanager4, dsrmanager, psupervisor (Essential Meds and Family Planning)
  • administrator (for testing requisition template updates/changes or program settings changes)
Concerns:
  • Joanna Bebak: we should list the tester's name next to the login they will be using here.
  • Demo data restriction: we may need to refresh the environment if all current-period requisitions are processed (request and post status in the QA Slack channel).

Component: Stock Management
Usernames: srmanager2, divo1

Component: Administration (for all admin test cases)
Usernames: administrator, admin
Executing Test Cases

All testers must follow the test case execution process detailed here: Testing Process & Test Plans#ExecutingTestCase

If there are any questions about how to execute a test case, or questions about the test case steps, please contact your QA team lead.



Creating bugs and assigning priorities

Step-by-step instructions on how to create a bug/defect are located here: Testing Process & Test Plans#EnteringDefectsduringRegressiontesting
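
As a complement to those steps, a defect can also be filed programmatically through Jira's standard REST API. A minimal sketch; the base URL, credentials, and field values below are placeholders, and the required fields for a real defect come from the process page linked above.

```python
# Minimal sketch: create a Bug in the OLMIS project via Jira's REST API.
# Base URL, credentials, and field values are placeholders.
import requests

JIRA = "https://openlmis.atlassian.net"      # placeholder base URL
AUTH = ("qa.user@example.org", "api-token")  # placeholder credentials

payload = {
    "fields": {
        "project": {"key": "OLMIS"},
        "issuetype": {"name": "Bug"},
        "summary": "Requisition: approve button does nothing on UAT",
        "description": "Steps to reproduce, expected vs. actual results...",
        "priority": {"name": "Blocker"},
    }
}
resp = requests.post(f"{JIRA}/rest/api/2/issue", json=payload, auth=AUTH, timeout=30)
resp.raise_for_status()
print("Created", resp.json()["key"])
```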

For Regression testing we will follow this bug prioritization (also outlined on docs.openlmis.org):

Blocker
  • Cannot execute function (cannot click button, button doesn't exist, cannot complete action when button is clicked)
  • Cannot complete expected action (does not match expected results for the test case)
  • No error message when there is an error
  • We will not release with this bug
Critical
  • Error message is not actionable by the user, and the user cannot complete the next action (e.g., a 500 server error message)
  • Search results provided do not match expected results based on data
  • Poor UI performance or accessibility (user cannot tab to column or use keyboard to complete action)
  • We should not release with this bug
Major
  • Performance related (slow response time)
  • Major aesthetic issue (see UI Styleguide for reference)
  • Incorrect filtering, but doesn't block users from completing tasks and executing functionality
  • Wrong user error message (user does not know how to proceed based on the error message provided)
Minor
  • Aesthetics (spacing is wrong, alignment is wrong; see UI Styleguide)
  • Message key is wrong
  • Console errors
  • A service returns the wrong error to another service
Trivial
  • Anything else

Malawi Bug tracking and triage

Malawi will complete testing in their own environment with their own components. Testing will be ad hoc and tracked manually. The testing does not include test cases assigned in Zephyr or on this page. As the Malawi team is testing, any questions should be posted in the QA Slack channel so the core team can respond. It is the Malawi team's responsibility to determine whether a bug is specific to Malawi components or is a core bug.

Bug Triage Process:

  1. When a bug is entered by the Malawi team, it should be assigned to the epic OLMIS-3427.
  2. Instructions on how to enter bugs/defects are located here: Testing Process & Test Plans#EnteringDefectsduringRegressiontesting

