Test Plan for 3.3 Release: Meeting Notes

Date

Attendees

Goals

  • Test Plan and next steps to prepare for 3.3 release
    • What is the timeline? Plan for 1 sprint.
    • Who are the resources? Who is out of office during the last sprint in March? Josh is out.

Discussion items

Time: 30-45 min

Discussion for this meeting:

  • Test case review and clean-up in preparation (Sam and Joanna are currently working on this):
    • Removing duplicate test cases, updating descriptions, adding an overview to each test case, and removing specific user logins embedded in test cases
    • We will not have both teams run the same test cases (based on feedback from testing the previous release)
    • Previously, test cases were organized and tracked by component. Sam and Joanna will reorganize the test cases based on the phases defined below for the 3.3 release.
  • Testing phases to define: Do we agree on this phased approach, based on feedback from testing the previous release?
    • Phase 1, full feature testing of complete workflows (1-2 days): Local Fulfillment, Vaccine stock-based requisitions (Vaccine SBR), POD (what about reporting?). If blocker bugs are found, we do not move to the next phase; we fix the bugs, deploy a new release candidate, and start Phase 1 again. (3 assigned testers; testing in test and UAT)
    • Phase 2, full regression testing of complete workflows (1-2 days): Requisitions, External Fulfillment, Stock Management, CCE, Administration. (3 assigned testers; testing in test and UAT)
    • Phase 3, bug fixes & edge cases, exploratory testing, and translations (2 days): reviewing proposed solutions for bugs, testing bug fixes, and edge case testing.
      • This phase can be run by other team members during Phases 1 and 2.
      • Exploratory testing: testing the UI, going through all screens to check for inconsistencies, and attempting anything that can be done on each page.
      • If there are bugs, testing for this phase will be done in UAT only.
    • Which phase does performance testing fall into? (Use the "edge cases" label for this testing.)
      • We will need specific check-in points to review status, but performance testing doesn't necessarily need its own phase.
  • Scheduling: manual testing takes a lot of time
    • Are we planning enough time to complete all the testing, bug fixing, and re-testing?
    • NO new feature work. What else can be planned for non-testers during this testing?
    • Define the expected test results (exit criteria) that must be met before we can move from each phase to the next, BEFORE we start testing (see the gate-check sketch after this list)
      • These will be defined and reviewed by Sam, Mary Jo, and Brandon (example: zero Blocker bugs before we start Phase 2; zero bugs with priority higher than Minor before we finish Phase 3)
  • Demo data: We need more demo data when we are testing a release; what are we going to need for 3.3?
    • Define the scenarios that we need to test to support the performance metrics (reference the workflow diagrams? See the Performance Metrics page.)
    • Do we need to update demo data for existing functionality covered by regression testing, so that more than one tester can work at a time? Demo data is expensive to update (a few hours by one dev), so we would like to try another approach.
      • Another idea is to give more people access to their own environments for testing, at $5 per day per personal environment. This needs approval from Mary Jo (see the cost example after this list).
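
For budgeting the personal-environment idea, a rough worked example (the tester count and duration are illustrative assumptions, not decisions): 6 testers each holding a personal environment for a 10-day sprint would cost 6 × 10 × $5 = $300, compared with a few developer-hours every time the shared demo data has to be rebuilt.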
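
To make the exit criteria checkable at each check-in point rather than a judgment call, something like the sketch below could be run against JIRA. This is a minimal sketch only: it assumes the "jira" Python package and anonymous read access to the public OpenLMIS issue tracker, and the server URL and JQL filters mirror the example criteria above rather than any agreed configuration.

    # Minimal phase-gate check (sketch): counts unresolved OLMIS bugs at or
    # above a given priority, mirroring exit criteria such as "zero Blocker
    # bugs before Phase 2". Server URL and JQL are illustrative assumptions.
    from jira import JIRA

    JIRA_SERVER = "https://openlmis.atlassian.net"  # assumed public instance

    # Example exit criteria (to be defined by Sam, Mary Jo, and Brandon).
    EXIT_CRITERIA = {
        "Enter Phase 2": "project = OLMIS AND type = Bug AND priority = Blocker AND resolution = Unresolved",
        "Finish Phase 3": "project = OLMIS AND type = Bug AND priority > Minor AND resolution = Unresolved",
    }

    def check_gates():
        jira = JIRA(server=JIRA_SERVER)  # anonymous, read-only access
        for gate, jql in EXIT_CRITERIA.items():
            open_bugs = jira.search_issues(jql, maxResults=1).total
            status = "OK" if open_bugs == 0 else "BLOCKED (%d open bugs)" % open_bugs
            print("%s: %s" % (gate, status))

    if __name__ == "__main__":
        check_gates()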


Other topics being addressed by Sam & Joanna:

  • Communication on the test plan and the daily testing status
    • Communication of the test plan before we start testing for the release
    • What needs to be communicated daily? (See the example status post after this list.)
      • Test cycle execution status
      • Number of test cases that were executed, passed, and failed
      • Any test cases or items that need attention and review by Team ILL
    • Best time of day for each team to communicate morning and end-of-day status and to share blockers
      • Beginning of day: post what we are doing today
      • End of day: post the status of what we have done and anything pending
  • Creating & reproducing bugs, and bug triage (for both teams to review and discuss)
    • Communicating status of bug triage
  • Environments: so far a separate environment works better for QA (Joanna & Sam will control deployments to UAT)
  • Organizing who is testing which test cases (more than just the QA team)
    • Tracking progress of test case execution and communicating if we need help
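
One possible shape for the daily posts (a sketch only; the fields mirror the items above, and the counts and ticket IDs are invented placeholders):

    [Team ILL - end of day]
    Test cycle: 3.3 Phase 1 (test + UAT)
    Executed: 24 | Passed: 20 | Failed: 4
    Needs review: OLMIS-XXXX (possible blocker)
    Pending / blockers: waiting on UAT redeploy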

What is the definition of a showstopping UI bug? Can we find example tickets for each section? What counts as a Blocker, and what is a lower priority?


Exploratory Testing Phase:

We will focus on testing edge cases and causing errors related to the following categories:

  • Bug definition
    • Functional issues: offline functionality isn't working
      • All functional bugs should be Blocker or Critical bugs: OLMIS-3983, OLMIS-4076
    • UI-to-server failures: bugs caused by the UI passing incorrect values to the server, producing errors that are not human readable
    • Configuration issues: demo data is wrong
    • Visual inconsistencies: visual bugs (text goes outside of a modal), missing error icon
      • Critical: OLMIS-3500
      • Minor: OLMIS-3987
    • Presentational issues: workflow bugs (the back button doesn't go where you expect), tabbing
      • Blocker: OLMIS-3508
      • Minor: OLMIS-3746

For exploratory testing we want a step-by-step process for bug reproduction: when I am testing and I've caused an error, what are the steps to identify what caused it so that I can reproduce it, and then log the bug with enough detail for someone else to reproduce? (Nick Reid) A possible starting checklist follows.
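
One possible process (a sketch to start the discussion, not an agreed procedure):

  • Stop and write down exactly what you just did: the page, the action, and any data you entered.
  • Capture evidence immediately: a screenshot, browser console errors, and the failing request/response from the network tab.
  • Record the environment: test or UAT, browser, the user/role you were logged in as, and the build version.
  • Retry the same steps from a clean state, trimming steps until you have the shortest sequence that still triggers the error.
  • Log the bug with a summary, numbered reproduction steps, expected vs. actual results, a severity from the categories above, and the captured attachments.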



Action items

  •  
