Problem: during a release we do performance testing by deploying the (old) Malawi dataset to a server and also testing against a local instance. Usually we take measurements from endpoint response times (browser inspector), but what we really want to measure is the user experience: from clicking a button (the action) to the UI being ready for the next action. Much of this depends on the tester's machine; differences in tester and machine could account for 30-40% of the variance, affecting the overall results (a measurement sketch follows this list).
Solution: Had some ideas, tried some in 3.9 testing.
Would like to improve the approach in Q2.
Would also like to extend the testing to more areas: Stock Management.
Would like to ensure the above doesn't overburden the release process
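One option for reducing tester/machine variance could be to capture the click-to-ready time in the browser itself rather than with a stopwatch or the inspector. Below is a minimal sketch using the standard Performance API; the mark names, button id, and loading-indicator selector are illustrative assumptions, not actual OpenLMIS identifiers.

```typescript
// Minimal sketch: capture "click to ready" timing in the browser with the
// standard Performance API. The mark names and DOM selectors below are
// placeholders, not real OpenLMIS identifiers.

const ACTION_MARK = 'action:click';
const READY_MARK = 'action:ready';
const MEASURE_NAME = 'click-to-ready';

// Mark the moment the user triggers the action.
document.querySelector('#submit-button')?.addEventListener('click', () => {
  performance.mark(ACTION_MARK);
});

// Mark the moment the UI is ready again, e.g. when the loading indicator
// disappears from the DOM (the '.loading-indicator' selector is assumed).
const observer = new MutationObserver(() => {
  const stillLoading = document.querySelector('.loading-indicator') !== null;
  const actionStarted = performance.getEntriesByName(ACTION_MARK).length > 0;
  if (!stillLoading && actionStarted) {
    performance.mark(READY_MARK);
    performance.measure(MEASURE_NAME, ACTION_MARK, READY_MARK);

    const entries = performance.getEntriesByName(MEASURE_NAME);
    const duration = entries[entries.length - 1].duration;
    console.log(`click-to-ready: ${duration.toFixed(0)} ms`);

    // Reset so the next interaction is measured independently.
    performance.clearMarks(ACTION_MARK);
    performance.clearMarks(READY_MARK);
    performance.clearMeasures(MEASURE_NAME);
  }
});
observer.observe(document.body, { childList: true, subtree: true });
```

Timings collected this way would remove the manual stopwatch/inspector step; machine-to-machine differences would still need to be accounted for, e.g. by always including a run on a reference machine.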
SIGLUS
This is ending in May
Would like an update in our next call - an overview of what's coming back to core would be helpful.
Extend an invite to keep communication lines open
Offline capabilities in Stock Management
What would we like to reuse or change from the technical approach taken in Requisition?
Epic is created but needs to be refined.
Sprint Showcases
Showcasing via Zoom is not very smooth
laptop resets
microphone issues
Move to Hangouts?
How many people does the free version support? The cap is 10 participants, and we usually have fewer than that, so it should work.
Works for daily standups (SD has the paid version, which supports up to 200 users)
SD could set up the showcase meetings
Recording isn't supported out of the box? What about Google Meet? Come on, Google...