Team Parrot will start Sprint 52, but will focus on fixing the performance issue for single approve and on performance testing for convert to order. The main goal is to demonstrate, or where necessary repair, that we have NOT degraded in performance.
No new ticket work will start until we fix performance. (Except for Jakub - more details below.)
Sprint Q&A: Team Parrot will discuss the performance testing results and each day's status.
Performance Testing & Resolution:
Single approve - the fix doesn't quite get us to the performance guidelines. What else could we do?
Our baseline for single approve is ~20 seconds overall, so that is what we are aiming for in order to say we have comparable performance. Looking at the 3.3.0 numbers, our median value is about ~40 seconds before any fix, so to get down to the ~20-second baseline we need to cut roughly 20 seconds, meaning the fix should ideally bring the approve call itself down to the low single digits.
Any solution will require a regression of all requisition test cases (and maybe convert to order).
Research, collect, and record results. Solutions to single approve performance should be reviewed by Sebastian. The goal is to match or beat the baseline without impacting too many services, which would force more regression testing.
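The budget above can be sanity-checked with a quick calculation (numbers taken from these notes; the "low single digits" target applies to the approve call itself):

```python
baseline_s = 20    # 3.2.1 single-approve baseline (~20s overall, from the notes)
median_330_s = 40  # ~40s median measured on 3.3.0 before any fix

# To land back at the baseline, roughly this much time has to come
# out of the approve path:
reduction_needed = median_330_s - baseline_s
print(reduction_needed)  # → 20 seconds
```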
Convert to order - We need to prove we didn't degrade in performance.
Everyone needs to run the tests and update the performance page with results (test converting one requisition and converting eight, to compare to the baseline, each multiple times).
3.2.1 baseline for convert to order:
- Convert 8 requisitions: 20s (POST /convertToOrder 15s, GET /requisitionsForConvert 2s)
- Convert 1 requisition: 6s (POST /convertToOrder 3s, GET /requisitionsForConvert 2s)
Once we do some testing and have several more data points, analyze whether performance has gotten slower and, if so, what is causing it.
Analyze the convert to order logic and determine solutions to speed it up. Solutions to convert to order performance should be reviewed by Sebastian. The goal is to match or beat the baseline without impacting too many services, which would force more regression testing.
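A minimal harness for collecting the repeated timings described above might look like this (a sketch; real runs would wrap an HTTP call to the perftest endpoints such as POST /convertToOrder instead of the stand-in workload):

```python
import statistics
import time

def time_call(fn, runs=5):
    """Run fn several times and return the median wall-clock time in seconds."""
    samples = []
    for _ in range(runs):
        start = time.perf_counter()
        fn()
        samples.append(time.perf_counter() - start)
    return statistics.median(samples)

# Stand-in workload; a real measurement would call POST /convertToOrder
# (e.g. via requests.post) against the perftest environment.
median_s = time_call(lambda: sum(range(100_000)))
print(f"median: {median_s:.3f}s")
```

Reporting the median over several runs smooths out one-off spikes, which matters here because the degraded/not-degraded call hinges on comparing medians to the 3.2.1 baseline.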
Additional notes:
Sebastian Brudziński - one thought could be to make the supply line configuration change for supervisory nodes (which we recently did for Malawi) in perftest. Do we think that may affect the performance results in perftest when comparing to our baseline?
Sebastian Brudziński - an option is to spin up a 3.2.1 server with the same MW perftest data so we can compare against the 3.3 perftest for convert to order (Chongsun will write up instructions on how and why).
Only if we don't get consistent results on 3.3 perftest
If we don't get consistent results on 3.3, then check whether results are also inconsistent on the 3.2.1 perftest; if they are, we can say we didn't degrade, because there is no consistent baseline for 3.2.1.
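One way to make the "consistent results" judgment above less subjective is to put a number on the spread of the timing samples (a sketch; the 25% cutoff is an assumption for illustration, not something from these notes):

```python
import statistics

def is_consistent(samples, max_rel_spread=0.25):
    """Call a set of timing samples 'consistent' if the relative spread
    (stdev / median) stays under a chosen threshold.

    The 25% default is an assumed cutoff; the team would pick its own.
    """
    return statistics.stdev(samples) / statistics.median(samples) <= max_rel_spread

print(is_consistent([19.5, 20.1, 20.8, 21.0]))  # → True  (tight cluster)
print(is_consistent([12.0, 40.0, 20.0, 55.0]))  # → False (all over the place)
```

Applying the same check to both the 3.3 and 3.2.1 perftest runs would give an apples-to-apples basis for the "no consistent baseline" argument.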
Jakub Kondrat can move forward with creating tickets for the fix and replicating it to other services.
Chongsun Ahn (Unlicensed) will support all performance testing questions and feedback (Josh is out on Thursday).
Josh, please add tickets here and flag which are for Team ILL/Parrot:
(Team ILL)
(Team ILL)
(Team ILL)
UI
Other?
Should we do this? Nikodem Graczewski (Unlicensed): Looks like this issue has been resolved. I've marked this ticket as "Dead".
Bugs and Tech Debt
Sebastian Brudziński: We would like to start a new process where a percentage of every sprint is set aside for Bugs and Tech Debt. For bugs we propose 20% of the sprint, and you would pull in bugs from the prioritized bug list that Team ILL maintains, either on these wiki pages or directly from the Jira backlog (Sam Im (Deactivated) will provide the list). For Tech Debt we propose you/Team Parrot choose what items add the most value (Josh Zamor has ideas about how to get this started).
Team has groomed and estimated Tech Debt for this sprint:
Team Ona
Team ILL
Team ILL still needs to conduct sprint planning for 52.