
Date

7am PST / 4pm CET

https://zoom.us/j/211353423

Attendees


Discussion items

Time | Item | Who | Notes

5m | Agenda and action item review | Josh |

 | Build failures | |
  • Errors, Failures & Notifications
    • Statuses
      • Failed - the job couldn't complete
      • Unstable - tests recently didn't pass (e.g. functional tests vs. performance tests)
    • CI & Notifications
      • Testing that the code builds vs. notifying the team there is a problem
        • Master branch of a service vs. cross-service tests
      • Are we genuinely having a lot of build failures?
    • Early deployments (right after unit tests pass)

(next time) | Regression testing & environments | Sam Im (Deactivated), Joanna Bebak (Deactivated), Wesley Brown, Sebastian Brudziński |
  • UAT, QA, Regression testing and RCs.

Notes


Build Failures


  • It's not clear from looking at the Slack channel (#build) whether we're having genuine errors.
    • A genuine failure is one on the critical path for delivery (the master branch of a service).
    • If the issue is with the build server itself, that's arguably not a genuine failure.


What we want:


  • Statuses
    • failure == build pipeline issue
    • unstable == quality gates didn't pass (not just linters; any quality gate)


  • Channels
    • #dev - for unstable notifications, or a build pass (success) resulting from a commit
      • For failures that don't originate from the master branch of a component, send email only; don't post to #dev
    • #build - for failures of the build pipeline (see the sketch below)
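
A minimal sketch of how these statuses and channels could fit together, assuming the jobs are Jenkins declarative pipelines on a multibranch job (for BRANCH_NAME) with the Slack and Mailer plugins; the stage contents, channel names, and email address are illustrative assumptions, not our actual setup:

    pipeline {
        agent any
        stages {
            stage('Build') {
                steps {
                    // A problem here is a pipeline issue -> result FAILURE
                    sh './gradlew assemble'
                }
            }
            stage('Quality gates') {
                steps {
                    // Quality gates that don't pass (tests, linters, coverage, ...)
                    // mark the build UNSTABLE rather than FAILED
                    catchError(buildResult: 'UNSTABLE', stageResult: 'UNSTABLE') {
                        sh './gradlew check'
                    }
                }
            }
        }
        post {
            success {
                slackSend channel: '#dev', message: "Passed: ${env.JOB_NAME} #${env.BUILD_NUMBER}"
            }
            unstable {
                slackSend channel: '#dev', message: "Quality gates didn't pass: ${env.JOB_NAME} #${env.BUILD_NUMBER}"
            }
            failure {
                script {
                    if (env.BRANCH_NAME == 'master') {
                        slackSend channel: '#build', message: "Pipeline failure: ${env.JOB_NAME} #${env.BUILD_NUMBER}"
                    } else {
                        // Failures not from master: email only, no channel post
                        mail to: 'team@example.com',
                             subject: "Pipeline failure: ${env.JOB_NAME} #${env.BUILD_NUMBER}",
                             body: env.BUILD_URL
                    }
                }
            }
        }
    }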


Early Deployments

  • Otherwise, time from push to deployment on the test server would be > 1 hr (see the sketch after this list)
  • Today it's easier to let a commit sit as failed or unstable, because the change is already available on the test server for QA and UAT
    • Trivial fixes can reach QA quickly, before the pipeline is done
    • For long-standing failures, it's okay to leave the build as failed
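
A sketch of the early-deployment ordering, under the same Jenkins assumption; the deploy script, server name, and Gradle task names are hypothetical:

    pipeline {
        agent any
        stages {
            stage('Build & unit tests') {
                steps {
                    sh './gradlew test assemble'
                }
            }
            stage('Deploy to test server') {
                steps {
                    // Deploy as soon as unit tests pass, so the change is on
                    // the test server for QA/UAT without waiting > 1 hr for
                    // the rest of the pipeline
                    sh './deploy.sh test-server'   // hypothetical deploy script
                }
            }
            stage('Remaining quality gates') {
                steps {
                    // Slower suites run after the early deployment; a miss
                    // here marks the already-deployed build UNSTABLE
                    catchError(buildResult: 'UNSTABLE', stageResult: 'UNSTABLE') {
                        sh './gradlew functionalTest performanceTest'
                    }
                }
            }
        }
    }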


Next steps:

  • Draw out an exemplar pipeline and its gatekeepers, with environments
  • Revisit in the future



Regression Testing & Environments


Summary:

  • Mid-release regression testing is performed by the QA team to find big (blocker and critical) issues early
  • Problem: we have a large number of big issues caused by work that is still in progress (e.g. the convert-to-order screen can't be reached)
  • Solution: a separate environment with an appropriately snapshotted state of the code, suitable for regression testing, without all the in-progress sawdust (see the sketch below)
    • Concern: it wouldn't have the latest changes for the 1-2 weeks that testing goes on. The QA team usually does this as their lowest priority, so it takes a while.
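
A rough sketch of pinning a regression environment to a snapshot rather than the moving master branch, again assuming a Jenkins job; the tag parameter, repository URL, and deploy script are hypothetical:

    pipeline {
        agent any
        parameters {
            // Tag marking the code state chosen for this regression round
            string(name: 'SNAPSHOT_TAG', defaultValue: '', description: 'Code state to regression-test')
        }
        stages {
            stage('Checkout snapshot') {
                steps {
                    // Pin to a fixed tag so in-progress work on master doesn't
                    // surface as blocker/critical issues during regression
                    checkout([$class: 'GitSCM',
                              branches: [[name: "refs/tags/${params.SNAPSHOT_TAG}"]],
                              userRemoteConfigs: [[url: 'https://example.com/repo.git']]])
                }
            }
            stage('Deploy to regression environment') {
                steps {
                    sh './deploy.sh regression'   // hypothetical script and env name
                }
            }
        }
    }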


Next steps

  • Need Sam and/or Joanna as stakeholders to discuss this more widely

Action Items

  • Sebastian Brudziński to create a ticket for making build statuses consistent across jobs (and for the channel notifications we want)