2019-02-19 TC Meeting notes

Date

Feb 19, 2019

7am PST / 4pm CET

Meeting Link

https://zoom.us/j/211353423

Attendees

  • @Łukasz Lewczyński

  • @Mateusz Kwiatkowski

  • @Klaudia Pałkowska

  • @Wesley Brown

  • @Sebastian Brudziński

  • @Josh Zamor

  • @Chongsun Ahn

  • @Elias Muluneh

  • @Paulina Buzderewicz



Discussion items

Time

Item

Who

Notes


5m

Agenda and action item review

Josh





Build failures



  • Errors, Failures & Notifications

    • Statuses

      • Failed - Job couldn't complete

      • Unstable - Tests recently didn't pass (e.g. functional tests vs performance tests)

    • CI & Notifications

      • Test code builds vs notify team there is a problem

        • Master branch in service vs cross-service tests

      • Are we genuinely having a lot of build failures?

    • Early deployments (right after unit tests pass)

(next time)

Regression testing & environments

@Sam Im, @Joanna Bebak, @Wesley Brown, @Sebastian Brudziński

  • UAT, QA, Regression testing and RCs.

Notes



Build Failures



  • It's not clear from looking at the Slack channel (#build) whether we're having genuine errors.

    • A genuine failure is one on the critical path for delivery (the master branch of a service)

    • If there's an issue with the build server, then perhaps that's not a genuine failure.



What we want:



  • Statuses

    • failure == Build pipeline issue

    • unstable == Quality gates didn't pass (don't just think of linters)



  • Channels

    • Dev - For unstable notifications or a build pass (success) as a result of a commit

        • For failures that don't originate from the master branch of a component, send email only; don't post to #dev

    • Build - Failures of the build pipeline
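
The status-to-channel mapping above can be sketched as a small routing function. This is a hypothetical illustration only: the status names follow Jenkins conventions (failure, unstable, success), the routing rules come from the bullets above, and the email-only treatment of non-master unstable/success results is an assumption (the notes only state it for failures).

```python
def route_notification(status: str, branch: str) -> str:
    """Route a CI build result to a notification target.

    failure  == build pipeline issue      -> #build
    unstable == quality gates didn't pass -> #dev
    success  (build pass after a commit)  -> #dev
    Results not from a component's master branch go to email only.
    """
    if branch != "master":
        # Assumption: all non-master results are email-only; the notes
        # state this explicitly only for failures.
        return "email"
    if status == "failure":
        return "#build"
    if status in ("unstable", "success"):
        return "#dev"
    raise ValueError(f"unknown status: {status}")
```
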



Early Deployments

  • Time from push to deployment on the test server would be > 1 hr

  • Today it's easy for a commit to sit as failed & unstable, because the change is already available on the test server for QA & UAT

    • For trivial fixes, the change reaches QA quickly, before the pipeline is done

    • For long-standing failures, it's okay to leave it as failed
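
The early-deployment idea above amounts to reordering the pipeline so the test-server deploy is gated only by the fast checks. A minimal sketch; the stage names are illustrative assumptions, not the actual OpenLMIS Jenkins jobs:

```python
# Illustrative stage ordering for early deployments; stage names are
# assumptions, not the real OpenLMIS pipeline configuration.
EARLY_DEPLOY_PIPELINE = [
    "build",               # compile and package the service
    "unit-tests",          # fast feedback; the only gate before deploy
    "deploy-test-server",  # change is now visible to QA & UAT
    "contract-tests",      # slower checks run after the deploy...
    "quality-gates",       # ...and mark the build 'unstable' on failure
]

def gates_before(pipeline, stage):
    """Stages that must pass before `stage` runs."""
    return pipeline[:pipeline.index(stage)]
```

With this ordering, a trivial fix reaches the test server after only `build` and `unit-tests`, rather than after the full hour-plus pipeline.
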



Next steps:

  • Draw out an exemplar pipeline and its gatekeepers w/ environments

  • Revisit in future





Regression Testing & Environments



Summary:

  • Mid-release regression testing is performed by the QA team to find big (blocker and critical) issues early

  • Problem: We have a large number of big issues caused by work that is in progress (e.g. the Convert to Order screen can't be reached)

  • Solution: Have a separate environment with an appropriately snapshotted state of the code that's suitable for regression testing, without all the in-progress sawdust.

    • Concern: It wouldn't have the latest changes for the 1-2 weeks the testing goes on. Usually the QA team does this as their lowest priority, so it takes a bit of time.



Next steps

  • Need Sam and/or Joanna as stakeholders to discuss this more widely

Action Items

@Sebastian Brudziński to create a ticket for making build statuses consistent across jobs (and for the channel notifications we want)
