
...

Note: Working Draft

This is a working draft of a new release process that OpenLMIS is considering adopting in Fall 2017 for future releases. Once this is reviewed, including by Technical and Product committee stakeholders, this information will be incorporated into the official release documentation on docs.openlmis.org and this wiki page will be archived.

Core Distribution Release Process

[Diagram: Release Candidate Process (2017 draft)]

...

  • Multiple agile teams develop OpenLMIS, in addition to receiving contributed services/components and reviewing and incorporating Pull Request contributions
  • Microservices architecture provides separation between the numerous components
  • Automated test coverage prevents regressions and gives the team the safety net to release often
  • Continuous Integration and Deployment (CI/CD) ensures developers get immediate feedback and QA activities can catch issues quickly (a build-status sketch follows this list)
  • Code is peer reviewed during Jira ticket workflow and in Pull Requests
  • Documentation, CHANGELOGs and demo data are kept up-to-date as code development happens in each service/component
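
As a concrete illustration of the CI/CD feedback mentioned above, a release manager can script a quick check that every component's latest build is green before moving toward a release. The sketch below is only illustrative: it assumes a Jenkins-style JSON API, and the server URL and job names are hypothetical placeholders rather than the actual OpenLMIS CI configuration.

```python
# Minimal sketch: check the last CI build result for each component.
# Assumes a Jenkins-style endpoint (<job>/lastBuild/api/json); the base URL
# and job names are hypothetical placeholders.
import json
import urllib.request

CI_BASE = "https://ci.example.org/job"  # hypothetical CI server
COMPONENTS = ["openlmis-requisition", "openlmis-referencedata"]  # example job names

def last_build_result(job: str) -> str:
    """Return the last build result for a job, e.g. 'SUCCESS' or 'FAILURE'."""
    url = f"{CI_BASE}/{job}/lastBuild/api/json"
    with urllib.request.urlopen(url) as resp:
        return json.load(resp).get("result") or "UNKNOWN"

if __name__ == "__main__":
    failing = [c for c in COMPONENTS if last_build_result(c) != "SUCCESS"]
    if failing:
        print("Red builds, hold the release:", ", ".join(failing))
    else:
        print("All component builds are green.")
```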

Do Release Preparation

  • Verify the prerequisites, including that all automated tests are passing and all CHANGELOGs are up-to-date (a minimal CHANGELOG check is sketched after this list); see Release Prerequisites
  • Conduct a manual regression test cycle 1-2 weeks before the release, if possible
  • Shift agile teams' workloads to bugs and clean-up, rather than committing large new features or breaking changes ("slow down" 1-2 weeks before release)
  • Write draft Release Notes including sections on 'Compatibility', 'Changes to Existing Functionality', and 'New Features'
  • The release schedule is currently monthly patch releases and quarterly minor releases (as of 3.2.0; this has been the schedule since 3.0.0)
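
Checking the prerequisites listed above lends itself to light automation. The sketch below is a minimal example of verifying that each component's CHANGELOG mentions the version planned for the release; the directory layout and version numbers are illustrative assumptions, and the Release Prerequisites page remains the authoritative checklist.

```python
# Minimal sketch: confirm each component's CHANGELOG mentions the planned version.
# The component directories and versions below are illustrative assumptions.
from pathlib import Path

PLANNED = {
    "openlmis-requisition": "6.3.4",
    "openlmis-referencedata": "12.0.0",
}

def changelog_mentions(component_dir: str, version: str) -> bool:
    """True if the component's CHANGELOG.md exists and mentions the version."""
    changelog = Path(component_dir) / "CHANGELOG.md"
    return changelog.exists() and version in changelog.read_text(encoding="utf-8")

if __name__ == "__main__":
    for component, version in PLANNED.items():
        status = "up-to-date" if changelog_mentions(component, version) else "MISSING ENTRY"
        print(f"{component} {version}: CHANGELOG {status}")
```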

...

  • Each component that has any changes since the last release is released and semantically versioned (e.g., openlmis-requisition v6.3.4 or openlmis-newthing v1.0.0-beta); a version-precedence sketch follows this list
    • Note: Usually, all components are released with the Reference Distribution. Sometimes, due to exceptional requests, the team may release a service/component at another time even when there is not another Reference Distribution release. Conversely, there are times a Reference Distribution will be released that includes an older, stable version of a service/component rather than making or including a new release of a specific service/component.
  • Reference Distribution Release Candidate is released with these components (e.g., openlmis-ref-distro v3.7.0-rc1)
  • Share Release Candidate with the OpenLMIS community along with the draft Release Notes and invite testing and feedback
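
The version numbers above follow semantic versioning, where a pre-release tag such as -rc1 or -beta sorts before the corresponding final release. The sketch below illustrates that precedence; it is a simplified comparison (full SemVer pre-release rules compare dot-separated identifiers), not OpenLMIS-specific code.

```python
# Minimal sketch of semantic-version precedence: 3.7.0-rc1 sorts before 3.7.0.
# Simplified; full SemVer compares dot-separated pre-release identifiers.
def parse(version: str):
    core, _, pre = version.partition("-")
    major, minor, patch = (int(part) for part in core.split("."))
    # A pre-release (rc1, beta, ...) has lower precedence than the final release.
    return (major, minor, patch, 0 if pre else 1, pre)

examples = ["3.7.0-rc1", "3.7.0-rc2", "3.7.0", "3.6.1", "1.0.0-beta"]
print(sorted(examples, key=parse))
# ['1.0.0-beta', '3.6.1', '3.7.0-rc1', '3.7.0-rc2', '3.7.0']
```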

...

Implementations are typically composed of multiple core OpenLMIS components plus custom components or extensions, translations and integrations. It is recommended that OpenLMIS implementations follow a process similar to the one above to receive, review and verify that updates of OpenLMIS v3 perform correctly with their customizations and configuration.

Key differences for implementation releases:

  • Upstream Components: Implementations treat the OpenLMIS core product as an "upstream" vendor distribution. When a new core Release Candidate or Release is available, implementations are encouraged to pull the new upstream OpenLMIS components into their CI/CD pipeline and conduct testing and review.
  • Independent Review: It is critical for the implementation to conduct its own Review Period. It may be a process similar to the diagram above, with multiple Release Candidates for that implementation and with rounds of manual regression testing to ensure that all the components (core + custom) work together correctly.
  • Conduct Testing/UAT on Staging: Implementations should apply Release Candidates and Releases onto testing/staging environments before production environments. There may be a full manual regression test cycle or a shorter smoke test as part of applying a new version onto the production environment. There should also be a full set of automated tests and performance tests, similar to the core release process above, but run with production data in place to verify performance with the full data set. The testing/staging environment should have a recent copy of production data and should mirror the production environment as closely as possible (a minimal smoke-test sketch follows this list).
  • Follow Best Practices: When working with a production environment, follow all best practices: schedule a downtime/maintenance window before making any changes; take a full backup of code, configuration and data at the start of the deployment process; test the new version before re-opening it to production traffic; always have a roll-back plan if issues arise in production that were not caught in previous testing.
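
As a sketch of the shorter smoke test mentioned above, an implementation might probe a few key URLs on the staging environment before promoting a new version to production. The host and paths below are assumptions for illustration only, not documented OpenLMIS endpoints.

```python
# Minimal smoke-test sketch for a staging environment before promotion.
# The staging host and probe paths are illustrative assumptions.
import urllib.request

STAGING = "https://staging.example.org"  # hypothetical staging host
PROBES = ["/", "/api/"]                  # hypothetical paths to check

def reachable(path: str) -> bool:
    """True if the staging server answers the probe without a server error."""
    try:
        with urllib.request.urlopen(STAGING + path, timeout=10) as resp:
            return resp.status < 500
    except Exception:
        return False

if __name__ == "__main__":
    failures = [p for p in PROBES if not reachable(p)]
    if failures:
        print("Smoke test failed for:", ", ".join(failures))
    else:
        print("Staging smoke test passed; proceed with full regression/UAT.")
```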

...