March 2016 Re-Architecture Design Workshop

Agenda attached below


Day 1

Introductions & Agenda Overview

  • Chatham House Rule - Notes will be de-identified.
  • Project Objective: Ability for donors to invest in building something once and have it be available for all instances on that version or higher. Concern: In any specific project, does the donor really want a country-specific result (less costly, more specific), or do they want to build once and reuse? Can we get donors to sign on the dotted line about this?
  • Target Application: Headless, Reference UI.

Ideal Project Timeline

  • MVP by the end of the year. Date driven (not feature driven).
  • What is the driver of this end of year date? There is no hard deadline, but in general there are multiple issues driving the need for a tight delivery timeline: 
    • More than 6 months and it turns into a waterfall project 
    • New implementations and donors don't want to wait
    • Need a clear date and momentum to support partner, country, and implementor needs. 
  • Is a December 2016 date realistic? 
    • Will contracting with BMGF and dev partner really be done by May 1? What about the holidays in December? 
    • Suggested Alternative – beta in October, and MVP in January.
  • What is the scope of the MVP? 
    • Is it the 2.0 toggle-on features? More/Less? 
    • Needs to be defined (not agreed yet). – Added to Tuesday agenda. 
  • For consideration when making a decision on the architectural design: Degree to which ongoing development can be parallelized to accelerate development.

Airing of Grievances

  • For Review: OpenLMIS Pain Points
  • Pain points page updated per new additions from this meeting
  • Top six pain points from today's voting:

Principle: Shared Value

  1. No framework for creating dashboards and reports
  2. Lack of 'offline first' functionality
  3. Ability to contribute w/out toe-stepping
  4. Monolithic application tightly coupled with UI
  5. Scant API and documentation
  6. Country-specific data collection forms w/out forking

  • Compared this list with the results from the product committee. Discussed priority of migration, which was listed in the product committee list, but not on the list in this meeting. Noted that we don't have stakeholders around existing migrations in this meeting. Proposal that funding for a migration path should be approached separately - could either be part of the "sushi menu" in the April proposal to BMGF (JSI may take the lead on this), or subsequent to VIMS deployment and 3.0 beta release later this year.  

Feature Variability Matrix Review

  • For Review: Feature Variability Matrix
  • Review of Deployments:
    • VIMS is on a dev branch of eLMIS and is not yet deployed - What is the LOE to bring VIMS into 2.0? (parking lot)
    • All eLMIS deployments (Côte d'Ivoire, TZ, Zambia, Zanzibar) are on the same code base (eLMIS Bitbucket)
    • No versions are currently funded to upgrade to 2.0 or 3.0
  • Review of Features:

OpenLMIS Domains, as User Stories

See OpenLMIS Domains.

OpenLMIS Domain Modeling

Reviewed CRDM work and agreed on the following primary domains:

  • Requisitions
  • Receiving
  • Inventory Management
  • Fulfilling Requisitions
  • Forecasting
  • Cold Chain Equipment Management
  • Informed Push

See Domain Modeling Technical Deep Dive

Key Items To Address Later in Design Session Week

  • What is the MVP? Should it include a responsive design refresh?
  • Principle: How to approach problem to promote parallel development
  • How do we incentivize building in core, not forking, and governance through code?


3.0 - re-architecture MVP release

3.x - 18 month scope window

4.x - belongs in the domain - not scoped for 3.1


Day 2

AM Session - MVP Definition Wall Exercise

Sarah Jackson to add photos

PM Session - Architecture

Principles for OpenLMIS Architecture:

  • Modularity
  • Loosely Coupled but Highly Cohesive
  • Extensions
  • Backwards Compatibility of Core
  • Shared Value
  • Performant at the Last Mile
  • Scalable to Regional Size
  • Secure
  • Deployable w/o custom code
  • Offline as first class
  • Easy for New (low capacity) Developers to Start
  • Easy for contributors to be good citizens with minimal coordination
  • Architecture supports governance through code

Potential Architectures


  • Modular
  • Extensible (for anything)
  • Configurable (for high-value things)
  • Modules backwards-compatible with core upgrades
  • Shared Value
  • Performant (including at last mile)
  • Scalable up to regional size
  • Secure
  • Deployable without custom code
  • Offline as first class
  • Simplicity of Development for new developers
  • Architecture enables/supports desired community governance and process

Microservices Approach:

Requisition Service

  • “New Requisition with blank status” service endpoint

    • Refers to reference data for facility, program, commodity

    • Has-many Requisition Items

  • “Save Draft Requisition” service endpoint

  • “Submit Requisition” service endpoint

  • “Approve Requisition” service endpoint

    • (sync approach) Creates an Order object by calling another service

  • (async approach) publish “Feed of Approved Requisitions”

Fulfillment Service

  • (async approach) subscribes to the Requisition service’s feed of Approved Reqs

  • (sync approach) “Create Order”

  • “Query Unfulfilled Orders”

  • “View Order Details”
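As a rough sketch, the state transitions behind these endpoints might look like the plain-Java stub below. Class and method names are assumptions drawn from the notes, not a final API; in the sync approach, approve would call the Fulfillment service's "Create Order" instead of publishing to a feed.

```java
import java.util.ArrayList;
import java.util.List;

// Statuses implied by the endpoint list above.
enum RequisitionStatus { BLANK, DRAFT, SUBMITTED, APPROVED }

class Requisition {
    final String id;
    RequisitionStatus status = RequisitionStatus.BLANK;  // "New Requisition with blank status"
    Requisition(String id) { this.id = id; }
}

public class RequisitionService {
    private final List<Requisition> approved = new ArrayList<>();

    public Requisition createBlank(String id) { return new Requisition(id); }
    public void saveDraft(Requisition r)      { r.status = RequisitionStatus.DRAFT; }
    public void submit(Requisition r)         { r.status = RequisitionStatus.SUBMITTED; }

    // Sync approach: this would call FulfillmentService.createOrder(r) instead.
    public void approve(Requisition r) {
        r.status = RequisitionStatus.APPROVED;
        approved.add(r);  // async approach: published on the approved-requisitions feed
    }

    // "Feed of Approved Requisitions" that the Fulfillment service subscribes to.
    public List<Requisition> approvedFeed() { return approved; }

    public static void main(String[] args) {
        RequisitionService svc = new RequisitionService();
        Requisition r = svc.createBlank("REQ-1");
        svc.saveDraft(r);
        svc.submit(r);
        svc.approve(r);
        System.out.println(r.status + " feedSize=" + svc.approvedFeed().size());
        // prints: APPROVED feedSize=1
    }
}
```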

Scenario 1: add a new calculation algorithm for suggested order size

 => option: we planned for this, so this is its own microservice. You just replace it

 => option: we planned for this, so

Scenario 2: collection of varying-by-country program data, to be collected with requisition (as a separate, required, section/tab on the requisition form)

 => option: store that form submission as json (either in its own document DB, or as a free-text column in the requisition table)

 => option: first, user submits program data to its own service. Then, submit requisition to req service (which may check that other data was submitted before it accepts/approves a submission)
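A minimal sketch of the first option: the country-specific form submission is serialized to a schema-free JSON string held on the requisition (e.g. in a text/jsonb column), so core tables need no per-country columns. The class and field names here are hypothetical.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Hypothetical sketch: varying-by-country program data stored as free-form
// JSON alongside the requisition, rather than as dedicated columns.
public class RequisitionWithProgramData {
    public String facilityId;
    public String programDataJson;  // would live in a text/jsonb column

    // Naive string-only JSON serializer, just for illustration; a real
    // implementation would use a JSON library.
    public static String toJson(Map<String, String> fields) {
        StringBuilder sb = new StringBuilder("{");
        boolean first = true;
        for (Map.Entry<String, String> e : fields.entrySet()) {
            if (!first) sb.append(",");
            sb.append("\"").append(e.getKey()).append("\":\"").append(e.getValue()).append("\"");
            first = false;
        }
        return sb.append("}").toString();
    }

    public static void main(String[] args) {
        Map<String, String> programData = new LinkedHashMap<>();
        programData.put("coldChainStatus", "FUNCTIONAL");  // illustrative field names
        programData.put("stockoutDays", "3");

        RequisitionWithProgramData req = new RequisitionWithProgramData();
        req.facilityId = "F-001";
        req.programDataJson = toJson(programData);
        System.out.println(req.programDataJson);
        // prints: {"coldChainStatus":"FUNCTIONAL","stockoutDays":"3"}
    }
}
```

The trade-off noted in the option itself: storage stays flexible, but querying inside the JSON blob is harder than querying first-class columns.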

Scenario 3: additional approval step to compare to historic data from DHIS2

 => option: fork the requisition service and change a small piece of it

 => option: split the requisition service (pulling out the approval service)

How do we do reporting?

=> option: ETL process (where “E” calls API, not DB) and generates your cubes

=> option: data lake

Cold chain equipment:

  • Fulfilling needs to start querying cold chain to ensure machines available

  • R+R might be where you capture status of cold chain equipment

Java Relational Domain Model and Extension Points Approach:

Standard layered architecture with proper object-oriented design at the domain layer.

Services used for orchestration (and for reference data, just for CRUD operations)

  • Repositories (talk to single shared database)
  • Domain Layer (object-oriented Java objects)
  • Business Layer (Java API)
  • REST API endpoints (“Controllers”)

Tied together with a Spring IoC container and application context, or equivalent

Modules can include:

  • Java code (e.g. implementations of the Strategy pattern)

  • Instructions about which extensions attach to which (core) extension points

  • (HTML+JS+CSS) -OR- you fork/contribute to the reference UI

A module that adds significant functionality would add all layers (and it could add tables to the shared database)

Requisition Service & Fulfillment Service (actually these are the same)

Scenario 1: add a new calculation algorithm for suggested order size

=> implement the CalculationStrategy interface in your own module’s code, and load this (instead) via the Spring application context (your module is a JAR file)
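The extension-point idea in Scenario 1 can be sketched in plain Java (Spring wiring omitted; all names here are illustrative, not actual OpenLMIS APIs). Core ships a default strategy; a country module ships a JAR with a replacement, and the application context decides which one is injected.

```java
// Assumed extension point: core defines the interface, modules provide implementations.
interface CalculationStrategy {
    int suggestedOrderQuantity(int averageMonthlyConsumption, int stockOnHand);
}

// Core's default: order up to two months of stock.
class DefaultCalculation implements CalculationStrategy {
    public int suggestedOrderQuantity(int amc, int soh) {
        return Math.max(0, 2 * amc - soh);
    }
}

// A country module's replacement, loaded instead of the default.
class BufferedCalculation implements CalculationStrategy {
    public int suggestedOrderQuantity(int amc, int soh) {
        return Math.max(0, 3 * amc - soh);  // keep a larger buffer
    }
}

public class RequisitionCalculator {
    private final CalculationStrategy strategy;  // injected by the Spring context in practice

    public RequisitionCalculator(CalculationStrategy strategy) { this.strategy = strategy; }

    public int calculate(int amc, int soh) { return strategy.suggestedOrderQuantity(amc, soh); }

    public static void main(String[] args) {
        System.out.println(new RequisitionCalculator(new DefaultCalculation()).calculate(100, 40));  // 160
        System.out.println(new RequisitionCalculator(new BufferedCalculation()).calculate(100, 40)); // 260
    }
}
```

Nothing in the core requisition code changes; only the bean definition (which implementation is wired in) differs per deployment.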

Scenario 2: collection of varying-by-country program data, to be collected with requisition (as a separate, required, section/tab on the requisition form)

=> strategy pattern for “is the program data recorded, so I can save a requisition”; this is a part of the business layer

Scenario 3: additional approval step to compare to historic data from DHIS2




Microservices Approach:

Pros:

  • Allowing people to fork one microservice limits the effect of bad forking behavior (many smaller repositories => people will tend to fork less/smaller)

  • Someone else can create a new service (or override one) in any language/framework


Cons:

  • Overriding one endpoint within a service requires “forking” the whole service if an extension point has not been created

  • Complex model compared to monolithic application

Java Relational Domain Model and Extension Points Approach:


Pros:

  • Familiar model for typical Java developers

  • Easy to extend/configure at known/predefined points

Double-edged Swords:

  • Forced to go through shared governance process to change the monolithic domain model


Cons:

  • Must use Java (or other JVM languages)

  • (Without letting modules provide UI components) pushes forking of the reference UI


Monolithic Core vs. Independent Services for Bounded Contexts: Different Philosophies


The above discussion didn't flow very smoothly, and we eventually realized that we're mainly arguing about different philosophies about the value proposition, rather than actual architecture...

Actually, we're really talking about Monolithic Core vs. Independent Services for Bounded Contexts

Independent Services for Bounded Contexts:

  • Reducing dependencies to allow more independent evolution of functionality without significant worry about breaking changes

    • Allows people to work in a smaller code repo => less stepping on toes

  • Shared value is the agreed-upon contracts (for state transitions)

  • Scale development better by letting different teams work on different services at different paces

  • Microservices are more complex, rather than simpler, as an architecture. (But they can allow an individual service to be simpler.)

Monolithic Core

  • Shared value is the shared core domain model

  • If someone wants to change something core (e.g. requisitions), force people to go through community governance, to create the shared value

Day 3

Architecture, continued...

  • Is there a false dichotomy between a microservices approach and a monolithic core? Is this really a continuum of how many services are created?
  • We agree on:
    • OpenLMIS is headless providing services to UI
    • There will be more than one repository
  • How to approach this today? Work on the design and let that drive out the architecture, or decide on the architecture first and then proceed to design? (Differences of opinion in the group.)
  • There are two different, but related, discussions: A) Architecture, B) How we arrange OpenLMIS. Decided to discuss B.

How We Arrange OpenLMIS


a) Core - Without this, you don't have OpenLMIS. OpenMRS model.

b) Mix and Match Model - Use what you like. Salesforce model.



  • Core: functionality that is available out of the box, and is managed through governance, without the user interface. The Core exists to support shared benefit, not to prevent people from doing what they want to do.
  • Bounded Context: A bounded context does not have a dependency on another context. For example: Requisition / Fulfillment / Inventory Mgmt / etc. 


  • Is there significant variation in what OpenLMIS needs to do, or is it pretty much the same? Variation to date has been dictated by partners; the actual domain may be more diverse (examples: humanitarian response, last mile health)
  • Example: Country A doesn't like the requisition process and wants to do something different. How would that work in both scenarios? Is the result the same?
    • A) Core Example
      • Assume there are no extension points to support what they need to do
      • We would recommend that they write a brand new requisition process as a module (note: it's still possible that they could instead choose to fork the core)
      • As a trailing innovation model, the community may choose to change or replace the requisition process
      • There is now R and R1
      • Community determines what should happen next (trailing innovation): Should R1 replace R? Add new extension points? Do nothing?
    • B) Mix and Match Model 
      • Project needs a different requisition process
      • They fork the requisition process (service) and make their changes
      • There is now R and R1
    • Are these results different? How does governance vary in these models?
      • Both will require coordination and governance:
        • In Core model, coordination occurs around the core codebase and extension points
        • In Mix and Match model, coordination occurs around the interface boundaries


This table records the pros and cons that were mentioned about both approaches. In most cases there was no agreement about whether these are actually pros or cons.


Core Approach (OpenMRS Approach)

  • Encourages community to come back together on Core
  • Higher reliability of core product (…?)
  • Is there really a well-understood definition of what should be in Core?
  • Higher merge cost for bringing in changes
  • Slower pace of change – devs need to wait for updates from core – encourages forking
  • If the extension points aren't there, you have to fork, or wait for the global team to add the extension point

Mix and Match Approach (Salesforce Approach)

  • Supports quick innovation and product evolution
  • Enables multiple partners to work independently
  • Network effect between microservices – difficult to coordinate and ensure the application works
  • If data is in a shared data store, the microservices aren't really independent
  • What is OpenLMIS in this scenario? Just the infrastructure?

Points of Agreement

1)   There will be several repositories that comprise OpenLMIS: > 1

2)   The collection of repositories in the OpenLMIS org on GitHub = the product (at least, the code of the product)

3)   The governance of OpenLMIS comprehensively manages all of the repos in the OpenLMIS GitHub – no matter the underlying architecture

4)   The “core” is the set of headless modules/services/objects which implement the domain model

5)   The reference UI comprises the set of code (which may be in multiple repos) that implements an out-of-the-box UI exposing OpenLMIS functionality. The ref UI must also make it easy to override particular functionality/look/feel in the user interface w/o forking the whole reference UI

6)   The domain model is defined as a set of bounded contexts for logistics, and has extension points.

7)   The number of bounded contexts and their definition is still TBD

8)   We have still not decided whether to implement the domain model as a set of microservices or via a Java object model with API interfaces


PM Sessions:

Break-out into technical design track, and product management track.

Product management

Discussed the grant proposal, and aligning white-label mobile app work with the re-architecture.

Developed basic user stories list for Inventory Management domain

Day 4

Package Standards to Investigate


Stock Management in 3.x

What should be included in scope for 3.x? There is a feeling that what is currently in the 2.0 branch isn't completely sufficient, but in the absence of complete multi-country requirements, the suggestion is to consider for inclusion what currently exists in 2.0 plus the user stories being developed by Brian, Lakshmi and Sarah.


Technical Deep Dive

See Domain Modeling Technical Deep Dive

UI Technologies

Lean towards continuing with AngularJS, for now. (Current team members are familiar with AngularJS, and we'd be able to preserve more of the existing front-end code.) A module UI could potentially let future UI screens be done in another technology.

Current UI is not using Angular routing, but rather is a collection of many single-page apps. (This might actually make it easier for us to move towards a modular UI.)

OpenMRS has already written a plugin module architecture based on Spring, that lets modules add Java code, and attach to extension points. If this is basically what we want to do, we can copy a lot of it. (E.g. some of this code.)


Day 5

Case Study in Architecture: OpenMRS

The goal was to develop a platform, but that was not achieved in the first cut, so there was a need to re-architect to introduce a modular architecture. The InfoPath form-entry was the first module introduced, and for a few years most implementations simply used the core plus one of three form-building options: InfoPath (the first module), HTML, or XForms.

The UI eventually needed to be redone to support real-time data entry (vs supporting the paper form process).

Making changes to OpenMRS core was a long process, taking on average about 6 months to move from ideation to release, and sometimes longer, which became a problem when work on the UI and workflows picked up steam. It simply took too long, so the modular approach was needed and became the preference of many contributors.

2005 - 2011: Development occurred mostly on 'core'/trunk path, but then rapid and significant shift to modules as it provided contributors the ability to iterate rapidly and deliver their product faster. In retrospect, the 'shift' should have happened earlier.

Current State: Still concept of 'Core' (data model and Web Services exposing the Core), but most work is now done in modules. Core prioritizes stability over flexibility. Never envisioned that there would be so many modules, but the reference application now includes more than 30! (There are more than 100 modules altogether.) 

Some Illustrative Notes:

Concept Dictionary: Core has no hardcoded data model tables, e.g. "Vitals" table with 'weight', 'height', etc.; rather there are 'entity', 'attribute' & 'value' tables, which makes it (much) easier to customize, though it makes analysis a bit more difficult. This is the 'heart' of OpenMRS Core – provide the building blocks for others to use to build their own pieces of functionality.
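The entity-attribute-value idea can be illustrated with a small in-memory sketch (class and attribute names are hypothetical, not OpenMRS code): new attributes need no schema change, at the cost of pivoting rows back into columns for analysis.

```java
import java.util.ArrayList;
import java.util.List;

// One generic (entity, attribute, value) row replaces a dedicated column.
class Observation {
    final String entityId;   // e.g. a patient
    final String attribute;  // a concept from the dictionary, e.g. "weight"
    final String value;
    Observation(String entityId, String attribute, String value) {
        this.entityId = entityId; this.attribute = attribute; this.value = value;
    }
}

public class ConceptStore {
    private final List<Observation> rows = new ArrayList<>();

    // Recording a brand-new concept is just another row, never an ALTER TABLE.
    public void record(String entity, String attribute, String value) {
        rows.add(new Observation(entity, attribute, value));
    }

    // Analysis must pivot rows back into columns, which is the noted trade-off.
    public String lookup(String entity, String attribute) {
        for (Observation o : rows)
            if (o.entityId.equals(entity) && o.attribute.equals(attribute)) return o.value;
        return null;
    }

    public static void main(String[] args) {
        ConceptStore store = new ConceptStore();
        store.record("patient-1", "weight", "72kg");
        store.record("patient-1", "bmi", "23.1");  // new attribute, no schema change
        System.out.println(store.lookup("patient-1", "bmi"));
        // prints: 23.1
    }
}
```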

Logic Module: Should be able to manage "Calculations" (e.g. BMI, asthma, etc. – large variety of rules) as observations. Ultimately it was only ever used in one place: the project was under-resourced, and the code was complex and never abstracted for wider consumption. It was later moved out of Core into a module. In retrospect, even though it seemed like this functionality should be a 'Core' feature, it would have been better to have made it a module from the start to enable rapid iteration.

The lesson learned was that no feature should be added to Core until it is "finished", b/c of the slow release cadence. Core prioritizes stability over flexibility, so if you need to iterate to get to a point of "completeness", it is better to isolate the code into a module where it will be easier to work on and can have more frequent releases. Also, there is more value in putting the interface into Core than the implementation itself.

 'Allergy' Module: Example of a module that rapidly developed and matured to the point at which it was pulled into 'Core'.

Key Takeaways for OpenLMIS Re-Architecture

  1. Prefer Industry Standard Technologies & Solutions vs Custom Development, e.g.

    1. Reporting/Business Intelligence packages vs custom reports
    2. ERP functionality has been solved
    3. (offline) Data Collection tools
    4. Inventory Management
  2. Keep the things that don't change often separate from the things that do. (i.e. level of 'completeness' or maturity)
    1. Modules allow community members to more rapidly iterate–and mature–their ideas
    2. Core features and functions should not change as often, but there should be mechanism to allow for change
  3. The value of 'Core' is the encapsulation and expression of abstractions of project-specific implementations.
    1. Thus, the governance and technical committees should be more focused on facilitating the possible connections between components than on the specific implementations of those components, i.e. by finding the commonalities and variabilities among project-specific features.
  4. Each layer may need different extension points vs a module needing to be a complete vertical or self-contained piece. 
  5. Providing a well-thought, clearly defined, publicly available interface promotes the organic growth of the ecosystem
    1. There must be a well-defined process to develop and contribute a module (e.g. exemplars and templates, Maven archetypes, Yeoman Generator for UI)

(Re)Architecture Recommendations

  1. OpenLMIS should consist of a headless service layer with a reference UI
  2. The existing application should be split into distinct components with clearly defined interfaces (vs database-level integration)
    1. A component should never access another component's database tables directly.
    2. Components should first be designed RESTfully to support modularity
  3. Enforce modularity by separating components into separate code repositories

  4. Modularity and extensibility are needed for all layers (Front-end, Service, Data), but there are different ways to solve this:
    1. Open WebApps for Front-End
    2. Consider leveraging parts of OpenMRS modular framework (e.g. Spring application context), but we don't need runtime extensibility (at this point in time)
    3. Use Maven for packaging and versioning
  5. The Reference Application will provide guidance and exemplars for securing a module/component.
  6. Reporting: The Reference Application will include a recommended approach to reporting using industry standard business intelligence tools and approaches as a standalone piece within the architecture.

To Rewrite or Refactor – or False Dichotomy?

  1. In order to achieve the stated goals of the project (Shared Value), a new architecture will need to replace the existing one, regardless of approach.
  2. Existing code will be leveraged and reused where and when possible, but new frameworks and models also will be implemented to facilitate rapid development and future supportability.
  3. There is a need to have 3.0-x available as soon as possible for new country implementations.
  4. To minimize the impact on end users (and project budgets), a one-time migration is strongly preferred over incremental updates for existing implementations, which would be slower to absorb and respond to over time.