
Working Draft

This page represents a working draft. Once it has been finalized, this warning will be removed.


Technical Setup Guide

The technical portion of the OpenLMIS Implementer's Guide is aimed at two types of users:

  1. Technical Implementer: A technical person who is reading OpenLMIS documentation in order to start or plan an implementation. They may not do the programming themselves, but they understand ICT4D and will arrange or manage the programmers or vendors who will carry out the technical tasks.
  2. Software Developer: A user who will actually do software programming on the OpenLMIS system, such as customizing, extending, configuring, or installing it.

Architecture

OpenLMIS' ReadTheDocs site is the definitive source of documentation. Its architecture and UI structure sections, along with an accompanying Architecture Overview wiki page, provide a good introduction to the project's architecture.

Server Environments

OpenLMIS uses Docker and may be hosted either on premise or in the cloud, in a data center of the implementer's choice. Cloud hosting has a number of benefits. The hardware requirements for running OpenLMIS depend partly on the size of an implementation's dataset. During the development, testing, and rollout phases of an implementation these requirements are often unclear, and they may vary while data is migrated from a pre-existing system into OpenLMIS. Having access to scalable cloud compute resources is thus important during these early phases of the project. Later, during the support phase, the robust availability and accessibility of a cloud-hosted solution are important. The core OpenLMIS team therefore recommends cloud-based hosting for production-grade implementations. A number of implementations have been deployed on Amazon Web Services (AWS), and OpenLMIS' deployment topology documentation targets that platform.

Most implementations require the creation and use of the following instances of OpenLMIS: 

  •  Development: Used by programmers to test new, rapidly changing, and potentially fragile customization work being done on behalf of an implementation.
  •  User Acceptance Test (UAT): Used by QA and other stakeholders to test features which have progressed beyond the development phase but which require verification prior to promotion to the Production server.
  •  Production: Runs the instance of OpenLMIS intended for regular use by end users.
  •  Training (optional): A near-clone of the Production server intended to allow new users to freely experiment with the system. Implementers may potentially rely on the UAT server to fulfill this role later in the project’s lifecycle.

Each of the above instances of OpenLMIS should be isolated. To ensure that performance and other non-functional requirements are consistently tested and met, it is important that the hardware and configuration of each instance match that of the Production instance.

Managing Backups

OpenLMIS recommends cloud-based deployments use RDS (Amazon Relational Database Service). See the OpenLMIS Recommended Deployment Topology and RDS configuration guides. RDS offers good value for the cost partly because it automatically creates and retains backups. Details about the way in which its backups are configured and used are independent of OpenLMIS and available in the RDS Backup and Restore documentation.

If you are using PostgreSQL inside Docker, which is the default for a local installation of OpenLMIS, then see the Postgres Guide's Backup and Restore documentation.
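
For a Dockerized database, a simple scheduled dump is often sufficient. The following is a minimal sketch only; it assumes the Compose database service is named db, the default postgres user, and a database named open_lmis. Confirm all three against your docker-compose.yml and .env files.

# Minimal backup sketch for the Dockerized database (service name, user, and database name are assumptions).
docker-compose exec -T db pg_dump -U postgres open_lmis > openlmis-backup-$(date +%F).sql

# Restoring pipes a previously created dump back into psql inside the same container:
docker-compose exec -T db psql -U postgres open_lmis < openlmis-backup-2017-01-01.sql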

Monitoring and Alerting

Monitoring a server’s health is an important step toward ensuring its uptime. Tools like Scalyr may be used to do so cheaply and effectively. Alerts should be sent to a shared email group or forum whenever conditions like the following occur:

  •  The database’s connection pool is low.
  •  An abnormally high number of database connections exist.
  •  The database’s read or write latency is high.
  •  The load balancer’s latency is high.
  •  The database or application server is low on CPU or disk resources.
  •  The database or application server is low on performance credits (applicable only if using a burstable or throttled cloud resource).
  •  Errors or exceptions are logged by the application server.
  •  The OpenLMIS webserver returns an HTTP status code of 5xx for more than several (5 - 10) minutes.
  •  The OpenLMIS webserver begins returning more 4xx or 5xx error codes than average.
  •  Any of the microservices that comprise OpenLMIS becomes unresponsive.

Choosing the right thresholds for many of these values is an art which requires iteration. Configuring them for use with the UAT server early in the project therefore helps to ensure that reasonable values are chosen for the Production server in time for its use.
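
As a minimal, tool-agnostic illustration of the webserver checks listed above, a cron-driven script like the following could flag failed responses. The URL and email address are placeholders, and a dedicated tool such as Scalyr remains the recommended approach.

#!/bin/sh
# Health-check sketch: alert if the OpenLMIS webserver does not answer with a 2xx/3xx status.
# Placeholder values below; assumes a configured local "mail" command. Run e.g. every 5 minutes from cron.
URL="https://openlmis.example.org/"
MAILTO="ops-alerts@example.org"

STATUS=$(curl -s -o /dev/null -w "%{http_code}" --max-time 30 "$URL")
case "$STATUS" in
  2*|3*) ;;  # healthy, do nothing
  *) echo "OpenLMIS returned HTTP $STATUS at $(date -u)" | mail -s "OpenLMIS health alert" "$MAILTO" ;;
esac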

Continuous Integration and Deployment (CI/CD)

The OpenLMIS project uses an instance of Jenkins at build.openlmis.org to provide continuous integration and delivery (CI/CD). Doing so provides a number of benefits, including:

  • Quality gates. Every time new code is contributed, tests are automatically run to help ensure that regressions are not introduced, performance characteristics are logged, and code standards are followed. 
  • Server uptime. Manually upgrading software can be error prone. By relying on an automated process, the OpenLMIS team saves time while ensuring that steps are followed consistently.

Implementers are encouraged to set up CI/CD within their own projects, and will find a number of such jobs on build.openlmis.org. OpenLMIS is composed of microservices, and each has its own build job. The OpenLMIS-auth-service job, for example, is representative of how CI of a typical component is performed. Meanwhile, other jobs in deployment pipelines are used to update servers with the latest build of each component. The OpenLMIS-3.0-deploy-to-test job, for instance, is illustrative of how such CD can be performed. Although anonymous users currently lack access to the configuration of jobs on build.openlmis.org, the OpenLMIS team hopes to grant read-only access to all such users soon.

More information about Jenkins can be found here, and details about the Core project's use of it are available here. Of course, implementers can effectively use Bamboo, Jenkins, Travis, or any other such tool. The use of CI/CD infrastructure is important, though, for any implementation that involves even moderate code modification.
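
As a rough illustration of what such a deploy job does, the shell step below pulls the latest images and recreates the containers on a remote host. The host name, user, and path are placeholders; the docker-compose commands themselves are the same ones used for manual upgrades (see Upgrading OpenLMIS below).

# Sketch of a CD deploy step (run by Jenkins or a similar tool); host and path are placeholders.
DEPLOY_HOST="deploy@uat.example.org"
DEPLOY_PATH="/opt/openlmis-ref-distro"
ssh "$DEPLOY_HOST" "cd $DEPLOY_PATH && docker-compose pull && docker-compose up --build --force-recreate -d"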

Installation

The deployment section of the OpenLMIS Documentation offers official guidance about installing OpenLMIS. It describes the openlmis-deployment scripts written for use with a CI/CD tool like Jenkins and intended for production-grade deployments. In contrast, the quick setup guide which accompanies the stock openlmis-ref-distro may be used to easily install a local test version of OpenLMIS. In a nutshell, developers only need to:

  1. Clone the openlmis-ref-distro repository.
  2. Download the default .env file into the newly created openlmis-ref-distro directory.
  3. Optionally edit the BASE_URL and VIRTUAL_HOST values within the .env file.
  4. Run docker-compose pull followed by docker-compose up.
  5. Browse to http://<your ip-address> and log in with administrator as the username and password as the password.

The above quick setup installs sample data and configuration. The sample users all share "password" as their password and may be examined by browsing to Administration → Users.
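
Expressed as commands, the quick setup looks roughly like the following. The source of the default .env file is given in the quick setup guide and is not reproduced here.

# Quick local setup sketch; see the openlmis-ref-distro quick setup guide for specifics.
git clone https://github.com/OpenLMIS/openlmis-ref-distro.git
cd openlmis-ref-distro
# Place the default .env file described in the quick setup guide into this directory,
# then optionally edit BASE_URL and VIRTUAL_HOST within it.
docker-compose pull
docker-compose up
# Once the services are running, browse to http://<your ip-address> and log in as administrator / password.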

User Interface Configuration and Customization

Because OpenLMIS is composed of an extensible set of microservices, a small, dedicated project is used to specify which services constitute a given deployment. The OpenLMIS Reference Distribution is one example of such a project; the OpenLMIS Malawi Distribution is another. In both cases, the docker-compose.yml file specifies which Docker images should be used for the deployment.

The OpenLMIS Reference Distribution’s docker-compose.yml file references several services, including one called reference-ui. Similarly, the OpenLMIS Malawi Distribution project references one called UI. The reference-ui and UI services each provide images with user interfaces. To create a custom user interface project, an implementer only needs to do the same: update the docker-compose.yml file in their distribution’s project such that it refers to their own UI image. The Malawi example does this by replacing:

reference-ui:
    image: openlmis/reference-ui:5.0.4-SNAPSHOT
    env_file: .env
    depends_on: [consul]

with this:

ui:
    image: openlmismw/ui:1.2.0-SNAPSHOT
    env_file: .env
    depends_on: [consul]

More generally, the pattern is as follows. Note that UserInterfaceServiceName will be referenced throughout the remainder of this section.

UserInterfaceServiceName:
    image: projectName/UserInterfaceServiceName:VersionIdentifier
    env_file: .env
    depends_on: [consul]


With such configuration in place, an implementation will use the specified UI image. The Malawi UI project serves as a good example of how the UI image should be defined. The docker-compose.yml file within it references dev-ui, auth-ui, fulfillment-ui, and various other images provided by the core OpenLMIS project. Together, these images include all of the UI's assets. The HTML, JavaScript, build tools, and other resources the images provide are only exposed indirectly within the UserInterfaceServiceName project. This keeps the project lean: most files are unlikely to need customization and therefore aren't duplicated within it. Files that do need modification, however, can easily be updated as described below.

The OpenLMIS-UI build process overrides files in the default reference UI with those in the UserInterfaceServiceName/src directory. Implementers can therefore easily overwrite default files (e.g. HTML and JavaScript) with their own: simply add a file with the same name and path to UserInterfaceServiceName/src. In cases where only a minimal change to a file is necessary, however, overriding the file in its entirety is heavy-handed and leads to unnecessary maintenance burden. OpenLMIS' UI Extension Guide offers better alternatives.
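
For example, to replace a single template, an implementer mirrors its relative path under src. The path below is purely hypothetical and is used only to illustrate the mechanism; use the real path of the file you wish to replace.

# Hypothetical example: override one UI file by mirroring its relative path under src/.
mkdir -p UserInterfaceServiceName/src/openlmis-navigation
cp custom-navigation.html UserInterfaceServiceName/src/openlmis-navigation/navigation.html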

The OpenLMIS build process recursively scans UserInterfaceServiceName/src for *.scss files. The CSS generated from the Sass is concatenated into a single file and made available to the client. All of the files within UserInterfaceServiceName/src are exposed via the webserver, either directly or after having been transpiled.

Putting all of these concepts together, implementers can style OpenLMIS' UI simply by adding a Sass file to the /src directory. The following code is all that was necessary for one implementation to choose its own colors:

$mw-teal: #008080;
$mw-green: #44B704;

$green: $mw-green;

$brand-primary: $mw-teal;
$link-color: $mw-teal;

As the above example suggests, OpenLMIS' colors and spacing are implemented using Sass variables and can thus easily be overridden. Because images can be embedded within CSS, specifying a custom logo is equally easy. Images may be placed anywhere, as can the custom Sass file. For the sake of maintenance, however, it's ideal to place all brand-related customizations into a single folder as shown in this example.
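
As a sketch only, a logo override can be a few lines of Sass; the selector and image path below are hypothetical and must be matched to the actual markup and asset location in your UI project.

// Hypothetical selector and image path; adjust both to match the real markup and assets.
.navbar-header .logo {
    background-image: url('../images/custom-logo.png');
    background-repeat: no-repeat;
}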

Customizing Text and Translations

This part of the UI Extension Guide provides details on how text within the UI may be added or overridden. Implementations which only have to support English can simply add a messages_en.json file to the same directory used to store the rest of their customizations. An example of one such file is available here.
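
As a minimal sketch, such a file simply maps message keys to replacement strings. The keys below are hypothetical; use the keys of the actual messages you wish to override, as described in the UI Extension Guide.

{
    "openlmisLogin.signIn": "Sign in to the national eLMIS",
    "openlmisHome.title": "National eLMIS Home"
}

Keys present in this file replace the corresponding default strings, while any key not listed falls back to the stock English text.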

Please see the UI Extension Guide for additional information, including guidance on how to customize the UI's behavior. 

Additional Customization

Step 5 of the Implementer/Administrator Guide describes how to customize additional aspects of OpenLMIS, including its order-number format and built-in reports.

Upgrading OpenLMIS

OpenLMIS regularly releases new versions of its microservices. These releases are usually accompanied by a release of the OpenLMIS reference distribution, which bundles services known and intended to work well together. The following general process describes how new versions of microservices may be incorporated into an existing OpenLMIS instance.

It is recommended that these steps be tested first on a development or staging server before scheduling a time to upgrade a live production instance. Before upgrading production, follow best practices such as taking backups and having a roll-back plan in case anything goes wrong.

  1. Update the docker-compose.yml within the deployment's main project (see an example docker-compose.yml) such that it refers to the desired version numbers of the OpenLMIS core components.
  2. If using a CI/CD tool such as Jenkins, use it to deploy the change. Otherwise, manually run "docker-compose down," "docker-compose pull," and then "docker-compose up --build --force-recreate -d" on the machine hosting OpenLMIS (a command sketch follows this list).
  3. Test! Although the OpenLMIS team strives to ensure the quality of its releases, it is important to verify that updates do not adversely affect third-party changes. It is especially important to test local customizations and local configuration.
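
For deployments that are upgraded by hand, step 2 amounts to the following commands, run from the distribution's directory on the host. Take a backup and have a roll-back plan first, as noted above.

# Manual upgrade sketch: run from the distribution's directory on the machine hosting OpenLMIS.
docker-compose down
docker-compose pull
docker-compose up --build --force-recreate -d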

If bugs or issues are identified in the underlying OpenLMIS components, follow the Reporting Bugs guidelines, including the section there called "Coordinating with the Global Community". You may choose to submit a bug fix as a Pull Request, or to wait for the global open source project to fix the bug and include the fix in a future release.

