This document identifies the steps needed to stand up and configure the reporting stack for a new server.

NOTE: Before you get started, the OpenLMIS server needs to have a user with username "admin".

NOTE: Step-by-step instructions can be found on the page Set up the reporting stack step-by-step.

Update the Config and Start the Services

...

  1. Review and update the .env variables (an illustrative snippet follows this list)

    1. Update all passwords (Note: if the database username or password is updated, also update SQLALCHEMY_DATABASE_URI to match)

    2. Update the domain names for Superset and NiFi in the /etc/hosts file

    3. If using SSL, change the *_ENABLE_SSL variables to true and add your SSL certificate and chain to the /openlmis-ref-distro/reporting/config/services/nginx/tls directory
  2. Update the OpenLMIS system's URLs in the Superset superset_config.py file; they currently point to UAT.openlmis.org (a sketch follows this list)
    1. Open /openlmis-ref-distro/reporting/config/services/superset/superset_config.py
    2. Change the following variables to point to your instance of OpenLMIS:
      1. base_url
      2. access_token_url
      3. authorize_url
      4. The allow-from value in HTTP_HEADERS
  3. Start the services:

    Code Block
    # Stop any running services and destroy existing volumes
    docker-compose down -v
    # Build and start the services
    docker-compose up --build
    # If not using Scalyr, start with the scalyr service scaled to zero:
    # docker-compose up --build -d --scale scalyr=0


  4. After 2 or 3 minutes, open your web browser and navigate to the following URLs (or check them from the command line, as shown after this list)
    1. NiFi can be accessed at {BaseUrl}/nifi
    2. Superset can be accessed at {BaseUrl}/login
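
For step 1, the fragment below illustrates the kinds of .env and /etc/hosts edits involved. Only SQLALCHEMY_DATABASE_URI and the *_ENABLE_SSL flags are named above; every other variable name and all values here are examples, so match them against what your .env file actually contains.

Code Block
# .env -- illustrative names and values only
POSTGRES_USER=reporting
POSTGRES_PASSWORD=ChangeMe-use-a-strong-password
# Keep the URI in sync with the username/password above
SQLALCHEMY_DATABASE_URI=postgresql://reporting:ChangeMe-use-a-strong-password@db:5432/superset
# Set to true when serving over SSL (see step 1.3)
NIFI_ENABLE_SSL=true
SUPERSET_ENABLE_SSL=true

# /etc/hosts -- map your chosen Superset and NiFi domains to this machine
127.0.0.1  superset.example.org nifi.example.org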
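For step 2, here is a minimal sketch of the superset_config.py change, assuming your OpenLMIS instance lives at https://openlmis.example.org. The variable names are the ones listed above; the URL paths shown are assumptions, so keep whatever paths the file already uses and change only the host.

Code Block
# superset_config.py -- hypothetical excerpt; change only the host portion
OPENLMIS_HOST = "https://openlmis.example.org"  # was UAT.openlmis.org

# In the real file these sit inside the OAuth provider configuration;
# the /api/oauth paths below are assumptions
base_url = OPENLMIS_HOST + "/api/oauth"
access_token_url = OPENLMIS_HOST + "/api/oauth/token"
authorize_url = OPENLMIS_HOST + "/api/oauth/authorize"

# Lets OpenLMIS embed Superset dashboards in an iframe
HTTP_HEADERS = {"X-Frame-Options": "ALLOW-FROM " + OPENLMIS_HOST}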
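For step 4, the same URLs can be checked from the command line; curl's -I flag fetches only the response headers, and -k skips certificate verification, which is convenient with self-signed certificates. Replace example.org with your {BaseUrl}.

Code Block
curl -k -I https://example.org/nifi    # NiFi canvas
curl -k -I https://example.org/login   # Superset login page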

...

Info

NB: If this is the first time running the reporting stack and you'd like data to view immediately, follow the steps below:

  1. Stop all process groups
  2. Edit the baseURL, admin_username and admin_password variables in the requisitions, reference data and permissions process groups by right-clicking anywhere in the process group and selecting Variables
  3. In the first processor of each process group, change the Scheduling Strategy (under the SCHEDULING tab) from CRON driven to Timer driven, and set the Run Schedule to 100000 sec
  4. In the requisitions connector, edit the processor titled Get requisitions from /api and set its Remote URL property to ${baseUrl}/api/requisitions/search?access_token=${access_token} (a quick way to sanity-check this endpoint is shown after this list)
  5. Start all process groups, starting the materialized views process group last, since it refreshes the views based on the data pulled in by the other three process groups
  6. Once the data has loaded, revert the Scheduling Strategy back to CRON driven so that data is pulled in every day, and revert the Remote URL change for requisitions
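
As a quick sanity check for step 4 above, the requisitions endpoint can be queried directly, assuming you already have a valid OpenLMIS access token in the ACCESS_TOKEN variable (obtaining one is outside the scope of this snippet):

Code Block
# Substitute your OpenLMIS base URL and a valid access token
BASE_URL=https://openlmis.example.org
curl "${BASE_URL}/api/requisitions/search?access_token=${ACCESS_TOKEN}"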


Loading Charts and Dashboards

...