Stand Up and Configure the Reporting Stack
This document identifies the steps needed to stand up and configure the reporting stack for a new server.
NOTE: Before you get started, the OpenLMIS server needs to have a user with username "admin".
NOTE: Step-by-step instructions can be found on the page Set up the reporting stack step-by-step.
Update the Config and Start the Services
Clone the OpenLMIS-ref-distro GitHub repository into your local developer environment. We assume your environment already has Docker installed.
git clone https://github.com/openlmis/openlmis-ref-distro
Once cloned, update your settings.env file, as is standard practice, to identify the IP address that Consul will run on. Make sure to update the SCALYR_API_KEY if you plan to use Scalyr.
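As a sketch, the relevant settings.env entries might look like the following; the key names and values shown are illustrative, so check the comments in your own settings.env for the exact variables:

# Illustrative settings.env excerpt -- the IP address and key are placeholders
CONSUL_HOST=192.0.2.10
SCALYR_API_KEY=your-scalyr-write-key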
Update the following configuration variables that are specific to the reporting stack:
- Review and update the .env variables
- Update all passwords (Note: if the database username/password is updated, also update the database SQLALCHEMY_DATABASE_URI)
- Update the domain names for Superset and NiFi in the /etc/hosts file
- If using SSL, change the *_ENABLE_SSL variables to true and add your SSL certificate and chain to the /openlmis-ref-distro/reporting/config/services/nginx/tls directory
- Update the OpenLMIS system's URLs in the Superset config file; they currently point to UAT.openlmis.org (see the sketch after this list):
  - Open /openlmis-ref-distro/reporting/config/services/superset/superset_config.py
  - Change the following variables to point to your instance of OpenLMIS:
    - base_url
    - access_token_url
    - authorize_url
    - the HTTP_HEADERS allow-from variable
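One way to make these edits, assuming the UAT URLs appear literally in the file as uat.openlmis.org (the grep below confirms this before anything is changed), is sketched here; openlmis.example.org is a placeholder for your own domain:

# Run from the root of the openlmis-ref-distro checkout
# Confirm where the UAT URLs appear before editing
grep -n "uat.openlmis.org" reporting/config/services/superset/superset_config.py
# Replace them with your own domain (placeholder shown)
sed -i 's|uat.openlmis.org|openlmis.example.org|g' reporting/config/services/superset/superset_config.py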
Start the services:
# Destroy any existing volumes
docker-compose down -v
# Build and start the services
docker-compose up --build
# If not using Scalyr, you should use:
# docker-compose up --build -d --scale scalyr=0
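To confirm that the containers came up, you can list their status with docker-compose; each service defined in the reporting stack's docker-compose.yml should show as Up:

docker-compose ps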
- After 2 or 3 minutes, open your web browser and navigate to the URLs:
  - NiFi can be accessed at {BaseUrl}/nifi
  - Superset can be accessed at {BaseUrl}/login
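If you prefer to verify from the command line first, a quick probe of both endpoints (openlmis.example.org stands in for your {BaseUrl}) should return an HTTP 200 or a redirect:

BASE_URL=https://openlmis.example.org
curl -k -I "$BASE_URL/nifi"
curl -k -I "$BASE_URL/login"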
Load Data from NiFi Flows
Now we need to update NiFi with the appropriate base URL and secrets so it can properly extract and store information.
- Open NiFi at {BaseUrl}/nifi
- Log in to nginx using the credentials defined in the previous step
- Four process groups will load automatically, fully configured and running
NB: If this is your first time running the reporting stack and you'd like data to view immediately, follow the steps below:
- Stop all process groups
- Edit the baseURL, admin_username and admin_password variables in the requisitions, reference data and permissions process groups by right-clicking anywhere in the process group and selecting Variables
- Edit the first processor in each of the process groups, changing the Scheduling Strategy under the SCHEDULING tab from CRON Driven to Timer Driven, and set the Run Schedule to 100000 sec
- In the Requisitions connector, edit the processor titled Get requisitions from /api and add ${baseUrl}/api/requisitions/search?access_token=${access_token} to the Remote URL property (see the sanity check after this list)
- Start all process groups, leaving the materialized views process group for last, since it refreshes the views based on the data from the other three process groups
- Revert the changes made to the Scheduling Strategy (back to CRON Driven, so that data is pulled in every day) and to the requisitions Remote URL
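As a quick sanity check that your base URL and credentials work before starting the flows, you can call the same endpoint NiFi uses. The requisitions search path comes from the step above; how you obtain the access token depends on your OpenLMIS instance, so the TOKEN value is left as a placeholder:

# Placeholders: substitute your own domain and a valid access token
BASE_URL=https://openlmis.example.org
TOKEN=your-access-token
curl -k "$BASE_URL/api/requisitions/search?access_token=$TOKEN"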
Loading Charts and Dashboards
Now we need to load Superset so that it can get the data from the Postgres database and load the charts and dashboards.
- Click on the provider, then the Sign In button, which will prompt you for authentication and authorization.
- Go to the dashboard tab, which lists the available dashboards.
Backing Up Charts and Dashboards
After making changes to a dashboard, administrators may wish to back up its definition to a flat file. This may be done as shown below.
Some browsers prevent the download of .json files by default. If you aren't presented with a Save As dialog, ensure that your browser isn't blocking popups.
Notes about Importing the Backup File
If you export dashboard 123, built off of datasource 456, and try to load it into a fresh instance of Superset, you'll have a lot of broken pointers, because that fresh instance of Superset will start with IDs of 1 for both.
For the reporting stack, we built everything off of a "fresh" instance of Superset, so the content doesn't encounter this issue when it is imported.
It should be possible to edit the .json file by hand to fix the broken pointers, although we have not tried this yet and don't know precisely where they'll be encountered.