Testing the OpenLMIS Reporting Stack

High Level Test Plan

This section provides an overview of the end state of the testing solutions for the reporting stack.

The functionality we would like to test is:
  1. The reporting stack comes up correctly
    1. All the components start
      • Consul
      • NiFi
        • NiFi Registry
      • Zookeeper
      • Kafka
      • Postgres
      • Superset
    2. All the required prerequisites are loaded
      1. Templates in NiFi
      2. Tables in Postgres
      3. Metrics, Slices, and Dashboards in Superset
    3. All one-time processes are stopped
      • Flow to insert data into Superset, any other "startup" flows
    4. All the ongoing processes are started
      • Flow to batch pull from OpenLMIS
  2. The reporting stack handles incoming data correctly
    1. New data added to OpenLMIS is shown in the dashboards correctly
      • NiFi can pull from the OpenLMIS API
      • Kafka sees updated data
      • Postgres sees the updated data
      • Superset displays the updated data
    2. Corrupt data is handled correctly
      • Partial data is ignored
      • Incorrectly formatted data is ignored
      • Arbitrary data is ignored
      • Data encoding a shell script is ignored
      • Data encoding a SQL injection is ignored
    3. Data up to a throughput of N is handled correctly
      • This data is inserted into the various data stores correctly

We will automate tests by:
  1. The reporting stack comes up correctly
    1. All the components start
      • For each component, check that it is running on its expected port and responds to a ping or the equivalent (see the sketch after this list)
    2. All the required prerequisites are loaded
      1. Templates in NiFi
        • Call the NiFi API to check that the templates are loaded
      2. Tables in Postgres
        • Call psql to ensure the expected tables exist
      3. Metrics, Slices, and Dashboards in Superset
        • Call the Superset API to check that these are loaded; if that API functionality does not exist, call psql against the Superset metadata database to ensure the equivalent records exist there
    3. All one-time processes are stopped
      • Call the NiFi API to check that the expected flows are in the stopped state
    4. All the ongoing processes are started
      • Call the NiFi API to check that the expected flows are in the started state
  2. The reporting stack handles incoming data correctly
    1. New data added to OpenLMIS is shown in the dashboards correctly
      • Insert data into OpenLMIS, check that it becomes available in NiFi logs, Kafka topics, Postgres tables, and Superset dashboards
        • Call the Superset API for a particular metric and dashboard to evaluate whether the returned data matches the expected metrics
    2. Corrupt data is handled correctly
      • Insert corrupt data into OpenLMIS, check that it does not become available in NiFi logs, Kafka topics, Postgres tables, and Superset dashboards
    3. Data up to a throughput of N is handled correctly
      • Insert N units of data into OpenLMIS, check that it does become available in NiFi logs, Kafka topics, Postgres tables, and Superset dashboards
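
A minimal shell sketch of the automated checks described above, assuming the local hostnames used by the reference distribution (nifi.local, superset.local), that the NiFi REST API is reachable through the same host, and a guessed name for the reporting Postgres container; all of these are assumptions to adapt to the actual deployment:

# Component liveness: each service should answer on its expected endpoint
curl -sf http://nifi.local/nifi-api/system-diagnostics > /dev/null && echo "NiFi is up"
curl -sf http://superset.local/health > /dev/null && echo "Superset is up"

# Prerequisites: check that the expected templates are loaded in NiFi
# (the template name used here is an assumption)
curl -s http://nifi.local/nifi-api/flow/templates | grep -q "Reference Data Connector" && echo "Templates loaded"

# Prerequisites: check that the expected tables exist in Postgres
# (container, user, and table names are assumptions)
docker exec reporting_db_1 psql -U postgres -c "\dt" | grep -q facilities && echo "Tables loaded"

# Process state: running/stopped component counts reported by NiFi,
# to compare against the expected started and stopped flows
curl -s http://nifi.local/nifi-api/flow/status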

Current Manual Test Plan

Test 1: Validate that services stand up correctly and web interfaces are available

  • Clone the GitHub repository for the target branch or tag and run the following commands
# Clone the repository
git clone https://github.com/openlmis/openlmis-ref-distro
# Enter the directory
cd openlmis-ref-distro
# Copy the settings.env
cp settings-sample.env settings.env
# Enter the reporting directory
cd reporting
# Stand up the docker reporting service
docker-compose up


When running it locally, add the following to the host's /etc/hosts file before running docker-compose up

127.0.0.1   nifi.local superset.local

  • You should see all containers running and logging should start. Wait approximately 3 minutes
  • In the computer's browser, enter http://nifi.local/nifi and you should see NiFi up and running
  • In the computer's browser, enter http://superset.local/login and you should see the Superset login page; the admin username and password are available in the environment file (see the curl sketch below for a non-browser check)
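
For a quick non-browser check of the same two URLs, a curl sketch (assuming the /etc/hosts entries above; both should return HTTP 200 once startup is complete):

# Probe the NiFi and Superset UIs; a 200 response means the page is being served
curl -s -o /dev/null -w "NiFi: %{http_code}\n" http://nifi.local/nifi
curl -s -o /dev/null -w "Superset: %{http_code}\n" http://superset.local/login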

Test 2: Validate that NiFi is connected to the NiFi Registry

  • In the computer's browser, enter http://nifi.local/nifi
  • Drag a process group onto the canvas
  • Click the "Import" button
    • If the Import button does not appear, the NiFi Registry connection is not configured
    • Click the menu icon at the far right and, under Controller Settings > Registry Clients, confirm that the NiFi Registry client is configured correctly
  • The next menu should populate with a Registry Name and a list of available Buckets
    • If this does not populate, the connection to the NiFi Registry is not functioning

Test 3: Validate the NiFi "Kick Off" Flows Function as intended

The NiFi "Kick Off" flow focuses on standing up the integration with OpenLMIS v3, NiFi, Postgres and Superset. The end state is a fully functioning integration between these systems with data flowing between them. The minimal use case for this test focuses on facility information.

  • Load NiFi and log in with the credentials defined in .env for NiFi's NGINX
  • Confirm that the following six process groups have been loaded
    1. Generate MeasureReports
    2. Generate Measures
    3. Materialized Views
    4. Reference Data Connector
    5. Requisitions connector
    6. Superset Permissions
      • Check that all passwords are loaded for controller services and InvokeHTTP processors, and that the global variables for each process group are configured
      • If the process groups are not already loaded and in a running state, the test has failed.
  • Superset
    • Load Superset and log in with OpenLMIS v3 admin credentials
    • Under the dashboard list, load the Administrative dashboard, which displays geographic zone information
      • If you don't see this, the test has failed
    • You should see a table showing all facilities from OpenLMIS
    • Compare this table against the OpenLMIS v3 Admin screen for facilities list
    • There should be a 1-to-1 match.
      • If there isn't a match, the test has failed.

Test 4: Validate All Reports embedded within OpenLMIS

  • This should validate that all of the fields are displaying correctly and filters are functioning based on the demo data that's currently available in the system
  • Log in to Superset with your OpenLMIS credentials to generate a token that will authorize access
  • The landing page should be http://superset.local/superset/welcome with the following list of dashboards displayed


  • If any of the dashboards is missing from the list, the test has failed

  • Preview all dashboards to ensure
    • All charts have no errors
    • Data has been loaded
    • Dashboard filters work as expected
  • Each of the dashboards should be displayed as in the images below

          Administrative

          The Administrative dashboard displays the Facility List, which should be an exact 1-to-1 match to the facility list in the OpenLMIS Admin screen.

          The test fails if the chart is blank or there isn't a 1-to-1 match on the facilities.




          Stock Status

          The stock status dashboard should display 5 charts: Stock filter, Reporting rate and Timeliness, All-time reporting timeliness, Stock status over time and stock status table.

          All charts should display correctly as below; otherwise the test fails.



          Adjustments

          The adjustment dashboard should display 4 charts: Adjustments filter, Adjustments Summary, All-Time Reporting Timeliness and Reporting Rate and Timeliness.

          The test fails if the dashboard doesn't look as below.


          Orders

          The Orders dashboard should contain 5 charts: Orders filter, All-Time Reporting Timeliness, Total Cost of Orders, Emergency v. Regular Orders and Reporting Rate and Timeliness.

          After loading the dashboard, the charts displayed should be as below (they may vary from user to user).


          Consumption

          The consumption dashboard should display 4 charts and 1 markdown row as below. The charts expected are: Stock filter, Logistics Summary Report, Consumption Report and Total Adjustment Consumption per facility. The test fails if data is missing or any of the charts has an error.


          Stockouts

          The stockouts dashboard should display 5 charts and 1 markdown row as below. The charts expected are: Stock filter, Reporting Rate and Timeliness, All-Time Reporting Timeliness and District Stockout Rates over Time. If any of the charts doesn't display as below, the test fails.


          Reporting Rate and Timeliness

          Reporting Rate and Timeliness dashboard should display the 4 charts below with no errors. The charts expected are: Reporting Rate Filter, All-Time Reporting Timeliness, Expected Facilities per Period and Reporting Rate and Timeliness.


Test 5: Validate that NiFi is functioning for Measure and Measure Reports

  • Ensure the client username and password are loaded for token generation for the following processors in the Generate Measures and Generate MeasureReports process groups:
    • Generate Measures > Get Measures > Invoke FHIR token
    • Generate Measures > Generate products and measure list > Get Products > Get access token
    • Generate MeasureReports > Get Measures > Invoke FHIR token
    • Generate MeasureReports > Create Token > Get Access Token
  • The client details are defined in the environment file and loaded by the preload file
  • If any of the basic authentication credentials are not loaded, the processor will output the following response and no measures or measure reports will be generated:
{
  "timestamp": "2019-06-06T14:33:18.300+0000",
  "status": 401,
  "error": "Unauthorized",
  "message": "Bad credentials",
  "path": "/api/oauth/token"
}
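
To verify the same client credentials outside NiFi, the token endpoint can be called directly. A curl sketch, where the client ID and secret are placeholders for the values in the environment file:

# Request a token with the client credentials used by the processors
# (the client ID and secret below are placeholders - substitute the values from the environment file)
curl -s -u "trusted-client:secret" \
  -X POST "https://uat.openlmis.org/api/oauth/token?grant_type=client_credentials"
# A JSON body containing "access_token" means the credentials are loaded correctly;
# the 401 "Bad credentials" response above means they are not.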

Test 6: Validate FHIR Locations are available

  • Make a POST request to OpenLMIS with grant_type=client_credentials to generate a token with trusted-client credentials
  • Use the Bearer token above to make a GET request to the locations endpoint on the FHIR server: https://uat.openlmis.org/hapifhir/Location
  • The result should be a JSON document with various locations under the entry tree, as sampled in the file fhir_locations.json (see the curl sketch below)
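
A curl sketch of these two requests; the client credentials are placeholders and jq is used only to summarize the response:

# 1. Generate a token with trusted-client credentials (placeholder client ID and secret)
TOKEN=$(curl -s -u "trusted-client:secret" \
  -X POST "https://uat.openlmis.org/api/oauth/token?grant_type=client_credentials" \
  | jq -r '.access_token')

# 2. Fetch the FHIR locations with the Bearer token and count the entries
curl -s -H "Authorization: Bearer $TOKEN" "https://uat.openlmis.org/hapifhir/Location" \
  | jq '.entry | length'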

Test 7: Validate FHIR Measures are available

  • Generate a token as in Test 6
  • Using the Bearer token from the POST request, make a GET request to the measures endpoint on the FHIR server
  • The resulting JSON is as sampled in the file fhir_measures.json
  • Under total, confirm that there are 7 measures, covering the following expected measures (see the sketch after this list):
    1. beginning_balance
    2. total_received_quantity
    3. total_consumed_quantity
    4. total_losses_and_adjustments
    5. total_stock_out_days
    6. stock_on_hand
    7. stock_status
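
With a token generated as in Test 6, a sketch of the measures request and the count check:

# Confirm the bundle total is 7 and list the measure names for comparison with the list above
curl -s -H "Authorization: Bearer $TOKEN" "https://uat.openlmis.org/hapifhir/Measure" \
  | jq '{total: .total, names: [.entry[].resource.name]}'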

Test 8: Validate the quality of each measure

  • For each of the 7 measures in the FHIR server, ensure that it has the following fields (see the jq sketch after this list):
    • resourceType
    • id
    • meta
    • name
    • description
    • status
    • experimental
    • group
      • Under each group is a dictionary of code text and description
  • The measure format should be similar to the sampled Stock status measure
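
A jq sketch to spot-check those fields on each measure, using a token generated as in Test 6 (the keys follow the FHIR lower camelCase spelling of the fields listed above):

# Print the expected fields for each measure; a null value indicates a missing field
curl -s -H "Authorization: Bearer $TOKEN" "https://uat.openlmis.org/hapifhir/Measure" \
  | jq '.entry[].resource | {resourceType, id, meta, name, description, status, experimental, group}'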

Test 9: Validate the measureReports are available

  • Make a POST request to OpenLMIS with grant_type=client_credentials to generate a token with trusted-client credentials
  • Using the token, make a GET request to the MeasureReport endpoint
  • The resulting JSON is a list of measure reports generated for the 7 measures using correctly formatted requisition data
  • The total number of measureReports is 77 for the 7 measures and the 11 valid requisitions (see the sketch below)
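
A sketch of the count check, using a token generated as in Test 6 (7 measures x 11 requisitions = 77 reports):

# Confirm the MeasureReport bundle total is 77
curl -s -H "Authorization: Bearer $TOKEN" "https://uat.openlmis.org/hapifhir/MeasureReport" \
  | jq '.total'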

Test 10: Validate the quality of 7 measureReports for one requisition

  • Each measure report should contain the following fields, which can be compared to the source requisition (see the jq sketch after this list):
    • resourceType: MeasureReport
    • id: MeasureReport ID
    • meta: versionID and lastUpdated fields
    • identifier: This contains the requisition ID
    • status: The requisition status
    • type: The measureReport type
    • measure: The measure ID
    • reporter: Contains the location reference
    • period: start and end dates
    • group: This contains the programName and their measure scores
  • Validate that the information in the measureReport is accurate compared to the source requisition by mapping the following:
    • Location
    • Requisition id
    • Measure
    • Period matches the requisition reporting period
    • The values for each measure match what was reported in each requisition column
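
A jq sketch to pull these fields out of a single measure report for side-by-side comparison with the source requisition (the measureReport ID is a placeholder; token as in Test 6):

# Extract the comparison fields from one measure report (replace <measureReportId> with a real ID)
curl -s -H "Authorization: Bearer $TOKEN" \
  "https://uat.openlmis.org/hapifhir/MeasureReport/<measureReportId>" \
  | jq '{resourceType, id, meta, identifier, status, type, measure, reporter, period, group}'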

Test 11: Verify Requisition Creates 7 measureReports

  • Log in to UAT using administrator credentials
  • Create a requisition which will be used to test that 7 measureReports are created for that single requisition
  • In NiFi, stop the Generate MeasureReports process group and start it again once the requisition is approved. At the end of the flow, the 7 reports should have been generated
  • Make a GET request to the MeasureReport endpoint of the FHIR server with the specific requisition ID as the identifier: https://uat.openlmis.org/hapifhir/MeasureReport?identifier=<requisition ID>
  • This will return a total of 7 measureReports, as sampled in the file requisition_reports.json (see the curl sketch below)
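
A sketch of the identifier search, using a token generated as in Test 6 (the requisition ID is a placeholder):

# Search for the measure reports tied to the new requisition and confirm there are 7
curl -s -H "Authorization: Bearer $TOKEN" \
  "https://uat.openlmis.org/hapifhir/MeasureReport?identifier=<requisitionId>" \
  | jq '.total'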

Test 12: Verify the requisition updated the Superset dashboard

  • In NiFi, stop and start the Requisitions connector and Materialized Views process groups to update the requisitions in the database
  • Log in to Superset with administrator credentials to verify the new requisition is reflected in the dashboard
  • In the dashboard list, select the Orders dashboard and apply the filters that match the requisition you created. The dashboard will change to display the single requisition as follows




