High Level Test Plan

This section provides an overview of the end state of the testing solutions for the reporting stack.

The functionality we would like to test is:
  1. The reporting stack comes up correctly
    1. All the components start
      • Consul
      • NiFi
        • NiFi Registry
      • Zookeeper
      • Kafka
      • Postgres
      • Superset
    2. All the required prerequisites are loaded
      1. Templates in NiFi
      2. Tables in Postgres
      3. Metrics, Slices, and Dashboards in Superset
    3. All one-time processes are stopped
      • Flow to insert data into Superset, any other "startup" flows
    4. All the ongoing processes are started
      • Flow to batch pull from OpenLMIS
  2. The reporting stack handles incoming data correctly
    1. New data added to OpenLMIS is shown in the dashboards correctly
      • NiFi can pull from the OpenLMIS API
      • Kafka sees updated data
      • Postgres sees the updated data
      • Superset displays the updated data
    2. Corrupt data is handled correctly
      • Partial data is ignored
      • Incorrectly formatted data is ignored
      • Arbitrary data is ignored
      • Data encoding a shell script is ignored
      • Data encoding a SQL injection is ignored
    3. Data up to a throughput of N is handled correctly
      • This data is inserted into the various data stores correctly

We will automate tests as follows:
  1. The reporting stack comes up correctly
    1. All the components start
      • For each component, check that it is running on its expected port and responds to a ping or the equivalent (see the health-check sketch after this list)
    2. All the required prerequisites are loaded
      1. Templates in NiFi
        • Call the NiFi API to check that the templates are loaded
      2. Tables in Postgres
        • Call psql to ensure the expected tables exist
      3. Metrics, Slices, and Dashboards in Superset
        • Call the Superset API to check that these are loaded; if that API functionality does not exist, call psql against the Superset metadata database to ensure the equivalent objects exist there
    3. All one-time processes are stopped
      • Call the NiFi API to check that the expected flows are in the stopped state
    4. All the ongoing processes are started
      • Call the NiFi API to check that the expected flows are in the started state
  2. The reporting stack handles incoming data correctly
    1. New data added to OpenLMIS is shown in the dashboards correctly
      • Insert data into OpenLMIS, check that it becomes available in NiFi logs, Kafka topics, Postgres tables, and Superset dashboards
        • Call the Superset API for a particular metric and dashboard to evaluate whether the returned data matches the expected metrics
    2. Corrupt data is handled correctly
      • Insert corrupt data into OpenLMIS, check that it does not become available in NiFi logs, Kafka topics, Postgres tables, or Superset dashboards
    3. Data up to a throughput of N is handled correctly
      • Insert N units of data into OpenLMIS, check that it does become available in NiFi logs, Kafka topics, Postgres tables, and Superset dashboards
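
A minimal health-check sketch for the component checks above; the hostnames, ports, credentials, and example table name are assumptions to adjust per deployment:

# Consul HTTP API answers
curl -sf http://localhost:8500/v1/status/leader
# NiFi REST API answers
curl -sf http://nifi.local/nifi-api/system-diagnostics > /dev/null
# NiFi Registry REST API answers (default port 18080)
curl -sf http://localhost:18080/nifi-registry-api/buckets > /dev/null
# Zookeeper answers the "ruok" four-letter word with "imok"
echo ruok | nc localhost 2181 | grep -q imok
# Kafka broker port accepts connections
nc -z localhost 9092
# Postgres is up and the expected tables exist (table name is an example)
psql -h localhost -U postgres -c '\dt' | grep -q facilities
# Superset health endpoint answers
curl -sf http://superset.local/health > /dev/null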

Current Manual Test Plan

Test 1: Validate that services stand up correctly and web interfaces are available

# Clone the repository
git clone https://github.com/openlmis/openlmis-ref-distro
# Enter the directory
cd openlmis-ref-distro
# Create settings.env from the sample settings file
cp settings-sample.env settings.env
# Enter the reporting directory
cd reporting
# Stand up the docker reporting service
docker-compose up


When running locally, add the following to the host's /etc/hosts file before running docker-compose up:

127.0.0.1   nifi.local superset.local
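
Once the containers are up, the web interfaces can be checked from the command line. A sketch; the paths and ports depend on how the compose file exposes the UIs:

# Both commands should print an HTTP 200 status code
curl -s -o /dev/null -w "%{http_code}\n" http://nifi.local/nifi/
curl -s -o /dev/null -w "%{http_code}\n" http://superset.local/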

Test 2: Validate that NiFi is connected to the NiFi Registry
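
One way to automate this check is the NiFi REST API, which lists the registry clients NiFi is connected to (this sketch assumes an unsecured NiFi and carries the nifi.local hostname over from Test 1):

# The test passes if the expected NiFi Registry appears in the response
curl -s http://nifi.local/nifi-api/controller/registry-clients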

Test 3: Validate the NiFi "Kick Off" Flows Function as Intended

The NiFi "Kick Off" flow focuses on standing up the integration between OpenLMIS v3, NiFi, Postgres, and Superset. The end state is a fully functioning integration between these systems with data flowing between them. The minimal use case for this test focuses on facility information.
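
Since the minimal use case is facility information, one end-state check is that facility rows reached Postgres. A sketch; the database and table names here are assumptions:

# A non-zero count suggests facility data flowed end to end
psql -h localhost -U postgres -d open_lmis -c "SELECT COUNT(*) FROM facilities;"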

Test 4: Validate All Reports embedded within OpenLMIS

Administrative

The Administrative dashboard displays the Facility List, which should be an exact 1-to-1 match to the facility list in the OpenLMIS Admin section.

The test fails if the chart is blank or there isn't a 1-to-1 match on the facilities.

Stock Status

The Stock Status dashboard should display 5 charts: Stock filter, Reporting Rate and Timeliness, All-Time Reporting Timeliness, Stock Status over Time, and Stock Status table.

All charts should display correctly; otherwise the test fails.

Adjustments

The Adjustments dashboard should display 4 charts: Adjustments filter, Adjustments Summary, All-Time Reporting Timeliness, and Reporting Rate and Timeliness.

The test fails if any of these charts is missing or shows an error.

Orders

The Orders dashboard should contain 5 charts: Orders filter, All-Time Reporting Timeliness, Total Cost of Orders, Emergency v. Regular Orders, and Reporting Rate and Timeliness.

After loading the dashboard, all charts should display; the data shown will vary from user to user.

Consumption

The Consumption dashboard should display 4 charts and 1 markdown row. The charts expected are: Stock filter, Logistics Summary Report, Consumption Report, and Total Adjustment Consumption per Facility. The test fails if data is missing or any of the charts has an error.

Stockouts

The Stockouts dashboard should display 5 charts and 1 markdown row. The charts expected are: Stock filter, Reporting Rate and Timeliness, All-Time Reporting Timeliness, and District Stockout Rates over Time. If any of the charts doesn't display, the test fails.

Reporting Rate and Timeliness

The Reporting Rate and Timeliness dashboard should display the following 4 charts with no errors: Reporting Rate filter, All-Time Reporting Timeliness, Expected Facilities per Period, and Reporting Rate and Timeliness.
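
These dashboard checks can be partially automated against the Superset API. A sketch, assuming a Superset version that exposes the /api/v1 REST endpoints and an admin login:

# Log in, then list dashboard titles; the test passes if every dashboard
# named above appears in the output
TOKEN=$(curl -s -X POST http://superset.local/api/v1/security/login \
  -H 'Content-Type: application/json' \
  -d '{"username": "admin", "password": "admin", "provider": "db"}' | jq -r .access_token)
curl -s http://superset.local/api/v1/dashboard/ \
  -H "Authorization: Bearer $TOKEN" | jq '.result[].dashboard_title'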

         

Test 5: Validate that NiFi is functioning for Measures and MeasureReports

If NiFi cannot authenticate against the OpenLMIS API, the /api/oauth/token endpoint rejects its requests with a response like the following:

{
  "timestamp": "2019-06-06T14:33:18.300+0000",
  "status": 401,
  "error": "Unauthorized",
  "message": "Bad credentials",
  "path": "/api/oauth/token"
}
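
For comparison, a well-formed token request looks like the sketch below; the host, client id, secret, and user credentials are assumptions to replace with your deployment's values:

# A 200 response containing an access_token means the credentials are valid
curl -s -X POST \
  "http://localhost/api/oauth/token?grant_type=password&username=administrator&password=password" \
  -u "user-client:changeme"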

Test 6: Validate FHIR Locations are available
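
FHIR exposes a standard REST search per resource type. A sketch, where $FHIR_BASE stands for the FHIR server's base URL:

# The test passes if the returned bundle's total is greater than zero
curl -s "$FHIR_BASE/Location" -H "Accept: application/fhir+json" | jq .total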

Test 7: Validate FHIR Measures are available
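
Same pattern as the Location check above:

# The test passes if at least one Measure is returned
curl -s "$FHIR_BASE/Measure" -H "Accept: application/fhir+json" | jq .total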

Test 8: Validate the quality of each measure

Test 9: Validate the measureReports are available
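
Same pattern again, for MeasureReport resources:

# The test passes if at least one MeasureReport is returned
curl -s "$FHIR_BASE/MeasureReport" -H "Accept: application/fhir+json" | jq .total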

Test 10: Validate the quality of 7 measureReports for one requisition

Test 11: Verify Requisition Creates 7 measureReports
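
A sketch for this check and the previous one, assuming the MeasureReports reference the requisition through the standard subject search parameter (how the reference is modeled in this stack is an assumption):

# After creating the requisition, the test passes if exactly 7
# MeasureReports reference it
curl -s "$FHIR_BASE/MeasureReport?subject=$REQUISITION_REF" \
  -H "Accept: application/fhir+json" | jq .total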

Test 12: Verify the requisition updated the Superset dashboard