Research, test and document a mechanism for version controlling and auto-loading Superset datasources, metrics, charts and dashboards

Description

We need a mechanism to automatically load Superset data sources, metrics, charts and dashboards when the Docker container starts and the Ansible playbooks are run. This mechanism should load the appropriate Superset configuration into the system and should be maintainable in a versioned way on GitHub.
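One possible shape for the auto-load step, sketched in Python: a container entrypoint that walks a version-controlled directory of exported YAML files and feeds each one to a Superset CLI import command. The command name and `-p` flag here are assumptions based on Superset's CLI of this era and should be verified against the installed version; the directory layout is likewise hypothetical.

```python
import subprocess
from pathlib import Path

def build_import_command(yaml_path):
    """Build the CLI invocation for importing one exported datasource file.

    Assumes the `superset import_datasources -p <file>` CLI command exists;
    check `superset --help` on the installed version.
    """
    return ["superset", "import_datasources", "-p", str(yaml_path)]

def autoload_datasources(config_dir, run=subprocess.run):
    """Import every .yaml export found in the version-controlled config directory.

    `run` is injectable so the loop can be exercised without a live Superset.
    Returns the list of commands issued, in sorted filename order.
    """
    commands = []
    for yaml_file in sorted(Path(config_dir).glob("*.yaml")):
        cmd = build_import_command(yaml_file)
        commands.append(cmd)
        run(cmd, check=True)
    return commands
```

Run from the Docker entrypoint (or an Ansible task) after Superset's database is migrated, so the imports land in an initialized schema.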

Acceptance Criteria:

  • A document defining the process is written and added to the GitHub README

  • TBD: Developer documentation may need to be created on ReadTheDocs

Activity

Antonate Maritim
July 31, 2018 at 2:26 PM

To create a slice (chart) in Superset, insert a row into the slices table with the correct key:value properties.
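As a sketch of that insert, the row can be assembled in Python before writing it with raw SQL or SQLAlchemy. The column names used here (`slice_name`, `datasource_id`, `datasource_type`, `viz_type`, `params`) are illustrative, drawn from the general shape of Superset's slices schema, and should be checked against the Superset version in use.

```python
import json

def build_slice_row(slice_name, datasource_id, viz_type, form_data):
    """Assemble the key:value properties for one row of Superset's slices table.

    The `params` column stores the chart's form data serialized as a JSON
    string; `datasource_type` is "table" for table-backed datasources.
    Column names are assumptions; verify against your schema.
    """
    return {
        "slice_name": slice_name,
        "datasource_id": datasource_id,
        "datasource_type": "table",
        "viz_type": viz_type,
        "params": json.dumps(form_data),
    }
```

The resulting dict maps directly onto an INSERT statement's column list, which keeps the JSON serialization of `params` in one place.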

Antonate Maritim
July 27, 2018 at 1:30 PM

These are my findings so far about creating a datasource in superset:

1. The Superset API returns authorization errors when hitting the databaseview/api/create endpoint. Superset REST API documentation doesn't exist, and judging from issues on Superset's repo, no one has been successful in interacting with the API.
2. Adding a datasource directly to Superset's PostgreSQL database has issues with password hashing: in the UI, the password in the SQLAlchemy URI is hashed before insertion.
3. Exporting a datasource as a .yaml file and then importing it has the same password-hashing issue.

A workaround for both 2 and 3 is to open the database for editing in the UI and save it without changing anything; this hashes the password in the SQLAlchemy URI. Editing via NiFi returns an OK response, but the database is not changed.
An upside to exporting the YAML files is that all tables related to a datasource are exported as well.
A bug I just discovered when exporting a database with tables: at most one table is included in the export.
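Since findings 2 and 3 mean an exported YAML can carry the database password in plaintext inside the SQLAlchemy URI, a small hygiene step before committing exports to GitHub could mask it. This is a sketch using only the standard library; `mask_password` and the `XXXX` placeholder are hypothetical names, not part of Superset.

```python
from urllib.parse import urlsplit, urlunsplit

def mask_password(sqlalchemy_uri, placeholder="XXXX"):
    """Replace the password component of a SQLAlchemy URI with a placeholder.

    Useful before committing exported datasource YAML to version control,
    since the export carries the URI verbatim. URIs without an embedded
    password (e.g. sqlite file paths) are returned unchanged.
    """
    parts = urlsplit(sqlalchemy_uri)
    if parts.password is None:
        return sqlalchemy_uri  # no credentials embedded in the URI
    # Swap only the first ":password@" occurrence inside the netloc.
    netloc = parts.netloc.replace(
        ":" + parts.password + "@", ":" + placeholder + "@", 1
    )
    return urlunsplit(parts._replace(netloc=netloc))
```

The real password would then be re-injected at load time (for example from an environment variable managed by Ansible) rather than stored in the repository.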

Craig Appl
July 27, 2018 at 2:22 AM

Can you document the various methods you tried with Superset, and the points at which they failed, in this ticket?

Craig Appl
July 5, 2018 at 8:50 PM
Clay Crosby
June 7, 2018 at 8:48 AM

I think we should try to fit it into the next sprint if possible. I would prioritize the Scalyr ticket higher, though, as this ticket is not required for SELV since we're doing the Superset build directly in the SELV instance of Superset.

Created May 22, 2018 at 12:14 PM
Updated August 1, 2018 at 11:16 AM
Resolved August 1, 2018 at 11:16 AM