Automated Testing Analysis
We currently have about 10 core services, and according to SonarQube all of them have code coverage below 78%. After analysing the test suites, I identified the problems listed below.
Problems
- lack of integration tests for the UI
- lack of edge-case tests, which is a frequent cause of regressions
- tests written only for the 'happy path'
- a lot of duplication: several tests for the same method that mostly repeat the same 'when... then' (a parameterized test can collapse these; see the sketch after this list)
- code duplicated between services: some classes (e.g. Message, PageableUtil) are copied into many services, but their tests live in only one of them
- UI service coverage is not calculated in SonarQube, so we cannot monitor it
- incorrect tests that still pass, caused by the lack of TDD
- contract tests are added only rarely
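To illustrate the duplication and edge-case points, a single JUnit 5 parameterized test can replace several near-identical 'when... then' tests and cover boundary values at the same time. This is a minimal sketch only; the PageSizeValidator helper, its validate method and the limits used are hypothetical stand-ins (kept inside the test so the example compiles), not code from our services.

import static org.junit.jupiter.api.Assertions.assertEquals;
import static org.junit.jupiter.api.Assertions.assertThrows;

import org.junit.jupiter.api.Test;
import org.junit.jupiter.params.ParameterizedTest;
import org.junit.jupiter.params.provider.CsvSource;

class PageSizeValidatorTest {

  // Hypothetical helper standing in for a real service class; only here to keep the sketch self-contained.
  static final class PageSizeValidator {
    static int validate(int requested) {
      if (requested <= 0 || requested > 100) {
        throw new IllegalArgumentException("page size out of range: " + requested);
      }
      return requested;
    }
  }

  // One parameterized test replaces several copies of the same 'when... then' block.
  @ParameterizedTest
  @CsvSource({"1, 1", "50, 50", "100, 100"})
  void shouldAcceptValidPageSizes(int requested, int expected) {
    assertEquals(expected, PageSizeValidator.validate(requested));
  }

  // Edge cases that 'happy path' tests typically miss.
  @Test
  void shouldRejectNonPositivePageSize() {
    assertThrows(IllegalArgumentException.class, () -> PageSizeValidator.validate(0));
  }

  @Test
  void shouldRejectPageSizeAboveUpperBound() {
    assertThrows(IllegalArgumentException.class, () -> PageSizeValidator.validate(101));
  }
}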
Proposed solutions
- add the SonarQube plugin to IntelliJ so code coverage can be observed while developing
- use Selenium for UI integration tests (a sketch follows this list)
- use TDD to improve the value of the tests we add
- add contract tests (with a proper 'history') to ACC
- add tests for edge cases and exceptions
- add tests for message keys (see the second sketch after this list)
- add tests for importers/exporters
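A minimal Selenium sketch for the UI integration test proposal is shown below. It assumes JUnit 5 and the selenium-java client with a ChromeDriver binary available on the test machine; the URL, element ids and credentials are placeholders rather than the real UI.

import static org.junit.jupiter.api.Assertions.assertTrue;

import java.time.Duration;
import org.junit.jupiter.api.AfterEach;
import org.junit.jupiter.api.BeforeEach;
import org.junit.jupiter.api.Test;
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;
import org.openqa.selenium.support.ui.ExpectedConditions;
import org.openqa.selenium.support.ui.WebDriverWait;

class LoginPageIT {

  private WebDriver driver;

  @BeforeEach
  void setUp() {
    driver = new ChromeDriver();
  }

  @AfterEach
  void tearDown() {
    driver.quit();
  }

  @Test
  void shouldShowHomePageAfterLogin() {
    // Placeholder URL and element ids; replace with the real UI under test.
    driver.get("http://localhost:9000/#!/login");

    driver.findElement(By.id("username")).sendKeys("administrator");
    driver.findElement(By.id("password")).sendKeys("password");
    driver.findElement(By.id("login-button")).click();

    // Wait for an element that only appears after a successful login.
    WebDriverWait wait = new WebDriverWait(driver, Duration.ofSeconds(10));
    wait.until(ExpectedConditions.visibilityOfElementLocated(By.id("navbar")));

    assertTrue(driver.getCurrentUrl().contains("home"));
  }
}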
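For the message key proposal, one low-effort option is a test that asserts every key referenced by the service exists in the message bundle, so a missing translation fails the build. This is only a sketch under assumptions: the messages_en.properties location and the listed keys are illustrative, not the actual project layout.

import static org.junit.jupiter.api.Assertions.assertTrue;

import java.io.InputStream;
import java.util.Properties;
import org.junit.jupiter.api.Test;

class MessageKeysTest {

  // Illustrative keys; in a real test these could be collected from a constants class via reflection.
  private static final String[] REFERENCED_KEYS = {
      "service.error.notFound",
      "service.error.validation.dateInvalid"
  };

  @Test
  void everyReferencedKeyShouldExistInTheBundle() throws Exception {
    Properties bundle = new Properties();
    // Assumed bundle location on the test classpath.
    try (InputStream stream = getClass().getResourceAsStream("/messages_en.properties")) {
      bundle.load(stream);
    }

    for (String key : REFERENCED_KEYS) {
      assertTrue(bundle.containsKey(key), "Missing message key: " + key);
    }
  }
}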