Automated Testing Analysis
We currently have about 10 core services, and according to SonarQube all of them have code coverage below 78%. After analysing the test suites, I identified the problems listed below.
Problems
- lack of UI integration tests
- lack of edge-case tests, which is a common cause of regressions
- adding tests only for the 'happy path'
- a lot of duplicated tests (several tests for the same method where most of them share the same 'when... then' block)
- code duplicated between services (some classes, e.g. Message and PageableUtil, are copied into many services, but their tests live in only one of them)
- UI service coverage is not calculated on SonarQube, so we cannot monitor it
- incorrect tests that pass anyway, caused by the lack of TDD
- contract tests are added only rarely
Proposed solutions
- add the SonarQube plugin to IntelliJ IDEA so developers can observe code coverage locally
- use Selenium for UI integration tests
- apply TDD to improve the value of new tests
- add contract tests (with a proper 'history') to ACC
- add tests for edge cases and exceptions
- add tests for message keys
- add tests for importers/exporters
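To illustrate "add tests for edge cases, exceptions": a happy-path-only suite never executes the error branches, so those branches can regress silently. A small framework-free sketch (the `parseQuantity` helper is hypothetical):

```java
// Illustrative sketch: covering edge cases and the exception path
// alongside the happy path. parseQuantity() is a hypothetical helper.
public class EdgeCaseTest {

    static int parseQuantity(String raw) {
        int value = Integer.parseInt(raw.trim());
        if (value < 0) {
            throw new IllegalArgumentException("quantity must be >= 0: " + value);
        }
        return value;
    }

    public static void main(String[] args) {
        // Happy path.
        if (parseQuantity(" 42 ") != 42) {
            throw new AssertionError("happy path failed");
        }
        // Edge case: zero is the boundary value and must be accepted.
        if (parseQuantity("0") != 0) {
            throw new AssertionError("zero should be valid");
        }
        // Error path: a suite with only happy-path tests never runs this
        // branch, so its behaviour is effectively untested.
        boolean thrown = false;
        try {
            parseQuantity("-1");
        } catch (IllegalArgumentException expected) {
            thrown = true;
        }
        if (!thrown) {
            throw new AssertionError("expected IllegalArgumentException for -1");
        }
        System.out.println("ok");
    }
}
```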
OpenLMIS: the global initiative for powerful LMIS software