Regression Testing Guide and Improvements

The aim of this document is to gather guidelines that make the release process faster. It points out what should be covered by manual, E2E, and contract tests in order to achieve a higher level of overall test automation, which should reduce the time we spend on regression testing of release candidates.


What do the manual tests cover currently?

Currently, the manual tests cover both the happy paths and all known edge cases. We are still in the process of reviewing the existing manual test cases, but even now they look much better. The API test cases have been marked as Dead, and tickets have been created to fill the gap with contract tests. Moreover, some test cases are too detailed: they check whether modals or notifications contain the exact text instead of verifying that the message gives the user all the important information. These parts of the test cases should be modified so that nobody fails a test case merely because the wording is not identical.

Next steps:

  • Ensure that manual test cases are modified in terms of the information displayed in modals - they shouldn't require an exact text match, but only describe the gist of the message that should be conveyed to the user (ticket to review and update TCs); see the assertion sketch below
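The same principle can carry over to automated checks. Below is a minimal sketch, assuming JUnit 5 and AssertJ; the class, helper method, and message text are hypothetical, not existing OpenLMIS code. It asserts on the facts the notification must convey rather than on its exact wording.

    import static org.assertj.core.api.Assertions.assertThat;

    import org.junit.jupiter.api.Test;

    public class NotificationMessageTest {

        // Hypothetical helper: returns the notification text shown to the user
        // after submitting a requisition.
        private String submitRequisitionAndGetNotification() {
            return "Requisition R-123 was submitted successfully.";
        }

        @Test
        public void notificationShouldConveyTheKeyInformation() {
            String message = submitRequisitionAndGetNotification();

            // Brittle: an exact match fails whenever the copy changes, even
            // though the meaning stays intact.
            // assertThat(message).isEqualTo("Requisition R-123 was submitted successfully.");

            // Robust: verify only what the user needs to learn -- which entity
            // was affected and what happened to it.
            assertThat(message)
                    .contains("R-123")
                    .containsIgnoringCase("submitted");
        }
    }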


What quality are the manual regression tests bringing to the service?

Manual regression tests bring quality and value to the service. It is not always profitable to automate everything, as automating an activity takes longer than testing it manually once. Manual regression tests increase our system's test coverage, which gives us confidence that there are no gaps in testing. Not every case can be automated, and manual tests bring additional value: they can verify the system's intuitiveness and usability. Everything comes at a price, though: executing a manual test case takes much longer than executing an automated one.


How can we improve quality?

Regular review of the current test cases is a key factor in improving the quality of manual tests, although executing and updating test cases demands considerable effort. Modifying the existing test cases that cover a given functionality, rather than creating new ones, is a good practice for increasing quality. Some test cases check specific UI details or exact text, which does not bring value to the service; their quality can be improved by removing these parts.

Next steps:

  • Update the test strategy to include a requirement to periodically review test cases (similar to how regression tests are scheduled for every other sprint) - discussion item on QA call
  • Spike/research on how to improve searching for existing test cases/test steps, so that it is easier to identify whether a feature/case is already covered by manual tests and to prevent duplicating test cases/steps (ticket)
  • Improve the test strategy so that it more clearly explains, with examples, what should and shouldn't be checked/tested in manual steps (ticket to move the thoughts from this page to the official docs and prepare examples)
  • Update the developer documentation to state that quality is not only the testers' concern and that committed changes should already be tested by the developer before the ticket reaches the QA column (ticket to update docs)


What can we do to make sure we cover all edge cases?

Experience and broad knowledge of the system are crucial to making sure that all edge cases are covered. OpenLMIS has never defined a formal process for identifying the edge cases of a ticket. To cover more edge cases, the developer responsible for implementing a given functionality should suggest what should be tested manually.

Next steps:

  • Research/define a strategy for including edge cases in tickets; when does that happen? At sprint planning? When a developer picks up the ticket? (ticket to do the research and update the workflow)


How do we make sure our manual tests make sense?

Currently, a new manual test is created when new functionality is added to the system and no suitable test cases exist yet. When an existing test case partly covers the functionality, that test case is updated instead; we do not create a new manual test then.

While writing a test case, it is crucial to analyze the acceptance criteria deeply and think of possible edge cases: what can go wrong, and in what ways a user might misuse the given functionality. The steps of a test case must together cover every acceptance criterion. Sometimes there are dependencies between functionalities and services that need to be taken into account while writing a test case. It is impossible to have full assurance that everything is covered with tests; first and foremost, the functionalities most important from the user's point of view should be tested, and automated tests are best suited to this role.
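For example (a hypothetical acceptance criterion, used here only for illustration): for "a user cannot submit a requisition with a negative quantity", a well-built test case covers the happy path (a valid quantity submits successfully), the stated constraint (a negative quantity is rejected with an explanatory message), and the nearby edge cases (zero, an empty field, non-numeric input).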

All in all, manual tests make sense when they verify a specific, rarely occurring edge case. They also work well for verifying that artifacts such as reports are printed correctly with the right content. It is important to point out that neither manual nor automated tests should verify UI details such as the exact text in a modal or the naming of a label; judging these requires some intuition, which automated tests cannot provide.

Currently, test cases are run in the context of the change made in a given ticket, and they ignore the fact that other parts of the system could be affected. Say work on a Stock Management screen requires a change to the Orderables schema or API. The developer makes the change and the Stock Management screen works as expected, which is also verified with a test case. However, a change to the Orderables schema or API affects many other places in the system that are not covered by that test case. We expect to catch the obvious and biggest issues with automated tests. For edge cases, we should have a better process to define which parts of the system are affected by a ticket, and then an easy way to find all the test cases related to them.
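A minimal sketch of such an automated check, assuming REST Assured and JUnit 5 (the endpoint path, field names, and service URL below are assumptions for illustration, not the actual OpenLMIS contract-test setup): it pins down the parts of the Orderables API response that the UI depends on, so that a schema change which renames or removes a field fails this test instead of silently breaking other screens.

    import static io.restassured.RestAssured.given;
    import static org.hamcrest.Matchers.hasKey;

    import org.junit.jupiter.api.Test;

    public class OrderablesContractTest {

        @Test
        public void orderablesResponseShouldKeepTheFieldsTheUiReadsFrom() {
            given()
                    .baseUri("http://localhost:8080")   // assumed local service URL
                    .auth().oauth2("<access-token>")    // auth details elided
            .when()
                    .get("/api/orderables")
            .then()
                    .statusCode(200)
                    // Fields assumed to be consumed by the Stock Management screen;
                    // removing or renaming any of them should fail here, not in the UI.
                    .body("content[0]", hasKey("id"))
                    .body("content[0]", hasKey("productCode"))
                    .body("content[0]", hasKey("fullProductName"));
        }
    }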

Next steps:

  • PoC: contract tests that verify the API from the UI perspective
  • Review new test cases as they are created - who should do this? Other testers? Developers? BA? Product Owner? - discussion item for QA call


What is the value that the manual tests bring vs the time we need to invest in executing those tests?

The value of manual tests for happy paths is not worth the time we need to invest in creating and executing them. Executing a manual test case with 15 steps usually takes about an hour, while creating a new manual test case takes about two hours; we can save a lot of time by replacing them with functional tests. For edge cases, on the other hand, the value is significant: they occur rarely, so the effort needed to rewrite them as automated tests would be disproportionate.
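As a rough illustration under assumed numbers (the suite size is an assumption, not measured data): a regression suite with 100 happy-path test cases at about an hour each costs roughly 100 person-hours per full pass, and every release candidate pays that cost again, whereas an automated suite covering the same paths runs in minutes of machine time, so the up-front automation effort is repaid within a few regression runs.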

In contrast, automated tests bring benefits such as a shorter lead time and a higher number of deployments, since releasing would take significantly less time. Leading companies/products are able to release several times a day.

Next steps:

  • Define a strategy for existing and new manual test cases that verify very specific UI elements/behavior (e.g. input placement, resizing, sticky columns, loading icons, drag & drop) - create a ticket to update the test strategy and to update or mark as Dead the TCs that are no longer relevant


What edge cases won't the E2E tests cover?

Generally, we don't want to implement functional tests for edge cases; the main assumption is to cover them with manual test cases. In particular, E2E tests should not cover edge cases that require executing multiple steps across different services.

On the other hand, this assumption should not apply to all edge cases. Especially at the beginning of automating test cases, we should try to automate both happy paths and edge cases. This document should be updated once the first test for an edge case has been written, to record a decision about our approach to edge cases.

Next steps:

  • Document what is covered by which type of tests - new ticket, or update the ticket below
  • Merge testing documents into one (OLMIS-5388)
