
The aim of this document is to gather guidelines that make the release process faster. It points out what should be covered by manual, E2E and contract tests in order to achieve a higher level of overall test automation. This should reduce the time spent on regression testing of release candidates.

...

Currently, the manual tests cover both the happy path and all known edge cases. We are still in the process of reviewing the existing manual test cases, but even now they look much better. The API test cases are marked as Dead, and tickets have been created to fill the gap with contract tests. Moreover, some test cases are too detailed: they check whether modals or notifications contain exact text instead of verifying that a user gets all the important information. These parts of the test cases should be modified so that nobody fails a test case merely because the wording is not identical.

...

Manual regression tests bring quality and value to the service. It is not always profitable to automate everything, as automating an activity can take longer than testing it manually. Manual regression tests increase our system's test coverage, which gives us confidence that there are no gaps in testing. Not every case can be automated, and manual tests bring additional value: they can verify the system's intuitiveness and usability. Everything comes at a price, though. Executing a manual test case takes much longer than executing an automated test.


How can we improve the quality?

Regular review of the current test cases is the key factor in improving manual test quality. Executing and updating test cases demands significant effort. Modifying the existing test cases that cover a given functionality, rather than creating new ones, is good practice that increases quality. Some test cases check specific UI details or exact text, which does not bring value to the service; removing these parts improves the test cases' quality.

...

How do we make sure our manual tests make sense?

Currently, a new manual test is created when a new functionality is added to the system and no suitable test cases exist yet. When an existing test case partly covers the functionality, we update that test case instead of creating a new one.

...

All in all, manual tests make sense when they verify a specific edge case that is unlikely to occur in everyday use. This kind of test also works for verifying that files such as reports are printed correctly with the right content. It is important to point out that neither manual nor automated tests should verify UI details such as the exact text of a modal or the naming of a label. Judging these details requires intuition, which automated tests cannot provide.

Currently, test cases are run in the context of the change happening in a given ticket, ignoring the fact that other parts of the system could be affected. Say work on the Stock Management screen requires a change to the Orderables schema or API. The developer makes the change and the Stock Management screen works as expected, which is also verified with a test case. However, a change to the Orderables schema or API affects many other places in the system that are not covered by that test case. We expect automated tests to catch the obvious and biggest issues. For edge cases, we should have a better process to define which parts of the system are affected by a ticket, and then an easy way to find all the test cases related to them.
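One lightweight way to make affected test cases discoverable is to tag every test case with the components it exercises, so that a change to one component (such as the Orderables API in the example above) surfaces every related test case. The sketch below illustrates the idea; the component and test case names are made up for illustration, not taken from the actual test suite.

```python
# Map each test case to the components it exercises (names are illustrative).
TEST_CASE_COMPONENTS = {
    "TC-101 Stock Management happy path": {"stock-management-ui", "orderables-api"},
    "TC-102 Requisition approval": {"requisitions-api", "orderables-api"},
    "TC-103 User login": {"auth-service"},
}

def affected_test_cases(changed_components):
    """Return every test case that touches any of the changed components."""
    changed = set(changed_components)
    return sorted(
        name for name, components in TEST_CASE_COMPONENTS.items()
        if components & changed
    )

# A change to the Orderables API flags both test cases that depend on it.
print(affected_test_cases(["orderables-api"]))
```

With such a mapping maintained alongside the test cases, the "which test cases does this ticket affect?" question becomes a lookup instead of guesswork.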

...

The value of manual tests for happy paths is not worth the time we need to invest in creating and executing them. Executing a manual test case with 15 steps usually takes about an hour, while creating a new manual test case takes about two hours. We can save a lot of time by replacing them with functional tests. For edge cases, however, the value is significant: they occur rarely, and the effort of rewriting them as automated tests would be disproportionate.

...

  • Define a strategy for existing and new manual test cases that verify very specific UI elements/behavior (e.g. input placement, resizing, sticky columns, loading icons, drag & drop) - create a ticket to update the test strategy and to update or mark as Dead the no-longer-relevant test cases

...

On the other hand, this assumption should not apply to all edge cases. Especially at the beginning of automating test cases, we should try to automate both happy paths and edge cases. This document should be updated after the first test for any edge case has been written, to record a decision about our approach to edge cases.

...

  • Document what is covered by which type of tests - create a new ticket or update the ticket below
  • Merge testing documents into one (OLMIS-5388)

...

Should cover edge cases rather than happy paths

...

Should verify that email messages are sent (until appropriate automated patterns exist for this)

(check whether we have a ticket to establish an automated pattern)
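An automated pattern for this check could replace the real mail client with a test double and assert that a message was sent to the right recipient with the key information present. The sketch below is only a suggestion of such a pattern, using Python's `unittest.mock`; the notification routine, recipients and message content are assumptions, not the actual implementation.

```python
# Sketch of an automated pattern for verifying that an email was sent,
# using a mocked SMTP client instead of a real mail server.
from unittest.mock import MagicMock

def send_low_stock_alert(smtp, recipient, product):
    # Hypothetical notification routine under test.
    body = f"Subject: Low stock alert\n\nProduct {product} is below minimum stock."
    smtp.sendmail("noreply@example.org", [recipient], body)

def test_low_stock_alert_is_sent():
    smtp = MagicMock()
    send_low_stock_alert(smtp, "manager@example.org", "Paracetamol 500mg")
    smtp.sendmail.assert_called_once()
    _, recipients, body = smtp.sendmail.call_args.args
    assert recipients == ["manager@example.org"]
    # Check for the key information rather than matching the exact wording.
    assert "Paracetamol 500mg" in body and "Low stock" in body

test_low_stock_alert_is_sent()
```

Once such a pattern exists, this manual check could move to the automated suite.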

...

Should cover checking reports

...


...

Should not check whether a modal or notification contains exact text, but rather verify that it gives the user all important information
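An assertion in this style checks that the key facts appear in the notification without pinning down the exact wording, so a harmless copy change does not fail the test. This is a minimal sketch of the idea; the notification text and the required facts are illustrative assumptions.

```python
# Sketch: assert that a notification carries the key information a user needs,
# instead of comparing it against an exact string.
def notification_is_informative(text, required_facts):
    """True if every required fact appears somewhere in the notification."""
    lowered = text.lower()
    return all(fact.lower() in lowered for fact in required_facts)

notification = "Order #1042 was submitted successfully and sent to Warehouse A."

# Passes even if the wording changes, as long as the facts remain.
assert notification_is_informative(notification, ["#1042", "submitted", "Warehouse A"])
```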

...