Introduction

The purpose of this document is to outline the OpenLMIS test strategy.  Developers should reference the Testing Guide for the general automated test strategy for OpenLMIS.

The objective of manual and automated tests is to catch bugs as quickly as possible after code is committed, so they are less time-consuming to fix. We plan to automate tests as much as possible to ensure better control of product quality. All tickets should be tested manually by the QA Team; selected tests will be automated at the API level (with Cucumber) and at the UI level (e.g. with Selenium).

This Test Strategy document describes the following topics:

  • UI testing – lists which types of devices and browsers are supported and which are prioritized for manual testing
  • QA responsibilities
  • Tools – which tools the QA Team uses for testing OpenLMIS
  • Testing Standards – UI style guide compatibility, translations, performance standards
  • Testing workflow – describes the workflow for manual testing, automated testing and regression, and the way bugs are reported, including the bug report pattern and the acceptance criteria workflow
  • Testing environments and updating test data
  • Regression Testing
  • Requirements traceability – describes how to use labels and test cycles to support Zephyr traceability reports

Manual Testing vs Automated Testing

Testing Standards

  • Follows UI guidelines
  • Internationalization
  • Form Entry and Errors
  • Loading Performance
  • Offline Performance
  • Role Based Access Control
  • Exception Scenarios

UI testing

This section lists which types of devices and browsers are supported and which are prioritized for manual testing.

Past versions of OpenLMIS have officially supported Firefox. For OpenLMIS 3.0, we are prioritizing support of Chrome because of global trends (e.g. see Mozambique Stats), along with its developer tools and its auto-updating nature.

For QA testing of OpenLMIS our browser version priorities are:

  1. Chrome 52+ (test on Chrome 52 and Chrome latest)
  2. Firefox 48+ (test on Firefox 48 and Firefox latest)

The next most widely-used browser version is IE 11, but we do not recommend testing or bug fixes specifically for Internet Explorer compatibility in OpenLMIS.

The operating systems on which we should test are:

  1. Windows 7 (by far the most widely used according to Mozambique, Zambia and Benin data, and globally)
  2. Windows 10

Note: The QA team is doing some testing using Linux (Ubuntu) workstations. That is fine for testing the API, but Linux is not a priority environment for testing the UI and for final testing of OpenLMIS. It's important to test the UI using Chrome and Firefox in Windows 7 and Windows 10.

In other words, OpenLMIS developers and team members may be using Mac and Linux environments. It is fine to report bugs happening in supported browsers (Chrome and Firefox) on those platforms, but we won't invest QA time in extensive manual testing on Mac or Linux.

We have asked different OpenLMIS implementations to share their Google Analytics data to better inform how we prioritize and invest in browser and device support going forward.

Supported Devices

OpenLMIS 3.0 is only officially supporting desktop browsers with use of a pointer (mouse, trackpad, etc). The UI will not necessarily support touch interfaces without a mouse pointer, such as iPad or other tablets. For now, we do not need to conduct testing or file bugs for tablets, smart watches, or other devices.

Screen Size

We suggest testing with the most popular screen sizes:

  1. 1000 x 600 (this is a popular resolution for older desktop screen sizes; it is the 16:9 equivalent of the age-old 1024x768 size)
  2. 1300 x 975 (this is a popular resolution for newer laptop or desktop screens)

The UI should work on screens within that range of sizes. Screen size can be simulated in any browser by changing the size of the browser window or using Chrome developer tools.
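
If this check is scripted (e.g. with Selenium, which is already used for UI automation), the browser window can be resized to the two target resolutions programmatically. The sketch below is only an illustration: it assumes the Selenium Java bindings and ChromeDriver, and the class name and target URL are examples rather than part of the existing test suite.

```java
import org.openqa.selenium.Dimension;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;

public class ScreenSizeCheck {

    // Target window sizes from this test strategy: older desktop and newer laptop/desktop.
    private static final Dimension SMALL_SCREEN = new Dimension(1000, 600);
    private static final Dimension LARGE_SCREEN = new Dimension(1300, 975);

    public static void main(String[] args) {
        WebDriver driver = new ChromeDriver();
        try {
            for (Dimension size : new Dimension[] {SMALL_SCREEN, LARGE_SCREEN}) {
                // Simulate the target screen size by resizing the browser window.
                driver.manage().window().setSize(size);
                // Example URL only; point this at the environment under test (e.g. UAT).
                driver.get("https://uat.openlmis.org");
                // Layout checks (scrollbars, truncated elements, etc.) would go here.
            }
        } finally {
            driver.quit();
        }
    }
}
```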

Bandwidth

OpenLMIS version 3 is tested using a bandwidth of 384 Kbps, which is equivalent to a 3G (WCDMA standard) connection. We recommend that end users have this speed or higher for optimal usability.
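
For automated checks, the same bandwidth can be approximated with Chrome's network emulation. The sketch below is illustrative only; it assumes Selenium 4's ChromeDriver and its executeCdpCommand helper, and the latency value and class name are invented for the example.

```java
import java.util.HashMap;
import java.util.Map;

import org.openqa.selenium.chrome.ChromeDriver;

public class BandwidthThrottleCheck {

    public static void main(String[] args) {
        ChromeDriver driver = new ChromeDriver();
        try {
            // 384 Kbps = 384,000 bits/s = 48,000 bytes/s.
            Map<String, Object> conditions = new HashMap<>();
            conditions.put("offline", false);
            conditions.put("latency", 100);               // extra latency in ms (illustrative value)
            conditions.put("downloadThroughput", 48_000); // bytes per second
            conditions.put("uploadThroughput", 48_000);   // bytes per second

            driver.executeCdpCommand("Network.enable", new HashMap<>());
            driver.executeCdpCommand("Network.emulateNetworkConditions", conditions);

            // Load the environment under test (uat.openlmis.org is the test environment
            // mentioned later in this document) and observe loading performance.
            driver.get("https://uat.openlmis.org");
        } finally {
            driver.quit();
        }
    }
}
```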

Responsibility

Developers (all teams):

...

Look at the Testing Guide for more information.


SolDevelo QA Team:

  • creating the feature part of contract tests
  • creating the Java part of contract tests
  • creating and updating acceptance criteria
  • performing regression tests (every second sprint, i.e. once a month)
  • performing comprehensive manual tests after every sprint
  • creating test plans and regression test cycles
  • performing manual tests:
  • daily bug tracking:

Communication

...

QA Testing Workflow within a Sprint

...

Testing

...

Step 1: Test Cases

When a new feature ticket is assigned to a sprint, the QA lead or tester must create a test case and link it to the Jira ticket. This can happen in parallel with the development of the ticket. Test cases must be created for each JIRA ticket assigned to a sprint that has to be tested manually; one must not create test cases for API changes, as contract tests are to be used for those. It is advised to create test cases before a sprint begins, but this can also be done once a sprint has started. Note that sometimes a given feature might be interrelated with existing ones, which increases the risk of regression. In such situations, the developer working on the ticket should recommend the test cases that should be run in parallel with the testing of the feature. If this proves impossible, they should inform the tester about the possible influence of the changes on other parts of the system.

In JIRA, click Create. Select the Project; in this case, the project is OpenLMIS General. Then select the Issue Type as Test. The Summary, Components, and Description can be copied from the ticket to keep consistency, or to help in rewriting the test case in the next steps. Note that every valid test case has to have the To Do status and Minor priority. The test case's name should follow the convention "Screen: Tested feature" (e.g. Users: Searching for users). After entering all data, click Create and a JIRA notification will pop up with the Test Case number.

Click on the test case number to open the test case page, where you can add the test steps and make the required associations. The Labels help when querying for test cases to add to regression test cycles. For example, a suggested label could be Stock Management or Vaccines. When the tested ticket concerns UI changes and contains a mock-up, the mock-up should be added as an attachment to the ticket with the test case.

The Fix Version/s is an association that organizes the test cases. This is used to report test case execution in the Test Metrics Dashboard. If the Fix Version is not known at the time of creating the test, then select "Unscheduled". When the JIRA ticket is assigned to the sprint, the test case must be updated with the correct Fix Version or it will not appear in the Test Metrics Dashboard.

The Test Steps provide a step-by-step guide for any tester on how to complete proper validation of the JIRA ticket. The steps should explain, in enough detail for the tester, the actions to take and the expected outcome of those steps.


Once the test case has been created, it needs to be associated with the JIRA ticket.


Test Case Best Practices

Creating a test case for an edge case or unhappy path scenario

One needs to add several types of details in the test case description. These include the pre-conditions, i.e. conditions that need to be met before the test case can be executed (e.g. at least one approved requisition exists in the system; the user is logged into the application), the acceptance criteria from the ticket, a description of the scenario/workflow (i.e. whether it is a happy path or an edge case, and what the workflow looks like; e.g. User X authorizes a requisition, User Y rejects it, User X authorizes the requisition again, etc.), and a description of the test case itself (e.g. tests whether it is possible to edit an approved requisition; tests whether authorized users can approve requisitions). In order to facilitate the execution of test cases concerning requisitions, one has to include a link to the Requisition States and Workflow diagram in the pre-conditions of such test cases.

If possible, one should not include overly specific data in the test case, such as user, facility or program names, as they may vary depending on the implementation. So one should write, e.g.: Choose any program (e.g. Family Planning); Knowing the credentials of any Stock Manager (e.g. srmanager2) or Knowing the credentials of any user authorized to approve requisitions (e.g. administrator), instead of: Choose the Family Planning program or Knowing the password of srmanager2. Information on user names, roles and rights is available at https://github.com/OpenLMIS/openlmis-referencedata/tree/master/demo-data. Providing example test data can be especially helpful for users who are not very familiar with the system. One also has to remember to include the test data that are indispensable for the execution of the test case in the "Test Data" column for a given test step. In principle, all data necessary to execute the test case should be included in it (e.g. mock-ups); one should write the test case in such a way that it is not necessary to go to the tested ticket in order to test it.

Ideally, a test case should contain at most 40 steps. One can usually reduce their number by describing test actions in a more general manner, e.g.: Approve the requisition, instead of writing: Go to Requisitions > Approve, and describing the rest of the actions necessary to achieve the desired goal. Adding suitable pre-conditions, such as: Initiating, submitting and authorizing a requisition, instead of adding steps describing all of these actions in detail, also results in a shorter test case. Ideally, one should include all actions that do not verify the given feature/bug fix in the pre-conditions. If it is not possible to keep the test case within the above-mentioned limit, one should consider creating more than one for a given ticket. Sometimes, though, splitting the testing of a ticket into more than one test case will prove impossible.

If this does not result in too long a test case, one should include both the test steps describing positive testing (the happy path scenario) and those concerning negative testing (edge case/unhappy path scenarios) in one test case. A happy path scenario, or positive testing, consists of using a given feature in the default, most usual way; e.g. when there is a form with required and optional fields, one can first complete all of them, or only the required ones, and check what happens when one tries to save it. An edge case, or negative testing, consists, in this example, of checking what happens when trying to save the form when one or more of the required fields are blank. It is advisable to test the happy path first and then the edge cases, as the happy path is the most likely scenario and will be most frequently followed by users. Edge cases are the less usual and frequently non-obvious scenarios, which are not likely to occur often but still need to be tested in order to prevent failures in the application. They are taken into account because of software complexity (the many possible variants of its use and thus the situations that might occur), and because users, like all people, vary and can make use of the application in different ways.

If it proves impossible to contain all workflows in one test case, the happy path and the edge case(s) have to be described in separate ones, e.g. there should be one test case describing the testing of the happy path and one for each edge case. One also has to write separate test cases if the happy path and the edge case(s) contain contradictory steps or pre-conditions; for example, the former concerns approving a newly created requisition and the latter approving an already existing, old one that does not meet the new requirement under test.

Updating a test case for a bug

When one has to test a bug, one needs to browse through the existing test cases to check whether one that covers the actions performed when testing the bug fix already exists. In virtually all cases, it will. If so, one needs to update the test case if necessary and link it to the ticket with the bug. Writing new test cases for bugs is not recommended and has to be avoided. Some bugs are found during the execution of already existing test cases. In such a situation, the test case will already be linked to the ticket with the bug. One then needs to review it and, if there is a need for it, update it. In most cases, this will not be necessary. Note that sometimes the bug may in fact not be related to the test case during whose execution it was discovered, or it might occur near the end of a given test case, and the preceding steps might not be necessary to reproduce it. In both of these situations, one needs to find the test case that is most likely to cover the feature that the bug concerns and update it accordingly. One also has to keep in mind that test cases have to support and test the functionality, as well as provide a way to ensure that the bug has been fixed.

If the bug was not found during the execution of any test case, one still needs to check whether there is one containing steps that enable one to reproduce the bug. In order to do so, one needs to go to Tests > Search Tests in the top menu in Jira. There, one is able to browse through all test cases in the project. Since there might be many of them, it is advisable to use a filter. The first option is to enter word(s) that might occur in the test case in the Contains text input field and press Enter. The second is clicking on More and choosing the criteria; those most likely to prove helpful are Label and Component. One can also use already existing global filters, which can be found by choosing the "Manage filters" option from the main Jira menu. It is then possible to search for and use the desired filter(s). They include e.g. Administration tests or Stock Management test cases, and might prove especially useful when searching for test cases.

Exploratory testing

When one is familiarizing oneself with the project, or when one already knows the application but there are no tickets to test at the moment, one can perform exploratory testing. This kind of testing is not related to any ticket. It consists of testing the application without any previous plan – exploring it, in a way. It can also be considered a form of informal regression testing, as bugs resulting from regression are frequently found during it. While performing this kind of testing, it is advisable to be as creative as possible – to experiment with testing techniques and test steps, and not to follow the happy path but the edge cases, or to try to find the latter.

Exploratory testing in the UI will focus on testing edge cases and causing errors related to the following categories: functional issues, UI to server failures, configuration issues, visual inconsistencies, and presentational issues. The bugs listed below are examples for each category, with their priorities. When bugs are found during exploratory testing, consider these as references for how to record and prioritize them.

...

  • All functional bugs should be Blocker or Critical bugs, e.g. OLMIS-3983 (Cannot access offline requisition) and OLMIS-4076 (It's possible to submit a requisition twice: duplicate status changes).

...

...

...

...

Workflow

...

Step 2: Test Cycles

Test Cycles must be created at the beginning of each sprint. There should be one test cycle for the sprint, as well as a regression test cycle when required for a release. The purpose of creating test cycles is to track the progression of testing during a sprint cycle, and provide traceability for requirements.

Once the test case is written, it needs to be associated with a Test Cycle. Click on the Add to Test Cycle button to open the association screen.


The Version should be the version associated with the JIRA ticket. The test cycle will be either the current sprint test cycle or the current feature regression test cycle. If you are executing the test, or know who will be, you can assign it here.

Typically the QA lead or someone designated will create the test cycles for each sprint, and they will only need to be linked. If there are no test cycles to select, then these are the fields you must enter to create a new test cycle. The following are two examples of test cycles created for a sprint. The test cycles must have the Version, Name, Description, and Environment, because these are used in queries for reporting and tracking test metrics.

...

Once test cases have been assigned to a Test Cycle, the execution of these tests is tracked in Zephyr.

When a test is ready to execute, open the test case and navigate to the bottom of the page where the Test Execution links are shown. Click on the "E" button to begin execution.


The Test Execution page appears, detailing the test case and each test step. Select the Test Execution Status of WIP; this means that the test case is in progress. Zephyr will assign you as the person executing the test and automatically record the start date and time. Assign the test execution to yourself and you are ready to begin testing. Each step has a status, comments, attachments, and bugs field.

While completing each step, if the expected result matches the actual result, change the status to Pass. If it does not match, the status is Failed. If for some reason the step cannot be executed, for example because the system is down, then the status is Blocked. Once a test is completed, its status can be updated to reflect whether it Passed or Failed, and the status of the test execution will be saved to the test case. This status also appears in the Test Metrics Dashboard.


Once the test execution is complete, the Test Case should be marked as Done.

Step 4: Enter Bugs

During testing, if a test step fails with a different result than expected, or an unexpected error appears, then a bug must be entered. Click on the Bugs column and Create New Issue.

...

The new bug should be linked to the JIRA ticket that is linked to the test case.


When a bug is created, it will automatically be put in the "Roadmap" status. It should stay in this status until it has been triaged and reproduced; only then can its status be changed to "To Do", at which point the bug becomes visible in the product backlog.

In summary, each of these steps completes the QA workflow process for a sprint.

...

For each Sprint's Test Cycle, the QA lead must assign appropriate test cases so they can be executed during the sprint. These test cases will be selected based on the tickets assigned to the sprint after Sprint Planning. The QA lead must determine if test cases are missing for the tickets assigned to the sprint, and create those test cases to ensure complete test coverage. New features require new test cases, while bugs or regression test cycles may rerun existing test cases.

Sprint

...

...


QA

...

Bug Tracking

It is important to track the bugs introduced during each sprint. This process helps identify test scenarios that may need more attention or business process clarification. Bug tracking also helps identify delays with ticket completion.

  • When a test case fails, a bug must be created and linked to the ticket. This bug must be resolved before the ticket can be marked as done.
  • The same test case associated with the sprint's test cycle should be re-run (the previous practice was to create a new test cycle, but that makes the test case show as failed in one reporting row and passed in a separate row, which is very confusing).
  • If a bug is created for a test case, it must be resolved before the test is executed again within the sprint's test cycle.

Bug Triage during the sprint

The bug triage meeting is held twice every sprint, i.e. once a week. Before the meeting, an attempt is made to reproduce the bugs with the "Roadmap" status; if they still occur, their status is changed to "To Do". If not, they are moved to "Dead". During the meeting, the reported bugs are analyzed in terms of their significance to the project, their final priority is set, and the acceptance criteria are updated.

Manual testing workflow

When the developer finishes implementing a ticket, they should change its status from In Progress to QA. QA should check that all acceptance criteria are correct and up to date.

The main QA manual workflow consists of the following steps:

  1. QA starts testing when the ticket is moved to the QA column on the Jira sprint board, or when the ticket status has changed to QA. There is no automatic notification for this until the ticket is assigned, so QA must check manually or the developer must notify QA. (Notification to QA must be via the QA Slack channel, or directly, and in the general Slack channel, referencing the ticket that needs to be tested.)
  2. QA deploys the changes to the test environment (uat.openlmis.org).
  3. QA creates a test case, or executes the existing test case linked to the ticket. QA should test all acceptance criteria in the ticket. The test case must be associated with the current sprint test cycle.
  4. If bugs are found, QA should change the status to In Progress and assign the ticket to the appropriate developer.
    1. When a bug is created it must be linked to the ticket.
    2. QA notifies the developer that the ticket has been moved back to In Progress (a mention in the ticket or in the QA Slack channel).
  5. When everything works properly, QA should change the status to Done and assign the ticket to the appropriate team leader.

During the testing process, QA should write comments in the ticket about the tested features and add screenshots of them.

Workflow between QA teams

To improve the testing process, testers should help each other:

  • If one of the team members doesn't have anything to do in their own team, they will assist the other team with testing
  • If any tester needs help testing they can report it on the QA Slack channel

Ticket workflow and prioritization

In the OpenLMIS project tickets may have the following statuses:

  • To Do - tickets intended for implementation,
  • In Progress - tickets during implementation or returned to the developer to fix bugs,
  • In Review - tickets in code review,
  • QA - tickets which should be tested,
  • Done - tickets which work correctly and are ready to be closed.

In OpenLMIS, tickets may have the following priorities:

  • Blocker - high priority ticket,
  • Critical - high priority ticket,
  • Major - low priority ticket,
  • Minor - low priority ticket,
  • Trivial - low priority ticket.

It is very important to start by testing tickets with Blocker and Critical priority. When QA is in the middle of testing a task with a lower priority (i.e. Major, Minor or Trivial), and a task with a higher priority (i.e. Blocker or Critical) returns to QA with changes, QA should complete testing of the lower-priority task as soon as possible and begin testing the higher-priority task.

Additionally, tickets in the QA column of the sprint board ( https://openlmis.atlassian.net/secure/RapidBoard.jspa?rapidView=46 ) should be prioritized. Tickets with high priority (i.e. Blocker or Critical) should be located at the top of the QA column. Every morning, the QA Lead should put the tasks in the QA column in the correct order.

QA Contract Test Workflow (needs to be reviewed by team)

...

  1. Tickets to be automated with contract tests are found in the Contract Test Epic (OLMIS-1012).
  2. QA writes the feature part of the contract test. QA should add the feature to the ticket description and push the feature file to GitHub. The name of the pull request should consist of the name of the ticket + "Feature" and should be linked in a comment. (A sketch of a feature file and its Java step definitions appears after this section.)
  3. When the feature part is finished, QA should assign the ticket to the developer who will implement the Java part of the contract test.
  4. QA verifies that the test passes in Jenkins.
  5. The ticket should be closed (Done status) if all tests pass in Jenkins.

Contract test tickets are added to sprints. Epics are grouped by Requisition, Distribution, etc., and by user (e.g. Admin).
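
To illustrate the split between the feature part (written by QA) and the Java part, here is a minimal, hypothetical sketch. It assumes Cucumber's Java bindings, JUnit and Java 11's HttpClient; the scenario, endpoint, ids and class names are invented for illustration and are not taken from the actual OpenLMIS contract test suite.

```java
// Feature part (written by QA, kept in a .feature file; shown here as a comment):
//
//   Feature: Requisition approval
//     Scenario: An authorized user approves an authorized requisition
//       Given an authorized requisition exists
//       When I approve the requisition as an approver
//       Then the requisition status is APPROVED
//
// Java part (step definitions). The endpoint and id below are illustrative only.

import static org.junit.Assert.assertEquals;

import io.cucumber.java.en.Given;
import io.cucumber.java.en.Then;
import io.cucumber.java.en.When;
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class RequisitionApprovalStepDefs {

  private final HttpClient client = HttpClient.newHttpClient();
  private String requisitionId;
  private HttpResponse<String> lastResponse;

  @Given("an authorized requisition exists")
  public void anAuthorizedRequisitionExists() {
    // A real test would create (or look up) a requisition via the API.
    requisitionId = "some-requisition-id"; // hypothetical id
  }

  @When("I approve the requisition as an approver")
  public void iApproveTheRequisition() throws Exception {
    // Hypothetical endpoint; the real service URL and auth token come from test configuration.
    HttpRequest request = HttpRequest.newBuilder()
        .uri(URI.create("https://uat.openlmis.org/api/requisitions/" + requisitionId + "/approve"))
        .header("Authorization", "Bearer <token>")
        .POST(HttpRequest.BodyPublishers.noBody())
        .build();
    lastResponse = client.send(request, HttpResponse.BodyHandlers.ofString());
  }

  @Then("the requisition status is APPROVED")
  public void theRequisitionStatusIsApproved() {
    assertEquals(200, lastResponse.statusCode());
    // A real step would also parse the body and assert on the requisition status field.
  }
}
```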

Contract test Ticket Workflow

...

  • To Do – at this stage QA implements the feature part of the contract tests. When finished, QA should change the status to In Progress and assign the ticket to Paweł Gesek.
  • In Progress – at this stage Paweł Gesek implements the Java part of the contract tests. When he finishes, he should change the status from In Progress to In Review.
  • In Review – QA verifies in Jenkins that the tests pass. When the tests finish successfully, the status should be changed to Done.
  • Done – the contract test ticket is closed.

...

We can see a comprehensive report of our contract tests on our Jenkins, HERE.

There are two elements on the page: a chart and a table.

The chart shows the overall status of the tests.

...

  • The first column is our Feature; it shows how many tests we are running in our project. We can see the test details by clicking it. In the Feature Report, we can see all the details by clicking the Scenario and Steps of the test.
  • The second column is the status of each feature.
  • The third column is the status of each scenario in the feature.
  • The last column shows the status and the time taken to test each feature.

...

Regression

...

When creating the regression test cycle, the QA lead will search for the feature label and assign all tests to the regression test cycle.


Regular manual regression should be executed every second sprint (i.e. once a month). This kind of regression testing is focused on specific services; it does not entail testing the entire system. Sam Im and Joanna Bebak decide which services the regular, focused regression testing is to cover during the QA meetings, and their decision is based on the most recent changes in the system. For instance, if many changes have recently been introduced in requisitions, the regression testing will consist of the execution of all test cases concerning the requisition service.

Big, full manual regression, consisting of manual tests of the whole system and the execution of all valid test cases, is held one or two weeks before the Release Candidate process. The latter is further described in the Versioning and Releasing document.

When bugs are found during regression testing, they are labeled with the Regression label. Regression-related bugs are given either Critical or Blocker priority and must be resolved within the next sprint. These bugs are also reviewed in the weekly bug triage meeting for completeness and to ensure that any concerns are communicated to stakeholders.

Regression test cases to review for potential duplicate steps that already exist in another test case:

...

OLMIS-3005

Workflow for blocked tickets

During testing, tickets sometimes become blocked. This is the situation when, because of external faults (not related to the content of the task), the tester cannot assess a particular task. In that case, change the ticket status to In Progress and assign it to the appropriate component leader, whose team is able to solve the external problem. It is important to describe the problem accurately in the ticket's comments. If in doubt about which component leader to choose, QA should assign the ticket to the team leader (Paweł Gesek) with a request to identify the right person.

Component Leaders:

...

Testing Environments

...


Updating Test Data

Before a big regression test, QA must update the test users and scenarios to support each of the features that are part of the regression. It is also important to note that a few test cases entail changing the permissions of demo data users. Before executing such a test case, one needs to inform others on the #qa Slack channel that one is going to change these permissions. Having executed the test case, one has to restore the user's permissions to the defaults, either manually or by scheduling a re-deploy to UAT. In both cases, one also has to inform users on #qa that one is changing the user's permissions back to normal. In general, one should avoid changing the permissions of demo data users; instead, one can create a new user and assign suitable rights to them.

Translation Testing

While OpenLMIS 3.0 supports a robust multi-lingual tool set, English is the only language officially provided with the product itself. The translation tools will allow subsequent implementations to easily provide a translation for Portuguese, French, Spanish, etc. QA testing activities should verify that message keys are being used everywhere in the UI and in error messages so that every string can be translated, with no hard coding. The v3 UI is so similar to v2 that QA staff could also apply the Portuguese translation strings from v2 to confirm that translations apply everywhere in v3.
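
As an illustration of what this check means in practice, the sketch below contrasts a hard-coded string with a message key resolved from a resource bundle. The class name, key and bundle are hypothetical and only show the pattern, not the actual OpenLMIS implementation; it assumes a messages_en.properties file containing the key is on the classpath.

```java
import java.util.Locale;
import java.util.ResourceBundle;

public class MessageKeyExample {

  public static void main(String[] args) {
    // Hard-coded string: cannot be translated, should be flagged during QA.
    String hardCoded = "Requisition could not be approved";

    // Message key resolved from a resource bundle (e.g. messages_en.properties,
    // messages_pt.properties). Only the key appears in the code, so every string
    // can be translated without code changes. Assumes the bundle exists on the classpath.
    ResourceBundle messages = ResourceBundle.getBundle("messages", Locale.ENGLISH);
    String translated = messages.getString("requisition.error.approve.failed");

    System.out.println(hardCoded);
    System.out.println(translated);
  }
}
```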

Tools

  • Cucumber and IntelliJ - for contract tests,
  • Browsers - Chrome 52+ (test on Chrome 52 and Chrome latest) and Firefox 48+ (test on Firefox 48 and Firefox latest),
  • REST clients - Postman and REST Client,
  • JIRA - issue, task and bug tracking,
  • Zephyr - test case and test cycle creation,
  • Confluence - test case templates, testing standards,
  • Reference Distribution,
  • Docker 1.11+,
  • Docker Compose 1.6+.

Requirements Traceability

  •  Define how to update test cases to ensure requirements traceability (future QA weekly meeting topic)
  •  Define testing metrics and review process for showcases (future QA weekly meeting topic)

...

Product Testing

Product testing is scheduled for each sprint and includes testing of new features and other ad hoc test scenarios. This testing will be tracked using Test Sessions in Zephyr. Sam Im is responsible for scheduling the testing and assigning the test cases and testers to the Test Sessions so that execution can be tracked and bugs are logged as part of the session.

...

  • Philosophy: (examples may include: we want to catch bugs as quickly as possible after code is committed to make it less time-consuming to fix; we want to automate as much of the testing as is feasible so that later when other developers extend or customize OpenLMIS they will have a safety net to know if they broke something)
  • UI testing: including lists of which devices/browsers are supported, and which we prioritize for manual testing
  • Translation: list of which languages are officially supported and which will be tested
  • Summarize who is responsible for which types of testing. Include links to current documentation. Some examples (these may be inaccurate, but just examples):
    • SolDevelo QA team: perform manual testing across browser/device/version combinations
    • Developers: write unit tests for all Java code; write protractor tests for all AngularJS UI
    • SolDevelo QA team: write cucumber acceptance tests for all high-level requirements and configuration differences across countries (such as push versus pull, different combinations of approval hierarchies and programs)
    • ___: performance testing
    • ___: security testing
    • ___: code style guide conformance
    • ___: interface style guide conformance (colors, fonts, sizes, UI consistency, accessibility)
    • ___: internationalization/translation testing
  • The remainder of our bug reporting guidelines (what details need to be in a bug)
  • Summary of bug workflow (where bugs go in JIRA, who will triage or assign, who will follow-up to verify the bug is fixed)

Performance Testing

...


...