chore(docs): Document some of our existing testing practices (#20691)
Document some of our current testing practices in hopes of helping reduce friction in PRs and communication in general. In theory, nothing in the text should be a surprise because these are things we have been discussing over many months (some things for almost 1.5 years) and are already present in the code.

Parent: e4639c153f
Commit: 4b8a612df4

@@ -1,67 +1,43 @@

## Getting Started

- [Starting Guide](starting-guide.md)
- [IDE Setup](ide-setup.md)

## Development Process

- [Coding guidelines](new-guidelines.md)
- [UI components coding guidelines](ui-guidelines.md)
- [Release Checklist](release-checklist.md)
- [Release Guide](release-guide.md)
- [Merging PR process](merging-pr-process.md)
- [PR Review Policy](pr-review-policy.md)
- [Working on PR together with QA team](pipeline_process.md)
- [Debugging](debugging.md)
- [Patching](patching.md)
- [Creating a pixel perfect UI](pixel-perfection.md)
- [Contributing to status-go](status-go-changes.md)
- [Malli schemas (recorded demo)](https://www.youtube.com/watch?v=SlRio70aYVI) ([slides](files/forging-code-with-schemas-sep-2023-slides.pdf))

## Testing

- [Overview](tests/tests-overview.md)
- [How to run local tests](tests/how-to-run-local-tests.md)
- [End-to-end tests (e2e) overview](tests/how-to-launch-e2e.md)
- [Component tests (jest) overview](tests/component-tests-overview.md)

## Project details

- [status-go introduction (recorded meeting)](https://drive.google.com/file/d/1B7TljmTZ8fHkqJH8ChU1Cp4FGDFM03gq/view)
- [re-frame usage (recorded meeting)](https://drive.google.com/file/d/1qv_E0CEGzQpu_zGXD0gCTU5EvhC2k8Jy/view)
- [status app functionality demo](https://drive.google.com/file/u/0/d/1PmwsMLTLDNNIdv5_6wvMOwoj2PfT50c6/view)

## Misc

- [Importing icons from Figma into project](export-icons.md)
- [Updating Status APK builds for the F-Droid Android application catalogue](fdroid.md)
- [Troubleshooting for known errors](troubleshooting.md)

## Outdated

- [Old guidelines](codebase-structure-and-guidelines.md)
- [Post mortem analysis](post-mortem.md)

@@ -32,9 +32,9 @@ Ready for testing, a PR should meet the following criteria:

### E2E tests and analyzing the results

The PR **MUST** be moved to the E2E column when it is ready for testing (**mandatory for all PRs**).
That will also trigger an e2e test run. QAs monitor PRs in the E2E column and take them into testing.
This step cannot be skipped, so at least one comment from the `status-im-auto` bot with results is a prerequisite for moving forward.
Information on how to analyze tests can be found [here](https://github.com/status-im/status-mobile/blob/develop/doc/tests/how-to-launch-e2e.md).
Tests might be flaky, as they depend on infrastructure (SauceLabs and Waku).

If there are `Failed tests` and you are not sure about the reason, you can always ping the mobile QAs for help (preferably in the PR via `@status-im/mobile-qa`).

@@ -46,14 +46,14 @@ Please, respect this rule.**

### Adding `skip-manual-qa`

**Do not hesitate to use `skip-manual-qa`** if you're sure that it is a simple flow and you have checked it.
- Please ask another team member before adding the `skip-manual-qa` label (PR/Status community/DMs) so that there's a second opinion.
- The PR MUST include proper reasoning for why manual QA is skipped.
- The PR MUST include the testing steps performed by the developer prior to moving it forward.

**NOTE:** Make sure that the QAs are OK with that.

Before merging a PR, please make sure that information is added about how you tested it, that the e2e tests have passed, and that their results have been reviewed.

The QA team appreciates your help!

@@ -72,7 +72,7 @@ The QA team appreciates your help!

- A QA engineer picks up one of the PRs with the ```request-manual-qa``` label, drags the item to the ```IN TESTING``` column and assigns it to themselves.
- During testing, the QA will add comments describing the issues found, and also review the automated test results.
  Found issues are usually numbered as "Issue 1", "Issue 2", etc.
  When the first round of testing is completed and all issues for this stage are found, the QA can add the ```Tested - Issues``` label and drag the card to the ```CONTRIBUTOR``` column. These two actions are optional.

**IMPORTANT NOTE:** when the issues are fixed, the developer **MUST** notify the QA that the PR is ready to be re-tested by mentioning them in the PR.

@@ -86,7 +86,7 @@ After that the developer merges PR into develop.

_**How do I know if a design review is needed?**_

There are three cases here depending on the changes in the PR:
1. **Functional PRs with UI changes:** after the ```Tested - OK``` label is added, the QA moves the PR to the ```Design review``` column and mentions ```@Francesca-G``` in the comments.
2. **Component PRs:** once the PR has received a review from developers and the e2e test results, it can be moved directly to the ```Design review``` column by the developer (the manual testing step can be skipped), and the developer mentions ```@Francesca-G``` in the comments.
3. **Functional PRs whose changes are not related to the UI (e.g. a crash fix):** skip the ```Design review``` step (the PR should only be manually tested by QA).

@@ -99,7 +99,7 @@ There are three possible scenarios when the design review is completed:

**Notes:**
- If your PR has a long history and branched off `develop` several days ago, please rebase it onto the current develop before adding the label.
- If the PR can be tested by the developer (in case of small changes) and/or the developer is sure that the changes cannot introduce a regression, the PR can be merged without manual testing. Also, currently, PRs are not manually tested if the changes relate only to the design (creation of components, etc.) and do not affect functionality (see the `skip-manual-qa` label).

---

#### Why is my PR in the `Contributor` column?
A PR can be moved to this column by the ```status-github-bot```, by a QA engineer with the ```Tested - Issues``` label, or if one of the requirements for manual QA was not met.

@@ -120,6 +120,6 @@ In the second case - after fixing of all found issues, the developer should ping

6. In case of manual testing - the label ```Tested - OK``` from QA
7. In case of design review - the approval from the designer

You can merge your PR into develop - some useful clues can be found [here](https://notes.status.im/setup-e2e#3-Merging-PR).

HAPPY DEVELOPMENT! :tada:

@@ -8,7 +8,7 @@ As a part of CI for Status mobile app and in order to ensure there are no regres

- Automated tests are written in Python 3.9 with pytest.
- Appium (server) and Selenium WebDriver (protocol) are the base of the test automation framework.

TestRail is the test case management system where we keep our test cases.

Each test case gets a priority (Critical/High/Medium).

@@ -20,21 +20,21 @@ For now we support e2e for Android only.

Whenever we need to run a set of test scripts, we create up to 16 parallel sessions (the maximum, depending on the number of cases included in the job), and each thread: 1) uploads the Android .apk file to SauceLabs -> 2) runs through the test steps -> 3) receives results on whether the test failed at a particular step or succeeded with no errors -> 4) parses the test results and pushes them as a GitHub comment (if the suite ran against the respective PR) and into TestRail.

We run the **whole automated test suite (currently 155 tests; the number changes)** against each nightly build (if the nightly build job succeeded). Results of the test run are saved in TestRail.
We also run a set of autotests whenever a PR with successful builds is moved into the `E2E Tests` column of the [Pipeline for QA dashboard](https://github.com/status-im/status-react/projects/7).
In that case we save the results in TestRail as well and push a comment with the test results to the respective PR.

For example: https://github.com/status-im/status-react/pull/9147#issuecomment-540008770

![](../images/how-to-launch-e2e/how-to-launch-e2e-1.png)

The `test_send_stt_from_wallet` entry opens a link in TestRail (https://ethstatus.testrail.net/index.php?/tests/view/890885) where the performed steps can be found.
The list of all runs performed by test jobs can be found at https://ethstatus.testrail.net/index.php?/runs/overview/14.

**For TestRail credentials to see the results, ping Chu in a DM.**

Opening any test run navigates you to the list of test cases with results:
![](../images/how-to-launch-e2e/how-to-launch-e2e-2.png)

## What about launching e2e manually

@@ -53,12 +53,12 @@ Params to specify:

- test_marks: tests by priority (by default: `critical or high or medium`, which corresponds to the whole suite; to launch the same suite as in PRs, use `critical or high`)
- testrail_case_id: the list of test case IDs, which you may find in TestRail (4-digit values)

For easier access you can hit `Rerun tests` in the GitHub comment, and testrail_case_id/apk_name/pr_id will be filled in automatically. To make sure that tests are rerun on the most recent e2e build, it is recommended to paste the link to the last e2e build into the apk_name field. The list of PR builds can be found in the Jenkins Builds block on the PR page.
![](../images/how-to-launch-e2e/how-to-launch-e2e-3.png)
And then hit ‘Build’.
Once the job starts, it picks up the specified tests, runs them against the provided apk, and sends the results to the pull request.

Even though we have 16 parallel sessions for testing, it is a time-consuming operation (the whole test suite we have automated at the moment takes ~140 minutes to finish).
So for PRs we pick only the set of `critical or high` tests (you can also use this in the TEST_MARKS param for the job); otherwise some PRs could wait for their turn in the scheduled Jenkins job until the next day.

@@ -78,9 +78,9 @@ Several examples of when test fails to succeed:

- **Valid issue in the automated test scripts** - that's what we're looking for.

Example: here are the test results where one test failed: https://github.com/status-im/status-react/pull/13015#issuecomment-1016495043
1. Open the test in TestRail and open the session recorded for this test in SauceLabs
![](../images/how-to-launch-e2e/how-to-launch-e2e-4.png)

In TestRail you may find all the steps performed by the test.

@@ -98,6 +98,6 @@ Not all features of the app could be covered by e2e at the moment:

## Brief flow for a test to be automated

Whenever there is a need to have a new test:
1) Create a test scenario in TestRail.
2) If a certain item can be checked within the scope of an existing test case, we update the existing one (otherwise we may end up with thousands of test cases, which is overkill to manage in TestRail as well as in the automated test scripts). On the other hand, complex autotests increase the probability of not catching regressions: test execution may stop early (due to a valid bug or a changed feature), leaving the remaining test steps uncovered. So we need to balance when it makes sense to update an existing test case with more checks.
3) Then we create a test script based on the test case, ensure the test passes for the build, and push the changes to the repo.

@@ -0,0 +1,167 @@

# Tests

## Introduction

This document provides a general overview of the types of tests we use and when to use them. It is not meant to be a tutorial or detailed documentation about testing in software development.

## Types of tests

Tests in `status-mobile` consist of:

- Unit tests
  - Subscription tests
  - Event tests
  - Tests for various utilities
- [Component tests](./component-tests-overview.md)
- Integration/contract tests
- [End-to-end tests](./how-to-launch-e2e.md)

We apply the [test pyramid](https://en.wikipedia.org/wiki/Test_automation#Testing_at_different_levels) strategy, which means we want the majority of tests at the bottom of the pyramid. Those should be fast and deterministic and support REPL-driven development (RDD). Slightly above them, we have component tests, then integration/contract tests, and finally end-to-end tests. The closer a test is to the top of the pyramid, the more valuable it can be, but also the harder it is to pinpoint why it failed and to make it dependable.

*Note*: there are literally dozens of [types of tests](https://en.wikipedia.org/wiki/Software_testing), each with its strengths and weaknesses.

We tend not to stub or mock implementations in our tests, which means our tests are [sociable](https://martinfowler.com/bliki/UnitTest.html).
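
To make that concrete, here is a minimal, hypothetical sketch of a sociable unit test: the event handler calls its real helper rather than a stub, so the test exercises both together. The names, event, and db shape are invented for illustration and are not actual `status-mobile` code.

```clojure
(ns example.sociable-test
  (:require [cljs.test :refer [deftest is]]
            [clojure.string :as string]))

;; A real helper function -- not stubbed or mocked in the test below.
(defn normalize-ens-name
  [raw-name]
  (string/lower-case (string/trim raw-name)))

;; A re-frame-style event handler: a pure function from cofx + args
;; to an effects map. It calls the real normalize-ens-name.
(defn set-ens-name
  [{:keys [db]} [raw-name]]
  {:db (assoc db :ens/name (normalize-ens-name raw-name))})

;; The test is "sociable": a bug in normalize-ens-name surfaces here.
(deftest set-ens-name-test
  (is (= "alice.eth"
         (-> (set-ens-name {:db {}} ["  Alice.ETH "])
             (get-in [:db :ens/name])))))
```

If `normalize-ens-name` regressed, this test would fail too; that is the trade-off sociable tests accept in exchange for exercising real collaborators.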

## What to test?

The UI is driven by global and local state changes caused by events. Global state is managed by re-frame, and local state by Reagent atoms or React hooks. Except for component and end-to-end tests, we test only non-UI code in `status-mobile`. Given that the UI is largely derived from global state, by guaranteeing the state is correct we can prevent bugs and, more importantly, reduce the [cost of change](https://www.pmi.org/disciplined-agile/agile/costofchange).

We strive to minimize the amount of _business logic_ in views (UI code). We achieve this by moving capabilities to status-go and also by adhering to re-frame's architecture.

Whenever appropriate (see section `When to test?`), we _may_ test:

- Re-frame events.
- Re-frame subscriptions.
- Utility functions.
- User journeys through integration/contract tests.

Interestingly, we don't test re-frame _effects_ in isolation.

### What are status-mobile integration and contract tests?

The mobile _integration tests_ can be used to "simulate" user interactions: they make actual calls to status-go via the RPC layer and actually receive signals. We can also use these tests to verify that the app-db and multiple subscriptions are correct. We use the word _simulate_ because there is no UI. Basically, any flow that can be driven by re-frame events can be tested automatically. There is no way to change or inspect local state managed by React.

A _contract test_ has the same capabilities as an integration test, but we want to draw the line that contract tests should focus more on a particular RPC endpoint or signal, and not on a user journey (e.g. creating a wallet account). In the future, we may consider running them automatically in status-go.

**Note:** integration tests and contract tests currently overlap in their responsibilities and still require a clearer distinction.

## When to test?

(Automated) tests exist primarily to support rapid software changes, but not every piece of code should be tested. The following are general recommendations, not rules.

- What would be the consequences to the user of a bug in the implementation you are working on?
- Can a QA exercise all the branches in the code you changed? Not surprisingly, QAs usually can't test many code paths (it may be nearly impossible), and because PRs are not often tested by reviewers, many PRs can get into `develop` without the necessary quality assurance.
- How costly was it for you to verify that a function/event/etc. was correct? Now consider that this cost will be spread across every developer who needs to change the implementation if there are no tests.
- Check the number of conditionals, and whether they nest. Every conditional may require two different assertions, and the number of assertions can grow exponentially.
- How complicated are the arguments to the function? If they contain nested maps or data that went through a few transformations, it may be tricky to decipher what they are unless you are familiar with the code. A test can capture the data, however complex it is.

### When to unit-test subscriptions?

Only test [layer-3 subscriptions](https://day8.github.io/re-frame/subscriptions/#the-four-layers), i.e. don't bother testing extractor subscriptions (check the related [guideline](https://github.com/status-im/status-mobile/blob/7774c4eac16fdee950a17bf5d07630c45a980f41/doc/new-guidelines.md#subscription-tests)). Some layer-3 subscriptions can still be straightforward and may not be worth testing.

- Check the number of _inputs_ to the sub (from the graph). The higher this number, the greater the chance the subscription will break if any input's implementation changes.

**Note**: if a tested subscription changes inadvertently, even if its own tests still pass, other subscriptions that depend on it and have tests may still fail. This is why we don't test the subscription handler directly, but instead use the macro `test-helpers.unit/deftest-sub`.
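
For illustration, here is a minimal sketch of a layer-3 subscription test using the `test-helpers.unit/deftest-sub` macro mentioned above. The subscription name and app-db shape are hypothetical, and the sketch assumes the macro binds the subscription keyword to `sub-name` and isolates app-db changes per test, as its usage in the repo suggests.

```clojure
(ns example.subs-test
  (:require [cljs.test :refer [is testing]]
            [re-frame.db :as rf-db]
            [test-helpers.unit :as h]
            [utils.re-frame :as rf]))

;; Hypothetical layer-3 subscription test: state is swap!-ed into the
;; app-db and the subscription is read through the real sub graph,
;; so a breaking change in any input subscription surfaces here.
(h/deftest-sub :wallet/account-names
  [sub-name]
  (testing "returns the names of all accounts"
    (swap! rf-db/app-db assoc-in
      [:wallet :accounts]
      {"0x01" {:name "Savings"}
       "0x02" {:name "Spending"}})
    (is (= #{"Savings" "Spending"}
           (set (rf/sub [sub-name]))))))
```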

### When to unit-test events?

A good hint is to ask whether you and other CCs need to rely on re-frisk, the UI, the REPL, or FlowStorm to understand the event. If the answer is yes or probably, then a test would be prudent.

- Many events only receive arguments and pass them along, with little or no transformation, to an RPC call. These are straightforward and usually don't need tests ([example](https://github.com/status-im/status-mobile/blob/7774c4eac16fdee950a17bf5d07630c45a980f41/src/status_im/contexts/contact/blocking/events.cljs#L79-L85)).
- Overall, every event returns at most two effects, `:fx` and/or `:db`. Usually the complicated part lies in the computation that returns the new app-db. If the event doesn't transform the app-db, or just does a trivial `assoc`, for example, it may not be worth testing.

For reference, the re-frame author particularly [suggests testing events and subscriptions](https://github.com/day8/re-frame/blob/09e2d7132c479aa43f2a64164e54e42bf8511902/docs/Testing.md#what-to-test).
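
As an example of an event that is just interesting enough to test, here is a hedged sketch. An event handler is a pure function from cofx and args to an effects map, so it can be asserted on directly without re-frame machinery; the event name, the RPC method, and the db shape are all made up for the illustration.

```clojure
(ns example.events-test
  (:require [cljs.test :refer [deftest is]]))

;; Hypothetical handler: resets a chat's unread counter in the app-db
;; and asks status-go to persist the change (method name is invented).
(defn mark-chat-read
  [{:keys [db]} [chat-id]]
  {:db (assoc-in db [:chats chat-id :unread-count] 0)
   :fx [[:json-rpc/call [{:method "wakuext_markChatRead"
                          :params [chat-id]}]]]})

(deftest mark-chat-read-test
  (let [cofx   {:db {:chats {"chat-1" {:unread-count 5}}}}
        result (mark-chat-read cofx ["chat-1"])]
    ;; The returned :db has the counter reset...
    (is (= 0 (get-in result [:db :chats "chat-1" :unread-count])))
    ;; ...and exactly one effect is requested.
    (is (= 1 (count (:fx result))))))
```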

### When to unit-test utility functions?

Most utility functions in `status-mobile` are pure and can be readily and cheaply tested.

- If the utility is used in an event/subscription and the event/subscription has tests, you may prefer to test the event/subscription and not the utility, or sometimes the other way around.
- If the utility is tricky to verify, such as functions manipulating time, write tests ([example](https://github.com/status-im/status-mobile/blob/7774c4eac16fdee950a17bf5d07630c45a980f41/src/utils/datetime.cljs#L1)).
- Utilities can be particularly hard for QAs to verify because they can be lower level and require very particular inputs. In such cases, consider writing tests.
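
For instance, here is a hypothetical pure helper of the kind that is cheap to test but hard for a QA to exercise directly; the function is illustrative, not taken from the codebase.

```clojure
(ns example.utils-test
  (:require [cljs.test :refer [deftest is]]))

;; Hypothetical utility: shorten a hex address for display, keeping
;; both ends visible -- lower-level code a QA can't easily exercise.
(defn shorten-address
  [address]
  (if (<= (count address) 10)
    address
    (str (subs address 0 6) "..." (subs address (- (count address) 4)))))

(deftest shorten-address-test
  (is (= "0x1234...beef"
         (shorten-address "0x1234567890abcdefbeef")))
  ;; Short inputs are returned untouched.
  (is (= "0xabcd" (shorten-address "0xabcd"))))
```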

### When to write integration/contract tests?

- You want to make real calls to status-go because you think unit tests are not enough (test pyramid strategy).
- You constantly need to retest the same things in the UI, sometimes across multiple screens.
- The flow is too important to rely only on manual QA, which can't always be done due to resource limits, so an integration/contract test fills this gap.
- You want to rely less on end-to-end tests, which can be more unreliable and slower to change.
- You want automatic verification of some area of the mobile app whenever status-go is upgraded.

A sketch of such a test is shown below. **Note**: the feedback cycle for writing integration tests is longer than for unit tests because they are slower and harder to debug. Using the REPL with them is difficult due to their stateful nature.
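
For a flavor of what such a test can look like, here is a hedged sketch driven purely by re-frame events, using the async helpers from the `day8.re-frame.test` library. The event and subscription names are hypothetical, and the repo's real integration tests layer their own helpers on top of a similar idea.

```clojure
(ns example.integration-test
  (:require [cljs.test :refer [deftest is]]
            [day8.re-frame.test :as rf-test]
            [re-frame.core :as rf]))

(deftest login-flow-test
  (rf-test/run-test-async
   ;; Dispatch the event that kicks off a real RPC call to status-go.
   (rf/dispatch [:profile/login "alice" "s3cret"])
   ;; wait-for parks the async test until the success event (dispatched
   ;; by the signal handler) arrives, then runs the assertions.
   (rf-test/wait-for [:profile/login-success]
     (is (some? @(rf/subscribe [:profile/name]))))))
```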

### When to test Quo components?

This is covered in [quo/README.md#component-tests](https://github.com/status-im/status-mobile/blob/7774c4eac16fdee950a17bf5d07630c45a980f41/src/quo/README.md#component-tests).