Documentation updated (#14933)

This commit is contained in:
Volodymyr Kozieiev 2023-01-31 12:06:29 +00:00 committed by GitHub
parent 01523f3a1d
commit bfb2863774
GPG Key ID: 4AEE18F83AFDEB23
38 changed files with 444 additions and 68 deletions

doc/README.md Normal file

@ -0,0 +1,48 @@
## Getting Started
- [Starting Guide](starting-guide.md)
- [IDE Setup](ide-setup.md)
## Development Process
- [Coding guidelines](new-guidelines.md)
- [Release Checklist](release-checklist.md)
- [Release Guide](release-guide.md)
- [Merging PR process](merging-pr-process.md)
- [Working on PR together with QA team](pipeline_process.md)
## Testing
- [How to run local tests](testing.md)
- [End-to-end tests (e2e) overview](how-to-launch-e2e.md)
- [Component tests (jest) overview](component-tests-overview.md)
## Misc
- [Importing icons from Figma into project](export-icons.md)
- [Updating Status APK builds for the F-Droid Android application catalogue](fdroid.md)
- [Troubleshooting for known errors](troubleshooting.md)
## Outdated
- [Old guidelines](codebase-structure-and-guidelines.md)
- [Post mortem analysis](post-mortem.md)


@ -1,40 +0,0 @@
# Testing
### Unit & integration tests
To run tests:
```
make test
```
To watch the tests:
```
make test-watch
```
To run tests in a REPL:
```
make test
yarn shadow-cljs cljs-repl test # or start the REPL in your editor
```
Then start the test process with
```
node --require ./test-resources/override.js target/test/test.js --repl
```
You can run a single test in the REPL like this:
```clojure
(require 'cljs.test)
(cljs.test/test-var #'status-im.data-store.chats-test/normalize-chat-test)
```
Tests use the bindings in `modules/react-native-status/nodejs`; if you make any changes to these you will need to restart the watcher.


@ -94,7 +94,7 @@ These guidelines make db.cljs namespaces the place to go when making changes to
## Enabling debug logs
Calls to `log/debug` will not be printed to the console by default. It can be enabled under "Advanced settings" in the app:
![Enable Debug Logs](./log-settings.png)
![Enable Debug Logs](images/codebase-structure-and-guidelines/log-settings.png)
## Translations
The app relies on the system locale to select a language from the [list of supported languages](https://github.com/status-im/status-mobile/blob/bda73867471cf2bb8a68b1cc27c9f94b92d9a58b/src/status_im/i18n_resources.cljs#L9). It falls back to English in case the system locale is not supported.


@ -0,0 +1,55 @@
# Component Tests
The component tests use React Native Testing Library - https://callstack.github.io/react-native-testing-library/
and Jest - https://jestjs.io/
It is highly recommended to read some advice from Kent C. Dodds on how to write tests and use these tools correctly:
https://kentcdodds.com/blog/common-mistakes-with-react-testing-library
https://www.youtube.com/watch?v=ahrvE062Kv4
Both of these links cover React Testing Library (not the Native variant), but the approach is for the most part the same.
## Running the tests
There are two ways to run these tests.
`make component-test`
sets up and runs the test suite once.
`make component-test-watch`
sets up and runs the test suite, watches for code changes and retriggers the test suite on each change.
## Writing Tests
New test files need their namespace added to either "src/quo2/core_spec.cljs" or "src/status_im2/core_spec.cljs". These locations may change over time, as they depend on the entry points in the shadow-cljs config discussed below.
### Best practices
For the moment we keep best practices for tests in our other guidelines document.
Beyond that, these guidelines follow the conventions and recommendations of Jest and React Native Testing Library, and Status mobile just stacks its preferences on top.
### Utilities
There are utility functions defined in "src/test_helpers/component.cljs" and "src/test_helpers/component.clj". Use these utilities, and add any common testing tools to these files, as they should make writing tests easier and faster.
## Configuration
Status Mobile has a bespoke tech stack, so configuring the tests involves some extra complexity.
### Shadow-CLJS
The configuration for compiling our tests is defined in the "shadow-cljs.edn" file.
The three main parts of this are:
`:target :npm-module`
needed for the configuration we are using;
`:entries`
a vector of entry points for the test files;
and `:ns-regexp`, which specifies which tests to find. Since we have multiple kinds of tests, we decided that "component-spec" is the least likely to match the wrong file type.
It's worth knowing that our tests are compiled to JS and then run in the temporary folder `component-tests`.
### Jest
There is further configuration for Jest in "test/jest". The Jest config file contains mostly standard configuration: where the tests live, what environment variables are set, etc. This is documented by Jest here: https://jestjs.io/docs/configuration
There is also a setup file used to set some global and default values. Additionally, this file mocks some of the react native (among other) dependencies.


@ -2,7 +2,7 @@
## Export icons
![](./export-icons.gif)
![](images/export-icons/export-icons.gif)
1. Export two PNGs (2x and 3x) from Figma and put them in `./resources/images/icons`
2. If necessary, rename the file so that the filename contains only lower case chars, e.g. `"Icon-Name@2x.png"` should be renamed to `"icon_name@2x.png"`.
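As a sketch, the renaming in step 2 can be scripted (the `normalize_icons` helper name is made up, not part of our tooling; the directory matches step 1):

```shell
# normalize_icons DIR - hypothetical helper: lowercase every exported
# icon filename and turn spaces/dashes into underscores, keeping the
# @2x/@3x suffix and the .png extension intact.
normalize_icons() {
  for f in "$1"/*.png; do
    [ -e "$f" ] || continue
    base=$(basename "$f")
    fixed=$(printf '%s' "$base" | tr 'A-Z ' 'a-z_' | tr '-' '_')
    [ "$base" = "$fixed" ] || mv "$f" "$1/$fixed"
  done
}

normalize_icons ./resources/images/icons
```

Running it turns `Icon-Name@2x.png` into `icon_name@2x.png` and leaves already-correct names alone.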


doc/how-to-launch-e2e.md Normal file

@ -0,0 +1,103 @@
How to Launch E2E
===
## Overview of how automated tests are structured for the Status app
As part of CI for the Status mobile app, and to ensure no regressions appear after code changes (bug fixes, new or updated features), we use automated tests (e2e tests).
- Automated tests are written in Python 3.9 with pytest.
- Appium (server) and Selenium WebDriver (protocol) are the base of the test automation framework.
TestRail is the test case management system where we keep the test cases.
Each test case gets a priority (Critical/High/Medium).
**SauceLabs** is a cloud-based mobile application test platform. We use Android emulators (Android 10.0) there for test script execution. At most 16 sessions can run at the same time.
For now we support e2e for Android only.
## What's happening when any e2e job is running
Whenever we need to run a set of test scripts, we create up to 16 parallel sessions (depending on the number of cases included in the job), and each thread: 1) uploads the Android .apk file to SauceLabs -> 2) runs through the test steps -> 3) records whether the test failed on a particular step or succeeded with no errors -> 4) parses the test results and pushes them as a GitHub comment (if the suite ran against the respective PR) and into TestRail.
We run the **whole automation test suite (currently 155 tests, the amount keeps changing)** against each nightly build (if the nightly builds job succeeded). Results of the test run are saved in TestRail.
We also run a set of autotests whenever a PR with successful builds is moved into the `E2E Tests` column of the [Pipeline for QA dashboard](https://github.com/status-im/status-react/projects/7).
In that case we save the results in TestRail as well and push a comment with the test results to the respective PR.
For example: https://github.com/status-im/status-react/pull/9147#issuecomment-540008770
![](images/how-to-launch-e2e/how-to-launch-e2e-1.png)
The `test_send_stt_from_wallet` entry opens a link to TestRail https://ethstatus.testrail.net/index.php?/tests/view/890885 where the performed steps can be found.
A list of all runs performed by test jobs can be found here: https://ethstatus.testrail.net/index.php?/runs/overview/14
**For TestRail credentials to see the results, ping Chu in a DM.**
Opening any test run navigates you to the list of test cases with results:
![](images/how-to-launch-e2e/how-to-launch-e2e-2.png)
## Launching e2e manually
To manage e2e there are several jobs in https://ci.status.im/job/status-mobile/job/e2e :
1) [nightly](https://ci.status.im/job/status-mobile/job/e2e/job/status-app-nightly/) - runs automatically after the nightly e2e apk build. QA also runs it manually to get e2e results when testing a release.
2) [upgrade](https://ci.status.im/job/status-mobile/job/e2e/job/status-app-upgrade/) - run manually by QA during release testing for smoke upgrade tests.
3) [prs](https://ci.status.im/job/status-mobile/job/e2e/job/status-app-prs/) - runs **only automatically** when a PR moves into the `e2e` column
4) [prs-rerun](https://ci.status.im/job/status-mobile/job/e2e/job/status-app-prs-rerun/) - for manual runs; can be run on request by anyone. **If you need to launch e2e against your build, use this job.**
Params to specify:
- apk: [url_to_apk_build_here]
- pr_id: pull request number (e.g. 1234)
- branch: the branch the tests are taken from (in most cases `develop`)
- keyword expression: tests by area (say `ens` or `chat`; they can be combined: `ens or chat or send_tx`. All keywords can be found in TestRail; ping Chu for details)
- test_marks: tests by priority (by default `critical or high or medium`, which corresponds to the whole suite; to launch the same suite as for PRs, use `critical or high`)
- testrail_case_id: the id of a test case from TestRail (4-digit value)
For easier access you can hit `Rerun tests` in the GH comment and testrail_case_id / apk_name / pr_id will be filled in automatically. To make sure tests are rerun on the most recent e2e build, it is recommended to paste a link to the last e2e build into the apk_name field. The list of PR builds can be found in the Jenkins Builds block on the PR page.
![](images/how-to-launch-e2e/how-to-launch-e2e-3.png)
Then hit Build.
Once the job starts it picks up the specified tests, runs them against the provided apk and sends the results to the pull request.
Even though we have 16 parallel sessions, testing is a time-consuming operation (the whole test suite we have automated at the moment takes ~140 minutes to finish).
So for PRs we pick only the `critical or high` set of tests (you can also use this in the TEST_MARKS param for the job); otherwise some PRs could wait their turn in the scheduled Jenkins job until the next day.
## Analysing test results (and why a test fails to pass)
After an automated test run finishes, the results can be found in a GH comment (if the test suite ran against a PR) and in TestRail. A test has two states: Passed and Failed. A test fails when a condition of a test step is not met, or when the test cannot proceed because it cannot find the element it expects on screen.
Several examples of why a test fails to succeed:
- The test clicked an element that should load a new screen (or pop-up) and awaits some element on that screen, but it did not wait long enough for the new screen to appear, so it fails with “Could not find element XYZ” (in our opinion this is more an app issue than a test issue, but we cannot spend our and dev time chasing very specific random app lags that happen once in different places)
- The test sent a transaction to an address but it was not mined in time (we currently wait up to ~6 minutes for the balance to change on the recipient side). We classify this as a False Fail, because it is not an app issue but a network issue.
- Test infrastructure issues - anything related to infrastructure, including SauceLabs-side issues (the apk failed to install - a rare case; an LTE connection was set by default instead of WiFi; an unexpected pop-up appeared, preventing the test from going further)
- Failure due to a changed feature that has not been taken into account in a test after a code merge (for instance: an element on screen has been removed, and the test tries to locate another element on that screen via an XPath that is now different)
- **A valid issue found by the automated test scripts** - that's what we're looking for
Example: here are the test results https://github.com/status-im/status-react/pull/13015#issuecomment-1016495043 where one test failed.
1. Open the test in TestRail and open the session recorded for this test in SauceLabs
![](images/how-to-launch-e2e/how-to-launch-e2e-4.png)
In TestRail you can find all the steps performed by the test.
On the SauceLabs test run page you can find: a video of the session, step logs and the session's logcat.log
2. Analyze the step where the test failed
In this particular example it failed on `Recover access(password:qwerty, keycard:False)` with an unexpected error.
## Limits of e2e test coverage
Not all features of the app can be covered by e2e at the moment:
- Colours or the position of an element in the UI.
- Real ETH/token transactions. That's the main reason we have a separate .apk build for automation needs - it defaults to the Goerli network. It also has the keycard test menu enabled; ENS names and chat commands are also on the Goerli network (the same in PR builds, but not in nightlies / releases).
- Autologin/biometric-related actions (autologin is only available when the device meets certain conditions, e.g. it has an unlock password set and is not rooted; all emulators in SauceLabs are rooted).
## Brief flow for getting a test automated
Whenever there is a need for a new test:
1) Create a test scenario in TestRail.
2) If a certain item can be checked within an existing test case, we update the existing one (otherwise we may end up with thousands of test cases, which is overkill to manage in TestRail as well as in the automated test scripts). On the other hand, complex autotests increase the probability of missing regressions: stopping test execution early (due to a valid bug or a changed feature) leaves the remaining test steps uncovered. So we need to balance when it makes sense to extend an existing test case with more checks.
3) Create the test script based on the test case, ensure the test passes for the build and push the changes to the repo.


@ -33,13 +33,13 @@ See https://cursive-ide.com/userguide/index.html
- https://gist.github.com/Samyoul/f71a0593ba7a12d24dd0d5ef986ebbec
- Right click and "add as leiningen project"
<img src="images/IDE_SETUP/1_fake_project_file.png" width=75% />
<img src="images/ide-setup/1_fake_project_file.png" width=75% />
## I get a lot of `cannot be resolved`
Are you getting problems where you get a lot of `cannot be resolved` on everything?
<img src="images/IDE_SETUP/2_resolve.jpeg" width=75% />
<img src="images/ide-setup/2_resolve.jpeg" width=75% />
See https://cursive-ide.com/userguide/macros.html
@ -56,21 +56,21 @@ I had a number of problems connecting to REPL, the solution is as follows:
At the top of IntelliJ IDEA click on the `Add Configuration...` option:
<img src="images/IDE_SETUP/3_REPL_1.png" width=75% />
<img src="images/ide-setup/3_REPL_1.png" width=75% />
This will load the following menu:
<img src="images/IDE_SETUP/4_REPL_2.png" width=75% />
<img src="images/ide-setup/4_REPL_2.png" width=75% />
Click on the `+` icon in the top left corner of the menu.
Select `Clojure REPL > Remote`
<img src="images/IDE_SETUP/5_REPL_3.png" width=75% />
<img src="images/ide-setup/5_REPL_3.png" width=75% />
Which will load the following menu
<img src="images/IDE_SETUP/6_REPL_4.png" width=75% />
<img src="images/ide-setup/6_REPL_4.png" width=75% />
Enter the below options:
@ -81,14 +81,14 @@ Enter the below options:
- Host = 127.0.0.1
- Port = 7888
<img src="images/IDE_SETUP/7_REPL_5.png" width=75% />
<img src="images/ide-setup/7_REPL_5.png" width=75% />
Press `OK`
Now the below option will be visible.
Press the green run button
<img src="images/IDE_SETUP/8_REPL_6.png" width=75% />
<img src="images/ide-setup/8_REPL_6.png" width=75% />
You should now see a dialog with the following message:
@ -113,7 +113,7 @@ Which should output
See below:
<img src="images/IDE_SETUP/9_REPL_7.png" width=75% />
<img src="images/ide-setup/9_REPL_7.png" width=75% />
#### Connecting REPL and IntelliJ to `status-mobile`
@ -138,7 +138,7 @@ Next go back to the REPL input and enter the following commands:
See Below
<img src="images/IDE_SETUP/10_REPL_8.png" width="75%" />
<img src="images/ide-setup/10_REPL_8.png" width="75%" />
Which should switch the clj file type target to cljs as shown above
@ -146,13 +146,13 @@ Finally you are ready to test REPL.
Create a sample function to evaluate something simple like `(prn "I'm working")`, move your cursor to one of the outer parentheses. Right or `control` click and select the `REPL` option. From there select `Sync files in REPL` and then `Send '...' to REPL'`.
<img src="images/IDE_SETUP/11_REPL_9.png" width="75%" />
<img src="images/ide-setup/11_REPL_9.png" width="75%" />
Alternatively you can use the shortcut commands `⇧⌘M` to sync your files and `⇧⌘P` to send the statement to REPL. You may also need to switch the REPL namespace to match the current file, which can be done manually from the dialogue box or using the `⇧⌘N` shortcut key.
Following the above should give you the below result:
<img src="images/IDE_SETUP/12_REPL_10.png" width="75%" />
<img src="images/ide-setup/12_REPL_10.png" width="75%" />
🎉 Tada! Working! 🎉


doc/merging-pr-process.md Normal file

@ -0,0 +1,44 @@
## PR process
1) Create a PR in status-mobile
2) Add some reviewers to the PR and wait for feedback
3) Address the feedback
4) Make sure builds and tests are green (run `make test` locally, `make lint-fix` to fix any indentation issues, and `make lint`)
5) Once the PR has been reviewed by the dev team you can run e2e tests on it: go to https://github.com/status-im/status-mobile/projects/7 and move the PR under the E2E tests column. This will trigger the tests.
6) Once the e2e tests have run, they will report the result on the PR. If it's less than 100%, ask QA to take a look to make sure everything is in order (some tests might fail for legitimate reasons)
7) Ask QA for manual testing if the PR requires it
8) Once it has been tested successfully, squash everything into one commit, rebase and merge. The commands we use:
```
git checkout develop
git pull origin develop
git checkout your-feature-branch
git rebase develop
git checkout develop
git rebase your-feature-branch
git push
```
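If your feature branch still has several commits, one way to squash them before the rebase above is to soft-reset to the merge base and commit once (a sketch; `squash_branch` is a made-up helper, not part of our tooling):

```shell
# squash_branch BASE MSG - collapse every commit on the current branch
# into a single commit on top of its merge base with BASE.
squash_branch() {
  base="$1"   # e.g. develop
  msg="$2"    # message for the squashed commit
  git reset --soft "$(git merge-base HEAD "$base")"
  git commit -m "$msg"
}
```

While on `your-feature-branch`, `squash_branch develop "my change"` leaves the branch with a single commit, after which the rebase steps above apply.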
## Status-go changes
If you are introducing status-go changes, the PR process is pretty similar, with some differences.
The most important thing is that status-mobile code that makes it to the `develop` branch should always point to a tagged status-go version from the `develop` branch of status-go.
In practice this means:
1) Create a PR in status-go, get it reviewed by status-go devs
2) Create a PR in status-mobile, get it reviewed by devs, then go through manual testing (if necessary)
3) Once ready to merge, merge the status-go PR first, and make sure you bump the `VERSION` file in status-go
4) Once merged, tag the commit with the new version and push the tag:
```
git checkout develop
git pull origin develop
git tag vx.y.z
git push origin vx.y.z
```
5) Update status-mobile with the new status-go version, using the new tag: `scripts/update-status-go.sh "vx.y.z"`
6) Push, make sure it's rebased, and go through the merge process as above.


@ -62,6 +62,17 @@ with the source file using it. For a real example, see
(do-something)]])
```
### Don't use percents to define width/height
We shouldn't use percentages:
- because 100% doesn't make sense in flexbox
- because we always have fixed margins or paddings in the design. For example, instead of using `80%` we should use `padding-horizontal 20`: `%` translates to a different number of pixels on different devices, but we should always have the same paddings in pixels on all devices.
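As an illustration of the rule above (the names are made up; the maps follow the usual ClojureScript style-map shape used in this codebase):

```clojure
;; Avoid: a percentage width renders as a different number of pixels
;; on every device.
(def container-with-percent
  {:width "80%"})

;; Prefer: fixed paddings, identical in pixels on all devices.
(def container-with-padding
  {:padding-horizontal 20})
```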
#### Styles def vs defn
Always use `def` over `defn`, unless the style relies on dynamic values, such as

doc/pipeline_process.md Normal file

@ -0,0 +1,80 @@
# Pipeline process
[Pipeline for QA](https://github.com/status-im/status-mobile/projects/7) is a project board for developers and testers, used to track the status of a pull request, get reviews and manual testing, _and run autotests_ (_temporarily disabled_).
The generally accepted recommendations for its use are described below:
## Opening a PR
- Once a PR is created, it moves to the ```REVIEW``` column where a review will be requested automatically.
- You can also request a review inside the PR from a particular person if needed.
- When creating a PR, do not forget to assign it to yourself.
- Also in case the PR adds new functionality, a short description would be appreciated.
### What if the work is still in progress?
- If PR work is not finished yet, please mark it as a draft or add [WIP] to the title and keep it in the `CONTRIBUTOR` column until it's ready to be reviewed/tested.
### When is a PR considered to be Ready for testing by QA team?
A PR ready for testing should meet the following criteria:
1. Reviewed and has at least 1 approval
2. Rebased to `develop` branch (both `status-mobile` and `status-go` if needed, depending on what part has changes)
3. All possible conflicts have been resolved
4. Has the label: `request-manual-qa`
**From the perspective of a developer it means that once work on PR is finished:**
1. It should be rebased to the latest `develop`. If there are conflicts - they should be resolved if possible.
2. If the PR was in the `Contributor` column - it should be moved to `Review` column.
3. Wait for the review.
4. Make sure that after review and before requesting manual QA your PR is rebased to current develop.
5. Once the PR has been approved by reviewer(s) - label `request-manual-qa` should be applied to the PR
6. Move the PR to the E2E column when it is ready for testing. That will also trigger an e2e test run. QAs monitor PRs in the E2E column and take them into testing.
After that, the PR will be taken into manual testing by the QA team.
## Testing PR
### Manual testing
- If you think PR needs and is ready for manual testing, please add the ```request-manual-qa``` label.
- QA engineer picks up one of PRs with the ```request-manual-qa``` label, drags the item to the ```IN TESTING``` column and assigns it to themselves.
- During testing, QA will add comments describing the issues found, and also review the automated test results.
Usually the issues found are numbered "Issue 1, Issue 2", etc.
When the first round of testing is completed and all issues for this stage are found, the tester can add the ```Tested - Issues``` label and drag the card to the ```CONTRIBUTOR``` column. These two actions are optional.
- When manual testing of the PR is fully completed and all issues are fixed, QA adds the ```Tested - OK``` label and drags the card to the ```MERGE``` column, after which the developer merges the PR into develop.
If manual testing was not carried out, the developer drags the PR to the ```MERGE``` column themselves.
**Notes:**
- If your PR has a long history and was branched from `develop` several days ago, please rebase it onto the current develop before adding the label
- If a PR can be tested by the developer (in case of small changes) and/or the developer is sure that the changes cannot introduce a regression, the PR can be merged without manual testing. Also, currently, PRs are not manually tested if the changes relate only to the design (creation of components, etc.) and do not affect functionality.
#### Why is my PR in the `Contributor` column?
A PR can be moved to this column by the ```status-github-bot``` or by a QA engineer with the label `Tested - Issues`.
In the first case this most often happens due to conflicting files in the PR.
In the second case, after fixing all the found issues, the developer should ping QA in the PR comments for retesting.
#### Why is my PR in `To Rebase` column?
A PR is moved to the "To Rebase" column in two cases:
- automatically, by the GitHub bot, if the PR branch has conflicts that should be resolved
- manually, by QAs, if the PR branch is out of date with the base branch and requires rebasing onto the latest develop
If a PR appears in the "To Rebase" column, the dev working on it should resolve the conflicts / rebase the branch onto the latest develop. Afterwards the PR should be moved by the developer to the right column, depending on the PR's progress.
## Merging a PR
**Merge conditions:**
1. The required number of reviews has been received
2. All commits are squashed into one
3. No conflicting files in the PR
4. No issues from lint
5. Pay attention to the automation checks (some of them are not blockers, but it's best to check them before merging anyway)
![](images/pipeline-process/automation-checks.png)
6. In case of manual testing - the label ```Tested - OK``` from QA
You can then merge your PR into develop - some useful clues can be found [here](https://notes.status.im/setup-e2e#3-Merging-PR)
HAPPY DEVELOPMENT! :tada:

doc/testing.md Normal file

@ -0,0 +1,54 @@
# Local testing
## Unit & integration tests
To run tests:
```
make test
```
A test watcher can also be launched. It will re-run the entire test suite when any file is modified:
```
make test-watch
```
Developers can also manually change the shadow-cljs option `:ns-regexp` to control which namespaces the test runner should pick up.
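For illustration, the relevant part of a `:test` build in `shadow-cljs.edn` might look roughly like this (the build id and the set of keys shown are illustrative; check the project's actual config):

```clojure
{:builds
 {:test
  {:target    :node-test
   :output-to "target/test/test.js"
   ;; Narrow this to run only some namespaces, e.g. "chats-test$"
   ;; instead of every "-test$" namespace:
   :ns-regexp "-test$"}}}
```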
## Testing with REPL
The most convenient way to develop and run tests locally is using the REPL:
1. Run the command `make test-watch-for-repl`.
2. Once you see the message `[repl] shadow-cljs - #3 ready!` you can connect a REPL to the `:test` target from VS Code, Emacs, etc.
3. In any test namespace, run [cljs.test/run-tests](https://cljs.github.io/api/cljs.test/#run-tests) or your preferred method to run tests in the current namespace.
You can run a single test in the REPL like this:
```clojure
(require 'cljs.test)
(cljs.test/test-var #'status-im.data-store.chats-test/normalize-chat-test)
```
Tests use the bindings in `modules/react-native-status/nodejs`; if you make any changes to these you will need to restart the watcher.
### Example in Emacs
In the video below, you can see two buffers side-by-side. On the left the source implementation, on the right the REPL buffer. Whenever a keybinding is pressed, **tests in the current namespace instantly run**. You can achieve this exact flow in VS Code, IntelliJ, Vim, etc.
[2022-12-19 12-46.webm](https://user-images.githubusercontent.com/46027/208465927-4ad9a935-5494-45e7-85b0-8134dc32d1a1.webm)
### Example in terminal emulator
Here is a terminal-only experience using Tmux (left pane: Emacs; right pane: the output from running the make target).
[2022-12-19 13-17.webm](https://user-images.githubusercontent.com/46027/208471199-1909c446-c82d-42a0-9350-0c15ca562713.webm)


@ -1,6 +1,27 @@
# [DEPRECATED] Undefined is not an object evaluating `register_handler_fx`
## `yarn add` is not working
## Deprecation note
While running a `yarn add` command like `yarn add react-native-share@7.0.2`, the following error is shown:
```
error status-mobile/node_modules/better-sqlite3: Command failed.
```
### Cause
The local `node` version can differ from the one needed by the Status project.
### Solution
Before running `yarn add`, start the nix shell:
```
make shell
```
## [DEPRECATED] Undefined is not an object evaluating `register_handler_fx`
### Deprecation note
This type of error should not occur anymore now that we require the namespace in the `fx.cljs` file.
@ -14,7 +35,7 @@ That way you don't need to use any magical call like `find-ns` or inline `requir
You also want to make sure users use the macro via an aliased namespace defined in a require statement, rather than a require-macros that refers to the macro directly. Otherwise it won't require the cljs file, and the require statement for the namespace might not be present in the macroexpansion.
## Stacktrace
### Stacktrace
```
13:25:22, Requiring: hi-base32
@ -34,19 +55,19 @@ _callTimer@http://localhost:8081/index.bundle?platform=ios&dev=true&minify=false
_callImmediatesPass@http://localhost:8081/index.bundle?pla<…>
```
## Cause
### Cause
- stacktrace mentions `register_handler_fx`,
- common cause is when requires have been cleaned up and a require of `status-im.utils.handlers` namespace was removed because it looked like it was unused but was actually used through a fx/defn macro
## Solution
### Solution
Go through the known faulty commit looking for deleted requires
# Git "unable to access" errors during `yarn install`
## Git "unable to access" errors during `yarn install`
## Description
### Description
A developer updates the `package.json` file with a new dependency using a GitHub URL, so it looks like this:
```
"react-native-status-keycard": "git+https://github.com/status-im/react-native-status-keycard.git#feature/exportKeyWithPath",
@ -60,10 +81,10 @@ fatal: unable to access 'https://github.com/status-im/react-native-status-keycar
info Visit https://yarnpkg.com/en/docs/cli/install for documentation about this command.
```
## Cause
### Cause
`yarn.lock` is not updated to be in sync with `package.json`.
## Solution
### Solution
Update yarn.lock file. In order to do this, perform the following steps on a clean `status-mobile` repo:
```
cd status-mobile
@ -72,9 +93,9 @@ yarn install
and don't forget to commit the updated `yarn.lock` together with `package.json`.
# adb server/client version mismatch errors
## adb server/client version mismatch errors
## Description
### Description
Running some adb commands, e.g. `adb devices` or `make android-ports` (which in turn invokes `adb reverse`/`adb forward` commands), may display the following message:
```
adb server version (40) doesn't match this client (41); killing...
@ -90,10 +111,10 @@ This might cause all kinds of difficult-to-debug errors, e.g.:
- `make run-android` throwing `- Error: Command failed: ./gradlew app:installDebug -PreactNativeDevServerPort=8081 Unable to install /status-mobile/android/app/build/outputs/apk/debug/app-debug.apk com.android.ddmlib.InstallException: EOF`
- dropped CLJS repl connections (that have been enabled previously with the help of `make android-ports`)
## Cause
### Cause
The system's local adb and Nix's adb differ. As adb consists of server and client processes, this can cause subtle version errors that make adb kill mismatching server processes.
## Solution
### Solution
Always use respective `make` commands, e.g. `make android-ports`, `make android-devices`, etc.
Alternatively, run adb commands only from a `make shell TARGET=android` shell. Don't forget the `TARGET=android` env var setting - otherwise `adb` will still be selected from the system's default location. You can double-check this by running `which adb`.
@ -125,4 +146,4 @@ For x86 CPU architecture Android Devices, Hermes is creating the issue and the a
### Solution
Disable Hermes while building the app
`make run-android DISABLE_HERMES=true`