status-desktop/test/e2e
Latest commit 1cc8807174 by Anastasiya Semenkevich, 2023-12-22 16:52:55 +03:00: chore: replace waitFor with waitForObjectExists

There is a problem with the is_visible method at the moment: it does not raise any assertion when it cannot find an object, so I suspect it of being a potential endless loop (it seems to always return True). I will try to fix that, but it is used across the whole framework, so it will take time.

The main idea here is to get rid of the endless loop and the 15 minutes of waiting for nothing in this particular test.
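A minimal illustrative sketch (not the framework's actual code) of the problem described above, assuming Squish's Python API and a hypothetical is_visible helper:

```python
import squish

def is_visible(real_name) -> bool:
    # Hypothetical helper mirroring the issue above: it swallows lookup
    # failures instead of raising, so callers cannot tell "not found yet"
    # from "found but hidden".
    try:
        return squish.findObject(real_name).visible
    except LookupError:
        return False  # if this path ever returned True, waitFor below would spin forever

def wait_visible_old(real_name, timeout_ms=15 * 60 * 1000):
    # waitFor only polls the condition until it becomes truthy or the timeout
    # expires; it raises nothing if the object never shows up.
    return squish.waitFor(lambda: is_visible(real_name), timeout_ms)

def wait_visible_new(real_name, timeout_ms=5000):
    # waitForObjectExists raises LookupError as soon as the timeout is hit,
    # so the test fails fast instead of waiting out the full 15 minutes.
    return squish.waitForObjectExists(real_name, timeout_ms)
```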
Name | Last commit | Date
---- | ----------- | ----
ci | ci: run critical tests by default | 2023-12-18 16:22:27 +03:00
configs | chore: added random selection of pictures (diff types) | 2023-12-21 23:10:47 +03:00
constants | test: test_ens_name_purchase added | 2023-12-12 10:43:20 +07:00
driver | feat(aut): wait for AUT comms port to become available | 2023-12-07 10:28:57 +01:00
ext | chore: add verification for identicon ring on the profile screen | 2023-12-21 23:10:47 +03:00
fixtures | chore(all) rename _logger to LOG to match refactor | 2023-12-05 22:56:49 +01:00
gui | chore: replace waitFor with waitForObjectExists | 2023-12-22 16:52:55 +03:00
scripts | chore(all): rename _logger to LOG to match changes | 2023-12-01 14:58:22 +01:00
tests | chore: add validation for emoji hash across screens | 2023-12-21 23:10:47 +03:00
.gitignore | feat(driver): add squish.ini config file to repo | 2023-12-01 14:48:45 +01:00
README.md | chore: update marks and readMe file | 2023-12-20 17:38:27 +03:00
conftest.py | feat(conftest): set log level using LOG_LEVEL env var | 2023-12-05 23:31:44 +01:00
img.png | chore: update readme | 2023-12-14 10:45:45 +03:00
img_1.png | chore: update readme | 2023-12-14 10:45:45 +03:00
pytest.ini | chore: update marks and readMe file | 2023-12-20 17:38:27 +03:00
requirements.txt | chore: introduce flaky mark | 2023-12-15 16:07:13 +03:00
squish.ini | feat(driver): add squish.ini config file to repo | 2023-12-01 14:48:45 +01:00

README.md

This repository contains the UI tests for the Status Desktop application.

How to set up your environment

  1. MacOS: https://www.notion.so/Mac-arch-x64-and-Intel-50ea48dae1d4481b882afdbfad38e95a
  2. Linux: https://www.notion.so/Linux-21f7abd2bb684a0fb10057848760a889
  3. Windows: https://www.notion.so/Windows-fbccd2b09b784b32ba4174233d83878d

NOTE: the macOS and Linux guides are proven to work; the Windows guide may be outdated, since no one has set up Windows yet.

Which build to use

  1. You can use your local dev build, but tests sometimes hang with it. To use it, set AUT_PATH in your _local.py config to the path of the executable, for example AUT_PATH = "/Users/anastasiya/status-desktop/bin/nim_status_client" (see the config sketch after this list).

  2. Normally, please use a CI build. Grab a recent one from the Jenkins nightly job: https://ci.status.im/job/status-desktop/job/nightly/

    2.1 Linux and Windows builds can be taken from the nightly job (see img.png).

    2.2 Mac requires entitlements for Squish, which we don't add by default, so go to https://ci.status.im/job/status-desktop/job/systems/job/macos/, select the architecture you need (ARM or Intel), click Build with parameters, and select Squish entitlements. Select a branch if you like (master is the default). See img_1.png.
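A minimal sketch of the _local.py override mentioned in step 1; the file location under configs/ and everything other than AUT_PATH are assumptions:

```python
# configs/_local.py  (exact location is an assumption)
# Point the test framework at the application under test (AUT).
AUT_PATH = "/Users/anastasiya/status-desktop/bin/nim_status_client"
```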

Pytest marks used

You can run tests by mark; just use it like this on the command line:

python3 -m pytest -m critical

or directly in the PyCharm terminal:

pytest -m critical

You can obtain the list of all marks we have by running pytest --markers. The bullets below describe them, and a usage sketch follows the list.

  • critical, used to select the most important checks we run for PRs in the desktop repository (and likewise for PRs in this repo)
  • xfail, used to link tests to existing tickets in desktop: if such a test fails, it is marked as expected to fail in the report with a reference to the ticket; if it passes, it is shown as XPASS (unexpectedly passing), which indicates the original bug is gone
  • skip, used to simply skip tests for various reasons, normally with a ticket linked
  • flaky, used to mark tests that normally pass but sometimes fail. If such a test passes, it is shown as passed in the report as usual; if it fails, the overall run is not failed, but the corresponding test is marked as xfail in the report. This is done for a few tests that are not fully stable yet but pass most of the time. Use this mark with caution and only when really needed.
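A minimal sketch of how these marks are applied in test code, assuming standard pytest decorator syntax; the test names, reasons, and ticket placeholder are illustrative, not taken from the suite:

```python
import pytest

@pytest.mark.critical  # included in the PR-gating selection
def test_user_can_sign_in():
    ...

# Hypothetical ticket link; xfail reports a failure as expected-to-fail
# and a pass as XPASS.
@pytest.mark.xfail(reason="https://github.com/status-im/status-desktop/issues/<ticket>")
def test_known_broken_flow():
    ...

@pytest.mark.skip(reason="blocked by <ticket>")
def test_temporarily_disabled():
    ...

@pytest.mark.flaky  # per the README, a failure is downgraded to xfail in the report
def test_not_fully_stable_yet():
    ...
```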