mirror of
https://github.com/status-im/status-go.git
synced 2025-02-12 14:58:37 +00:00
author: shashankshampi <shashank.sanket1995@gmail.com> 1729780155 +0530
committer: shashankshampi <shashank.sanket1995@gmail.com> 1730274350 +0530

test_: Code Migration from status-cli-tests

fix_: functional tests (#5979)
* fix_: generate on test-functional
* chore(test)_: fix functional test assertion
Co-authored-by: Siddarth Kumar <siddarthkay@gmail.com>

feat(accounts)_: cherry-pick Persist acceptance of Terms of Use & Privacy policy (#5766) (#5977)
The original GH issue https://github.com/status-im/status-mobile/issues/21113 came from a request from the Legal team. We must show Status v1 users the new terms (Terms of Use & Privacy Policy) right after they upgrade to Status v2 from the stores. The solution is a flag in the accounts table, named hasAcceptedTerms. The flag is set to true on the first account ever created in v2, and a native call in mobile/status.go#AcceptTerms lets the client persist the user's choice when upgrading (from v1 -> v2, or from a v2 older than this PR). This solution is not ideal because the setting should live in a separate table, not in the accounts table.
Related Mobile PR: https://github.com/status-im/status-mobile/pull/21124
* fix(test)_: Compare addresses using uppercased strings
Co-authored-by: Icaro Motta <icaro.ldm@gmail.com>

test_: restore account (#5960)

feat_: `LogOnPanic` linter (#5969)
* feat_: LogOnPanic linter
* fix_: add missing defer LogOnPanic
* chore_: make vendor
* fix_: tests, address pr comments

fix(ci)_: remove workspace and tmp dir
This ensures we do not encounter weird errors like:
```
+ ln -s /home/jenkins/workspace/go_prs_linux_x86_64_main_PR-5907 /home/jenkins/workspace/go_prs_linux_x86_64_main_PR-5907@tmp/go/src/github.com/status-im/status-go
ln: failed to create symbolic link '/home/jenkins/workspace/go_prs_linux_x86_64_main_PR-5907@tmp/go/src/github.com/status-im/status-go': File exists
script returned exit code 1
```
Signed-off-by: Jakub Sokołowski <jakub@status.im>

chore_: enable windows and macos CI build (#5840)
- Added support for Windows and macOS in CI pipelines
- Added missing dependencies for Windows and x86-64-darwin
- Resolved macOS SDK version compatibility for darwin-x86_64
The `mkShell` override was necessary to ensure compatibility with the newer macOS SDK (version 11.0) for x86_64. The default SDK (10.12) was causing build failures because of missing libs and frameworks. OverrideSDK creates a mapping from the default SDK in all package categories to the requested SDK (11.0).

fix(contacts)_: fix trust status not being saved to cache when changed (#5965)
Fixes https://github.com/status-im/status-desktop/issues/16392
* cleanup
* added logger and cleanup
* review comments changes

test_: remove port bind

chore(wallet)_: move route execution code to separate module

chore_: replace geth logger with zap logger (#5962)
closes: #6002

feat(telemetry)_: add metrics for message reliability (#5899)
* feat(telemetry)_: track message reliability — add metrics for dial errors, missed messages, missed relevant messages, and confirmed delivery
* fix_: handle error from json marshal

chore_: use zap logger as request logger
iterates: status-im/status-desktop#16536

test_: unique project per run

test_: use docker compose v2, more concrete project name

fix(codecov)_: ignore folders without tests
Otherwise Codecov reports incorrect numbers when making changes.
https://docs.codecov.com/docs/ignoring-paths
Signed-off-by: Jakub Sokołowski <jakub@status.im>

test_: verify schema of signals during init; fix schema verification warnings (#5947)

fix_: update defaultGorushURL (#6011)

fix(tests)_: use non-standard port to avoid conflicts
We have observed `nimbus-eth2` build failures reporting this port:
```json
{
  "lvl": "NTC",
  "ts": "2024-10-28 13:51:32.308+00:00",
  "msg": "REST HTTP server could not be started",
  "topics": "beacnde",
  "address": "127.0.0.1:5432",
  "reason": "(98) Address already in use"
}
```
https://ci.status.im/job/nimbus-eth2/job/platforms/job/linux/job/x86_64/job/main/job/PR-6683/3/
Signed-off-by: Jakub Sokołowski <jakub@status.im>

fix_: create request logger ad-hoc in tests
Fixes `TestCall` failing when run concurrently.

chore_: configure codecov (#6005)
* chore_: configure codecov
* fix_: after_n_builds
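The "use non-standard port to avoid conflicts" fix above amounts to not hard-coding a well-known listen port (5432 collided with another service on the CI host). As an illustrative sketch in Python (not the actual status-go change), a test can instead ask the OS for a free ephemeral port by binding to port 0:

```python
import socket


def pick_free_port() -> int:
    """Return an ephemeral TCP port that is currently free.

    Binding to port 0 makes the kernel pick an unused port, so the test
    never collides with services holding well-known ports like 5432.
    """
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.bind(("127.0.0.1", 0))
        return s.getsockname()[1]


port = pick_free_port()
```

Note the usual caveat: the port is only guaranteed free at the moment of the call, so the server under test should be started promptly after picking it.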
125 lines
5.3 KiB
Python
import logging
import os

from uuid import uuid4

from constants import *

from src.libs.common import create_unique_data_dir, delay, get_project_root
from src.node.status_node import StatusNode, logger
from src.steps.common import StepsCommon
from validators.contact_request_validator import ContactRequestValidator


class TestContactRequest(StepsCommon):
    def test_contact_request_baseline(self):
        timeout_secs = 180
        num_contact_requests = NUM_CONTACT_REQUESTS
        project_root = get_project_root()
        nodes = []

|
        for index in range(num_contact_requests):
            first_node = StatusNode(name=f"first_node_{index}")
            second_node = StatusNode(name=f"second_node_{index}")

            data_dir_first = create_unique_data_dir(os.path.join(project_root, "tests-functional/local"), index)
            data_dir_second = create_unique_data_dir(os.path.join(project_root, "tests-functional/local"), index)

            delay(2)
            first_node.start(data_dir=data_dir_first)
            second_node.start(data_dir=data_dir_second)

            account_data_first = {
                "rootDataDir": data_dir_first,
                "displayName": f"test_user_first_{index}",
                "password": f"test_password_first_{index}",
                "customizationColor": "primary",
            }
            account_data_second = {
                "rootDataDir": data_dir_second,
                "displayName": f"test_user_second_{index}",
                "password": f"test_password_second_{index}",
                "customizationColor": "primary",
            }
            first_node.create_account_and_login(account_data_first)
            second_node.create_account_and_login(account_data_second)

            delay(5)
            first_node.start_messenger()
            second_node.start_messenger()

            first_node.pubkey = first_node.get_pubkey(account_data_first["displayName"])
            second_node.pubkey = second_node.get_pubkey(account_data_second["displayName"])

            first_node.wait_fully_started()
            second_node.wait_fully_started()

            nodes.append((first_node, second_node, account_data_first["displayName"], index))

|
        # Validate contact requests
        missing_contact_requests = []
        for first_node, second_node, display_name, index in nodes:
            result = self.send_and_wait_for_message((first_node, second_node), display_name, index, timeout_secs)
            timestamp, message_id, contact_request_message, response = result

            if not response:
                missing_contact_requests.append((timestamp, contact_request_message, message_id))
            else:
                validator = ContactRequestValidator(response)
                validator.run_all_validations(
                    expected_chat_id=first_node.pubkey,
                    expected_display_name=display_name,
                    expected_text=f"contact_request_{index}",
                )

        if missing_contact_requests:
            formatted_missing_requests = [
                f"Timestamp: {ts}, Message: {msg}, ID: {mid}" for ts, msg, mid in missing_contact_requests
            ]
            raise AssertionError(
                f"{len(missing_contact_requests)} contact requests out of {num_contact_requests} didn't reach the peer node: "
                + "\n".join(formatted_missing_requests)
            )

|
|
|
|
    def send_and_wait_for_message(self, nodes, display_name, index, timeout=45):
        first_node, second_node = nodes
        first_node_pubkey = first_node.get_pubkey(display_name)
        contact_request_message = f"contact_request_{index}"

        timestamp, message_id = self.send_with_timestamp(
            second_node.send_contact_request, first_node_pubkey, contact_request_message
        )

        # The request is sent a second time here: send_with_timestamp only
        # returns (timestamp, message_id), so this call captures the RPC
        # response that the validator consumes.
        response = second_node.send_contact_request(first_node_pubkey, contact_request_message)

        expected_event_started = {"requestId": "", "peerId": "", "batchIndex": 0, "numBatches": 1}
        expected_event_completed = {"requestId": "", "peerId": "", "batchIndex": 0}

        try:
            first_node.wait_for_signal("history.request.started", expected_event_started, timeout)
            first_node.wait_for_signal("history.request.completed", expected_event_completed, timeout)
        except TimeoutError as e:
            logging.error(f"Signal validation failed: {e}")
            return timestamp, message_id, contact_request_message, None

        first_node.stop()
        second_node.stop()

        return timestamp, message_id, contact_request_message, response

|
    def test_contact_request_with_latency(self):
        with self.add_latency():
            self.test_contact_request_baseline()

    def test_contact_request_with_packet_loss(self):
        with self.add_packet_loss():
            self.test_contact_request_baseline()

    def test_contact_request_with_low_bandwidth(self):
        with self.add_low_bandwidth():
            self.test_contact_request_baseline()

|
    def test_contact_request_with_node_pause(self, start_2_nodes):
        with self.node_pause(self.second_node):
            message = str(uuid4())
            self.first_node.send_contact_request(self.second_node_pubkey, message)
            delay(10)
            assert self.second_node.wait_for_signal("history.request.completed")