* Add REST API traffic bypass for network conditions manipulation
- Introduced methods to apply packet loss only to P2P traffic, excluding REST API traffic.
- Simplified test cases to leverage new differentiated packet loss handling.
- Removed unused and legacy metrics/tests for cleaner configuration and coverage.
* Refactor network conditions setup to streamline command execution
* Pin priomap so libp2p traffic actually hits netem
The default prio qdisc priomap routes SO_PRIORITY 6 and 7 to band 0,
which is our REST bypass class 1:1. libp2p/gossipsub packets set a high
SO_PRIORITY on their sockets, so they were silently escaping the netem
impairment via the priomap rather than through the u32 filter. The
result: test_relay_packet_loss_correlated_vs_uncorrelated became green
by accident because no loss was ever applied to relay traffic.
Forcing priomap to 1 1 1 1 ... on all 16 slots routes every SO_PRIORITY
value to band 1 (netem). The u32 filter remains the only path to 1:1,
so REST stays isolated and libp2p now takes the configured loss.
Verified in alpine netns: with SO_PRIORITY=6, 50 packets to a non-REST
port ended up in 1:1 under the old rules (0 drops); with the forced
priomap they land in 1:2 and see the expected ~50% drop rate.
* Refactor P2P traffic loss handling; isolate REST API traffic
- Added `_p2p_iface` to dynamically detect libp2p interface tied to the Waku network.
- Introduced `add_packet_loss_p2p_only` and `add_packet_loss_correlated_p2p_only` for targeted packet loss on libp2p traffic.
- Replaced REST API traffic bypass logic with simplified P2P interface-based tc rules.
- Updated tests to use `clear_p2p` for cleanup, ensuring REST traffic remains unaffected.
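A minimal sketch of the interface-scoped variant described above (the interface name and loss parameters are illustrative; `_p2p_iface` and `clear_p2p` are the helpers named in the bullets, shown here only conceptually):

```shell
# Apply loss only on the interface carrying libp2p traffic, as
# detected dynamically; REST traffic on other interfaces is untouched.
P2P_IFACE=eth1   # placeholder for the value _p2p_iface would detect

# Uncorrelated vs correlated loss (25% loss, 50% correlation):
tc qdisc add dev "$P2P_IFACE" root netem loss 25%
# tc qdisc replace dev "$P2P_IFACE" root netem loss 25% 50%

# Cleanup, as clear_p2p would perform:
tc qdisc del dev "$P2P_IFACE" root
```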
---------
Co-authored-by: Egor Rachkovskii <egorrachkovskii@status.im>
* Fix auto and static sharding subscribe/unsubscribe tests: use a safe, unused cluster id (cluster id 2 now defaults to logos.dev with its settings); also adapted the static sharding unsubscribe test to PR#3732
* Adjust cluster_id to pubsub_topics
* Fix intermittent rate-limit hits on filter subscribes. This is expected behavior of the current rate limiting: to serve requests with reasonable flexibility, new tokens are minted over time, so more requests than configured may be served; the configured values are not hard limits.
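The token-minting behavior above can be illustrated with a generic token bucket (a minimal sketch for intuition, not the node's actual rate-limiter implementation):

```python
class TokenBucket:
    """Generic token bucket: `capacity` is the configured limit, but
    tokens are minted continuously, so over a window longer than one
    refill period more than `capacity` requests can be served."""

    def __init__(self, capacity: float, refill_per_sec: float):
        self.capacity = capacity
        self.refill_per_sec = refill_per_sec
        self.tokens = capacity
        self.last = 0.0  # logical clock, in seconds

    def allow(self, now: float) -> bool:
        # Mint tokens for the elapsed time, capped at capacity.
        elapsed = now - self.last
        self.tokens = min(self.capacity,
                          self.tokens + elapsed * self.refill_per_sec)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False


# With capacity 3 and 1 token/s, five requests spread over 2 seconds
# all pass even though the "limit" is 3: refill mints 2 extra tokens.
bucket = TokenBucket(capacity=3, refill_per_sec=1.0)
results = [bucket.allow(t) for t in (0.0, 0.0, 0.0, 1.0, 2.0)]
print(results)  # [True, True, True, True, True]
```

This is why a test asserting a hard cap on accepted filter subscribes can fail unpredictably: how many requests succeed depends on how they are spread across the refill window.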
* fix flaky result in test_relay_2_nodes_bandwidth_low_vs_high_drain_time: eliminate jitter and the localhost optimizations that can appear with Docker networking.
* Adding bandwidth tests
* Adding more bandwidth tests
* bandwidth & packet reorder tests
* Add packet loss new test
* comment enhancements
* Fix error in test
Fix test cases that fail when a message does not reach a relay peer, due to a breaking change coming into LMN: publish now propagates a NoPeersToPublish error instead of returning zero peers. The fix covers such checks properly. (#157)
* Fix CI issue
* rename waku_px_peers_cached to waku_px_peers to match current software
* Fix CI issues
* Fix additional failing tests
* make relay = true
* Adding first test
* Adding more latency tests
* packet loss tests & fix old tests
* Adding packet loss tests
* new patch of packet loss tests
* Making PR ready for review
* remove docker.io from required packages
* Adding store scripts
* Fix old scripts & add new store stress scripts
* Add filter stress scenarios & store
* Adding last set of tests
---------
Co-authored-by: fbarbu15 <florin@status.im>
* work on the rest of tests
* Add debug tests
* Add rest debug levels tests
* Add rest APIs tests
* Fix non working tests
* Adding more tests
* Add final set of tests
* Add changes to enable sync
* Adding new test
* Make changes to allow store-sync to work
* Checkout the image with the bug fix
* Add change to make test work
* Adding store-sync test scenarios
* Adding more tests
* Adding new set of tests
* Adding extensive tests
* Make PR ready for review
* revert changes