Merge branch 'master' into fix-url-encoding

Author: Vishwanath Martur, 2024-12-09 22:27:33 +05:30 (committed via GitHub)
Commit: 83ac1cac8b
14 changed files with 154 additions and 27 deletions


@@ -1,3 +1,66 @@
## v0.34.0 (2024-10-29)
### Notes:
* The `--protected-topic` CLI option has been removed. Use the equivalent `--protected-shard` flag instead.
### Features
- change latency buckets ([#3153](https://github.com/waku-org/nwaku/issues/3153)) ([956fde6e](https://github.com/waku-org/nwaku/commit/956fde6e))
- libwaku: ping peer ([#3144](https://github.com/waku-org/nwaku/issues/3144)) ([de11e576](https://github.com/waku-org/nwaku/commit/de11e576))
- initial windows support ([#3107](https://github.com/waku-org/nwaku/issues/3107)) ([ff21c01e](https://github.com/waku-org/nwaku/commit/ff21c01e))
- circuit relay support ([#3112](https://github.com/waku-org/nwaku/issues/3112)) ([cfde7eea](https://github.com/waku-org/nwaku/commit/cfde7eea))
### Bug Fixes
- peer exchange libwaku response handling ([#3141](https://github.com/waku-org/nwaku/issues/3141)) ([76606421](https://github.com/waku-org/nwaku/commit/76606421))
- add more logs, stagger intervals & set prune offset to 10% for waku sync ([#3142](https://github.com/waku-org/nwaku/issues/3142)) ([a386880b](https://github.com/waku-org/nwaku/commit/a386880b))
- add log and archive message ingress for sync ([#3133](https://github.com/waku-org/nwaku/issues/3133)) ([80c7581a](https://github.com/waku-org/nwaku/commit/80c7581a))
- add a limit of max 10 content topics per query ([#3117](https://github.com/waku-org/nwaku/issues/3117)) ([c35dc549](https://github.com/waku-org/nwaku/commit/c35dc549))
- avoid segfault by setting a default num peers requested in Peer eXchange ([#3122](https://github.com/waku-org/nwaku/issues/3122)) ([82fd5dde](https://github.com/waku-org/nwaku/commit/82fd5dde))
- returning peerIds in base 64 ([#3105](https://github.com/waku-org/nwaku/issues/3105)) ([37edaf62](https://github.com/waku-org/nwaku/commit/37edaf62))
- changing libwaku's error handling format ([#3093](https://github.com/waku-org/nwaku/issues/3093)) ([2e6c299d](https://github.com/waku-org/nwaku/commit/2e6c299d))
- remove spammy log ([#3091](https://github.com/waku-org/nwaku/issues/3091)) ([1d2b910f](https://github.com/waku-org/nwaku/commit/1d2b910f))
- avoid out connections leak ([#3077](https://github.com/waku-org/nwaku/issues/3077)) ([eb2bbae6](https://github.com/waku-org/nwaku/commit/eb2bbae6))
- rejecting excess relay connections ([#3065](https://github.com/waku-org/nwaku/issues/3065)) ([8b0884c7](https://github.com/waku-org/nwaku/commit/8b0884c7))
- static linking negentropy in ARM based mac ([#3046](https://github.com/waku-org/nwaku/issues/3046)) ([256b7853](https://github.com/waku-org/nwaku/commit/256b7853))
### Changes
- support ping with multiple multiaddresses and close stream ([#3154](https://github.com/waku-org/nwaku/issues/3154)) ([3665991a](https://github.com/waku-org/nwaku/commit/3665991a))
- liteprotocoltester: easy setup fleets ([#3125](https://github.com/waku-org/nwaku/issues/3125)) ([268e7e66](https://github.com/waku-org/nwaku/commit/268e7e66))
- saving peers enr capabilities ([#3127](https://github.com/waku-org/nwaku/issues/3127)) ([69d9524f](https://github.com/waku-org/nwaku/commit/69d9524f))
- networkmonitor: add missing field on RlnRelay init, set default for num of shard ([#3136](https://github.com/waku-org/nwaku/issues/3136)) ([edcb0e15](https://github.com/waku-org/nwaku/commit/edcb0e15))
- add to libwaku peer id retrieval proc ([#3124](https://github.com/waku-org/nwaku/issues/3124)) ([c5a825e2](https://github.com/waku-org/nwaku/commit/c5a825e2))
- adding to libwaku dial and disconnect by peerIds ([#3111](https://github.com/waku-org/nwaku/issues/3111)) ([25da8102](https://github.com/waku-org/nwaku/commit/25da8102))
- dbconn: add requestId info as a comment in the database logs ([#3110](https://github.com/waku-org/nwaku/issues/3110)) ([30c072a4](https://github.com/waku-org/nwaku/commit/30c072a4))
- improving get_peer_ids_by_protocol by returning the available protocols of connected peers ([#3109](https://github.com/waku-org/nwaku/issues/3109)) ([ed0ee5be](https://github.com/waku-org/nwaku/commit/ed0ee5be))
- remove warnings ([#3106](https://github.com/waku-org/nwaku/issues/3106)) ([c861fa9f](https://github.com/waku-org/nwaku/commit/c861fa9f))
- better store logs ([#3103](https://github.com/waku-org/nwaku/issues/3103)) ([21b03551](https://github.com/waku-org/nwaku/commit/21b03551))
- Improve binding for waku_sync ([#3102](https://github.com/waku-org/nwaku/issues/3102)) ([c3756e3a](https://github.com/waku-org/nwaku/commit/c3756e3a))
- improving and temporarily skipping flaky rln test ([#3094](https://github.com/waku-org/nwaku/issues/3094)) ([a6ed80a5](https://github.com/waku-org/nwaku/commit/a6ed80a5))
- update master after release v0.33.1 ([#3089](https://github.com/waku-org/nwaku/issues/3089)) ([54c3083d](https://github.com/waku-org/nwaku/commit/54c3083d))
- re-arrange function based on responsibility of peer-manager ([#3086](https://github.com/waku-org/nwaku/issues/3086)) ([0f8e8740](https://github.com/waku-org/nwaku/commit/0f8e8740))
- waku_keystore: give some more context in case of error ([#3064](https://github.com/waku-org/nwaku/issues/3064)) ([3ad613ca](https://github.com/waku-org/nwaku/commit/3ad613ca))
- bump negentropy ([#3078](https://github.com/waku-org/nwaku/issues/3078)) ([643ab20f](https://github.com/waku-org/nwaku/commit/643ab20f))
- Optimize store ([#3061](https://github.com/waku-org/nwaku/issues/3061)) ([5875ed63](https://github.com/waku-org/nwaku/commit/5875ed63))
- wrap peer store ([#3051](https://github.com/waku-org/nwaku/issues/3051)) ([729e63f5](https://github.com/waku-org/nwaku/commit/729e63f5))
- disabling metrics for libwaku ([#3058](https://github.com/waku-org/nwaku/issues/3058)) ([b358c90f](https://github.com/waku-org/nwaku/commit/b358c90f))
- test peer connection management ([#3049](https://github.com/waku-org/nwaku/issues/3049)) ([711e7db1](https://github.com/waku-org/nwaku/commit/711e7db1))
- updating upload and download artifact actions to v4 ([#3047](https://github.com/waku-org/nwaku/issues/3047)) ([7c4a9717](https://github.com/waku-org/nwaku/commit/7c4a9717))
- Better database query logs and logarithmic scale in grafana store panels ([#3048](https://github.com/waku-org/nwaku/issues/3048)) ([d68b06f1](https://github.com/waku-org/nwaku/commit/d68b06f1))
- extending store metrics ([#3042](https://github.com/waku-org/nwaku/issues/3042)) ([fd83b42f](https://github.com/waku-org/nwaku/commit/fd83b42f))
This release supports the following [libp2p protocols](https://docs.libp2p.io/concepts/protocols/):
| Protocol | Spec status | Protocol id |
| ---: | :---: | :--- |
| [`11/WAKU2-RELAY`](https://github.com/vacp2p/rfc-index/blob/main/waku/standards/core/11/relay.md) | `stable` | `/vac/waku/relay/2.0.0` |
| [`12/WAKU2-FILTER`](https://github.com/vacp2p/rfc-index/blob/main/waku/standards/core/12/filter.md) | `draft` | `/vac/waku/filter/2.0.0-beta1` <br />`/vac/waku/filter-subscribe/2.0.0-beta1` <br />`/vac/waku/filter-push/2.0.0-beta1` |
| [`13/WAKU2-STORE`](https://github.com/vacp2p/rfc-index/blob/main/waku/standards/core/13/store.md) | `draft` | `/vac/waku/store/2.0.0-beta4` |
| [`19/WAKU2-LIGHTPUSH`](https://github.com/vacp2p/rfc-index/blob/main/waku/standards/core/19/lightpush.md) | `draft` | `/vac/waku/lightpush/2.0.0-beta1` |
| [`66/WAKU2-METADATA`](https://github.com/waku-org/specs/blob/master/standards/core/metadata.md) | `raw` | `/vac/waku/metadata/1.0.0` |
| [`WAKU-SYNC`](https://github.com/waku-org/specs/blob/master/standards/core/sync.md) | `draft` | `/vac/waku/sync/1.0.0` |
## v0.33.1 (2024-10-03)
### Bug fixes
@@ -31,7 +94,7 @@ This release supports the following [libp2p protocols](https://docs.libp2p.io/co
- `volume` must be an integer value, representing number of requests over the period of time allowed.
- `period <time-unit>` must be an integer with defined unit as one of h|m|s|ms
- If not set, no rate limit will be applied to request/response protocols, except for the filter protocol.
### Release highlights
@@ -362,7 +425,7 @@ Release highlights:
* Store V3 has been merged
* Implemented an enhanced and more robust node health check mechanism
* Introduced the Waku object to libwaku in order to setup a node and its protocols
### Features


@@ -64,7 +64,7 @@ The current setup procedure is as follows:
#### Nim Runtime
This repository is bundled with a Nim runtime that includes the necessary dependencies for the project.
-Before you can utilise the runtime you'll need to build the project, as detailed in a previous section.
+Before you can utilize the runtime you'll need to build the project, as detailed in a previous section.
This will generate a `vendor` directory containing various dependencies, including the `nimbus-build-system` which has the bundled nim runtime.
After successfully building the project, you may bring the bundled runtime into scope by running:
@@ -82,7 +82,7 @@ make test
### Building single test files
-During development it is handful to build and run a single test file.
+During development it is helpful to build and run a single test file.
To support this make has a specific target:
targets:


@@ -59,7 +59,7 @@ type
  MbMessageHandler = proc(jsonNode: JsonNode) {.async.}

###################
-# Helper funtions #
+# Helper functions #
###################

proc containsOrAdd(sequence: var seq[Hash], hash: Hash): bool =


@@ -44,8 +44,9 @@ proc allPeers(pm: PeerManager): string =
  var allStr: string = ""
  for idx, peer in pm.wakuPeerStore.peers():
    allStr.add(
-      " " & $idx & ". | " & constructMultiaddrStr(peer) & " | protos: " &
-        $peer.protocols & " | caps: " & $peer.enr.map(getCapabilities) & "\n"
+      " " & $idx & ". | " & constructMultiaddrStr(peer) & " | agent: " &
+        peer.getAgent() & " | protos: " & $peer.protocols & " | caps: " &
+        $peer.enr.map(getCapabilities) & "\n"
    )
  return allStr


@@ -73,7 +73,9 @@ proc maintainSubscription(
    if subscribeRes.isErr():
      noFailedSubscribes += 1
-      lpt_service_peer_failure_count.inc(labelValues = ["receiver"])
+      lpt_service_peer_failure_count.inc(
+        labelValues = ["receiver", actualFilterPeer.getAgent()]
+      )
      error "Subscribe request failed.",
        err = subscribeRes.error,
        peer = actualFilterPeer,
@@ -150,11 +152,17 @@ proc setupAndSubscribe*(
  let interval = millis(20000)
  var printStats: CallbackFunc

+  # calculate max wait after the last known message arrived before exiting
+  # 20% of expected messages times the expected interval but capped to 10min
+  let maxWaitForLastMessage: Duration =
+    min(conf.messageInterval.milliseconds * (conf.numMessages div 5), 10.minutes)
+
  printStats = CallbackFunc(
    proc(udata: pointer) {.gcsafe.} =
      stats.echoStats()
-      if conf.numMessages > 0 and waitFor stats.checkIfAllMessagesReceived():
+      if conf.numMessages > 0 and
+          waitFor stats.checkIfAllMessagesReceived(maxWaitForLastMessage):
        waitFor unsubscribe(wakuNode, conf.pubsubTopics[0], conf.contentTopics[0])
        info "All messages received. Exiting."
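The new `maxWaitForLastMessage` bound is 20% of the expected message count multiplied by the publish interval, capped at 10 minutes. A minimal standalone sketch of that arithmetic with hypothetical inputs (the names below are illustrative, not taken from the code):

```nim
# Hypothetical inputs: 1000 ms publish interval, 120 expected messages.
let messageIntervalMillis = 1_000   # --message-interval, in milliseconds
let numMessages = 120               # --num-messages
let capMillis = 10 * 60 * 1_000     # 10 minute cap, in milliseconds

# 20% of the expected messages times the publish interval, capped at 10 minutes:
let maxWaitMillis = min(messageIntervalMillis * (numMessages div 5), capMillis)
assert maxWaitMillis == 24_000      # i.e. wait at most 24 s for a late message
```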


@@ -5,7 +5,7 @@ MESSAGE_INTERVAL_MILLIS=1000
MIN_MESSAGE_SIZE=15Kb
MAX_MESSAGE_SIZE=145Kb
PUBSUB=/waku/2/rs/16/32
-CONTENT_TOPIC=/tester/2/light-pubsub-test/fleet
+CONTENT_TOPIC=/tester/2/light-pubsub-test-at-infra/status-prod
CLUSTER_ID=16
LIGHTPUSH_BOOTSTRAP=enr:-QEKuED9AJm2HGgrRpVaJY2nj68ao_QiPeUT43sK-aRM7sMJ6R4G11OSDOwnvVacgN1sTw-K7soC5dzHDFZgZkHU0u-XAYJpZIJ2NIJpcISnYxMvim11bHRpYWRkcnO4WgAqNiVib290LTAxLmRvLWFtczMuc3RhdHVzLnByb2Quc3RhdHVzLmltBnZfACw2JWJvb3QtMDEuZG8tYW1zMy5zdGF0dXMucHJvZC5zdGF0dXMuaW0GAbveA4Jyc40AEAUAAQAgAEAAgAEAiXNlY3AyNTZrMaEC3rRtFQSgc24uWewzXaxTY8hDAHB8sgnxr9k8Rjb5GeSDdGNwgnZfg3VkcIIjKIV3YWt1Mg0
FILTER_BOOTSTRAP=enr:-QEcuED7ww5vo2rKc1pyBp7fubBUH-8STHEZHo7InjVjLblEVyDGkjdTI9VdqmYQOn95vuQH-Htku17WSTzEufx-Wg4mAYJpZIJ2NIJpcIQihw1Xim11bHRpYWRkcnO4bAAzNi5ib290LTAxLmdjLXVzLWNlbnRyYWwxLWEuc3RhdHVzLnByb2Quc3RhdHVzLmltBnZfADU2LmJvb3QtMDEuZ2MtdXMtY2VudHJhbDEtYS5zdGF0dXMucHJvZC5zdGF0dXMuaW0GAbveA4Jyc40AEAUAAQAgAEAAgAEAiXNlY3AyNTZrMaECxjqgDQ0WyRSOilYU32DA5k_XNlDis3m1VdXkK9xM6kODdGNwgnZfg3VkcIIjKIV3YWt1Mg0


@@ -177,7 +177,9 @@ proc publishMessages(
        continue
      else:
        noFailedPush += 1
-        lpt_service_peer_failure_count.inc(labelValues = ["publisher"])
+        lpt_service_peer_failure_count.inc(
+          labelValues = ["publisher", actualServicePeer.getAgent()]
+        )
        if not preventPeerSwitch and noFailedPush > maxFailedPush:
          info "Max push failure limit reached, Try switching peer."
          let peerOpt = selectRandomServicePeer(


@@ -36,7 +36,7 @@ declarePublicCounter lpt_publisher_failed_messages_count,
declarePublicCounter lpt_publisher_sent_bytes, "number of total bytes sent"
declarePublicCounter lpt_service_peer_failure_count,
-  "number of failure during using service peer [publisher/receiever]", ["role"]
+  "number of failure during using service peer [publisher/receiever]", ["role", "agent"]
declarePublicCounter lpt_change_service_peer_count,
  "number of times [publisher/receiver] had to change service peer", ["role"]
@@ -44,6 +44,6 @@ declarePublicCounter lpt_change_service_peer_count,
declarePublicGauge lpt_px_peers,
  "Number of peers PeerExchange discovered and can be dialed"
-declarePublicGauge lpt_dialed_peers, "Number of peers successfully dialed"
+declarePublicGauge lpt_dialed_peers, "Number of peers successfully dialed", ["agent"]
-declarePublicGauge lpt_dial_failures, "Number of dial failures by cause"
+declarePublicGauge lpt_dial_failures, "Number of dial failures by cause", ["agent"]
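The metric declarations above gain an `agent` label so dials and failures can be broken down by the peer's implementation. A minimal, self-contained sketch of the same nim-metrics pattern, using a made-up counter name rather than one from this changeset:

```nim
import metrics

# Hypothetical counter mirroring the two-label pattern used above.
declarePublicCounter example_service_peer_failures,
  "number of failures per role and peer agent", ["role", "agent"]

# Once a metric is declared with labels, each update must supply values for them,
# e.g. the role plus the agent string reported by the peer:
example_service_peer_failures.inc(labelValues = ["publisher", "nwaku/0.34.0"])
```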


@@ -126,21 +126,29 @@ proc tryCallAllPxPeers*(
      if connOpt.value().isSome():
        okPeers.add(randomPeer)
        info "Dialing successful",
-          peer = constructMultiaddrStr(randomPeer), codec = codec
-        lpt_dialed_peers.inc()
+          peer = constructMultiaddrStr(randomPeer),
+          agent = randomPeer.getAgent(),
+          codec = codec
+        lpt_dialed_peers.inc(labelValues = [randomPeer.getAgent()])
      else:
-        lpt_dial_failures.inc()
-        error "Dialing failed", peer = constructMultiaddrStr(randomPeer), codec = codec
+        lpt_dial_failures.inc(labelValues = [randomPeer.getAgent()])
+        error "Dialing failed",
+          peer = constructMultiaddrStr(randomPeer),
+          agent = randomPeer.getAgent(),
+          codec = codec
    else:
-      lpt_dial_failures.inc()
+      lpt_dial_failures.inc(labelValues = [randomPeer.getAgent()])
      error "Timeout dialing service peer",
-        peer = constructMultiaddrStr(randomPeer), codec = codec
+        peer = constructMultiaddrStr(randomPeer),
+        agent = randomPeer.getAgent(),
+        codec = codec

  var okPeersStr: string = ""
  for idx, peer in okPeers:
    okPeersStr.add(
-      " " & $idx & ". | " & constructMultiaddrStr(peer) & " | protos: " &
-        $peer.protocols & " | caps: " & $peer.enr.map(getCapabilities) & "\n"
+      " " & $idx & ". | " & constructMultiaddrStr(peer) & " | agent: " &
+        peer.getAgent() & " | protos: " & $peer.protocols & " | caps: " &
+        $peer.enr.map(getCapabilities) & "\n"
    )
  echo "PX returned peers found callable for " & codec & " / " & $capability & ":\n"
  echo okPeersStr


@@ -126,6 +126,11 @@ proc addMessage*(
  lpt_receiver_sender_peer_count.set(value = self.len)

+proc lastMessageArrivedAt*(self: Statistics): Option[Moment] =
+  if self.receivedMessages > 0:
+    return some(self.helper.prevArrivedAt)
+  return none(Moment)
+
proc lossCount*(self: Statistics): uint32 =
  self.helper.maxIndex - self.receivedMessages
@@ -274,16 +279,49 @@ proc jsonStats*(self: PerPeerStatistics): string =
      "{\"result:\": \"Error while generating json stats: " & getCurrentExceptionMsg() &
      "\"}"

-proc checkIfAllMessagesReceived*(self: PerPeerStatistics): Future[bool] {.async.} =
+proc lastMessageArrivedAt*(self: PerPeerStatistics): Option[Moment] =
+  var lastArrivedAt = Moment.init(0, Millisecond)
+  for stat in self.values:
+    let lastMsgFromPeerAt = stat.lastMessageArrivedAt().valueOr:
+      continue
+    if lastMsgFromPeerAt > lastArrivedAt:
+      lastArrivedAt = lastMsgFromPeerAt
+  if lastArrivedAt == Moment.init(0, Millisecond):
+    return none(Moment)
+  return some(lastArrivedAt)
+
+proc checkIfAllMessagesReceived*(
+    self: PerPeerStatistics, maxWaitForLastMessage: Duration
+): Future[bool] {.async.} =
  # if there are no peers have sent messages, assume we just have started.
  if self.len == 0:
    return false

+  # check if numerically all messages are received.
+  # this suggest we received at least one message already from one peer
+  var isAlllMessageReceived = true
  for stat in self.values:
    if (stat.allMessageCount == 0 and stat.receivedMessages == 0) or
        stat.helper.maxIndex < stat.allMessageCount:
-      return false
+      isAlllMessageReceived = false
+      break
+
+  if not isAlllMessageReceived:
+    # if not all message received we still need to check if last message arrived within a time frame
+    # to avoid endless waiting while publishers are already quit.
+    let lastMessageAt = self.lastMessageArrivedAt()
+    if lastMessageAt.isNone():
+      return false
+    # last message shall arrived within time limit
+    if Moment.now() - lastMessageAt.get() < maxWaitForLastMessage:
+      return false
+    else:
+      info "No message since max wait time", maxWait = $maxWaitForLastMessage

  ## Ok, we see last message arrived from all peers,
  ## lets check if all messages are received
  ## and if not let's wait another 20 secs to give chance the system will send them.
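The early-exit condition above reduces to comparing the time since the last received message against `maxWaitForLastMessage`. A tiny chronos sketch of that comparison with made-up numbers, separate from the code in this diff:

```nim
import chronos

# Hypothetical: the last message arrived 30 s ago and the cap is 24 s,
# so the tester should stop waiting for the missing messages.
let maxWaitForLastMessage = 24.seconds
let lastMessageAt = Moment.now() - 30.seconds

let keepWaiting = Moment.now() - lastMessageAt < maxWaitForLastMessage
doAssert not keepWaiting
```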


@@ -49,7 +49,7 @@ type LiteProtocolTesterConf* = object
  logFormat* {.
    desc:
-      "Specifies what kind of logs should be written to stdout. Suported formats: TEXT, JSON",
+      "Specifies what kind of logs should be written to stdout. Supported formats: TEXT, JSON",
    defaultValue: logging.LogFormat.TEXT,
    name: "log-format"
  .}: logging.LogFormat


@@ -238,7 +238,7 @@ suite "WakuNode2 - Validators":
      # Since we have a full mesh with 5 nodes and each one publishes 25+25+25+25+25 msgs
      # there are 625 messages being sent.
      # 125 are received ok in the handler (first hop)
-      # 500 are are wrong so rejected (rejected not relayed)
+      # 500 are wrong so rejected (rejected not relayed)
      var msgRejected = 0


@@ -58,7 +58,7 @@ type WakuNodeConf* = object
  logFormat* {.
    desc:
-      "Specifies what kind of logs should be written to stdout. Suported formats: TEXT, JSON",
+      "Specifies what kind of logs should be written to stdout. Supported formats: TEXT, JSON",
    defaultValue: logging.LogFormat.TEXT,
    name: "log-format"
  .}: logging.LogFormat
@@ -491,7 +491,7 @@ hence would have reachability issues.""",
  reliabilityEnabled* {.
    desc:
      """Adds an extra effort in the delivery/reception of messages by leveraging store-v3 requests.
-with the drawback of consuming some more bandwitdh.""",
+with the drawback of consuming some more bandwidth.""",
    defaultValue: false,
    name: "reliability"
  .}: bool


@@ -358,3 +358,10 @@ func hasUdpPort*(peer: RemotePeerInfo): bool =
  let typedEnr = typedEnrRes.get()
  typedEnr.udp.isSome() or typedEnr.udp6.isSome()

+proc getAgent*(peer: RemotePeerInfo): string =
+  ## Returns the agent version of a peer
+  if peer.agent.isEmptyOrWhitespace():
+    return "unknown"
+  return peer.agent
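`getAgent` simply falls back to `"unknown"` when the remote peer never reported an agent string. A standalone sketch of that fallback rule on a plain string, using a hypothetical helper name purely for illustration:

```nim
import std/strutils

# Same fallback rule as getAgent, but on a bare string instead of a RemotePeerInfo.
proc agentOrUnknown(agent: string): string =
  if agent.isEmptyOrWhitespace():
    return "unknown"
  agent

doAssert agentOrUnknown("") == "unknown"
doAssert agentOrUnknown("   ") == "unknown"
doAssert agentOrUnknown("nwaku/0.34.0") == "nwaku/0.34.0"
```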