* refactor TestRequestMessageFromMailboxAsync to use s.requestHistoricMessages helper
* send p2pRequestResponseCode from mailserver
* send p2p message response after sending all historic messages
* mailserver sends `whisper.NewSentMessage` as response
* add mailserver Client and p2pRequestAckCode watchers
* send event with envelopeFeed when p2pRequestAckCode is received
* test request completed event in tracker
* rename mailserver response events and code to RequestCompleteCode
* wait for mailserver response in e2e test
* use SendHistoricMessageResponse method name for mailserver response
* fix lint warnings
* add mailserver request expiration
* send mailserver response without envelope
* add `ttl` to Request struct in shhext_requestMessages
* test that tracker calls handler.MailServerRequestExpired
* add geth patch
* rename TTL to Timeout
* split tracker.handleEvent into multiple methods (see the sketch below)
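Taken together, these commits implement a round trip: the client requests historic messages via `shhext_requestMessages`, the mailserver replies with a `RequestCompleteCode` p2p message after delivering all envelopes, and a tracker either reports completion or expires the request after its Timeout. A minimal sketch of that tracking logic, using hypothetical names (`tracker`, `RequestCompleted`, `RequestExpired`) rather than the exact status-go identifiers:

```go
// A minimal sketch of the request lifecycle; names are illustrative,
// not the exact status-go identifiers.
package main

import (
	"fmt"
	"sync"
	"time"
)

type EventType int

const (
	RequestCompleted EventType = iota // mailserver confirmed via RequestCompleteCode
	RequestExpired                    // no confirmation before the Timeout
)

type Event struct {
	Type EventType
	Hash string // identifies the historic-messages request
}

type tracker struct {
	mu      sync.Mutex
	pending map[string]*time.Timer
	events  chan Event
}

// Add registers a request and arms its expiration timer.
func (t *tracker) Add(hash string, timeout time.Duration) {
	t.mu.Lock()
	defer t.mu.Unlock()
	t.pending[hash] = time.AfterFunc(timeout, func() {
		t.handleEvent(Event{Type: RequestExpired, Hash: hash})
	})
}

// handleEvent resolves a request exactly once: whichever of the
// mailserver confirmation or the expiration timer comes first wins.
func (t *tracker) handleEvent(e Event) {
	t.mu.Lock()
	timer, ok := t.pending[e.Hash]
	if ok {
		timer.Stop()
		delete(t.pending, e.Hash)
	}
	t.mu.Unlock()
	if ok {
		t.events <- e
	}
}

func main() {
	t := &tracker{pending: map[string]*time.Timer{}, events: make(chan Event, 1)}
	t.Add("req-1", 100*time.Millisecond)
	// Simulate the mailserver's confirmation arriving in time.
	t.handleEvent(Event{Type: RequestCompleted, Hash: "req-1"})
	fmt.Println(<-t.events) // {0 req-1}: completed, not expired
}
```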
We now allow the user to override bootnodes.
Only light validation is done in the app (no public key check, for example).
At the moment status-go panics if an enode is malformed, preventing the
user from logging into their account.
This commit changes the behaviour to ignore malformed enodes.
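A minimal sketch of the tolerant parsing, assuming go-ethereum's `discover.ParseNode` helper of that era (the `parseNodes` function name is illustrative):

```go
// A minimal sketch of tolerant enode parsing, assuming go-ethereum's
// discover.ParseNode (later replaced by enode.ParseV4).
package main

import (
	"log"

	"github.com/ethereum/go-ethereum/p2p/discover"
)

func parseNodes(enodes []string) []*discover.Node {
	nodes := make([]*discover.Node, 0, len(enodes))
	for _, item := range enodes {
		node, err := discover.ParseNode(item)
		if err != nil {
			// Previously a malformed enode caused a panic at login;
			// now the bad entry is logged and skipped.
			log.Printf("ignoring malformed enode %q: %v", item, err)
			continue
		}
		nodes = append(nodes, node)
	}
	return nodes
}

func main() {
	nodes := parseNodes([]string{"enode://not-a-valid-node"})
	log.Printf("kept %d valid enodes", len(nodes)) // kept 0 valid enodes
}
```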
* update enodes for ropsten and main clusters
* leave only 2 static nodes for ropsten and mainnet
* use different static nodes for mainnet
* specify static nodes from different hosts
Other changes:
* needed to patch the loop implementation in the Discovery V5 code in go-ethereum,
* fixed TestStatusNodeReconnectStaticPeers,
* fixed TestBackendAccountsConcurrently.
Previously we made 5 concurrent requests, so it was safe to tolerate 2
errors. With 4 requests it doesn't make sense to use only 2 responses:
even if just 1 of them is skewed, we will set an incorrect time.
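A minimal sketch of that quorum rule, assuming 4 concurrent queries, a median-of-offsets reduction, and at most 1 tolerated failure; the constants and names are illustrative, not the exact status-go timesource code:

```go
// A minimal sketch of the quorum reasoning; constants and names are
// illustrative, not the actual status-go implementation.
package main

import (
	"errors"
	"fmt"
	"sort"
	"time"
)

const maxAllowedFailures = 1 // out of 4 queries, so at least 3 samples remain

// computeOffset returns the median clock offset, refusing to answer when
// too many queries failed: a median of only 2 samples can be dominated by
// a single skewed server.
func computeOffset(offsets []time.Duration, failures int) (time.Duration, error) {
	if failures > maxAllowedFailures {
		return 0, errors.New("not enough responses for a reliable offset")
	}
	sort.Slice(offsets, func(i, j int) bool { return offsets[i] < offsets[j] })
	return offsets[len(offsets)/2], nil
}

func main() {
	// One wildly skewed sample out of three does not move the median.
	samples := []time.Duration{10 * time.Millisecond, 12 * time.Millisecond, -2 * time.Second}
	off, err := computeOffset(samples, 1)
	fmt.Println(off, err) // 10ms <nil>
}
```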
We need to allow the user to specify custom BootNodes, so the code has been
changed to use the bootnodes provided through config (even if the list is
empty); otherwise it falls back to the default ones.
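A minimal sketch of that fallback; `ClusterConfig` and its field are simplified stand-ins for the real status-go configuration:

```go
// A minimal sketch of the bootnode fallback; ClusterConfig and its field
// are simplified stand-ins for the real status-go config.
package main

type ClusterConfig struct {
	BootNodes []string // user-supplied bootnodes, possibly an empty list
}

var defaultBootNodes = []string{
	"enode://pubkey1@host1:30303", // placeholder addresses, not real nodes
	"enode://pubkey2@host2:30303",
}

// selectBootNodes honours bootnodes passed through config, even an
// explicitly empty list, and otherwise falls back to the defaults.
func selectBootNodes(cfg *ClusterConfig) []string {
	if cfg != nil && cfg.BootNodes != nil {
		return cfg.BootNodes
	}
	return defaultBootNodes
}

func main() {
	println(len(selectBootNodes(nil)))                                   // 2: defaults
	println(len(selectBootNodes(&ClusterConfig{BootNodes: []string{}}))) // 0: user opted out
}
```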
* Heap queue stores only peers that were not added to the p2p server
The primary goal of this change is to keep a whitelist of peers that are
managed by the topic pool, while also preventing the same peer from being
selected from the heap queue multiple times.
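A minimal sketch of the idea, with illustrative names: the map is the whitelist of every peer the topic pool manages, while the heap holds only peers not yet handed to the p2p server, so a peer can never be popped twice:

```go
// A minimal, illustrative sketch; not the actual status-go TopicPool.
package main

import "container/heap"

type peerInfo struct {
	id       string
	discTime int64 // discovery time, used for ordering in the heap
}

type peerHeap []*peerInfo

func (h peerHeap) Len() int            { return len(h) }
func (h peerHeap) Less(i, j int) bool  { return h[i].discTime < h[j].discTime }
func (h peerHeap) Swap(i, j int)       { h[i], h[j] = h[j], h[i] }
func (h *peerHeap) Push(x interface{}) { *h = append(*h, x.(*peerInfo)) }
func (h *peerHeap) Pop() interface{} {
	old := *h
	p := old[len(old)-1]
	*h = old[:len(old)-1]
	return p
}

type topicPool struct {
	known map[string]*peerInfo // whitelist: everything the pool manages
	queue peerHeap             // only peers not yet added to the p2p server
}

// add queues a discovered peer once; re-discovered peers stay in the
// whitelist but never re-enter the heap.
func (t *topicPool) add(p *peerInfo) {
	if _, ok := t.known[p.id]; ok {
		return
	}
	t.known[p.id] = p
	heap.Push(&t.queue, p)
}

// next pops the best candidate; because it remains in the whitelist, it
// cannot be selected from the heap a second time.
func (t *topicPool) next() *peerInfo {
	if t.queue.Len() == 0 {
		return nil
	}
	return heap.Pop(&t.queue).(*peerInfo)
}

func main() {
	pool := &topicPool{known: map[string]*peerInfo{}}
	pool.add(&peerInfo{id: "a", discTime: 1})
	pool.add(&peerInfo{id: "a", discTime: 2}) // duplicate discovery: ignored
	println(pool.next().id, pool.queue.Len()) // a 0
}
```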
This change makes the invalidation mechanism more aggressive, with the primary goal of invalidating short-lived nodes faster. In the current setup, any node that becomes known to discovery stays in that state until it fails to respond to 5 queries. Removing such nodes from the table earlier reduces the latency of finding the required nodes.
The second change adds a version to discovery, separating the Status DHT from the Ethereum DHT.
After we rolled out discovery it became obvious that our bootnodes were being spammed with irrelevant nodes, which made the discovery process very long: with a separate DHT, discovery takes ~2s, while with a shared DHT it can take 1m-10m, and there is still no guarantee of finding the maximum number of peers, because Status nodes are a very small part of the whole Ethereum infrastructure.
In my understanding, we don't need to be part of the Ethereum DHT, and lower latency is far more important for us; the idea behind the version split is sketched below.
Closes: #941
Partially closes: #960 (960 requires further investigation on devices)
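A purely illustrative sketch of the versioning idea, not the actual go-ethereum patch: discovery packets carrying a foreign version are dropped, so the two DHTs stop filling each other's routing tables:

```go
// Purely illustrative; the constant and packet layout are hypothetical.
package main

const statusDiscoveryVersion = 1 // hypothetical, distinct from Ethereum's

type pingPacket struct {
	Version int
	// remaining discovery fields elided
}

// acceptPing keeps the table clean: nodes from the other DHT are ignored.
func acceptPing(p pingPacket) bool {
	return p.Version == statusDiscoveryVersion
}

func main() {
	println(acceptPing(pingPacket{Version: statusDiscoveryVersion})) // true
	println(acceptPing(pingPacket{Version: 0}))                      // false: foreign DHT
}
```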
add mailserver cleaner
use memstorage for leveldb in tests
avoid write if batch size is 0
add comments
add cmd/statusd-prune
remove batch size var in prune method
validate range values
pass only flag name to missingFlag
refactor Cleaner.prune method (see the sketch below)
update batch not to be a pointer
remove extra batch counter increment
don't increment counter if batch returns errors
add README
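A minimal sketch of the prune loop these commits describe, assuming the goleveldb API (`github.com/syndtr/goleveldb`); the key range is a simplified stand-in for the real timestamp-prefixed mailserver keys:

```go
// A minimal sketch of the cleaner's prune loop, assuming goleveldb.
package main

import (
	"log"

	"github.com/syndtr/goleveldb/leveldb"
	"github.com/syndtr/goleveldb/leveldb/storage"
	"github.com/syndtr/goleveldb/leveldb/util"
)

// prune deletes all keys in [start, limit), flushing deletes in batches.
func prune(db *leveldb.DB, start, limit []byte, batchSize int) (int, error) {
	var (
		batch   leveldb.Batch // a value, not a pointer, per the refactor above
		removed int
	)
	iter := db.NewIterator(&util.Range{Start: start, Limit: limit}, nil)
	defer iter.Release()

	for iter.Next() {
		key := make([]byte, len(iter.Key()))
		copy(key, iter.Key()) // iterator keys are only valid until Next()
		batch.Delete(key)
		if batch.Len() >= batchSize {
			if err := db.Write(&batch, nil); err != nil {
				return removed, err
			}
			removed += batch.Len()
			batch.Reset()
		}
	}
	// Avoid a write if the final batch is empty.
	if batch.Len() > 0 {
		if err := db.Write(&batch, nil); err != nil {
			return removed, err
		}
		removed += batch.Len()
	}
	return removed, iter.Error()
}

func main() {
	// In tests an in-memory storage avoids touching the filesystem.
	db, err := leveldb.Open(storage.NewMemStorage(), nil)
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()
	n, err := prune(db, []byte{0x00}, []byte{0xff}, 1024)
	log.Printf("removed %d keys, err=%v", n, err)
}
```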