Update vendor
Integrate rendezvous into status node
Add a test with failover using rendezvous
Use multiple servers in client
Use discovery V5 by default and test that the node can be started with rendezvous discovery
Fix linter
Update rendezvous client to one with instrumented stream
Address feedback
Fix test with updated topic limits
Apply several suggestions
Change log level to debug for request errors because we continue execution
Remove web3js after rebase
Update rendezvous package
This change makes the invalidation mechanism more aggressive, with the primary goal of invalidating short-lived nodes faster. In the current setup, any node that becomes known to discovery stays in that state until it fails to respond to 5 queries. Removing such nodes from the table earlier reduces the latency of finding the nodes we actually need.
The second change adds a version to discovery, separating the Status DHT from the Ethereum DHT.
After we rolled out discovery it became obvious that our boot nodes were being spammed with irrelevant nodes, which made the discovery process very long: with a separate DHT, discovery takes ~2s, while with the shared DHT it can take 1m-10m and there is still no guarantee of finding the maximum number of peers, because Status nodes are a very small part of the whole Ethereum infrastructure.
In my understanding, we don't need to be part of the Ethereum DHT, and lower latency is far more important for us.
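As a rough illustration of the more aggressive invalidation, here is a minimal sketch; the types, names, and threshold below are illustrative and not the actual discv5 table code:

```go
package discovery

import "time"

// node is a simplified view of a discovered peer kept in the table.
type node struct {
	id       string
	failures int // consecutive unanswered queries
	lastSeen time.Time
}

// table holds known nodes; short-lived peers should drop out quickly.
type table struct {
	nodes map[string]*node
}

// maxFailures is the eviction threshold. The exact number is illustrative;
// the idea is simply to evict sooner than the previous five-failure rule,
// so stale entries do not slow down lookups.
const maxFailures = 1

// recordFailure marks an unanswered query and evicts the node as soon as
// it crosses the lowered failure threshold.
func (t *table) recordFailure(id string) {
	n, ok := t.nodes[id]
	if !ok {
		return
	}
	n.failures++
	if n.failures >= maxFailures {
		delete(t.nodes, id)
	}
}
```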
Closes: #941
Partially closes: #960 (960 requires further investigation on devices)
add mailserver cleaner
use memstorage for leveldb in tests
avoid write if batch size is 0
add comments
add cmd/statusd-prune
remove batch size var in prune method
validate range values
pass only flag name to missingFlag
refactor Cleaner.prune method (see the sketch after this list)
update batch not to be a pointer
removed extra batch counter increment
don't increment counter if batch returns errors
add README
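The following is a minimal sketch of what such a batched prune over LevelDB could look like, using goleveldb and its in-memory storage as the tests do; the key range handling, function names, and batch-size policy are assumptions, not the actual statusd-prune code:

```go
package main

import (
	"log"

	"github.com/syndtr/goleveldb/leveldb"
	"github.com/syndtr/goleveldb/leveldb/storage"
	"github.com/syndtr/goleveldb/leveldb/util"
)

// prune deletes all keys in [start, limit) in batches, skipping the write
// when the batch is empty and leaving the counter untouched on errors.
func prune(db *leveldb.DB, start, limit []byte, batchSize int) (int, error) {
	iter := db.NewIterator(&util.Range{Start: start, Limit: limit}, nil)
	defer iter.Release()

	batch := leveldb.Batch{}
	removed := 0

	write := func() error {
		if batch.Len() == 0 { // avoid a write if the batch size is 0
			return nil
		}
		if err := db.Write(&batch, nil); err != nil {
			return err // don't increment the counter if the write fails
		}
		removed += batch.Len()
		batch.Reset()
		return nil
	}

	for iter.Next() {
		batch.Delete(iter.Key())
		if batch.Len() >= batchSize {
			if err := write(); err != nil {
				return removed, err
			}
		}
	}
	if err := iter.Error(); err != nil {
		return removed, err
	}
	return removed, write()
}

func main() {
	// In tests an in-memory storage stands in for the on-disk database.
	db, err := leveldb.Open(storage.NewMemStorage(), nil)
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()
}
```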
* Make it possible to explicitly disable discovery
Discovery will be disabled in the following cases:
- if there are no boot nodes, the v5 server will be disabled because there is no point in running it
- if the user set NoDiscovery=true in the config, that value will be preserved even if boot nodes are available
So, basically, discovery will always be enabled by default on mobile unless it is explicitly disabled.
When statusd is used, the current behavior is that discovery is disabled by default.
I kept it that way in this change, but it would be better to change it. A sketch of the resulting decision logic follows.
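A minimal sketch of that decision, with illustrative names (the real config fields and wiring may differ):

```go
// discoveryEnabled decides whether the v5 discovery server should run.
func discoveryEnabled(noDiscovery bool, bootNodes []string) bool {
	if noDiscovery {
		// An explicit NoDiscovery=true in the config always wins,
		// even when boot nodes are configured.
		return false
	}
	// Without boot nodes there is nothing to bootstrap from,
	// so running the v5 server has no point.
	return len(bootNodes) > 0
}
```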
* Fix leftovers
* Add wait group to peer pool to protect from races with p2p.Server
* Change fields only when all goroutines have finished
* Turn off discovery after topic searches are stopped
* Don't set period to nil to avoid race with SearchTopic
* Close period chan only when all writers are finished (see the sketch below)
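A minimal sketch of the shutdown pattern these commits describe, with illustrative names rather than the actual peer pool types: the wait group guarantees that every topic search goroutine has exited before shared fields change and the period channel is closed.

```go
package peers

import "sync"

// pool is a simplified stand-in for the peer pool.
type pool struct {
	wg     sync.WaitGroup
	quit   chan struct{}
	period chan int // written to by search goroutines
}

// startSearch launches one topic search goroutine tracked by the wait group.
func (p *pool) startSearch(run func(quit <-chan struct{}, period chan<- int)) {
	p.wg.Add(1)
	go func() {
		defer p.wg.Done()
		run(p.quit, p.period)
	}()
}

// stop signals the searches, waits for every writer to finish, and only then
// closes the period channel and mutates shared state. Setting the channel to
// nil instead would race with goroutines that still hold a reference to it.
func (p *pool) stop() {
	close(p.quit)
	p.wg.Wait()     // all writers are done
	close(p.period) // now it is safe to close
}
```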
* add `-status` flag to enable the Status service
* remove status from default APIModules and add it only from statusd if specified
* remove AddAPIModule method
* allow -status flag values to be http or ipc (see the sketch below)
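A small sketch of how the flag value could be validated; the function name and the empty-value behaviour are assumptions:

```go
package flags

import "fmt"

// parseStatusFlag validates the -status flag; only "http" and "ipc" enable
// the Status service on the corresponding endpoint.
func parseStatusFlag(value string) (string, error) {
	switch value {
	case "":
		return "", nil // flag not set: Status service stays disabled
	case "http", "ipc":
		return value, nil
	default:
		return "", fmt.Errorf(`-status must be "http" or "ipc", got %q`, value)
	}
}
```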
* add LogEnabled attribute to NodeConfig, used in the call from status-react
* fix use of OverrideRootLog in tests
* enable logger in tests based on LogLevel
* move LogEnabled before LogFile
This change will greatly simplify writing unit tests when a node is required but data persistence is irrelevant.
I also introduced some refactoring and unit tests for `StatusNode`.
- [x] [#797] : Remove unused methods PopulateStaticPeers, ReconnectStaticPeers, removeStaticPeers, removePeer
- [x] [#797] : Rename node.Manager to node.StatusNode and simplify its public API
- [x] [#797] : Rename all references to nodeManager to statusNode
* Rename bootnode to static peers
Also add some groupings between types.
* Remove unneeded genesisHash
* More cleanup of bootnodes with static peers
* Add option of cluster configuration file
* New generated bindata.go
* Changes after npm install
* Add argument for cluster configuration file
* Add test for dynamic cluster loading
Not yet sure about the name "cluster config".
* Solved conflicts
* Renaming of static peers
* Remove static peers population
* Missing argument for config
* Renaming of static peers to boot nodes for consistency
* Fix of name change
* Cluster config is now cluster data
* Load static nodes from configuration (see the sketch after this list)
* Final renaming of var for file content
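A rough sketch of loading such a cluster data file, assuming a JSON layout with boot node and static node enode URLs; the struct and field names are illustrative, not the actual configuration format:

```go
package config

import (
	"encoding/json"
	"os"
)

// cluster mirrors an assumed layout of the cluster data file.
type cluster struct {
	BootNodes   []string `json:"bootnodes"`
	StaticNodes []string `json:"staticnodes"`
}

// loadCluster reads the cluster data file passed on the command line and
// returns the node lists used to populate the node configuration.
func loadCluster(path string) (*cluster, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return nil, err
	}
	var c cluster
	if err := json.Unmarshal(data, &c); err != nil {
		return nil, err
	}
	return &c, nil
}
```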
* Update project to use Whisper v6. Part of #638
* Revert "Add patch to downgrade usage of Whisper v6 to v5 in some geth 1.8.1 vendor files. Part of #665" - this reverts commit 6aefb4c8fd02dbcfffac6b69e8bb22b13ef86b6b.
* Enable light mode on Whisper v6 for non-mail servers. Part of #638
* Fix race condition in whisperv6/peer.go. Part of #665 (PR already accepted upstream for 1.8.2)
* Update bootnode addresses in staticnodes.json. Part of #638
* Add `shh.lightclient` flag and tests for bloom filter setting logic (sketched below). Part of #638
* Move MakeTestNodeConfig to utils. Part of #638
* Reduce PoW in `whisper_jail_test.go` to fix flaky test. Part of #638
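One plausible reading of the bloom filter setting logic, sketched with hypothetical helpers rather than the Whisper v6 API: a light client advertises only the OR of the blooms for the topics it watches, while a full node advertises a full filter.

```go
package shhutil

// bloomFilterBytes is the size of a Whisper bloom filter (64 bytes).
const bloomFilterBytes = 64

// fullBloom returns a filter with every bit set, advertising interest in
// all topics, as a full node would.
func fullBloom() []byte {
	b := make([]byte, bloomFilterBytes)
	for i := range b {
		b[i] = 0xFF
	}
	return b
}

// nodeBloom picks the filter to advertise. In light client mode only the
// blooms of the watched topics are ORed together; each entry in topicBlooms
// is expected to be a 64-byte per-topic bloom (hypothetical input).
func nodeBloom(lightClient bool, topicBlooms [][]byte) []byte {
	if !lightClient {
		return fullBloom()
	}
	b := make([]byte, bloomFilterBytes)
	for _, tb := range topicBlooms {
		for i := range b {
			b[i] |= tb[i]
		}
	}
	return b
}
```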
Summary:
Filter out gas linter unhandled-error checks for fmt.Fprintf calls. This required defining a custom linter around gas that additionally includes the offending line of code.
Notes:
Gas, when not piped through gometalinter, gives output like this:
$ gas -fmt=csv geth/jail/console/console.go
geth/jail/console/console.go,21,Errors unhandled.,LOW,HIGH,"fmt.Fprintf(w, ""%s: %s"", consoleEventName, formatForConsole(fn.ArgumentList))"
Gometalinter, by default, does not capture the line of code when it filters gas errors. To resolve this, I created a wrapper around gas (I wasn't sure what to call this "gas wrapper", so I opted for gasv2; open to other names).
The first part of the regular expression was taken directly from gometalinter (see https://github.com/alecthomas/gometalinter/blob/master/linters.go#L236), and I then appended ,\".*\" to additionally capture the offending line of code. Lastly, I excluded ".*Errors unhandled.*fmt.Fprintf.*" to filter out only the unhandled-error warnings around fmt.Fprintf.
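To illustrate, here is a small Go sketch that matches the sample CSV row above with such an extended pattern; the exact pattern used in the linter configuration may differ:

```go
package main

import (
	"fmt"
	"regexp"
)

func main() {
	// A gas CSV pattern in gometalinter's style, with `,\".*\"` appended so
	// the quoted source line at the end of the CSV row is matched too.
	pattern := regexp.MustCompile(`^(?P<path>.*?\.go),(?P<line>\d+),(?P<message>[^,]+,[^,]+,[^,]+),".*"$`)

	row := `geth/jail/console/console.go,21,Errors unhandled.,LOW,HIGH,"fmt.Fprintf(w, ""%s: %s"", consoleEventName, formatForConsole(fn.ArgumentList))"`
	if m := pattern.FindStringSubmatch(row); m != nil {
		fmt.Println("path:", m[1], "line:", m[2])
		fmt.Println("message:", m[3])
	}
}
```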
Also as a result of this change, gas lint output will now include the offending code.
Closes #590