* Rebase on 1.8.5
* Remove outdated patches and apply all others
* Use shh_post that returns a hash
* Use bloom filter for request to mailserver
* Remove tests for sending messages without subscribing first
* Fix deadlock in ethdb
* Expect null if receipt is not yet created
* Subscribe to messages before sending them in whisper test
* Initial stab at CodeClimate configuration file
* Enabling everything to analyze usefulness of output
* Disabling some useless metrics
* Ignore static directory
* Disable similar code metric
* Exclude identical code in test files
* glob
* Reduce max file lines to 750
* Fix exclude pattern
* Exclude t/
* Up max file length + remove t directory
* Ignore t/ directory only for identical code metrics
* Exclude patterns to exclude paths
* testing
* more testing
* reindent
* Revert back to excluding t directory
* Add default peer limits configuration
If discovery is enabled for a given cluster, we will set a default
expected number of peers for each enabled service. For example:
- if the cluster is rinkeby and has discovery enabled, we will
check which services are enabled
- if whisper is enabled, we will set min and max limits by default
- if les is enabled and infura is not used, we will set limits too
When statusd is used, the configuration must be provided in a form
supported by statusd.
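A minimal sketch of how such defaults could be applied, with hypothetical config types and assumed default values (the real status-go configuration structs differ):

```go
package params

// Limits and ClusterConfig are simplified, hypothetical stand-ins for the
// real configuration structs.
type Limits struct {
	Min, Max int
}

type ClusterConfig struct {
	DiscoveryEnabled bool
	WhisperEnabled   bool
	LESEnabled       bool
	UseInfura        bool

	WhisperLimits *Limits
	LESLimits     *Limits
}

// applyDefaultLimits fills in default min/max peer limits for every enabled
// service when discovery is enabled for the cluster; explicitly provided
// limits are left untouched.
func applyDefaultLimits(c *ClusterConfig) {
	if !c.DiscoveryEnabled {
		return
	}
	if c.WhisperEnabled && c.WhisperLimits == nil {
		c.WhisperLimits = &Limits{Min: 2, Max: 3} // assumed default values
	}
	if c.LESEnabled && !c.UseInfura && c.LESLimits == nil {
		c.LESLimits = &Limits{Min: 2, Max: 3} // assumed default values
	}
}
```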
* Fix deadlock in les peer set
This could happen when we found multiple relevant peers and kept
processing them after discv5 was closed. For example, with a max limit of 1:
- peers A and B are found in the same kademlia cycle
- peer A is processed
- the max limit is reached -> discv5 is closed and set to nil
- peer B is processed
- panic!
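A simplified sketch of the kind of guard that prevents this, with hypothetical types rather than the actual les peer set code: check the discv5 handle under the lock before using it.

```go
package peers

import "sync"

// discv5Handle and discoveryPool are simplified, hypothetical stand-ins for
// the real les peer set / discovery code.
type discv5Handle struct{}

func (d *discv5Handle) Close() {}

type discoveryPool struct {
	mu        sync.Mutex
	discv5    *discv5Handle
	connected int
	maxLimit  int
}

// processFoundPeer shows the guard: a previous invocation may already have
// reached the max limit, closed discv5 and set it to nil, so the handle is
// checked under the lock instead of being dereferenced unconditionally.
func (p *discoveryPool) processFoundPeer() {
	p.mu.Lock()
	defer p.mu.Unlock()

	if p.discv5 == nil {
		return // discovery already stopped; nothing to do for this peer
	}
	p.connected++
	if p.connected >= p.maxLimit {
		p.discv5.Close()
		p.discv5 = nil
	}
}
```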
After running gometalinter in debug mode, it was found that megacheck was being killed by Travis because it exceeded the memory limit of container-based builds. For more information, see this comment.
Run "Lint & Vendor Check" using a fully virtualized environment instead of a container-based one.
* Start enabling Mainnet testing
* Minor corrections found during code scan
* Set mainnet blocker in E2E again
* Introduce safeguards for mainnet transaction tests
* Fix typing error
* Typo led to follow-up errors
* Linter problem
* Change to individual test skips after review
* More flexible skip control for mainnet and status chain
* Fix double space
* Change the skipping method and fix wrongly skipped networks
This change will greatly simplify writing unit tests when a node is required but data persistence is irrelevant.
I also introduced some refactoring and unit tests for `StatusNode`.
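As a rough illustration only, with a made-up helper and node type rather than the actual StatusNode API, a test that needs a node but no persisted data could look like this:

```go
package node_test

import "testing"

// fakeNode is a stand-in for a node started without data persistence; the
// real StatusNode type and its start options may look different.
type fakeNode struct{}

func (n *fakeNode) Stop() {}

// startInMemoryNode is an assumed test helper: it starts a node whose state
// lives only in memory, so no temporary directories or on-disk cleanup are
// needed between tests.
func startInMemoryNode(t *testing.T) *fakeNode {
	t.Helper()
	return &fakeNode{}
}

func TestSomethingThatNeedsANode(t *testing.T) {
	n := startInMemoryNode(t)
	defer n.Stop()
	// ... exercise the node without caring about data persistence ...
}
```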
Multiple concurrent topic pool stops could result in closing an already-closed
quit channel. Fixed by using an atomic compare-and-swap and closing the channel
only if the swap happened.
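A minimal sketch of the close-once pattern with sync/atomic, simplified relative to the actual topic pool code:

```go
package peers

import "sync/atomic"

// quitGuard shows the close-once pattern: several goroutines may call Stop
// concurrently, but only the one that wins the compare-and-swap closes the
// quit channel, so a double close cannot happen.
type quitGuard struct {
	stopped int32
	quit    chan struct{}
}

func newQuitGuard() *quitGuard {
	return &quitGuard{quit: make(chan struct{})}
}

func (g *quitGuard) Stop() {
	// Only the caller that swaps 0 -> 1 proceeds to close the channel.
	if !atomic.CompareAndSwapInt32(&g.stopped, 0, 1) {
		return
	}
	close(g.quit)
}
```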
Added an events feed to the peer pool for testing purposes. Otherwise it is impossible to run
the simulation with the -race flag enabled. In essence this happens because we are managing a global
object, server.Discv5, and unfortunately there is no way around it.
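A simplified sketch of the pattern, using go-ethereum's event.Feed; the real peer pool's event types differ:

```go
package peers

import "github.com/ethereum/go-ethereum/event"

// PoolEvent is a simplified stand-in for the events the peer pool emits
// (peer discovered, peer added, discovery stopped, ...).
type PoolEvent string

// pool shows the idea: tests subscribe to an event feed and synchronize on
// received events instead of polling shared state such as server.Discv5,
// which would be reported by the race detector.
type pool struct {
	feed event.Feed
}

func (p *pool) Subscribe(ch chan<- PoolEvent) event.Subscription {
	return p.feed.Subscribe(ch)
}

func (p *pool) emit(e PoolEvent) {
	p.feed.Send(e)
}
```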
* Implement a shh API extension that allows confirming that a message was sent (see the sketch after this list)
* Add a patch
* Fix linter
* Add readme
* Add tests for tracker
* Address review
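A minimal sketch of how such confirmation tracking can work, assuming envelope events similar to those in go-ethereum's whisperv6 package (EventEnvelopeSent / EventEnvelopeExpired); the actual tracker added here may differ:

```go
package whisperext

import "sync"

// EnvelopeState describes the delivery status of a posted envelope.
type EnvelopeState int

const (
	EnvelopePosted EnvelopeState = iota
	EnvelopeSent
	EnvelopeExpired
)

// tracker remembers the state of every envelope hash returned by shh_post and
// updates it when the Whisper node reports that the envelope was sent to a
// peer or expired without being sent.
type tracker struct {
	mu    sync.Mutex
	cache map[string]EnvelopeState // envelope hash (hex) -> state
}

func newTracker() *tracker {
	return &tracker{cache: map[string]EnvelopeState{}}
}

// Add registers a freshly posted envelope.
func (t *tracker) Add(hash string) {
	t.mu.Lock()
	defer t.mu.Unlock()
	t.cache[hash] = EnvelopePosted
}

// handleSent / handleExpired would be called from a goroutine consuming
// Whisper envelope events (e.g. via SubscribeEnvelopeEvents in whisperv6).
func (t *tracker) handleSent(hash string)    { t.set(hash, EnvelopeSent) }
func (t *tracker) handleExpired(hash string) { t.set(hash, EnvelopeExpired) }

func (t *tracker) set(hash string, s EnvelopeState) {
	t.mu.Lock()
	defer t.mu.Unlock()
	if _, ok := t.cache[hash]; ok {
		t.cache[hash] = s
	}
}

// Status lets the API extension answer "was this message sent?".
func (t *tracker) Status(hash string) (EnvelopeState, bool) {
	t.mu.Lock()
	defer t.mu.Unlock()
	s, ok := t.cache[hash]
	return s, ok
}
```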
We need to be able to sign more than just transactions to make DApps
work properly. This change separates signing requests from transactions
and makes them more general, in preparation for introducing different
types of signing requests.
This change is designed to preserve the status APIs, so it is
backward-compatible with the current API bindings.
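A rough sketch of what separating signing requests from transactions could look like; the type and field names are illustrative, not the actual status-go API:

```go
package sign

// RequestType distinguishes what kind of payload the user is asked to sign.
type RequestType string

const (
	SendTransaction RequestType = "send_transaction"
	SignMessage     RequestType = "sign_message" // assumed future request type
)

// Request is a pending signing request queued for user approval. It no longer
// assumes the payload is a transaction, so new request types can be added
// without changing the approval flow.
type Request struct {
	ID      string
	Type    RequestType
	Payload interface{} // transaction args, message bytes, ...
}

// completeFunc performs the actual signing once the user approves the request
// with a password; each request type supplies its own implementation.
type completeFunc func(password string) (result []byte, err error)
```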
- [x] [#797] : Remove unused methods PopulateStaticPeers, ReconnectStaticPeers, removeStaticPeers, removePeer
- [x] [#797] : Rename node.Manager to node.StatusNode and simplify its public API
- [x] [#797] : Rename all references to nodeManager to statusNode