Every peer must be subscribed to the topic that is used to send messages.
In the test, Alice was communicating with Bob and Charlie over a custom topic, but
that topic wasn't added to the bloom filter, so an occasional flake was possible.
Normally this didn't cause problems because of whisper's syncAllowance, which is 10s:
- we set the bloom filter to all zeros
- but we still accept all envelopes for 10s
- if the first envelope is sent into such a channel after the sync allowance has passed, we get an
error that the envelope doesn't match the bloom filter
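Below is a minimal sketch of the fixed test flow using go-ethereum's whisperv6 package: subscribing to the topic before posting adds it to the node's bloom filter, so envelopes keep matching after the sync allowance passes. The topic name and helper are illustrative, not the actual test code.

```go
package whispertest

import (
	"github.com/ethereum/go-ethereum/common"
	whisper "github.com/ethereum/go-ethereum/whisper/whisperv6"
)

// subscribeBeforeSending installs a filter for the custom topic before any
// message is posted, so the topic becomes part of the node's bloom filter.
func subscribeBeforeSending(w *whisper.Whisper) (string, error) {
	keyID, err := w.GenerateSymKey()
	if err != nil {
		return "", err
	}
	symKey, err := w.GetSymKey(keyID)
	if err != nil {
		return "", err
	}
	topic := whisper.BytesToTopic([]byte("custom-channel")) // illustrative topic

	// Subscribe registers the filter and adds its topic to the bloom filter,
	// so envelopes on this topic keep matching even after the 10s
	// syncAllowance window has passed.
	return w.Subscribe(&whisper.Filter{
		KeySym:   symKey,
		Topics:   [][]byte{topic[:]},
		Messages: make(map[common.Hash]*whisper.ReceivedMessage),
	})
}
```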
* add LogEnabled attribute to NodeConfig, used in the call from status-react
* fix use of OverrideRootLog in tests
* enable logger in tests based on LogLevel
* move LogEnabled before LogFile
* Add RequestMessages to shhext
* E2E tests now use shhext_requestMessages
* Typo in comment
* Enhanced maintainability
* Drop former mailservice
* Code reorg after review
* Fix missed changes after update to 1.8.5
* Rebase on 1.8.5
* Remove outdated patches and apply all others
* Use shh_post that returns hash
* Use bloom filter for request to mailserver
* Remove tests for sending messages without subscribing first
* Fix deadlock in ethdb
* Expect null if receipt is not yet created
* Subscribe to messages before sending them in whisper test
* Initial stab at CodeClimate configuration file
* Enabling everything to analyze usefulness of output
* Disabling some useless metrics
* Ignore static directory
* Disable similar code metric
* Exclude identical code in test files
* glob
* Reduce max file lines to 750
* Fix exclude pattern
* Exclude t/
* Up max file length + remove t directory
* Ignore t/ directory only for identical code metrics
* Exclude patterns to exclude paths
* testing
* more testing
* reindent
* Revert back to excluding t directory
* Add default peer limits configuration
If discovery is enabled for a given cluster, we set a default
expected number of peers for each enabled service. For example:
- if the cluster is rinkeby and has discovery enabled, we check which services are enabled
- if whisper is enabled, we set min and max limits by default
- if les is enabled and infura is not used, we set limits too
When statusd is used, the configuration must be provided in a format supported by statusd.
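A self-contained sketch of the defaulting logic described above; the type names, topic keys, and concrete limit values are assumptions for illustration, not the actual status-go configuration API.

```go
package peerlimits

// Limits is the expected min/max number of peers for one discovery topic.
type Limits struct{ Min, Max int }

// Config is a simplified stand-in for the node configuration.
type Config struct {
	DiscoveryEnabled bool
	WhisperEnabled   bool
	LESEnabled       bool
	UpstreamEnabled  bool // e.g. Infura is used instead of syncing via LES peers
	TopicLimits      map[string]Limits
}

// SetDefaultPeerLimits fills in default limits for every enabled service.
func SetDefaultPeerLimits(c *Config) {
	if !c.DiscoveryEnabled {
		return // without discovery there is nothing to look for
	}
	if c.TopicLimits == nil {
		c.TopicLimits = map[string]Limits{}
	}
	if c.WhisperEnabled {
		c.TopicLimits["whisper"] = Limits{Min: 1, Max: 1}
	}
	// LES limits only make sense when we actually sync from LES peers,
	// i.e. when an upstream provider such as Infura is not used.
	if c.LESEnabled && !c.UpstreamEnabled {
		c.TopicLimits["les"] = Limits{Min: 2, Max: 2}
	}
}
```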
* Fix deadlock in les peer set
This could happen when multiple relevant peers were found and
processed before discv5 was closed. For example, with a max limit of 1:
- peers A and B are found in the same kademlia cycle
- peer A is processed
- the max limit is reached -> discv5 is closed and set to nil
- peer B is processed
- panic!!!
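The fix boils down to re-checking the limit, and whether discovery is still running, for every peer found in a cycle. A self-contained sketch with illustrative names (not the real peer pool code) looks roughly like this:

```go
package peerpool

// discV5 abstracts the discovery handle that is closed once enough peers are found.
type discV5 interface{ Close() }

type topicPool struct {
	limit     int
	connected int
	discovery discV5
}

// handleFoundPeer processes one peer found during a kademlia cycle.
func (t *topicPool) handleFoundPeer(addPeer func()) {
	// Several peers can be discovered in the same cycle, so re-check the
	// limit (and whether discovery is still running) for every one of them
	// instead of assuming the first match was the last.
	if t.discovery == nil || t.connected >= t.limit {
		return
	}
	addPeer()
	t.connected++
	if t.connected >= t.limit {
		t.discovery.Close()
		t.discovery = nil
	}
}
```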
After running gometalinter in debug mode, it was found that megacheck was being killed by Travis because it was hitting the memory limits. For more information, see this comment.
Run "Lint & Vendor Check" using a fully virtualized environment instead of a container-based one.
* Start enabling Mainnet tests
* Minor corrections found during code scan
* Set mainnet blocker in E2E again
* Introduced securing of mainnet transaction tests
* Fix typing error
* Typo led to follow-up errors
* Linter problem
* Change to individual test skips after review
* More flexible skip control for mainnet and status chain
* Fix double space
* Change the skipping method and fix wrongly skipped networks
This change will greatly simplify writing unit tests when a node is required but data persistence is irrelevant.
I also introduced some refactoring and unit tests for `StatusNode`.
Multiple concurrent topic pool stops could result in closing an already closed
quit channel. Fixed by using an atomic compare-and-swap and closing the channel
only if the swap happened.
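A sketch of the close-once pattern this describes, with illustrative field names rather than the actual TopicPool fields:

```go
package topicpool

import "sync/atomic"

type pool struct {
	stopped int32 // set to 1 exactly once, via atomic CAS
	quit    chan struct{}
}

// Stop is safe to call from multiple goroutines concurrently: only the caller
// that wins the compare-and-swap closes the quit channel, so it can never be
// closed twice.
func (p *pool) Stop() {
	if atomic.CompareAndSwapInt32(&p.stopped, 0, 1) {
		close(p.quit)
	}
}
```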
Added an events feed to the peer pool for testing purposes. Without it, it is impossible to run the
simulation with the -race flag enabled. In essence, this happens because we are managing a global
object, server.Discv5, and unfortunately there is no way around it.
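A sketch of how such a feed lets a test wait for peer pool activity deterministically instead of polling shared discovery state under -race; it uses go-ethereum's event.Feed, while the event type and helper names are assumptions for illustration.

```go
package peerpool

import (
	"time"

	"github.com/ethereum/go-ethereum/event"
)

// peerPoolEvent is a hypothetical event emitted by the pool.
type peerPoolEvent struct {
	kind string // e.g. "added", "dropped"
}

type peerPool struct {
	feed event.Feed
}

// emit is called from the production code paths being tested.
func (p *peerPool) emit(kind string) {
	p.feed.Send(peerPoolEvent{kind: kind})
}

// waitForEvent is a test helper: it blocks until the expected event arrives
// or the timeout fires, without touching the pool's internal state.
func (p *peerPool) waitForEvent(kind string, timeout time.Duration) bool {
	ch := make(chan peerPoolEvent, 16)
	sub := p.feed.Subscribe(ch)
	defer sub.Unsubscribe()
	deadline := time.After(timeout)
	for {
		select {
		case e := <-ch:
			if e.kind == kind {
				return true
			}
		case <-deadline:
			return false
		}
	}
}
```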
* Implement shh API extension that allows confirming that a message was sent
* Add a patch
* Fix linter
* Add readme
* Add tests for tracker
* Address review
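A minimal sketch of the tracking idea, assuming a simple hash-to-state map; the actual shhext tracker and the whisper envelope events it listens to (added by the patch) differ.

```go
package shhext

import (
	"sync"

	"github.com/ethereum/go-ethereum/common"
)

type envelopeState int

const (
	envelopePosted envelopeState = iota // handed to whisper, not yet delivered to any peer
	envelopeSent                        // at least one peer acknowledged the envelope
)

// tracker remembers the delivery state of posted envelopes by hash.
type tracker struct {
	mu    sync.Mutex
	cache map[common.Hash]envelopeState
}

// Add registers an envelope hash returned by the post call.
func (t *tracker) Add(hash common.Hash) {
	t.mu.Lock()
	defer t.mu.Unlock()
	if t.cache == nil {
		t.cache = make(map[common.Hash]envelopeState)
	}
	t.cache[hash] = envelopePosted
}

// Confirm marks the envelope as sent once the node reports delivery to a
// peer; an API call can then return this state to the client.
func (t *tracker) Confirm(hash common.Hash) {
	t.mu.Lock()
	defer t.mu.Unlock()
	if _, ok := t.cache[hash]; ok {
		t.cache[hash] = envelopeSent
	}
}
```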