Now, if Add is called at all, it is guaranteed to be called before Wait. This is achieved
by doing the following:
- if cancel() gets the lock first and closes channelCh before spawnSync is called,
we exit right away
- if not, we hold the lock until the syncers are spawned, so that cancel() is
blocked for this time, which prevents the whole Terminate() from progressing
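A minimal sketch of this locking pattern, with a hypothetical Pool type standing in for the real syncer code:

```go
package syncer

import "sync"

// Pool is an illustrative stand-in, not the actual status-go type.
type Pool struct {
	mu        sync.Mutex
	channelCh chan struct{}
	wg        sync.WaitGroup
	closed    bool
}

func NewPool() *Pool {
	return &Pool{channelCh: make(chan struct{})}
}

// spawnSync holds the lock while calling wg.Add and starting goroutines,
// so Add is guaranteed to happen before cancel() can reach wg.Wait.
func (p *Pool) spawnSync(workers int) {
	p.mu.Lock()
	defer p.mu.Unlock()
	if p.closed { // cancel() got the lock first: exit right away
		return
	}
	for i := 0; i < workers; i++ {
		p.wg.Add(1)
		go func() {
			defer p.wg.Done()
			<-p.channelCh // run until cancelled
		}()
	}
}

// cancel blocks on the same lock, so it cannot close channelCh while
// syncers are being spawned; this is what keeps Terminate() from
// progressing until Add has been called.
func (p *Pool) cancel() {
	p.mu.Lock()
	if !p.closed {
		p.closed = true
		close(p.channelCh)
	}
	p.mu.Unlock()
	p.wg.Wait()
}
```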
This change adds the ability to use a different source of time for whisper: when an
envelope is created, this time is used to set the expiry and to track when the envelope
needs to be expired.
This time is then used to check the validity of an envelope when it is received. Currently, if we receive an envelope that was sent from the future, the peer gets disconnected. If a received envelope has an expiry less than now, it is simply dropped; if the expiry is less than now - 10*2 seconds, the peer gets dropped.
So, it is clear that whisper depends on time, and whenever we get a skew with peers that is > 20s, reliability will be greatly reduced.
In this change, another source of time for whisper is used. This time source uses NTP servers from pool.ntp.org to compute an offset. When whisper queries the time, this offset is added to or subtracted from the current time.
The query is executed every 2 minutes; it queries 5 different servers, cuts off the min and max, and then computes the mean value (a sketch of this computation follows below). pool.ntp.org resolves to different servers, and according to the documentation you will rarely hit the same one.
Closes: #687
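A hedged sketch of the offset computation described above, using the github.com/beevik/ntp package; the function names and the 2-minute refresh wiring are illustrative, not the exact implementation:

```go
package timesource

import (
	"errors"
	"sort"
	"time"

	"github.com/beevik/ntp"
)

// computeOffset queries each server, discards the min and max offsets,
// and returns the mean of the rest.
func computeOffset(servers []string) (time.Duration, error) {
	offsets := make([]time.Duration, 0, len(servers))
	for _, server := range servers {
		resp, err := ntp.Query(server)
		if err != nil {
			continue // skip unreachable servers
		}
		offsets = append(offsets, resp.ClockOffset)
	}
	if len(offsets) == 0 {
		return 0, errors.New("all ntp queries failed")
	}
	sort.Slice(offsets, func(i, j int) bool { return offsets[i] < offsets[j] })
	if len(offsets) > 2 {
		offsets = offsets[1 : len(offsets)-1] // cut off min and max
	}
	var sum time.Duration
	for _, o := range offsets {
		sum += o
	}
	return sum / time.Duration(len(offsets)), nil
}

// Now returns the NTP-corrected time; in the real change the offset is
// refreshed every 2 minutes by a background loop (omitted here).
func Now(offset time.Duration) time.Time {
	return time.Now().Add(offset)
}
```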
Every peer must be subscribed to the topic that is used to send messages.
In the test, Alice was communicating with Bob and Charlie over a custom topic, but
that topic wasn't added to a bloom filter, thus a certain flake was possible.
Normally it wasn't causing problems because of syncAllowance in whisper, which is 10s:
- we set the bloom filter to all zeros
- but we will still accept all envelopes for 10s
- if we send the first envelope into such a channel after the sync allowance, we will get
an error that the envelope doesn't match the bloom filter
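A hedged illustration of the mismatch using go-ethereum's whisperv6 helpers: once the sync allowance has passed, an envelope on a topic missing from the bloom filter fails the strict match.

```go
package main

import (
	"fmt"

	whisper "github.com/ethereum/go-ethereum/whisper/whisperv6"
)

func main() {
	topic := whisper.BytesToTopic([]byte("custom-topic"))

	// A bloom filter of all zeros, as the flaky test effectively had.
	emptyBloom := make([]byte, whisper.BloomFilterSize)

	// Within syncAllowance the envelope is accepted anyway; after it,
	// this strict check runs and fails for the unsubscribed topic.
	fmt.Println(whisper.BloomFilterMatch(emptyBloom, whisper.TopicToBloom(topic)))
	// prints: false -> "envelope doesn't match the bloom filter"
}
```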
* add LogEnabled attribute to NodeConfig, used in the call from status-react
* fix use of OverrideRootLog in tests
* enable logger in tests based on LogLevel
* move LogEnabled before LogFile
* Add RequestMessages to shhext
* E2E tests now use shhext_requestMessages (see the sketch after this list)
* Typo in comment
* Enhanced maintainability
* Drop former mailservice
* Code reorg after review
* Fix missed changes after update to 1.8.5
* Rebase on 1.8.5
* Remove outdated patches and apply all others
* Use shh_post that returns hash
* Use bloom filter for request to mailserver
* Remove tests for sending messages without subbing first
* Fix deadlock in ethdb
* Expect null if receipt is not yet created
* Subscribe to messages before sending them in whisper test
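A hedged sketch of how a client might call the new endpoint over JSON-RPC; the parameter names (mailServerPeer, topic, symKeyID, from, to) are assumptions for illustration, not a verified API reference:

```go
package main

import (
	"context"
	"log"

	"github.com/ethereum/go-ethereum/rpc"
)

func main() {
	client, err := rpc.Dial("http://localhost:8545")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// All field names below are assumed for illustration.
	req := map[string]interface{}{
		"mailServerPeer": "enode://...", // mail server to request history from
		"topic":          "0xabcd1234",  // whisper topic, matched via bloom filter
		"symKeyID":       "sym-key-id",  // key shared with the mail server
		"from":           0,             // start of requested time range (unix)
		"to":             1524000000,    // end of requested time range (unix)
	}

	var ok bool
	err = client.CallContext(context.Background(), &ok, "shhext_requestMessages", req)
	if err != nil {
		log.Fatal(err)
	}
	log.Println("request accepted:", ok)
}
```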
* Initial stab at CodeClimate configuration file
* Enabling everything to analyze usefulness of output
* Disabling some useless metrics
* Ignore static directory
* Disable similar code metric
* Exclude identical code in test files
* glob
* Reduce max file lines to 750
* Fix exclude pattern
* Exclude t/
* Up max file length + remove t directory
* Ignore t/ directory only for identical code metrics
* Exclude patterns to exclude paths
* testing
* more testing
* reindent
* Revert back to excluding t directory
* Add default peer limits configuration
If discovery is enabled for a given cluster, we will set a default
expected number of peers for each enabled service. For example:
- if the cluster is rinkeby and has discovery enabled, we will
check which services are enabled
- if whisper is enabled, we will set min and max limits by default
- if les is enabled and infura is not used, we will set limits too
When statusd is used, the configuration must be provided using the
configuration format supported by statusd.
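A minimal sketch of this defaulting rule, with hypothetical types and illustrative limit values (the real status-go configuration differs in names and detail):

```go
package params

// Limits and NodeConfig are hypothetical stand-ins for the real
// status-go configuration types.
type Limits struct{ Min, Max int }

type NodeConfig struct {
	Discovery      bool
	WhisperEnabled bool
	LESEnabled     bool
	UsesInfura     bool
	PeerLimits     map[string]Limits // service name -> expected peer counts
}

// SetDefaultPeerLimits fills in default peer limits for every enabled
// service when discovery is on; the concrete numbers are illustrative.
func SetDefaultPeerLimits(c *NodeConfig) {
	if !c.Discovery {
		return
	}
	if c.PeerLimits == nil {
		c.PeerLimits = make(map[string]Limits)
	}
	if c.WhisperEnabled {
		c.PeerLimits["whisper"] = Limits{Min: 2, Max: 3}
	}
	if c.LESEnabled && !c.UsesInfura {
		c.PeerLimits["les"] = Limits{Min: 2, Max: 3}
	}
}
```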
* Fix deadlock in les peer set
It could happen when we found multiple relevant peers and some of them were
processed after discv5 was already closed. For example, with a max limit of 1:
- found peer A and B in the same kademlia cycle
- process peer A
- max limit is reached -> closed discv5 and set it to nil
- process peer B
- panic!!!
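A hedged sketch of the guard that avoids this panic; the types below are stand-ins for the real peer pool and discovery handle:

```go
package peers

import "sync"

type discV5 struct{} // stand-in for the discovery handle

func (d *discV5) Close() {}

type peerPool struct {
	mu       sync.Mutex
	discv5   *discV5
	peers    int
	maxLimit int
}

// processPeer ignores peers that arrive after discovery was stopped:
// once the limit is reached, discv5 is closed and set to nil, so a
// second peer found in the same kademlia cycle must not dereference it.
func (p *peerPool) processPeer() {
	p.mu.Lock()
	defer p.mu.Unlock()
	if p.discv5 == nil {
		return // discovery already stopped; no panic
	}
	p.peers++
	if p.peers >= p.maxLimit {
		p.discv5.Close()
		p.discv5 = nil
	}
}
```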
After running gometalinter in debug mode, it was found that megacheck was being killed by Travis due to reaching its memory limits. For more information, see this comment.
Run "Lint & Vendor Check" using a fully virtualized environment instead of a container-based one.
* Start enabling Mainnet testing
* Minor corrections found during code scan
* Set mainnet blocker in E2E again
* Introduce safeguards for mainnet transaction tests
* Fix typing error
* Typo led to follow-up errors
* Linter problem
* Change to individual test skips after review
* More flexible skip control for mainnet and status chain (see the sketch after this list)
* Fix double space
* Change skipping method and fix wrongly skipped networks
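A minimal sketch of what such per-test skip control could look like; the helper name and signature are assumptions, not the actual test utilities:

```go
package e2e

import "testing"

// SkipOnNetworks skips the calling test when the current network is in
// the given list, replacing a global mainnet blocker with individual,
// more flexible per-test skips.
func SkipOnNetworks(t *testing.T, current string, skipped ...string) {
	for _, network := range skipped {
		if network == current {
			t.Skipf("test is skipped on network %q", network)
		}
	}
}
```

A test would then call, for example, SkipOnNetworks(t, currentNetwork, "mainnet", "status-chain") instead of relying on one global blocker.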
This change will greatly simplify writing unit tests when a node is required but data persistence is irrelevant.
I also introduced some refactoring and unit tests for `StatusNode`.
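A hedged sketch of the kind of test helper this enables, assuming an empty DataDir yields an ephemeral, in-memory node; the import paths and the Start signature are assumptions:

```go
package node_test

import (
	"testing"

	"github.com/status-im/status-go/geth/node"
	"github.com/status-im/status-go/geth/params"
	"github.com/stretchr/testify/require"
)

// startEphemeralNode starts a StatusNode that keeps no data on disk:
// with an empty DataDir, go-ethereum falls back to an in-memory database.
func startEphemeralNode(t *testing.T) *node.StatusNode {
	config := &params.NodeConfig{DataDir: ""}
	statusNode := node.New()
	require.NoError(t, statusNode.Start(config))
	return statusNode
}
```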