As part of performance profiling of the mailserver we noticed that most of
the resources on a query are spent decoding the whisper envelope.
This PR changes the way we store envelopes by encoding the Topic into the
database key, so we can check the topic without decoding the envelope and
publish the envelope's raw value directly if it matches.
The change is backward compatible, as only newly added envelopes will
have the new key, while old ones will still have to be unmarshaled.
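A minimal sketch of the key layout (illustrative names, not the actual mailserver schema): with the 4-byte topic embedded in the key, a query can filter by topic and ship the stored raw value without unmarshaling the envelope.

```go
package mailserver

import (
	"bytes"
	"encoding/binary"
)

// Topic mirrors whisper's 4-byte topic. Sketch only.
type Topic [4]byte

// makeKey builds a DB key as timestamp || topic || envelope hash,
// so a query can inspect the topic without decoding the envelope body.
func makeKey(timestamp uint32, topic Topic, hash []byte) []byte {
	key := make([]byte, 4, 4+4+len(hash))
	binary.BigEndian.PutUint32(key[:4], timestamp)
	key = append(key, topic[:]...)
	key = append(key, hash...)
	return key
}

// topicMatches checks the topic bytes stored in the key; when it matches,
// the raw envelope value can be sent to the peer as-is.
func topicMatches(key []byte, want Topic) bool {
	return len(key) >= 8 && bytes.Equal(key[4:8], want[:])
}
```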
* Notify users that envelope was discarded and retry sending it
* Update Gopkg files with released whisper version
* Forgot to remove signal after refactoring
Currently PFS messages are decrypted, and therefore modified, before being
passed to the client. This makes ID computation difficult, as we pass
the whole object to the client and expect the object to be passed back once
confirmed.
This changes the behavior to allow confirmation by ID, which is passed
to the client instead of the raw object.
This is a breaking change, but status-react is already forward
compatible.
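A minimal sketch of how such an ID could be derived, assuming it is computed from the payload as delivered; the hashing scheme here is an assumption, not necessarily what status-go uses.

```go
package pfs

import (
	"crypto/sha256"
	"encoding/hex"
)

// MessageID derives a stable identifier from the raw payload as it was
// delivered, so the client can confirm receipt by ID even though the
// decrypted object differs from the original.
func MessageID(rawPayload []byte) string {
	sum := sha256.Sum256(rawPayload)
	return hex.EncodeToString(sum[:])
}
```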
These changes add support for syncing data between two Mail Servers. They do not provide an easy way to start syncing; this will be solved in the next PR.
This change implements a connection manager that monitors 3 types of events (sketched in code after this description):
1. update of the selected mail servers
2. disconnect from a mail server
3. errors for requesting mail history
When selected mail servers are provided, we try to connect with as many as possible and later disconnect the surplus. For example, if we want to be connected with one mail server and 3 were selected, we try to connect with all 3 and later disconnect from 2. This helps establish a connection with a live mail server faster.
If a mail server disconnects, we choose another mail server from the selected list, unless only one mail server was selected; in that case we have no other choice and leave things as they are.
If a request for history expires, we disconnect from that peer and try to find another one, following the same rules as described above.
We will have two components that will rely on this logic:
1. requesting history
If a target peer is provided we use that peer; otherwise we request history from any selected mail server that is connected at the time of the request.
2. confirmation from selected mail server
Confirmation from any selected mail server will be used to send feedback that the envelope was sent.
I will add several extensions, but probably in separate PRs:
1. prioritize connection with the mail server that was used before reboot
2. disconnect from mail servers if a history request failed but did not expire
3. wait some time in the RequestMessages RPC to establish a connection with any mail server
Currently this feature is hidden, as certain changes will be necessary in status-react.
partially implements: https://github.com/status-im/status-go/issues/1285
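Below is a minimal Go sketch of the event loop described above; all names are hypothetical and the real connection manager differs in detail.

```go
package connmanager

type eventType int

const (
	selectedServersUpdated eventType = iota // 1. selected mail servers changed
	serverDisconnected                      // 2. disconnected from a mail server
	historyRequestExpired                   // 3. history request errored or expired
)

type event struct {
	kind eventType
	peer string
}

// Manager tracks the selected mail servers and the active connections.
type Manager struct {
	selected []string
}

func (m *Manager) connectToAll() {
	// dial every selected server; the surplus is disconnected later,
	// once a live connection is established
}

func (m *Manager) replacePeer(peer string) {
	// pick another selected server, unless only one was selected
}

// loop reacts to the three kinds of events described above.
func (m *Manager) loop(events <-chan event, quit <-chan struct{}) {
	for {
		select {
		case ev := <-events:
			switch ev.kind {
			case selectedServersUpdated:
				m.connectToAll()
			case serverDisconnected, historyRequestExpired:
				m.replacePeer(ev.peer)
			}
		case <-quit:
			return
		}
	}
}
```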
This commit updates geth to 1.8.17 and adds the possibility to enable metrics at compile time.
A cascade of issues forced us to upgrade geth to 1.8.17 in order to allow enabling metrics at compile time. 1.8.17 introduced a `NodeID` refactoring and the `enode` package, which affected our peers pool and the integration with Discovery V5.
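As a hedged illustration of compile-time toggling in Go (a generic build-tag pattern, not necessarily the exact mechanism used here), metrics support can be switched at build time:

```go
// File metrics_enabled.go, compiled only with `go build -tags metrics`.

// +build metrics

package metrics

// Enabled reports whether metrics support was compiled in.
const Enabled = true
```

```go
// File metrics_disabled.go, the default when the tag is absent.

// +build !metrics

package metrics

const Enabled = false
```

Building with `go build -tags metrics` picks the first file; a plain `go build` picks the second.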
- Skipped keys
The purpose of limiting the number of skipped keys generated is to avoid a DoS
attack whereby an attacker would send a large N, forcing the device to
compute all the keys between currentN..N.
Previously the logic for handling skipped keys was:
- If in the current receiving chain there are more than maxSkip keys,
throw an error
This is problematic, as in long-lived sessions dropped/unreceived messages start
piling up, eventually reaching the threshold (1000 dropped/unreceived
messages).
This logic has been changed to be more in line with the Signal spec, and now
it is:
- If N is > currentN + maxSkip, throw an error
The purpose of limiting the number of skipped keys stored is to avoid a DoS
attack whereby an attacker would force us to store a large number of
keys, filling up our storage.
Previously the logic for handling old keys was:
- Once there have been maxKeep ratchet steps, delete any key older than
currentRatchet - maxKeep.
This, in combination with the maxSkip implementation, capped the number of stored keys to
maxSkip * maxKeep.
The logic has been changed to:
- Keep a maximum of MaxMessageKeysPerSession keys
- Additionally, delete any key that has a sequence number <
currentSeqNum - maxKeep
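A minimal Go sketch of these two rules, with illustrative names:

```go
package ratchet

import "errors"

var errTooManySkippedKeys = errors.New("too many skipped message keys")

// validateN guards key *generation*: reject a counter too far ahead of
// the current receiving-chain counter, instead of counting keys already
// skipped in the chain.
func validateN(n, currentN, maxSkip uint32) error {
	if n > currentN+maxSkip {
		return errTooManySkippedKeys
	}
	return nil
}

// pruneKeys guards key *storage*: drop keys older than
// currentSeqNum - maxKeep, then cap the total at maxKeys.
func pruneKeys(keys map[uint32][]byte, currentSeqNum, maxKeep uint32, maxKeys int) {
	for seq := range keys {
		if seq+maxKeep < currentSeqNum {
			delete(keys, seq)
		}
	}
	for len(keys) > maxKeys {
		oldest := ^uint32(0)
		for seq := range keys {
			if seq < oldest {
				oldest = seq
			}
		}
		delete(keys, oldest)
	}
}
```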
- Version
We now check the version of the bundle, so that when we get a bundle from
the same installationID with a higher version, we mark the previous
bundle as expired and use the new bundle the next time a message is sent.
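A sketch of the version check, using a simplified stand-in for the real bundle record:

```go
package pfs

// Bundle is a simplified stand-in for the real bundle record.
type Bundle struct {
	InstallationID string
	Version        uint32
	Expired        bool
}

// onNewBundle expires the previous bundle from the same installation
// when a higher-versioned one arrives; the returned bundle is the one
// used the next time a message is sent.
func onNewBundle(previous, incoming *Bundle) *Bundle {
	if previous == nil {
		return incoming
	}
	if incoming.InstallationID == previous.InstallationID &&
		incoming.Version > previous.Version {
		previous.Expired = true
		return incoming
	}
	return previous
}
```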
* Update to geth v1.8.14
* Remove patches that were merged upstream
* Apply patches before 0016
* Fix 0016 and apply it
* Apply everything else
* Pass gas limit as a second argument to simulated backend
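For illustration, the updated simulated-backend constructor call looks like this (allocation values are arbitrary):

```go
package main

import (
	"math/big"

	"github.com/ethereum/go-ethereum/accounts/abi/bind/backends"
	"github.com/ethereum/go-ethereum/common"
	"github.com/ethereum/go-ethereum/core"
)

func main() {
	alloc := core.GenesisAlloc{
		common.HexToAddress("0x01"): {Balance: big.NewInt(1e18)},
	}
	// the gas limit is now passed as the second argument
	backend := backends.NewSimulatedBackend(alloc, 8000000)
	_ = backend
}
```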
Update vendor
Integrate rendezvous into status node
Add a test with failover using rendezvous
Use multiple servers in client
Use discovery V5 by default and test that node can be started with rendezvous discovery
Fix linter
Update rendezvous client to one with instrumented stream
Address feedback
Fix test with updated topic limits
Apply several suggestions
Change log to debug for request errors because we continue execution
Remove web3js after rebase
Update rendezvous package
- Replace deprecated common.Hex with hexutil.Encode.
- Remove upstreamed 0010-geth-17-fix-npe-in-filter-system.patch.
- Remove upstreamed 0020-discv5-metrics.patch.
- Remove upstreamed 0026-ethdb-error-deadlock.patch.
- Update goleveldb to the same version used by geth 1.8.11.
- Update PublicTransactionPoolAPI.GasPrice return type to match type in internal geth interface.
This change adds the ability to use a different source of time for whisper:
- when an envelope is created, it is used to set the expiry
- it is used to track when an envelope needs to be expired
This time is then used to check the validity of an envelope when it is received. Currently, if we receive an envelope that was sent from the future, the peer gets disconnected. If a received envelope has an expiry earlier than now, it is simply dropped; if the expiry is less than now + 10*2 seconds, the peer gets dropped.
So it is clear that whisper depends on time, and any time we get a clock skew with peers that is > 20s, reliability will be greatly reduced.
In this change another source of time for whisper is used. This time source uses NTP servers from pool.ntp.org to compute an offset. When whisper queries the time, this offset is added to or subtracted from the current time.
The query is executed every 2 minutes and queries 5 different servers; we cut off the min and the max and then compute the mean value. pool.ntp.org resolves to different servers, and according to the documentation you will rarely hit the same one twice.
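A sketch of the offset computation described above; the NTP querying itself is left abstract, and the actual implementation may differ.

```go
package timesource

import (
	"sort"
	"time"
)

// meanOffset sorts the sampled offsets, cuts off the min and the max,
// and returns the mean of the remaining samples.
func meanOffset(offsets []time.Duration) time.Duration {
	if len(offsets) <= 2 {
		return 0 // not enough samples to trim both extremes
	}
	sort.Slice(offsets, func(i, j int) bool { return offsets[i] < offsets[j] })
	trimmed := offsets[1 : len(offsets)-1]
	var sum time.Duration
	for _, o := range trimmed {
		sum += o
	}
	return sum / time.Duration(len(trimmed))
}

// Now returns the adjusted time that whisper should use.
func Now(localNow time.Time, offset time.Duration) time.Time {
	return localNow.Add(offset)
}
```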
Closes: #687
* Rebase on 1.8.5
* Remove outdated patches and apply all others
* Use shh_post that returns hash
* Use bloom filter for request to mailserver
* Remove tests for sending messages without subbing first
* Fix deadlock in ethdb
* Expect null if receipt is not yet created
* Subscribe to messages before sending them in whisper test
* Update `github.com/ethereum/go-ethereum` package to 1.8.1 branch. Part of #638
* Fix code due to some signature changes. Part of #638
* Use upstream for whisper backend
* Add patch to downgrade usage of Whisper v6 to v5 in some geth 1.8.1 vendor files. Part of #638
* Take into account the DNS rebinding protection introduced in 1.8.0 by adding exception for localhost. Part of #638
* Add patches required for cross-compiled builds starting with geth 1.8.0. Only applied during build. Part of #638
* Update expected JSON result in `TestRegressionGetTransactionReceipt()` and `TestCallRawResultGetTransactionReceipt()`. Part of #665
* Fix some failing e2e tests. Part of #638
* Address comments in PR #702. Part of #638
Network disconnect is introduced by removing the default gateway, an easily reversible condition.
On my local machine it takes 30 seconds for peers to reconnect after connectivity is restored. As you might guess, this is not an accident: there is a 30-second timeout for dial expiration. This dial expiration is used in p2p.Server to guarantee that peers are not dialed too often.
Additionally, I added a small script to the Makefile to run such tests in a docker environment; usage example:
```
make docker-test ARGS="./t/destructive/ -v -network=4"
```