This commit introduces a few changes regarding users accessing
communities:
While the APIs still exist, community invites should no longer be
used; instead, communities should merely be "shared".
Sharing a community with users allows them to "join" the community,
which in reality makes them request access to it.
This means users have to request access to any community, even if
the community has its permissions set to NO_MEMBERSHIP.
The only difference between ON_REQUEST and NO_MEMBERSHIP is that
ON_REQUEST communities require manual approval by the owner/admin
before access is granted, while NO_MEMBERSHIP communities accept
requests automatically (as soon as the owner/admin receives the
request); see the sketch below.
This also implies that users are no longer optimistically added to the
member list of communities, but only after they have been accepted.
This introduces a bit of a message ping-pong for users to know that
someone is now part of a community.
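A minimal sketch of the ON_REQUEST vs NO_MEMBERSHIP decision described above; the type and function names are illustrative, not the actual status-go API:

```go
// Sketch only: illustrative names, not the real community types.
package community

type Access int

const (
	AccessNoMembership Access = iota // requests are accepted automatically
	AccessOnRequest                  // requests need owner/admin approval
)

type RequestToJoin struct {
	PublicKey string
	Accepted  bool
}

// HandleRequestToJoin decides whether a join request is accepted
// immediately or left pending for manual approval.
func HandleRequestToJoin(access Access, req *RequestToJoin) {
	switch access {
	case AccessNoMembership:
		// NO_MEMBERSHIP: accept as soon as the owner/admin node receives
		// the request; the member is only added now, never optimistically
		// on the requesting side.
		req.Accepted = true
	case AccessOnRequest:
		// ON_REQUEST: stays pending until the owner/admin explicitly accepts.
	}
}
```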
If a contact update or a legacy contact request is sent, we create a
notification for the user in the activity center so it can be replied
to.
If at a later date a new contact request is received from the same user,
this will replace it, so the proper message can be displayed.
This commit introduces a new `clock` field in the
`communities_settings` table so that it can be leveraged for syncing
community settings across devices.
It also extends the existing `syncCommunity` APIs to generate
`SyncCommunitySettings` as well, avoiding sending additional sync messages
for community settings.
When editing communities, however, we still sync community settings
explicitly, as we aren't syncing the community itself in that case.
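A minimal sketch of how a clock column can drive last-write-wins syncing of the community settings, assuming a hypothetical `CommunitySettings` struct and plain database/sql; the real schema and APIs in status-go may differ:

```go
// Sketch only: assumes a simplified communities_settings schema.
package communities

import "database/sql"

type CommunitySettings struct {
	CommunityID string
	Clock       uint64
	// ...other settings columns omitted in this sketch
}

// SaveSyncedSettings applies an incoming synced record only if its clock
// is newer than what is stored locally.
func SaveSyncedSettings(db *sql.DB, s *CommunitySettings) error {
	var localClock uint64
	err := db.QueryRow(
		`SELECT clock FROM communities_settings WHERE community_id = ?`,
		s.CommunityID,
	).Scan(&localClock)
	if err != nil && err != sql.ErrNoRows {
		return err
	}
	if err == nil && localClock >= s.Clock {
		// Local copy is newer or equal; ignore the sync message.
		return nil
	}
	_, err = db.Exec(
		`INSERT OR REPLACE INTO communities_settings (community_id, clock) VALUES (?, ?)`,
		s.CommunityID, s.Clock,
	)
	return err
}
```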
* Version bump
* Implemented lan connection string functionality
Also added more robust testing
* Added ConnectionParams struct and related funcs
* Add server mode to ConnectionParams
* Added function to get preferred network IP
Also did some refactoring on the server package to make it a lot more reusable
* Added server.Option and simplified handler funcs
* Added serial number deterministically generated from pk
* Debugging TLS server connection
* Implemented configurable server ip
When accessing over the network the server needs to listen on the network interface and not localhost or 127.0.0.1. Also the cert can now have a dedicated IP
* Refactor of URL funcs to use the url package
* Removed redundant Options pattern in favour of config param
* Added full server test using GetOutboundIP
* Remove references and usage of Server.port
The application does not need to set the port; we rely on the net.Listener to pick a port.
* Version bump
* Added ToECDSA func and improved cert testing
* Added error check in test
* Split Server types, embedding raw Server funcs into specialised server types
* localhost
* Implemented DNS and IP based cert gen
iOS doesn't allow restricted IP addresses to be used in a valid TLS cert
* Replace listener handling with original port store
Also added handlers as a parameter of the Server
* Sync Settings
* Added valueHandlers and Database singleton
Some issues remain: we need a way to compare the incoming sql.DB to check if the connection is to a different file or not. Maybe make a singleton instance per filename
* Added functionality to check the sqlite filename
* Refactor of Database.SaveSyncSettings to be used as a handler
* Implemented interface for setting sync protobuf factories
* Refactored and completed adhoc send setting sync
* Tidying up
* Immutability refactor
* Refactor settings into dedicated package
* Breakout structs
* Tidy up
* Refactor of bulk settings sync
* Bug fixes
* Addressing feedback
* Fix code dropped during rebase
* Fix for db closed
* Fix for node config related crashes
* Provisional fix for type assertion - issue 2
* Adding robust type assertion checks
* Partial fix for null literal db storage and json encoding
* Fix for passively handling nil sql.DB, and checking if elem has len and if len is 0
* Added test for preferred name behaviour
* Adding saved sync settings to MessengerResponse
* Completed granular initial sync and clock from network on save
* add Settings to isEmpty
* Refactor of protobufs, partially done
* Added syncSetting receiver handling, some bug fixes
* Fix for sticker packs
* Implement inactive flag on sync protobuf factory
* Refactor of types and structs
* Added SettingField.CanSync functionality
* Addressing rebase artifact
* Refactor of Setting SELECT queries
* Refactor of string return queries
* VERSION bump and migration index bump
* Deactivate Sync Settings
* Deactivated preferred_name and send_status_updates
Co-authored-by: Andrea Maria Piana <andrea.maria.piana@gmail.com>
This commit enables mailserver cycle logic by default and makes a few
changes:
1) Nodes are graylisted instead of being blacklisted for a set amount of
time. The reason is that if we blacklist, any cut in connectivity
might result in long delays before reconnecting, especially on spotty
connections.
2) Fixes an issue on the devp2p server, whereby the node would not
connect to one of the static nodes since all the connection slots
were filled. The fix is a bit inelegant: it always connects to
static nodes, ignoring maxpeers, but it's tricky to get it to work
since the code is clearly not written to select a specific node.
3) Adds support for pinned mailservers
4) Adds retries to mailserver requests. It uses a closure for now; I
   think we should eventually have a channel etc., but I'd leave that
   for later.
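A rough sketch of the retry-via-closure idea from point 4; everything here is illustrative, not the actual implementation:

```go
// Sketch only: a closure-based retry helper for mailserver requests.
package mailservers

import (
	"fmt"
	"time"
)

// withRetries keeps calling request until it succeeds or attempts run out,
// backing off between tries.
func withRetries(attempts int, backoff time.Duration, request func() error) error {
	var err error
	for i := 0; ; i++ {
		if err = request(); err == nil {
			return nil
		}
		if i+1 >= attempts {
			return fmt.Errorf("mailserver request failed after %d attempts: %w", i+1, err)
		}
		time.Sleep(backoff)
	}
}
```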
This commit fixes one source of flakiness in the tests, which was an
actual bug.
If one device registers with a push notification server and there's
another device with the same public key, both would mark themselves as
registered, while maybe only one has actually been registered.
To fix this, we keep track of the request ids we send (in memory for
now), and only mark the device as registered if the request originated
on this device.
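A sketch of keeping the sent registration request ids in memory so that a device only marks itself as registered for responses to requests it originated; names are illustrative, not the actual push notification client API:

```go
// Sketch only: in-memory tracking of sent registration request ids.
package pushnotificationclient

import "sync"

type registrations struct {
	mu   sync.Mutex
	sent map[string]bool // request ids sent by this device
}

func (r *registrations) track(requestID string) {
	r.mu.Lock()
	defer r.mu.Unlock()
	if r.sent == nil {
		r.sent = make(map[string]bool)
	}
	r.sent[requestID] = true
}

// handleResponse only reports registered for a request this device actually
// sent; responses for the same public key originating from another device
// are ignored.
func (r *registrations) handleResponse(requestID string) (registered bool) {
	r.mu.Lock()
	defer r.mu.Unlock()
	if r.sent[requestID] {
		delete(r.sent, requestID)
		return true
	}
	return false
}
```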
* feat: enable wallet without network binding
* feat: make transfer network aware
* feat: allow to pass initial networks via config
* fix: nil check and feed
* feat: Add documentation with better function name
* fix: do not init the manager more than once
* fix: PR feedbacks
* Bump version
* Update Jenkinsfile.tests
* Convert int to string
Co-authored-by: RichΛrd <info@richardramos.me>
* Protobufs and adapters
* Added basic anon metric service and config init
* Added fibonacci interval incrementer
* Added basic Client.Start func and integrated interval incrementer
* Added new processed field to app metrics table
* Added id column to app metrics table
* Added migration clean up
* Added appmetrics GetUnprocessed and SetToProcessedByIDs and tests
There was a weird bug where metrics in the db that did not explicitly insert a value would be NULL, so could not be found. In addition I've added a new primary id field to the app_metrics table so that updates could be done against very specific metric rows.
* Updated adaptors and db to handle proto_id
I need a way to distinguish individual metric items from each other so that I can ignore the ones that have been seen before.
* Moved incrementer into dedicated file
* Resolve incrementer test fail
* Finalised the main loop functionality
* Implemented delete loop framework
* Updated adaptors file name
* Added delete loop delay and quit, and tweak on RawMessage gen
* Completed delete loop logic
* Added DBLock to prevent deletion during mainLoop
* Added postgres DB connection, integrated into anonmetrics.Server
* Removed proto_id from SQL migration and model
* Integrated postgres with Server and updated adaptors
* Function name update
* Added sample config files for client and server
* Fixes and testing for low level e2e
* make generate
* Fix lint
* Fix for receiving an anonMetricBatch not in server mode
* Postgres test fixes
* Tidy up, make vendor and make generate
* delinting
* Fixing database tests
* Attempted fix of does: cannot open `does' (No such file or directory)
not: cannot open `not' (No such file or directory)
exist: cannot open `exist' (No such file or directory) error on sql resource load
* Moved all anon metric postgres migration logic and sources into the protocol/anonmetrics package or sub-packages. I don't know if this will fix the does: cannot open `does' (No such file or directory)
not: cannot open `not' (No such file or directory)
exist: cannot open `exist' (No such file or directory) error that happens in Jenkins but this could work
* Lint for the lint god
* Why doesn't the linter list all its problems at once?
* test tweaks
* Fix for wakuV2 change
* DB reset change
* Fix for postgres db migrations fails
* More robust implementation of postgres test setup and teardown
* Added block for anon metrics functionality
* Version Bump to 0.84.0
* Added test to check anon metrics broadcast is deactivated
* Protobufs and adapters
* Added basic anon metric service and config init
* Added new processed field to app metrics table
* Added id column to app metrics table
* Added migration clean up
* Added appmetrics GetUnprocessed and SetToProcessedByIDs and tests
There was a weird bug where metrics in the db that did not explicitly insert a value would be NULL, so could not be found. In addition I've added a new primary id field to the app_metrics table so that updates could be done against very specific metric rows.
* Updated adaptors and db to handle proto_id
I need a way to distinguish individual metric items from each other so that I can ignore the ones that have been seen before.
* Added postgres DB connection, integrated into anonmetrics.Server
* Removed proto_id from SQL migration and model
* Integrated postgres with Server and updated adaptors
* Added sample config files for client and server
* Fix lint
* Fix for receiving an anonMetricBatch not in server mode
* Postgres test fixes
* Tidy up, make vendor and make generate
* Moved all anon metric postgres migration logic and sources into the protocol/anonmetrics package or sub-packages. I don't know if this will fix the does: cannot open `does' (No such file or directory)
not: cannot open `not' (No such file or directory)
exist: cannot open `exist' (No such file or directory) error that happens in Jenkins but this could work
This release does not contain a breaking change. The intention behind
incrementing the minor version is to make a clean break from broken
versions and make it easier to remember the name of the most recent
working release.
This specifically targets the following fix:
https://github.com/status-im/status-go/pull/2331
Signed-off-by: Jakub Sokołowski <jakub@status.im>
* Added community sync protobuf
* Updated community sync send logic
* Integrated syncCommunity handling
* Added synced_at field and tidied up some other logic
* persistence testing
* Added testing and join functionality
* Fixed issue with empty scan params
* Finished persistence tests for new db funcs
* Midway debug of description not persisting after sync
* Resolved final issues and tidied up
* Polish
* delint
* Fix error not handled on SetPrivateKey
* fix infinite loop, again
* Added muted option and test fix
* Added Muted to syncing functions, not just in persistence
* Fix bug introduced with Muted property
* Added a couple of notes for future devs
* Added most of the sync RequestToJoin functionality
Tests need to be completed and are currently giving some errors
* Finished tests for getJoinedAndPending
* Added note
* Resolving lint
* Fix of protobuf gen bug
* Fixes to community sync tests
* Fixes to test
* Continued fix of e2e
* Final fix to e2e testing
* Updated migration position
* resolve missing import
* Apparently the linter spellchecks
* Fix bug from #2276 merge
* Bug fix for leaving quirkiness
* Addressed superfluous MessengerResponse field
* Addressed feedback
* VERSION bump
This commit adds a setting for mailserver, that allows the client to
specify the syncing period.
Also fixes a minor issue with deletion of public chats.
This property is useful for clients to know when a channel or chat
was joined so they can use that to calculate the order of channels
and chats shown in applications.
Changes include a new joined property on the Chat struct,
as well as adjustments in the persistence layer to retrieve and
update chat data in the database.
In addition there's a migration script that alters the existing
chat table to introduce a new column for the joined field.
It also updates all existing rows in the database to set `joined`
to `0`.
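An illustrative version of the migration and struct change for the `joined` field; the exact column type and statements in status-go may differ:

```go
// Sketch only: adds the joined column described above.
package persistence

import "database/sql"

type Chat struct {
	ID     string
	Joined int64 // assumed: timestamp of when the chat/channel was joined, 0 if never
}

func addJoinedColumn(db *sql.DB) error {
	// New column on the existing chats table.
	if _, err := db.Exec(`ALTER TABLE chats ADD COLUMN joined INT NOT NULL DEFAULT 0`); err != nil {
		return err
	}
	// Mirrors the commit's explicit update of all existing rows to 0.
	_, err := db.Exec(`UPDATE chats SET joined = 0`)
	return err
}
```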
* Add extra event to capture other types of navigation, allow empty screen name, rename cofx to get rid of clj ns, update tests
* Some view ids are greater than 16 characters. Made it 32 to be safe.
* Tab navigation events occur outside nav, add a new validator for them
* Remove navigate to cofx event, capture screens on will focus, get rid of enum and make valid screens a string less than 32 characters
* Run make generate
* Fix test
* Bump version to 0.75.1
* Removed region field from on ramp struct
* Added basic placeholder for latamex
* Added latamex on ramp option
* fee rate change to Ramp
* Updated VERSION
* Bump to major version
Sometimes eth_getBlockByNumber returns txs with a chainId which is not
equal to the chain's id. That caused an error and tx fetching was
interrupted. From now on such txs will be skipped.
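A small sketch of skipping such txs instead of aborting the fetch, using go-ethereum types; the surrounding function is illustrative:

```go
// Sketch only: drop transactions whose chain id does not match.
package transfer

import (
	"math/big"

	"github.com/ethereum/go-ethereum/core/types"
)

func filterByChainID(chainID *big.Int, txs []*types.Transaction) []*types.Transaction {
	var valid []*types.Transaction
	for _, tx := range txs {
		if tx.ChainId().Cmp(chainID) != 0 {
			// eth_getBlockByNumber occasionally returns a tx with a foreign
			// chain id; skip it rather than failing the whole fetch.
			continue
		}
		valid = append(valid, tx)
	}
	return valid
}
```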
* Migrations in place, how to run them?
* Remove down migrations and touch database.go
* Database and Database Test package in place, added functions to get and store app metrics
* make generate output
* Minor bug fix on app metrics insert and select
* Add a validation layer to restrict what can be saved in the database
* Make validation more terse, throw error if schema doesn't exist, expose appmetrics service
* service updates
* Compute all errors before sending them out
* Trying to bring a closjure to appmetrics go
* Expose appmetrics via an api, skip fancy
* Address value as Jason Dawt Rawmasage to ease parsing
* Introduce a buffered chan with magic cap of 8 to minimize writes to DB. Tests for service and API. Also expose GetAppMetrics function.
* Lint issues
* Remove autoincrement, undo waku.json changes, fix error being shadowed, return nil where nil ought to be returned, get rid of buffered channel
* Bump migration number
* Fix API factory usage
* Add comment re:json.RawMessage instead of strings
* Get rid of test vars, throw save error inside the loop
* Update version
Co-authored-by: Samuel Hawksby-Robinson <samuel@samyoul.com>
This commit expands the confirmation mechanism to allow private group
chat messages to be confirmed:
Changes:
- Added a separate table for message confirmations, as group chat
  messages have the same messageID but multiple datasyncIDs (see the
  schema sketch below)
- Removed DataSyncID from raw message (I haven't removed the column
  as it can't be done in sqlite without copying over the table)
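An illustrative schema for the separate confirmations table, keyed by both datasync id and message id; table and column names are assumptions, not the real migration:

```go
// Sketch only: hypothetical confirmations table for group chat messages.
package migrations

import "database/sql"

func createRawMessageConfirmations(db *sql.DB) error {
	_, err := db.Exec(`
		CREATE TABLE IF NOT EXISTS raw_message_confirmations (
			datasync_id  BLOB NOT NULL,
			message_id   BLOB NOT NULL,
			confirmed_at INT  NOT NULL DEFAULT 0,
			PRIMARY KEY (datasync_id, message_id)
		)`)
	return err
}
```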
It looks like profile chats are created as public chats, and the code
will drop messages for them.
This commit fixes the issue by checking for the "@" prefix in profile
chat names, until we fix the issue by migrating those chats.
- old existing ranges are merged when the wallet service is started
- a new range is merged with an existing one if possible
This will decrease the number of entries in blocks_range table as
currently it can grow indefinitely (@flexsurfer reported 23307 entries).
This change is also needed for further optimisations of RPC usage.
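A simplified sketch of the range-merging idea; types are hypothetical, the real code works against the blocks_range table directly:

```go
// Sketch only: fold overlapping/adjacent block ranges together.
package wallet

import "math/big"

type BlocksRange struct {
	From *big.Int
	To   *big.Int
}

// mergeRanges assumes ranges are sorted by From and merges any range that
// overlaps or touches the previous one.
func mergeRanges(ranges []*BlocksRange) []*BlocksRange {
	var merged []*BlocksRange
	for _, r := range ranges {
		if len(merged) == 0 {
			merged = append(merged, r)
			continue
		}
		last := merged[len(merged)-1]
		// touching or overlapping: r.From <= last.To + 1
		next := new(big.Int).Add(last.To, big.NewInt(1))
		if r.From.Cmp(next) <= 0 {
			if r.To.Cmp(last.To) > 0 {
				last.To = r.To
			}
			continue
		}
		merged = append(merged, r)
	}
	return merged
}
```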
We were not locking before accessing the contacts map and it would panic
in some cases.
I have changed the code to pull contacts from db so we move away from
having locks.
There seems to be an issue with version 1.3: querying for topics on
postgres returns an error:
```
panic: pq: invalid byte sequence for encoding "UTF8"
```
Upgrading pq fixes the issue
¯\_(ツ)_/¯
There was a bug on status-react where it would save filters that were
not listened to.
This commit adds a task to clean up those filters as they might result
in long syncing times.
This commit also returns topics/ranges/mailservers from messenger in
order to make the initialization of the app simpler and start moving
logic to status-go.
It also removes whisper from vendor.
One of the issues we noticed is that the partitioned topic
in push notifications is heavy in traffic, as any user using a particular
mailserver will use that partitioned topic to register for PNs.
This commit moves from the partitioned topic to the personal topic of
the PN server, so it does not clash with other users that might happen
to have the same partitioned topic as the mailserver, resulting in long
sync times.
Another issue that will need to be addressed separately is that once you
send a message to a topic, because of the way waku/whisper works,
you will have to register to that topic, meaning that you will receive
that data. Currently waku does not support unsubscribing from a topic
without logging in and out, so that also needs to be addressed.
The topics field was not passed to the mailserver, which meant that
queries were still using the old bloom filter.
Hopefully this is the last place where we need to pass this.
This commit re-introduces a feature that we lost during the migration to
status-go.
Messages are cached for a couple of days if processed correctly by
status-go, to avoid performance issues.
* Initial work on expanding Local Notifications
Adding functionality to support multiple notification types in Notification.Body. Currently have a bug that I think is caused by the jsonMarshal func not working as intended; need to resolve this next before proceeding
* Fixed json.Marshaller issue and implemented json.Unmarshaller
* Tweak errors, go convention is errors don't begin with capital letters
* Added notificationMessageBody with un/marshalling
Also removed the Body interface
* Added check for bodyType mismatch
* Implement building and sending new message notifications
* Refactor to remove cycle imports
* Resolved linting issue ... Hopefully
* Resolving an implicit memory aliasing in a for loop
* version bump
* Added Notification.Category consts
In some instances the communities migration would be skipped but not
marked as `dirty`.
This commit addresses the issue by:
- Making sure that if dirty is set the migration is not skipped but
replayed
- If the version is on the communities migration and dirty is false, we
check for the presence of the communities table. If not present we
replay the communities migration.
- Make community_id field in user_messages nullable
It also removes all the `down` migrations, as we can't use them
effectively, as explained in the README.md added.
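A sketch of the replay decision described above, assuming a hypothetical version constant and an illustrative table name:

```go
// Sketch only: decide whether the communities migration must be replayed.
package migrations

import "database/sql"

const communitiesMigrationVersion = 19 // illustrative placeholder, not the real version

func communitiesTableExists(db *sql.DB) (bool, error) {
	var name string
	// Table name is illustrative.
	err := db.QueryRow(
		`SELECT name FROM sqlite_master WHERE type='table' AND name='communities_communities'`,
	).Scan(&name)
	if err == sql.ErrNoRows {
		return false, nil
	}
	return err == nil, err
}

func shouldReplayCommunitiesMigration(db *sql.DB, version uint, dirty bool) (bool, error) {
	if dirty {
		// A dirty migration must be replayed, never skipped.
		return true, nil
	}
	if version != communitiesMigrationVersion {
		return false, nil
	}
	// At the communities version with dirty == false: replay only if the
	// communities table is actually missing.
	exists, err := communitiesTableExists(db)
	if err != nil {
		return false, err
	}
	return !exists, nil
}
```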
* feat: allow getPreviewLinkData for all links
Only YouTube links were supported in `getPreviewLinkData` previously.
Now, any whitelisted link can have its data retrieved using this function, returning the true content type (by examining the header bytes of the content).
feat: add tenor.com and giphy.com to whitelisted urls list
* Fix json format
* bump VERSION
* Lint urls.go
Co-authored-by: Andrea Maria Piana <andrea.maria.piana@gmail.com>
There was an issue in using the `Wallet` flag when checking accounts to
watch for transactions.
`Wallet` indicates that it's the default wallet, not whether it is a
wallet account.
That can only be checked by looking at the type (and the `Wallet` flag).
If the type is `generated`, `key` or `seed` it should be watched for
transactions.
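A sketch of the corrected check, with an illustrative struct and the type values taken from the description above:

```go
// Sketch only: decide whether an account should be watched for transactions.
package accounts

type Account struct {
	Type   string // "generated", "key", "seed", "watch", ...
	Wallet bool   // true only for the default wallet account
}

func watchForTransactions(a Account) bool {
	switch a.Type {
	case "generated", "key", "seed":
		return true
	default:
		// The Wallet flag marks the default wallet account, which should
		// also be watched.
		return a.Wallet
	}
}
```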
This commit adds an endpoint to batch the sending of messages.
This is useful to simplify client logic when sending a batch of messages
and ensuring the correct order in the message stream.
It currently implements only what's needed, and naively returns an error
if any of the messages fails.
We were not actually passing the topics in the request, therefore we
were using the bloom filter for the query, which resulted in long syncing times
for some users.
- Wallet service is not started on foreground event on status-go side
  anymore; this leaves the client an opportunity to decide whether new
  blocks should be watched.
- `watchNewBlocks` parameter is added to `StartWallet`.
- Some requests are removed/moved to the place where they are necessary.
When sending messages in quick succession, it might be that multiple
messages are batched together in datasync, resulting in a single large
payload.
This commit changes the behavior so that we can pass a max-message-size
and we split the message in batches before sending.
A more elegant way would be to split at the transport layer (i.e.
waku/whisper), but that would be incompatible with older clients.
We can still do that eventually to support larger messages.
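A minimal sketch of splitting pending payloads into batches that stay under a max message size before handing them to the transport; illustrative only:

```go
// Sketch only: group payloads into batches under maxSize bytes each.
package datasync

func splitInBatches(payloads [][]byte, maxSize int) [][][]byte {
	var (
		batches [][][]byte
		current [][]byte
		size    int
	)
	for _, p := range payloads {
		// Start a new batch when adding this payload would exceed maxSize.
		if size+len(p) > maxSize && len(current) > 0 {
			batches = append(batches, current)
			current, size = nil, 0
		}
		current = append(current, p)
		size += len(p)
	}
	if len(current) > 0 {
		batches = append(batches, current)
	}
	return batches
}
```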
Marketing was relying on mailserver entries for checking the time a
given peer spent on the app.
This was not accurate, as the assumption was that a peer would "ping" a
mailserver every 15s, which is not the case; it only hits the mailserver
as it comes from an "offline" state.
This commit changes the log level of two entries so that we have
connect/disconnect time for a given peer, which should be enough to
calculate roughly the time a peer has been online if connected to our
fleet.
On logout it sometimes happens that `PeersCount` is called after the
server has been removed.
This commit adds a guard to make sure that the server is not nil when
calling `PeersCount`.
If one request failed, the whole batch would fail.
This caused issues, as one of the contracts is constantly returning an
error now, and essentially there was no way to fetch the balance.
Also extend the timeout to 20s, as we throw 165 requests at Infura in one
go and it takes its time to reply to those, although it seems like we
should batch them on our side instead of sending them all concurrently.
This commit does two things:
1) Add an index on seen & update only the not-seen messages in the query
2) Hide long messages in the database, as that's likely spam
For some reason when calling saveChat from desktop with `lastMessage`
set to null, a SIGSEGV is received.
The issue seems to be in logFormat
05280a7ae3/log/format.go (L356)
which for some reason blows up if passed a nil pointer (`lastMessage`).
Can't replicate on any other platform or running it locally, but hey,
this fixes the issue.
Previously, FullNode would only result in setting the bloom filter to full.
This behavior caused issues as you effectively cannot install a filter
on a FullNode, as it would advertise the new topic/bloom filter and stop
receiving all the messages.
This caused an issue when running push notifications servers together
with mailservers; also the behavior is a bit counter-intuitive, as I
would expect the FullNode config to be honored no matter what filters
are installed.
StartWallet was called before service initialization.
After the recent changes this call was moved after initialization, but
the geth system automatically starts services.
This meant that `IsStarted()` returned true, although the reactor was
not started, and only after calling `StopWallet()` and `StartWallet()`
again the system would reach the right state.
This commit changes the behavior so that we only check whether the
reactor has been started when calling `IsStarted()` and we allow
multiple calls to `Start()` on the signal service, which won't return an
error (it's a noop if called multiple times).
LastMessage in chat was encoded in bytes so that we don't have to
encode/decode it every time we save to db or pass it to the client.
An issue with emoji surfaced a problem with this approach.
Chat.LastClockValue represents the last clock value of any type of
message exchanged in a chat (emoji, group membership updates, contact
updates).
So when receiving a new message, we should update LastMessage if the
clock of the LastMessage is lower than the received message, and we
should not only check LastClockValue, otherwise the message might be
discarded although it is the most recent.
This commit fixes the issue by keeping LastMessage as an object and
comparing LastMessage.Clock instead of LastClockValue.
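A simplified sketch of the corrected comparison, with trimmed-down types:

```go
// Sketch only: replace LastMessage based on its own clock, not on
// Chat.LastClockValue (which also moves for emoji reactions, membership
// updates, etc.).
package protocol

type Message struct {
	Clock uint64
	Text  string
}

type Chat struct {
	LastClockValue uint64
	LastMessage    *Message
}

func (c *Chat) updateFromMessage(m *Message) {
	if m.Clock > c.LastClockValue {
		c.LastClockValue = m.Clock
	}
	// Compare against the last *message* clock, otherwise a recent message
	// could be discarded just because an emoji reaction bumped
	// LastClockValue past it.
	if c.LastMessage == nil || m.Clock > c.LastMessage.Clock {
		c.LastMessage = m
	}
}
```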
If a message was inserted before the migration, the field
audio_duration_ms would be set to NULL and would not be serialized into
Go correctly, as uint is non-nullable.
This commit fixes the issue by calling COALESCE on the value.
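An illustrative query showing the COALESCE fix; table and column names follow the commit text but may not match the real schema exactly:

```go
// Sketch only: NULL audio_duration_ms cannot be scanned into a uint64,
// so the query substitutes 0.
package persistence

import "database/sql"

func audioDurationMs(db *sql.DB, messageID string) (uint64, error) {
	var duration uint64
	err := db.QueryRow(
		`SELECT COALESCE(audio_duration_ms, 0) FROM user_messages WHERE id = ?`,
		messageID,
	).Scan(&duration)
	return duration, err
}
```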
* Replace mclock with time in peer pools
We used mclock because Go before 1.9 did not support monotonic clocks
(https://github.com/gavv/monotime); it does now (https://golang.org/pkg/time/),
so we can fall back on the system implementation, which will return
nanoseconds with a resolution that is system dependent.
* Handle case where peers have the same discovered time
In some systems the resolution of the clock is not high enough, so
multiple peers are added on the same nanosecond.
This results in the peer just added being immediately removed.
This code adds a check making sure we don't assume that a peer is added.
Another approach would be to make sure to include the peer in the list,
so prevent the peer just being added to be evicted, but it's slightly
more complicated and the resolution is generally accurate enough for our
purpose so that peers will be fresh enough if they have the same
discovered time.
It also adds a regression test; I had to use an interface to stub the
clock.
Fixes: https://github.com/status-im/nim-status-client/issues/522
* bump version to 0.55.3
The index for messages was fairly inefficient, as it was only using the
cursor and was referring to the old `chat_id` field.
This meant that newer messages would be fetched much faster than older
messages.
The index has been changed so that it now includes `local_chat_id`
(which is currently used for filtering) and no longer uses `hide`.
The reason is that `hide` is a low-cardinality column, so there's
no performance benefit to having it in the index; it's also mostly
ignored by the query planner.
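A rough sketch of the index change; index and column names are illustrative, not copied from the actual migration:

```go
// Sketch only: recreate the messages index with local_chat_id and without hide.
package migrations

import "database/sql"

func recreateMessagesIndex(db *sql.DB) error {
	if _, err := db.Exec(`DROP INDEX IF EXISTS idx_messages_cursor`); err != nil {
		return err
	}
	_, err := db.Exec(
		`CREATE INDEX idx_messages_cursor ON user_messages (local_chat_id, cursor)`,
	)
	return err
}
```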
This commit also adds the missing migrations; we generated the file, but
the source was missing, probably because I forgot to add them in a rebase. They
have been generated from the migration file, using `RestoreAsset`.
Why make the changes?
Mainly performance: those fields are almost always present in the
database, but they are re-calculated on load by the client as it does not
necessarily have access to them.
What has changed?
- Remove `_legacy` persistence namespaces, as they are a vestige of the
  initial move from status-react to status-go
- Pulling chats is now a join with contacts to add contact & alias
During the move to waku/1 we removed handling of a deprecated format.
aa7f591587 (diff-ea5e44cf3db4a12b3a2a246c7fa39602R290)
Turns out that no client is using the new format and the only live
client is using the deprecated format.
This commit re-introduces the functionality.
* Refactor tidy of waku package
* Added deprecation warning on whisper README.md
* Appeasing the lint gods and testing is good
* Place Whisper deprecation warning in the correct package README
:facepalm
* Implementing changes after team feedback
* More offerings to the lint gods
* Remove apparently redundant context params
* Correctly handle concurrent HandlePeer err
* Revert "Remove apparently redundant context params"
This reverts commit 557dbd0d64.
* Added note to waku/api.go about context
* renamed statusoptions and removed unused global
* Removed OnNewP2PEnvelopes() from WakuHost interface
* Matched v1 Peer with new interface sig
Also changed common/helper.go to common/helpers.go
* Formatting of waku tests and some additional error handling
* Changed version to 0.53.0
* Removed redundant type declaration
* Moved TopicToBloom function into a Topic{} method
* Moved GenerateSecureRandomData() into helpers.go
Why make the change?
As discussed previously, the way we will move across versions is to maintain completely separate
codebases and eventually remove those that are not supported anymore.
This has the drawback of some code duplication, but the advantage is that it is more
explicit what each version requires, and changes in one version will not
impact the other, so we won't pile up backward-compatible code.
This is the same strategy used by `whisper` in go ethereum and is influenced by
https://www.youtube.com/watch?v=oyLBGkS5ICk .
All the code that is used for the networking protocol is now under `v0/`.
Some of the common parts might still be refactored out.
The main namespace `waku` deals with `host`->`waku` interactions (through RPC),
while `v0` deals with `waku`->`remote-waku` interactions.
In order to support `v1`, the namespace `v0` will be copied over, and changed to
support `v1`. Once `v0` will be not used anymore, the whole namespace will be removed.
This PR does not actually implement `v1`; I'd rather get things looked over to
make sure the structure is what we would like before implementing the changes.
What has changed?
- Moved all code for the common parts under `waku/common/` namespace
- Moved code used for bloomfilters in `waku/common/bloomfilter.go`
- Removed all version specific code from `waku/common/const` (`ProtocolVersion`, status-codes etc)
- Added interfaces for `WakuHost` and `Peer` under `waku/common/protocol.go`
Things still to do
Some tests in `waku/` are still testing by stubbing components of a particular version (`v0`).
I started moving those tests to use the actual component instead of stubbing, which increases
the testing surface. Some other tests that can't be easily ported should likely be moved under
`v0` instead. Ideally no version-specific code should be exported from a version namespace (for
example the various codes, as those might change across versions), but this will be a work in progress.
Some code that will be common in `v0`/`v1` could still be extracted to avoid duplication, and duplicated only
when implementations diverge across versions.
If the user deletes/leaves a group chat, the chat is set as not active.
This means that if we are re-invited to the chat it won't be shown to
the user.
This commit changes this behavior so that if we are re-invited to the
chat it is set as active again.
When receiving a message we save the contact in the database in order to
avoid re-calculating the image/profile.
This contact is then passed to the client, which can negatively impact
performance.
This commit changes the behavior so that only those contacts that have
some custom fields (have been explicitly added by the user, have been
blocked by the user, have sent a contact request or have a verified ens
name) are passed to the client.
Currently replies to messages are handled in status-react.
This causes some issues with the fact that sometimes replies might come
out of order, they might be offloaded to the database etc.
This commit changes the behavior so that status-go always returns the
replies, and in case a reply comes out of order (first the reply, later
the message being replied to), it will include in the messages the
updated message.
It also adds some fields (RTL,Replace,LineCount) to the database which
were not previously saved, resulting in some potential bugs.
The method that we use to pull replies is currently a bit naive: we just
pull all the messages again from the database, but it has the advantage of
being simple. It will go through performance testing to make sure
performance is acceptable; if so, I think it's reasonable to avoid some
complexity.
- unused API methods are removed
- some unused code is removed too
- API docs are updated
That's just a portion of the clean up that should be done,
but the rest of it will probably happen in a different PR
with changes to the way we watch chain updates.
Currently ENS names are verified explicitly by status-react; this is not ideal,
as if that fails it has to be explicitly retried in status-react.
This commit changes that behavior so that ENS names are verified in a loop
and updated if new messages are received.
Storing absolute paths for different configs breaks compatibility on iOS,
as the app's dir is changed after upgrade. The solution is to store relative
paths and to concatenate them with `backend.rootDataDir`. The only
exception is `LogFile`, as it is stored outside `backend.rootDataDir` on
Android. The `LogDir` config was added to allow setting a custom dir for
the log file.
Configs concerned:
`DataDir`
`LogDir`
`LogFile`
`KeystoreDir`
`BackupDisabledDataDir`
- In order to avoid handling reorganized blocks, we use an offset
from the latest known block when we start listening to new blocks. Before
this commit the offset was 15 blocks for all networks. This offset is
too big for mainnet and causes a noticeable delay in marking a transfer as
confirmed in Status (compared to etherscan). So it was changed to be 5
blocks on mainnet and is still 15 blocks on other networks (see the
sketch after this list).
- Also, before this commit all new blocks were handled one by one with a
network-specific interval (10s for mainnet), which means that in case of
a lost internet connection or application suspension (happens on iOS)
receiving of new blocks would be paused and then resumed at the same
"speed" of 1 block per 10s. If that pause is big enough, the
application would never catch up with the latest block in the network,
and this also causes the state of transfers to be delayed in the
application. With this commit, if there was more than a 40s delay
after receiving the previous block, the whole history in the range between
the previously received block and the ("latest" - reorgSafetyDepth) block is
checked at once and the app catches up with the recent state of the chain.
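A small sketch of the network-dependent offset from the first point; the constants mirror the description above, the rest is illustrative:

```go
// Sketch only: choose the reorg safety offset per network.
package wallet

const (
	mainnetChainID          uint64 = 1
	reorgSafetyDepthMainnet uint64 = 5  // blocks
	reorgSafetyDepthOther   uint64 = 15 // blocks
)

func reorgSafetyDepth(chainID uint64) uint64 {
	if chainID == mainnetChainID {
		return reorgSafetyDepthMainnet
	}
	return reorgSafetyDepthOther
}
```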
This commit pegs the clock value to a maximum of +120 seconds from the
whisper timestamp.
In this way we avoid the scenario where a client makes the timestamp
increase arbitrarily.
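A sketch of the clock pegging, assuming clock values are milliseconds derived from the whisper timestamp (an assumption, not confirmed by the text); the constant and function are illustrative:

```go
// Sketch only: clamp a received clock to whisper timestamp + 120s.
package protocol

const maxClockDriftSeconds uint64 = 120

// cappedClock converts the whisper/waku envelope timestamp (seconds) to a
// millisecond ceiling and clamps the received clock value to it.
func cappedClock(receivedClock, whisperTimestamp uint64) uint64 {
	maxClock := (whisperTimestamp + maxClockDriftSeconds) * 1000
	if receivedClock > maxClock {
		return maxClock
	}
	return receivedClock
}
```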