This is needed so that when they are bundled into archives, receiving
nodes can still verify each message's payload using its signature.
This commit introduces a new `waku_messages` table and APIs to store
such messages. The Waku message payload is stored for any message whose
topic matches any of the admin communities' chats.
Closes #2566
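A minimal sketch of the storage API described above; the table columns and the function name are assumptions for illustration, not the actual status-go code:

```go
package persistence

import "database/sql"

// ReceivedWakuMessage holds the fields needed to re-verify a payload once
// it is bundled into a history archive (type and column names assumed).
type ReceivedWakuMessage struct {
	Topic     string
	Payload   []byte
	Sig       []byte
	Timestamp uint64
}

// SaveWakuMessage stores a message whose topic matches one of the admin
// communities' chats so its signature can be checked later.
func SaveWakuMessage(db *sql.DB, m ReceivedWakuMessage) error {
	_, err := db.Exec(
		`INSERT INTO waku_messages (topic, payload, sig, timestamp) VALUES (?, ?, ?, ?)`,
		m.Topic, m.Payload, m.Sig, m.Timestamp,
	)
	return err
}
```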
* Sync Settings
* Added valueHandlers and Database singleton
Some issues remain: we need a way to compare the incoming sql.DB to check whether the connection is to a different file or not. Maybe make a singleton instance per filename
* Added functionality to check the sqlite filename
* Refactor of Database.SaveSyncSettings to be used as a handler
* Implemented interface for setting sync protobuf factories
* Refactored and completed adhoc send setting sync
* Tidying up
* Immutability refactor
* Refactor settings into dedicated package
* Breakout structs
* Tidy up
* Refactor of bulk settings sync
* Bug fixes
* Addressing feedback
* Fix code dropped during rebase
* Fix for db closed
* Fix for node config related crashes
* Provisional fix for type assertion - issue 2
* Adding robust type assertion checks
* Partial fix for null literal db storage and json encoding
* Fix for passively handling nil sql.DB, and checking if elem has len and if len is 0
* Added test for preferred name behaviour
* Adding saved sync settings to MessengerResponse
* Completed granular initial sync and clock from network on save
* add Settings to isEmpty
* Refactor of protobufs, partially done
* Added syncSetting receiver handling, some bug fixes
* Fix for sticker packs
* Implement inactive flag on sync protobuf factory
* Refactor of types and structs
* Added SettingField.CanSync functionality (see the sketch after this list)
* Addressing rebase artifact
* Refactor of Setting SELECT queries
* Refactor of string return queries
* VERSION bump and migration index bump
* Deactivate Sync Settings
* Deactivated preferred_name and send_status_updates
Co-authored-by: Andrea Maria Piana <andrea.maria.piana@gmail.com>
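A rough sketch of the SettingField.CanSync idea mentioned in the Sync Settings list above, together with the inactive flag on the sync protobuf factory; the type and field names are assumptions, not the exact definitions in the settings package:

```go
package settings

// SyncProtobufFactory builds the protobuf used to sync one setting; the
// Inactive flag deactivates syncing for that setting (sketch only).
type SyncProtobufFactory struct {
	Inactive bool
}

// SettingField ties a setting to its database column and sync factory
// (field names here are assumptions).
type SettingField struct {
	reactFieldName      string
	dBColumnName        string
	syncProtobufFactory *SyncProtobufFactory
}

// CanSync reports whether the field participates in Sync Settings.
func (s SettingField) CanSync() bool {
	return s.syncProtobufFactory != nil && !s.syncProtobufFactory.Inactive
}
```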
These are used to store settings for individual communities a
user is part of, either as a member or as an owner.
This includes whether or not the community history archive protocol
is enabled.
This adds a new `CommunitySettings` type and adds
a migration script that introduces a new `communities_settings`
table.
It also extends the `MessengerResponse` type to include
`CommunitySettings` which are honored when communities are being
added, edited, joined or left.
Lastly, this adds a new RPC API to retrieve the settings.
Closes #2564
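A sketch of the `CommunitySettings` type and the `communities_settings` migration described above; field and column names are assumptions:

```go
package communities

// CommunitySettings mirrors one row of per-community preferences for the
// logged-in user (field names assumed, not the exact status-go struct).
type CommunitySettings struct {
	CommunityID                  string `json:"communityId"`
	HistoryArchiveSupportEnabled bool   `json:"historyArchiveSupportEnabled"`
}

// communitiesSettingsTable sketches the migration mentioned above.
const communitiesSettingsTable = `
CREATE TABLE IF NOT EXISTS communities_settings (
  community_id                    TEXT PRIMARY KEY,
  history_archive_support_enabled BOOLEAN DEFAULT FALSE
);
`
```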
This commit introduces a new `TorrentConfig` as per #2563 with dedicated
default values and new options/flags for the status-go CLI.
Since it's part of `NodeConfig`, which is stored per user in the
database, this commit also adds a migration script that adds a new
`torrent_config` table and database APIs to insert and retrieve the
torrent config.
Closes #2563
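A sketch of what the new `TorrentConfig` section of `NodeConfig` might contain; the fields and defaults shown are assumptions based on the description above, not the exact definition:

```go
package params

// TorrentConfig is a sketch of the new NodeConfig section for the history
// archive torrent support (field names are assumptions).
type TorrentConfig struct {
	Enabled    bool   // whether the torrent client is started at all
	Port       int    // listening port for the torrent client
	DataDir    string // where downloaded archive data is kept
	TorrentDir string // where generated .torrent files are kept
}
```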
This commit enables mailserver cycle logic by default and makes a few
changes:
1) Nodes are graylisted instead of being blacklisted for a set amount of
time. The reason is that if we blacklist, any cut in connectivity
might result in long delays before reconnecting, especially on spotty
connections.
2) Fixes an issue on the devp2p server, whereby the node would not
   connect to one of the static nodes since all the connection slots
   were filled. The fix is a bit inelegant: it always connects to
   static nodes, ignoring maxpeers, but it's tricky to get it to work
   since the code is clearly not written to select a specific node.
3) Adds support for pinned mailservers
4) Adds retries to mailserver requests. It uses a closure for now; I
   think we should eventually have a channel etc., but I'd leave that for
   later.
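A minimal sketch of the retry-via-closure idea in (4); the function name, attempt count, and wait time are assumptions, not the actual status-go API:

```go
package mailservers

import (
	"fmt"
	"time"
)

// withRetries wraps a mailserver request in a closure and retries it a
// few times before giving up.
func withRetries(attempts int, wait time.Duration, request func() error) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = request(); err == nil {
			return nil
		}
		time.Sleep(wait)
	}
	return fmt.Errorf("mailserver request failed after %d attempts: %w", attempts, err)
}
```

The caller captures whatever request it needs inside the closure, e.g. `withRetries(3, time.Second, func() error { return requestHistoricMessages(...) })`, with the specific request being an assumption here.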
We periodically check whether we should back up contacts through waku.
Currently it's not tied to any event; it will only schedule a backup on
a tick, or through the provided API endpoint.
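A minimal sketch of such a tick-driven backup loop, assuming a hypothetical interval and callback (not the actual status-go API):

```go
package protocol

import (
	"context"
	"time"
)

// startContactsBackupLoop schedules a contacts backup over Waku on every
// tick until the context is cancelled; interval and callback are assumed.
func startContactsBackupLoop(ctx context.Context, interval time.Duration, backup func() error) {
	go func() {
		ticker := time.NewTicker(interval)
		defer ticker.Stop()
		for {
			select {
			case <-ticker.C:
				_ = backup() // errors are ignored here; a real loop would log them
			case <-ctx.Done():
				return
			}
		}
	}()
}
```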
* feat: enable wallet without network binding
* feat: make transfer network aware
* feat: allow to pass initial networks via config
* fix: nil check and feed
* feat: Add documentation with better function name
* fix: do not init the manager more than once
* fix: PR feedbacks
* Bump version
* Update Jenkinsfile.tests
* Convert int to string
Co-authored-by: RichΛrd <info@richardramos.me>
* Protobufs and adapters
* Added basic anon metric service and config init
* Added fibonacci interval incrementer (see the sketch after this list)
* Added basic Client.Start func and integrated interval incrementer
* Added new processed field to app metrics table
* Added id column to app metrics table
* Added migration clean up
* Added appmetrics GetUnprocessed and SetToProcessedByIDs and tests
There was a weird bug where metrics in the db that did not explicitly insert a value would be NULL, so they could not be found when querying. In addition I've added a new primary id field to the app_metrics table so that updates can be made against very specific metric rows.
* Updated adaptors and db to handle proto_id
I need a way to distinguish individual metric items from each other so that I can ignore the ones that have been seen before.
* Moved incrementer into dedicated file
* Resolve incrementer test fail
* Finalised the main loop functionality
* Implemented delete loop framework
* Updated adaptors file name
* Added delete loop delay and quit, and tweak on RawMessage gen
* Completed delete loop logic
* Added DBLock to prevent deletion during mainLoop
* Added postgres DB connection, integrated into anonmetrics.Server
* Removed proto_id from SQL migration and model
* Integrated postgres with Server and updated adaptors
* Function name update
* Added sample config files for client and server
* Fixes and testing for low level e2e
* make generate
* Fix lint
* Fix for receiving an anonMetricBatch not in server mode
* Postgres test fixes
* Tidy up, make vendor and make generate
* delinting
* Fixing database tests
* Attempted fix of does: cannot open `does' (No such file or directory)
not: cannot open `not' (No such file or directory)
exist: cannot open `exist' (No such file or directory) error on sql resource load
* Moved all anon metric postgres migration logic and sources into the protocol/anonmetrics package or sub-packages. I don't know if this will fix the does: cannot open `does' (No such file or directory)
not: cannot open `not' (No such file or directory)
exist: cannot open `exist' (No such file or directory) error that happens in Jenkins but this could work
* Lint for the lint god
* Why doesn't the linter list all its problems at once?
* test tweaks
* Fix for wakuV2 change
* DB reset change
* Fix for postgres db migration failures
* More robust implementation of postgres test setup and teardown
* Added block for anon metrics functionality
* Version Bump to 0.84.0
* Added test to check anon metrics broadcast is deactivated
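The fibonacci interval incrementer mentioned above could look roughly like this; the initial values, the unit (seconds), and the type name are assumptions:

```go
package anonmetrics

// FibonacciIntervalIncrementer yields a wait (in seconds) that grows along
// the Fibonacci sequence between metric batches.
type FibonacciIntervalIncrementer struct {
	Last    int64
	Current int64
}

// Next advances the sequence and returns the new interval in seconds.
func (f *FibonacciIntervalIncrementer) Next() int64 {
	next := f.Last + f.Current
	f.Last = f.Current
	f.Current = next
	return next
}
```

Starting from `Last = 0`, `Current = 1`, successive calls yield 1, 2, 3, 5, 8, ... so the send interval backs off gradually.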
* Added community sync protobuf
* Updated community sync send logic
* Integrated syncCommunity handling
* Added synced_at field and tidied up some other logic
* persistence testing
* Added testing and join functionality
* Fixed issue with empty scan params
* Finished persistence tests for new db funcs
* Midway debug of description not persisting after sync
* Resolved final issues and tidied up
* Polish
* delint
* Fix error not handled on SetPrivateKey
* fix infinite loop, again
* Added muted option and test fix
* Added Muted to syncing functions, not just in persistence
* Fix bug introduced with Muted property
* Added a couple of notes for future devs
* Added most of the sync RequestToJoin functionality
Tests need to be completed and are giving some errors
* Finished tests for getJoinedAndPending
* Added note
* Resolving lint
* Fix of protobuf gen bug
* Fixes to community sync tests
* Fixes to test
* Continued fix of e2e
* Final fix to e2e testing
* Updated migration position
* resolve missing import
* Apparently the linter spellchecks
* Fix bug from #2276 merge
* Bug fix for leaving quirkiness
* Addressed superfluous MessengerResponse field
* Addressed feedback
* VERSION bump
This commit adds a setting for the mailserver that allows the client to
specify the syncing period.
It also fixes a minor issue with the deletion of public chats.
This property is useful for clients to know when a channel or chat
was joined so they can use that to calculate the order of channels
and chats shown in applications.
Changes include a new joined property on the Chat struct,
as well as adjustments in the persistence layer to retrieve and
update chat data in the database.
In addition there's a migration script that alters the existing
chat table to introduce a new column for the joined field.
It also updates all existing rows in the database to set `joined`
to `0`.
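A sketch of the migration described above, assuming the chat table is named `chats`; the actual SQL and file naming may differ:

```go
package migrations

// addChatJoinedColumn adds the new `joined` column and defaults every
// existing row to 0, as described above (sketch only).
const addChatJoinedColumn = `
ALTER TABLE chats ADD COLUMN joined INT DEFAULT 0;
UPDATE chats SET joined = 0;
`
```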
* Added anon metrics send opt-in setting
* resolved rebase conflict, renamed migration to use unixtimestamp
There are always conflicts with migrations using sequential numbers, less so with unix timestamps
* Add extra event to capture other types of navigation, allow empty screen name, rename cofx to get rid of clj ns, update tests
* Some view ids are longer than 16 characters. Made it 32 to be safe.
* Tab navigation events occur outside nav, add a new validator for them
* Remove navigate to cofx event, capture screens on will focus, get rid of enum and make valid screens a string less than 32 characters
* Run make generate
* Fix test
* Bump version to 0.75.1
* Migrations in place, how to run them?
* Remove down migrations and touch database.go
* Database and Database Test package in place, added functions to get and store app metrics
* make generate output
* Minor bug fix on app metrics insert and select
* Add a validation layer to restrict what can be saved in the database (see the sketch after this list)
* Make validation more terse, throw error if schema doesn't exist, expose appmetrics service
* service updates
* Compute all errors before sending them out
* Trying to bring a closjure to appmetrics go
* Expose appmetrics via an api, skip fancy
* Treat the value as json.RawMessage to ease parsing
* Introduce a buffered chan with magic cap of 8 to minimize writes to DB. Tests for service and API. Also expose GetAppMetrics function.
* Lint issues
* Remove autoincrement, undo waku.json changes, fix error being shadowed, return nil where nil ought to be returned, get rid of buffered channel
* Bump migration number
* Fix API factory usage
* Add comment re:json.RawMessage instead of strings
* Get rid of test vars, throw save error inside the loop
* Update version
Co-authored-by: Samuel Hawksby-Robinson <samuel@samyoul.com>
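A sketch of the validation layer mentioned in the list above: only events with a registered schema can be written, and unknown events produce an error. Event names and checks here are illustrative only:

```go
package appmetrics

import (
	"encoding/json"
	"errors"
)

// validators maps each known metric event to a check over its raw JSON
// value; the entries here are illustrative, not the real schemas.
var validators = map[string]func(json.RawMessage) error{
	"navigate-to": func(v json.RawMessage) error {
		var screen string
		return json.Unmarshal(v, &screen)
	},
}

// validate rejects metrics whose event has no registered schema or whose
// value does not satisfy it, so only known shapes reach the database.
func validate(event string, value json.RawMessage) error {
	check, ok := validators[event]
	if !ok {
		return errors.New("no schema registered for event: " + event)
	}
	return check(value)
}
```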
This commit expands the confirmation mechanism to allow private group
chat messages to be confirmed:
Changes:
- Added a separate table for message confirmations, as group chat
messages have the same messageID but multiple datasyncIDs
- Removed DataSyncID from the raw message (I haven't removed the column
as it can't be done in sqlite without copying over the table)
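A sketch of the separate confirmations table described above; column names and types are assumptions:

```go
package migrations

// A group chat message keeps a single messageID but is sent under several
// datasync IDs, so confirmations are tracked per (datasync_id, message_id)
// pair (sketch only).
const rawMessageConfirmations = `
CREATE TABLE IF NOT EXISTS raw_message_confirmations (
  datasync_id  BLOB NOT NULL,
  message_id   BLOB NOT NULL,
  confirmed_at INTEGER DEFAULT 0,
  PRIMARY KEY (datasync_id, message_id)
);
`
```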
There was a bug on status-react where it would save filters that were
not listened to.
This commit adds a task to clean up those filters as they might result
in long syncing times.
This commit also returns topics/ranges/mailservers from messenger in
order to make the initialization of the app simpler and start moving
logic to status-go.
It also removes whisper from vendor.
One of the issues we noticed is that the partitioned topic used for
push notifications is heavy in traffic, as any user using a particular
mailserver will use that partitioned topic to register for PNs.
This commit moves from the partitioned topic to the personal topic of
the PN server, so it does not clash with other users that might happen
to have the same partitioned topic as the mailserver, resulting in long
sync times.
Another issue that will need to be addressed separately is that once you
send a message to a topic, because of the way waku/whisper works,
you will have to register to that topic, meaning that you will receive
that data. Currently waku does not support unsubscribing from a topic
without logging in and out, so that also needs to be addressed.
This commit fixes a couple of issues:
1) Emojis were sent to any member of the group chat, regardless of
whether they joined
2) We don't want to wrap emojis, as there's no need to do so, only
messages are to be wrapped
This commit re-introduces a feature that we lost during the migration to
status-go.
Messages are cached for a couple of days if processed correctly by
status-go, to avoid performance issues.
In some instances the communities migration would be skipped but not
marked as `dirty`.
This commit addresses the issue by:
- Making sure that if dirty is set the migration is not skipped but
replayed
- If the version is on the communities migration and dirty is false, we
check for the presence of the communities table. If not present we
replay the communities migration.
- Making the community_id field in user_messages nullable
It also removes all the `down` migrations, as we can't use them
effectively, as explained in the added README.md.
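A sketch of the replay check described above, assuming a hypothetical helper name and that the communities table is called `communities_communities`:

```go
package appdatabase

import "database/sql"

// needsCommunitiesReplay sketches the check described above: replay when
// dirty is set, or when the recorded version is the communities migration,
// dirty is false, and the communities table is missing.
func needsCommunitiesReplay(db *sql.DB, version uint, dirty bool, communitiesVersion uint) (bool, error) {
	if dirty {
		return true, nil
	}
	if version != communitiesVersion {
		return false, nil
	}
	var name string
	err := db.QueryRow(
		`SELECT name FROM sqlite_master WHERE type='table' AND name='communities_communities'`,
	).Scan(&name)
	if err == sql.ErrNoRows {
		return true, nil // table missing: replay the communities migration
	}
	return false, err
}
```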
Currently replies to messages are handled in status-react.
This causes some issues, since replies might sometimes come out of
order or be offloaded to the database, etc.
This commit changes the behavior so that status-go always returns the
replies, and in case a reply comes out of order (first the reply, later
the message being replied to), it will include the updated message in
the returned messages.
It also adds some fields (RTL, Replace, LineCount) to the database which
were not previously saved, resulting in some potential bugs.
The method that we use to pull replies is currently a bit naive: we just
pull all the messages again from the database, but it has the advantage
of being simple. It will go through performance testing to make sure
performance is acceptable; if so, I think it's reasonable to avoid some
complexity.
* Add status-option code
This commit changes the behavior of waku, introducing a new status-code,
`2`, that replaces the current single options codes.
* linting
*** How it worked before this PR on multiaccount creation:
- On multiacc creation we scanned the chain for eth and erc20 transfers. For
each address of a new empty multiaccount this scan required
1. two `eth_getBalance` requests to find out that there is no
balance change between zero and the last block, for eth transfers
2. and `chain-size/100000` (currently ~100) `eth_getLogs` requests,
for erc20 transfers
- For some reason we scanned an address of the chat account as well, and
also accounts were not deduplicated. So even for an empty multiacc we
scanned the chain twice for each of the chat and main wallet addresses;
as a result the app had to execute about 400 requests.
- As mentioned above, `eth_getBalance` requests were used to check if
there were any eth transfers, and that caused an empty history in cases
where the user had already used all available eth (so that both the zero
and latest blocks show 0 eth for an address). There might have been
transactions but we wouldn't fetch/show them.
- There was no upper limit for the number of rpc requests during the
scan, so it could require an indefinite number of requests; the scanning
algorithm was written so that we persisted the whole history of
transactions or tried to scan from the beginning again in case of
failure, giving up only after 10 minutes of failures. As a result,
addresses with a sufficient number of transactions would never be fully
scanned, and during these 10 minutes the app could use gigabytes of
internet data.
- Failures were caused by `eth_getBlockByNumber`/`eth_getBlockByHash`
requests. These requests return significantly bigger responses than
`eth_getBalance`/`eth_transactionsCount`, and it is likely that
executing thousands of them in parallel caused failures for
accounts with hundreds of transactions. Even for an account with 12k
we could successfully determine blocks with transactions in a few
minutes using `eth_getBalance` requests, but `eth_getBlock...`
couldn't be processed for this account.
- There was no caching for `eth_getBalance` requests, and this
caused on average 3-4 times more such requests than needed.
*** How it works now on multiaccount creation:
- On multiacc creation we scan the chain for the last ~30 eth transactions
and then check erc20 in the range where these eth transactions were found.
For an empty address in a multiacc this means:
1. two `eth_getBalance` requests to determine that there was no
balance change between zero and the last block; two
`eth_transactionsCount` requests to determine there are no outgoing
transactions for this address; 4 requests in total for eth transfers
2. 20 `eth_getLogs` for erc20 transfers. This number can be lowered,
but that's not a big deal
- Deduplication of addresses is added and we also don't scan the chat
account, so a new multiacc requires ~25 requests (we also request the
latest block number and probably execute a few other calls) to determine
that the multiacc is empty (compared to ~400 before)
- In case an address contains transactions we:
1. determine the range which contains 20-25 outgoing eth/erc20
transactions. This usually requires up to 10 `eth_transactionCount`
requests
2. then scan the chain for eth transfers using `eth_getBalance` and
`eth_transactionCount` (for double-checking zero balances; a rough
sketch of this search follows these notes)
3. make sure that we do not scan more than 30 blocks with
transfers. That's important for accounts with mostly incoming
transactions, because the range found in the first step might
contain any number of incoming transfers, but only 20-25 outgoing
transactions
4. when ~30 blocks are found in a given range, update the initial
range's `from` block using the oldest found block
5. then scan the chain for erc20 transfers using `eth_getLogs` over
`oldest-found-eth-block`-`latest-block`, making no more than 20 calls
6. when all blocks which contain incoming/outgoing transfers for a
given address are found, save these blocks to the db and mark that
transfers from these blocks still have to be fetched
7. then select the latest ~30 blocks (the number can be adjusted) from
those found and fetch their transfers; this requires 3-4
requests per transfer
8. persist the scanned range so that we know where to start next time
9. dispatch an event which tells the client that transactions are found
10. the client fetches the latest 20 transfers
- when the user presses the "fetch more" button we check if the app's db
contains the next 20 transfers; if not, we scan the chain again and
return the transfers afterwards
small fixes
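A rough sketch of the balance-based block search referenced in step 2 of the scanning notes above. It assumes exactly one balance change in the range and shows only the `eth_getBalance` half; the real scanner also double-checks with the transaction count, and the function name is an assumption:

```go
package wallet

import (
	"context"
	"math/big"

	"github.com/ethereum/go-ethereum/common"
	"github.com/ethereum/go-ethereum/ethclient"
)

// findBalanceChange narrows a block range known to contain exactly one
// balance change down to the single block where it happened, using only
// eth_getBalance calls (a sketch; nonce checks are omitted).
func findBalanceChange(ctx context.Context, c *ethclient.Client, account common.Address, low, high *big.Int) (*big.Int, error) {
	for new(big.Int).Sub(high, low).Cmp(big.NewInt(1)) > 0 {
		mid := new(big.Int).Div(new(big.Int).Add(low, high), big.NewInt(2))
		lowBalance, err := c.BalanceAt(ctx, account, low)
		if err != nil {
			return nil, err
		}
		midBalance, err := c.BalanceAt(ctx, account, mid)
		if err != nil {
			return nil, err
		}
		if lowBalance.Cmp(midBalance) != 0 {
			high = mid // the change happened in (low, mid]
		} else {
			low = mid // no change yet, so it must be in (mid, high]
		}
	}
	return high, nil
}
```

Caching the `eth_getBalance` results would avoid the repeated-probe cost called out in the "no caching" point above.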
Move settings table schema from a key-value store to a one-row table with many columns.
We now save the first row with initial data in saveAccountAndLogin, and follow-up saveSetting calls only save one setting at a time.
Co-authored-by: Adam Babik <a.babik@designfortress.com>
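A sketch of how a single-setting update against the one-row table could look; the column whitelist and function name are assumptions:

```go
package accounts

import (
	"database/sql"
	"fmt"
)

// allowed maps a setting name to its column in the one-row settings table;
// restricting updates to known columns keeps the dynamic SQL safe (sketch).
var allowed = map[string]string{
	"currency":       "currency",
	"preferred-name": "preferred_name",
}

// SaveSetting updates a single column of the single settings row.
func SaveSetting(db *sql.DB, setting string, value interface{}) error {
	column, ok := allowed[setting]
	if !ok {
		return fmt.Errorf("unknown setting: %s", setting)
	}
	_, err := db.Exec(fmt.Sprintf("UPDATE settings SET %s = ?", column), value)
	return err
}
```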
* Use a single Message type: `v1/message.go` and `message.go` are the same now, and they embed `protobuf.ChatMessage`
* Use `SendChatMessage` for sending chat messages; this is basically the old `Send` but a bit more flexible, so we can send different message types (stickers, commands), not just text.
* Remove dedup from services/shhext. Because we now process in status-protocol, dedup makes less sense, as those messages are going to be processed anyway, so removing for now; we can re-evaluate whether to bring it back to status-go or not.
* Change the various retrieveX methods to a single one:
`RetrieveAll` will process those messages that it can process (currently only `Message`), and return the rest in `RawMessages` (still in transit). The format for the response is:
`Chats`: -> The chats updated by receiving the message
`Messages`: -> The messages retrieved (already matched to a chat)
`Contacts`: -> The contacts updated by the messages
`RawMessages` -> Anything else that can't be parsed; eventually, as we move everything to status-protocol-go, this will go away.
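A sketch of the response shape described above; the type and field names are stand-ins, not the actual status-go definitions:

```go
package protocol

// retrieveAllResponse sketches the shape of the RetrieveAll result.
type retrieveAllResponse struct {
	Chats       []Chat    // chats updated by receiving the messages
	Messages    []Message // parsed messages, already matched to a chat
	Contacts    []Contact // contacts updated by the messages
	RawMessages [][]byte  // anything that cannot be parsed yet, still in transit
}

// Minimal stand-ins so the sketch is self-contained.
type (
	Chat    struct{ ID string }
	Message struct{ ID, ChatID, Text string }
	Contact struct{ ID string }
)
```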
Wallet database refactored so that every query ensures isolation by the network id.
The network id is provided when the database object is created, so it is transparent to other
parts of the wallet module.
Additionally, every uniqueness index is changed to ensure that it doesn't prevent adding an
object with the same id but from a different network.
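A sketch of how that network isolation can be made transparent: the network id is captured when the database object is constructed and injected into every query, and uniqueness indexes include it. Table and column names are assumptions:

```go
package wallet

import "database/sql"

// Database wraps the wallet DB together with the network it serves, so
// every query is scoped to that network without callers passing it in.
type Database struct {
	db      *sql.DB
	network uint64
}

// GetTransfers returns only transfers stored for this database's network.
func (d *Database) GetTransfers(address string) (*sql.Rows, error) {
	return d.db.Query(
		`SELECT hash, blk_number FROM transfers WHERE network_id = ? AND address = ?`,
		d.network, address,
	)
}

// Uniqueness is per network: the index includes network_id so the same
// object id can exist for different networks (index name assumed).
const transfersUniqueIndex = `
CREATE UNIQUE INDEX IF NOT EXISTS idx_transfers_per_network ON transfers (network_id, hash);
`
```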