There were a couple of issues with how we handle pinned messages:
1) The message clock was only checked when saving, meaning the client
could potentially receive updates that were not meant to be
processed.
2) We relied on the client to generate a notification for a pinned
message by sending a normal message over the wire. This PR changes
the behavior so that the notification is generated locally, either in
response to a network event or a client event.
3) When deleting a message, we pull all the replies/pinned notifications
and send them over to the client so it knows those messages need
updating.
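For (1), the guard amounts to a clock comparison at handling time, along
these lines (a sketch with illustrative names, not the actual API):

```go
// Sketch: drop a pinned-message update whose clock is not newer than the
// stored one, so stale updates are rejected at handling time, not save time.
// Names here are illustrative, not the actual status-go API.
package main

import "fmt"

type pinMessage struct {
	MessageID string
	Pinned    bool
	Clock     uint64
}

// shouldApplyPin returns true only when the incoming update is newer
// than what we already persisted for this message.
func shouldApplyPin(storedClock uint64, incoming pinMessage) bool {
	return incoming.Clock > storedClock
}

func main() {
	stored := uint64(10)
	update := pinMessage{MessageID: "0xabc", Pinned: true, Clock: 9}
	fmt.Println(shouldApplyPin(stored, update)) // false: stale update, not processed
}
```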
- `keypair_name` added to the `accounts` table; all accounts derived from the
same master key share the same keypair name, and no two keypairs share
a name (the keypair name is unique per keypair)
- `last_used_derivation_index` added to the `accounts` table, because we need
to track the highest derivation index used within the same keypair
The `type` column is set to the appropriate value for all rows. Before this
change, accounts generated from a keypair created by importing a seed phrase
had `generated` as their `type`.
Accordingly, the function for generating an account now sets the `type`
based on the passed derive-from address.
prefixes. Changed primary keys and API methods.
Fixed tests and added new ones.
Fixed saved addresses and transaction tests to use an ':memory:' sqlite
DB instead of a tmp file, speeding up testing by hundreds of times.
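For reference, a minimal sketch of the in-memory approach (assuming the
common `mattn/go-sqlite3` driver; the repo's actual test helpers differ):

```go
// Sketch: point the tests at an in-memory SQLite DB instead of a temp file.
// The DSN is the only change; skipping filesystem I/O is what makes it fast.
package main

import (
	"database/sql"
	"log"

	_ "github.com/mattn/go-sqlite3" // driver registration (assumed driver)
)

func main() {
	db, err := sql.Open("sqlite3", ":memory:")
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	if _, err := db.Exec(`CREATE TABLE saved_addresses (address TEXT PRIMARY KEY)`); err != nil {
		log.Fatal(err)
	}
}
```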
Fixes #8599
`last_update_clock` will be used later for synchronization.
All keypair functions take the clock value into consideration when
deciding whether to perform an action or not.
Adds a new column named `deleted` to the table `activity_center_notifications`.
Related PR in Mobile https://github.com/status-im/status-mobile/pull/15106 for a lot more details of the feature.
Why? Relying on the `dismissed` column for soft deletion is no longer viable because the mobile & desktop clients should (sometimes) display dismissed notifications, hence the need for a new column that truly represents soft deletion.
A migration was added out-of-order, which meant that in clients that
had already run a later migration, it would be skipped.
This commit re-adds the migration so that it runs; this was tested against
an empty account and one that had already migrated.
For easier future maintenance, the following was done:
- keypairs table removed
- keycard table added, storing only keycards/keypairs
- keycard_accounts table added, storing only accounts migrated to a keycard
Migration is done keeping the current keycard state accurate (no keycard records will be lost).
* feat: Add seen/unseen activity center state
* feat: ActivityCenterState for grouping ActivityCenter unread messages cnt and seen state
* feat: always use messenger's addActivityCenterNotification & add state to the response
* Remove unused activity center endpoints from the API and fix test
Changes applied here introduce backing up profile data (display name and identity
images) to waku and fetching it from waku. Information about this data is sent
to the client as a separate signal, `sync.from.waku.profile`.
A new signal, `sync.from.waku.progress`, is introduced, which will be used to
notify the client about the progress of fetching data from waku.
This commit makes a few changes to the community history archive
download routine to make it more robust:
1. Prior to this commit, even when there were no archives to be
downloaded, we were still trying to extract messages from archive
data.
2. Logs have been improved, as they were sometimes showing confusing
information.
3. We now handle interruption of an ongoing download + data import much
better in case multiple magnetlinks are processed at roughly the
same time.
4. We now keep track of which archives have been successfully imported
into the database. Without this, Status would consider any downloaded
archive as "done" even though it hadn't actually been imported
into the database yet. This way Status should be able to pick up its
work where it left off the last time, in case a user closes the app or
another magnetlink interrupts the ongoing process.
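A sketch of the bookkeeping in (4), with illustrative table and column
names (the actual schema may differ):

```go
// Sketch: mark an archive as imported only after its messages are in the
// database, so an interrupted run can resume with the remaining ones.
package sketch

import "database/sql"

// unimportedArchiveHashes returns the hashes of downloaded archives whose
// messages have not yet been imported for the given community.
func unimportedArchiveHashes(db *sql.DB, communityID string) ([]string, error) {
	rows, err := db.Query(
		`SELECT hash FROM community_message_archives
		 WHERE community_id = ? AND imported = 0`,
		communityID,
	)
	if err != nil {
		return nil, err
	}
	defer rows.Close()

	var hashes []string
	for rows.Next() {
		var h string
		if err := rows.Scan(&h); err != nil {
			return nil, err
		}
		hashes = append(hashes, h)
	}
	return hashes, rows.Err()
}

// markImported flips the flag once the archive's messages are persisted.
func markImported(db *sql.DB, hash string) error {
	_, err := db.Exec(
		`UPDATE community_message_archives SET imported = 1 WHERE hash = ?`, hash)
	return err
}
```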
* feat(ActivityCenter): Add community request AC notification
* feat(ActivityCenter): Add CommunityID to AC notification
* feat(ActivityCenter): Add membership status for community membership AC notifications
* feat(ActivityCenter): Add tests for community notifications and fix naming
* Add notification for kicked from community action
* feat(ActivityCenter): Fix for missing notification objects for tests
Main changes:
- Extend saved addresses DB with sync info: sync timestamp, update timestamp
and soft removed flag
- Create custom protobuf message payload to sync saved addresses
- Clean up saved addresses on each messenger start by deleting older
soft-removed entries
- Sync all saved addresses on Messenger.SyncDevices calls
- Sync particular changes to saved addresses
- Add SavedAddressManager instance to messenger
- Note: couldn't find a clean way to pass the SavedAddressManager to the
messenger, so we create another one
- Add tests for sync and new DB API
Closes: #7229
This adds a new `DiscordMessageAttachment` type which is part of
`DiscordMessage`. Along with that type, there's also a new database
table for `discord_message_attachments` and corresponding persistence
APIs.
This commit also changes how chat messages are retrieved.
Here's why:
`DiscordMessage` can have multiple `DiscordMessageAttachment`.
A chat message can have a `DiscordMessage`.
Because we're `LEFT JOIN`'ing the discord message attachments into the
chat messages, there's a possibility of multiple rows per message.
Hence, this commit ensures we collect queried discord message
attachments on chat messages.
Usually, message IDs are generated from a message's payload and signature, and
receiving nodes recalculate them based on the same data.
There's no ID attached to messages in-flight.
This turns out to be a bit of a problem for messages that are being
imported from third party systems like discord, as the conversion
and saving of such messages and handling of their possible assets and
attachments are done in separate steps, which changes the message
payloads after their IDs have been generated.
Hence, we're introducing a `ThirdPartyID` property to `common.Message`
and `protobuf.WakuMessage` so receiving nodes of such messages (via the
archive protocol primarily) can easily detect third party/imported
messages and give them special treatment.
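A sketch of how a receiving node might pick the ID (a hedged example; the
hashing shown is illustrative and differs from status-go's actual derivation):

```go
// Sketch: prefer the embedded third-party ID for imported messages and
// fall back to deriving the ID from payload+signature otherwise.
package sketch

import (
	"crypto/sha256"
	"encoding/hex"
)

type wakuMessage struct {
	Payload      []byte
	Signature    []byte
	ThirdPartyID string // set only for imported (e.g. discord) messages
}

func messageID(m wakuMessage) string {
	if m.ThirdPartyID != "" {
		// Imported message: its payload may change during the separate
		// conversion/asset/attachment steps, so use the stable external ID.
		return m.ThirdPartyID
	}
	h := sha256.New()
	h.Write(m.Payload)
	h.Write(m.Signature)
	return hex.EncodeToString(h.Sum(nil))
}
```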
Restore the old, previously renamed 1662447680_add_keypairs_table.up.sql
file, while keeping the current one for those who already migrated to the
new one. The extra migration is a no-op and keeps the history of user data
states consistent.
Remove the Favourites APIs and update the saved address APIs.
Added up-migration scripts that move the favourites from the old table
to the saved_addresses table with a true flag and then drop the favourites table.
Required by #6546
`FirstMessageTimestamp` enables members of the community to determine whether
there are any messages they can fetch on the community channel (chat).
`FirstMessageTimestamp` is advertised by the admin for each community chat
through `CommunityDescription`. It assumes the admin is online frequently
enough to capture the first channel message.
For existing communities, the admin determines the first message timestamp by
finding the oldest chat message in its local database.
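That lookup is essentially a MIN over the chat's stored messages, along
these lines (illustrative table and column names):

```go
// Sketch: derive FirstMessageTimestamp for an existing chat from the
// oldest locally stored message; zero means no messages yet.
package sketch

import "database/sql"

func firstMessageTimestamp(db *sql.DB, chatID string) (uint32, error) {
	var ts sql.NullInt64 // NULL when the chat has no messages
	err := db.QueryRow(
		`SELECT MIN(whisper_timestamp) FROM user_messages WHERE local_chat_id = ?`,
		chatID,
	).Scan(&ts)
	if err != nil || !ts.Valid {
		return 0, err
	}
	return uint32(ts.Int64), nil
}
```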
task: status-im/status-desktop#6731
This introduces a new table to store discord message authors.
The main reason this table is being introduced is so that we don't have
to duplicate discord message author information in the `user_messages`
table when importing discord communities (ongoing work).
In addition to the table there are also two new APIs on the messenger
persistence layer (which are later used in the import logic):
- `HasDiscordMessageAuthor`
- `SaveDiscordMessageAuthor`
Closes #2759
As part of the new Discord <-> Status Community Import functionality,
we're adding an API that extracts all discord categories and channels
from a previously exported discord export file.
These APIs can be used in clients to show the user what categories and
channels will be imported later on.
There are two APIs:
1. `Messenger.ExtractDiscordCategoriesAndChannels(filesToImport
[]string) (*MessengerResponse, map[string]*discord.ImportError)`
This takes a list of discord export (JSON) files (typically one per
channel), reads them, and extracts the categories and channels into
dedicated data structures (`[]DiscordChannel` and `[]DiscordCategory`).
It also returns the oldest message timestamp found in all extracted
channels.
The API is synchronous and returns the extracted data as
a `*MessengerResponse`. This makes it possible to expose the API via
status-go's RPC interface.
The error case is a `map[string]*discord.ImportError`, where each key
is the file path of a JSON file that we tried to extract data from, and
the value is a `discord.ImportError` holding an error message and an
error code, allowing us to distinguish between "critical" and
"non-critical" errors.
2. `Messenger.RequestExtractDiscordCategoriesAndChannels(filesToImport
[]string)`
This is the asynchronous counterpart to
`ExtractDiscordCategoriesAndChannels`. The reason this API has been
added is that discord servers can have a lot of message and
channel data, which causes `ExtractDiscordCategoriesAndChannels` to
block the thread for too long, potentially making apps feel like they
are stuck.
This API runs inside a goroutine, eventually calls
`ExtractDiscordCategoriesAndChannels`, and then emits a newly
introduced `DiscordCategoriesAndChannelsExtractedSignal` that clients
can react to.
Failure of extraction has to be determined by the
`discord.ImportErrors` emitted by the signal.
**A note about exported discord history files**
We expect users to export their discord histories via the
[DiscordChatExporter](https://github.com/Tyrrrz/DiscordChatExporter/wiki/GUI%2C-CLI-and-Formats-explained#exportguild)
tool. The tool can export the data in different formats, such as
JSON, HTML and CSV.
We expect users to have their data exported as JSON.
Closes: https://github.com/status-im/status-desktop/issues/6690
This commit introduces a few changes regarding users accessing
communities:
While the APIs still exist, community invites should no longer be
used; instead, communities should merely be "shared".
Sharing a community with users allows them to "join" the community,
which in reality makes them request access to it.
This means users have to request access to any community, even if
the community has its permissions set to NO_MEMBERSHIP.
The only difference between ON_REQUEST and NO_MEMBERSHIP is that
ON_REQUEST communities require manual approval by the owner/admin
to access the community, while NO_MEMBERSHIP communities accept
requests automatically (as soon as the owner/admin receives the request).
This also implies that users are no longer optimistically added to the
member list of communities, but only after they have been accepted.
This introduces a bit of a message ping-pong for users to know that
someone is now part of a community.
fix: add verification request to response
fix: code review
add missing functions and simplify timestamp usage
fix: sync verification requests
feat: add endpoint to fetch all received verification requests
feat: add signal when trusting verification request
Co-authored-by: Jonathan Rainville <rainville.jonathan@gmail.com>
This introduces a simple garbage collection that checks for all soft
deleted bookmarks (`removed = 1`) which were marked for garbage
collection more than 30 days before the time of bootstrapping the
messenger.
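The collection itself boils down to a single query, roughly (illustrative
column names):

```go
// Sketch: hard-delete bookmarks that were soft-removed more than 30 days
// before messenger start.
package sketch

import (
	"database/sql"
	"time"
)

func garbageCollectBookmarks(db *sql.DB, now time.Time) error {
	cutoff := now.AddDate(0, 0, -30).Unix()
	_, err := db.Exec(
		`DELETE FROM bookmarks
		 WHERE removed = 1 AND deleted_at > 0 AND deleted_at < ?`,
		cutoff,
	)
	return err
}
```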
Closes #2705
These APIs are being introduced to address #2706 and #2704, provided
that clients will move to using these APIs instead of the currently
provided equivalent APIs in the browser service.
The `bookmarks` table is being extended with a `deleted_at` field which
can later be used for garbage collection, as "removing" a bookmark is
merely soft deletion and doesn't actually remove the data from the
database.
In addition to those APIs adding and soft deleting bookmark entries,
they also automatically perform a sync operation to ensure that
bookmarks are synced in real-time (#2704).
Closes #2706, #2704
This commit introduces a new `clock` field in the
`communities_settings` table so that it can be leveraged for syncing
community settings across devices.
It also extends the existing `syncCommunity` APIs to generate
`SyncCommunitySettings` as well, avoiding additional sync messages
for community settings.
When editing communities, however, we still sync community settings
explicitly, as we aren't syncing the community itself in that case.
This allows storing community admin settings that are meant to be propagated
to community members (as opposed to the already existing
`CommunitySettings`, which are considered local to every account).
The first setting introduced as part of this commit is one that enables
community admins to configure whether or not members of the community
are allowed to pin messages in community channels.
Prior to this commit, this was not restricted at all at the protocol
level and was only enforced by clients via the UI (e.g. members don't see an
option to pin messages, although they could).
This config setting now ensures that:
1. If turned off, members cannot send a pin message
2. If turned off, pin messages from members are not handled/processed
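The enforcement in (1) and (2) reduces to a check like this (a sketch;
names are illustrative, not the exact status-go API):

```go
// Sketch: reject pin messages from regular members when the community
// setting disallows member pinning; admins may always pin.
package sketch

type community struct {
	membersCanPinMessages bool
	admins                map[string]bool
}

func (c *community) canPin(memberID string) bool {
	return c.membersCanPinMessages || c.admins[memberID]
}

// handlePinMessage applies the pin only when the sender is allowed to pin.
func handlePinMessage(c *community, senderID string, apply func()) {
	if !c.canPin(senderID) {
		return // setting off and sender is not an admin: not handled/processed
	}
	apply()
}
```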
This is needed by https://github.com/status-im/status-desktop/issues/5662
This introduces the ability for status nodes to handle community
history archive magnetlinks. To make this work, a few things are needed:
1. A new database table has been introduced to store message archive
hashes. This is necessary so status nodes can determine whether or
not they need to download a certain archive
2. The messenger's `handleRetrievedMessages()` has been extended to take
magnetlink messages into account
3. New APIs were added to download torrent data given a magnetlink and
also to extract messages from downloaded archives, which are then
fed to `handleRetrievedMessages`
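Put together, the flow looks roughly like this (a sketch with stand-in
names for the APIs described above):

```go
// Sketch: end-to-end handling of a history-archive magnetlink. Each
// method stands in for one of the APIs described above.
package sketch

type archiveIndex struct{ Hashes []string }

type historyNode interface {
	DownloadTorrentData(magnetlink string) (archiveIndex, error)
	HasArchive(hash string) (bool, error) // lookup in the new hashes table
	ExtractMessagesFromArchive(hash string) ([][]byte, error)
	HandleRetrievedMessages(msgs [][]byte) error
}

func processMagnetlink(n historyNode, magnetlink string) error {
	index, err := n.DownloadTorrentData(magnetlink)
	if err != nil {
		return err
	}
	for _, hash := range index.Hashes {
		seen, err := n.HasArchive(hash)
		if err != nil {
			return err
		}
		if seen {
			continue // already have this archive; skip extraction/import
		}
		msgs, err := n.ExtractMessagesFromArchive(hash)
		if err != nil {
			return err
		}
		if err := n.HandleRetrievedMessages(msgs); err != nil {
			return err
		}
	}
	return nil
}
```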
Closes #2568
This introduces the logic needed to:
- Create WakuMessageArchives and indices from stored waku messages
- Write history archive torrent data to disk and create a .torrent file
from it
- Seed and unseed history archive torrents as necessary
- Start/stop the torrent client
- Enable/disable community history support for individual communities
and start/stop the routine intervals accordingly
This does not yet handle magnet links (#2568)
Closes #2567
This is needed so that when messages are bundled into archives, receiving
nodes can still verify the message payloads using their signatures.
This commit introduces a new `waku_messages` table and APIs to store
such messages. The waku message payload is stored for any message whose
topic matches any of the admin's community chats.
Closes #2566
* Sync Settings
* Added valueHandlers and Database singleton
Some issues remain; we need a way to compare the incoming sql.DB to check whether the connection is to a different file or not. Maybe make a singleton instance per filename.
* Added functionality to check the sqlite filename
* Refactor of Database.SaveSyncSettings to be used as a handler
* Implemented interface for setting sync protobuf factories
* Refactored and completed adhoc send setting sync
* Tidying up
* Immutability refactor
* Refactor settings into dedicated package
* Breakout structs
* Tidy up
* Refactor of bulk settings sync
* Bug fixes
* Addressing feedback
* Fix code dropped during rebase
* Fix for db closed
* Fix for node config related crashes
* Provisional fix for type assertion - issue 2
* Adding robust type assertion checks
* Partial fix for null literal db storage and json encoding
* Fix for passively handling nil sql.DB, and checking if elem has len and if len is 0
* Added test for preferred name behaviour
* Adding saved sync settings to MessengerResponse
* Completed granular initial sync and clock from network on save
* add Settings to isEmpty
* Refactor of protobufs, partially done
* Added syncSetting receiver handling, some bug fixes
* Fix for sticker packs
* Implement inactive flag on sync protobuf factory
* Refactor of types and structs
* Added SettingField.CanSync functionality
* Addressing rebase artifact
* Refactor of Setting SELECT queries
* Refactor of string return queries
* VERSION bump and migration index bump
* Deactivate Sync Settings
* Deactivated preferred_name and send_status_updates
Co-authored-by: Andrea Maria Piana <andrea.maria.piana@gmail.com>
These are used to store settings for individual communities a
user is part of, either as a member or as an owner.
This includes whether or not the community history archive protocol
is enabled.
This adds a new `CommunitySettings` type and adds
a migration script that introduces a new `communities_settings`
table.
It also extends the `MessengerResponse` type to include
`CommunitySettings` which are honored when communities are being
added, edited, joined or left.
Lastly, this adds a new RPC API to retrieve the settings.
Closes #2564
This commit introduces a new `TorrentConfig` as per #2563 with dedicated
default values and new options/flags for the status-go CLI.
Since it's part of `NodeConfig`, which is stored per user in the
database, this commit also adds a migration script that adds a new
`torrent_config` table, plus database APIs to insert and retrieve the
torrent config.
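In outline (a sketch; field names and defaults are illustrative, not
necessarily those from #2563):

```go
// Sketch: a TorrentConfig shape with defaults, stored as part of NodeConfig.
package sketch

type TorrentConfig struct {
	Enabled    bool   // community history archive support on/off
	Port       int    // listening port for the torrent client
	DataDir    string // where downloaded archive data is written
	TorrentDir string // where generated .torrent files are written
}

func defaultTorrentConfig() TorrentConfig {
	return TorrentConfig{
		Enabled:    false,
		Port:       9025,
		DataDir:    "./archivedata",
		TorrentDir: "./torrents",
	}
}
```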
Closes #2563
This commit enables the mailserver cycle logic by default and makes a few
changes:
1) Nodes are graylisted instead of being blacklisted for a set amount of
time. The reason is that if we blacklist, any cut in connectivity
might result in long delays before reconnecting, especially on spotty
connections.
2) Fixes an issue on the devp2p server whereby the node would not
connect to one of the static nodes because all the connection slots
were filled. The fix is a bit inelegant: it always connects to
static nodes, ignoring maxpeers, but it's tricky to get this to work
since the code is clearly not written to select a specific node.
3) Adds support for pinned mailservers
4) Adds retries to mailserver requests. It uses a closure for now; I
think we should eventually have a channel etc., but I'd leave that for
later.
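The closure-based retry in (4) amounts to something like this (a sketch
with illustrative names):

```go
// Sketch: retry a mailserver request by re-invoking a captured closure,
// cycling to another mailserver between attempts.
package sketch

func requestWithRetries(attempts int, nextMailserver func(), request func() error) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = request(); err == nil {
			return nil
		}
		nextMailserver() // switch mailserver before the next attempt
	}
	return err // last error after exhausting all attempts
}
```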
We periodically check whether we should back up contacts through waku.
Currently this isn't tied to any event; a backup is only scheduled on
a tick, or through the provided API endpoint.
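In shape, the tick-driven scheduling looks like this (a sketch; interval
and names are illustrative):

```go
// Sketch: schedule contact backups on a ticker until stop is closed.
package sketch

import "time"

func startBackupLoop(interval time.Duration, shouldBackup func() bool,
	backup func(), stop <-chan struct{}) {
	go func() {
		ticker := time.NewTicker(interval)
		defer ticker.Stop()
		for {
			select {
			case <-ticker.C:
				if shouldBackup() { // e.g. enough time since the last backup
					backup()
				}
			case <-stop:
				return
			}
		}
	}()
}
```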
* feat: enable wallet without network binding
* feat: make transfer network aware
* feat: allow to pass initial networks via config
* fix: nil check and feed
* feat: Add documentation with better function name
* fix: do not init the manager more than once
* fix: PR feedbacks
* Bump version
* Update Jenkinsfile.tests
* Convert int to string
Co-authored-by: RichΛrd <info@richardramos.me>
* Protobufs and adapters
* Added basic anon metric service and config init
* Added fibonacci interval incrementer (sketched below, after this list)
* Added basic Client.Start func and integrated interval incrementer
* Added new processed field to app metrics table
* Added id column to app metrics table
* Added migration clean up
* Added appmetrics GetUnprocessed and SetToProcessedByIDs and tests
There was a weird bug where metrics in the db that did not explicitly insert a value would be NULL, and so could not be found. In addition, I've added a new primary id field to the app_metrics table so that updates can be made against very specific metric rows.
* Updated adaptors and db to handle proto_id
I need a way to distinguish individual metric items from each other so that I can ignore the ones that have been seen before.
* Moved incrementer into dedicated file
* Resolve incrementer test fail
* Finalised the main loop functionality
* Implemented delete loop framework
* Updated adaptors file name
* Added delete loop delay and quit, and tweak on RawMessage gen
* Completed delete loop logic
* Added DBLock to prevent deletion during mainLoop
* Added postgres DB connection, integrated into anonmetrics.Server
* Removed proto_id from SQL migration and model
* Integrated postgres with Server and updated adaptors
* Function name update
* Added sample config files for client and server
* Fixes and testing for low level e2e
* make generate
* Fix lint
* Fix for receiving an anonMetricBatch not in server mode
* Postgres test fixes
* Tidy up, make vendor and make generate
* delinting
* Fixing database tests
* Attempted fix of the "does: cannot open `does' (No such file or directory)
not: cannot open `not' (No such file or directory)
exist: cannot open `exist' (No such file or directory)" error on sql resource load
* Moved all anon metric postgres migration logic and sources into the protocol/anonmetrics package or sub-packages. I don't know if this will fix the "does: cannot open `does' (No such file or directory)
not: cannot open `not' (No such file or directory)
exist: cannot open `exist' (No such file or directory)" error that happens in Jenkins, but this could work
* Lint for the lint god
* Why doesn't the linter list all its problems at once?
* test tweaks
* Fix for wakuV2 change
* DB reset change
* Fix for postgres db migrations fails
* More robust implementation of postgres test setup and teardown
* Added block for anon metrics functionality
* Version Bump to 0.84.0
* Added test to check anon metrics broadcast is deactivated
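As referenced in the list above, a sketch of a fibonacci interval
incrementer (illustrative, not the actual anon metrics client code):

```go
// Sketch: each call returns the next send interval, growing along the
// fibonacci sequence so attempts back off gradually.
package sketch

import "time"

func fibonacciIncrementer(unit time.Duration) func() time.Duration {
	a, b := 1, 1
	return func() time.Duration {
		d := time.Duration(a) * unit
		a, b = b, a+b
		return d
	}
}

// Usage: next := fibonacciIncrementer(time.Second)
// next() -> 1s, 1s, 2s, 3s, 5s, 8s, ...
```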
* Added community sync protobuf
* Updated community sync send logic
* Integrated syncCommunity handling
* Added synced_at field and tidied up some other logic
* persistence testing
* Added testing and join functionality
* Fixed issue with empty scan params
* Finished persistence tests for new db funcs
* Midway debug of description not persisting after sync
* Resolved final issues and tidied up
* Polish
* delint
* Fix error not handled on SetPrivateKey
* fix infinite loop, again
* Added muted option and test fix
* Added Muted to syncing functions, not just in persistence
* Fix bug introduced with Muted property
* Added a couple of notes for future devs
* Added most of the sync RequestToJoin functionality
Tests still need to be completed, and some are currently giving errors
* Finished tests for getJoinedAndPending
* Added note
* Resolving lint
* Fix of protobuf gen bug
* Fixes to community sync tests
* Fixes to test
* Continued fix of e2e
* Final fix to e2e testing
* Updated migration position
* resolve missing import
* Apparently the linter spellchecks
* Fix bug from #2276 merge
* Bug fix for leaving quirkiness
* Addressed superfluous MessengerResponse field
* Addressed feedback
* VERSION bump
This commit adds a mailserver setting that allows the client to
specify the syncing period.
It also fixes a minor issue with the deletion of public chats.
This property is useful for clients to know when a channel or chat
was joined so they can use that to calculate the order of channels
and chats shown in applications.
Changes include a new joined property on the Chat struct,
as well as adjustments in the persistence layer to retrieve and
update chat data in the database.
In addition there's a migration script that alters the existing
chat table to introduce a new column for the joined field.
It also updates all existing rows in the database to set `joined`
to `0`.
* Added anon metrics send opt in setting
* resolved rebase conflict, renamed migration to use unixtimestamp
There are always conflicts with migrations using sequential numbers, fewer with unix timestamps
* Add extra event to capture other type of navigations, allow empty screen name, rename cofx to get rid of clj ns, update tests
* Some view ids are greater than 16 characters. Made it 32 to be safe.
* Tab navigation events occur outside nav, add a new validator for them
* Remove navigate to cofx event, capture screens on will focus, get rid of enum and make valid screens a string less than 32 characters
* Run make generate
* Fix test
* Bump version to 0.75.1