- `keypair_name` added to the `accounts` table; all accounts derived from the
same master key share the same keypair name, and no two keypairs share
the same keypair name (the keypair name is unique per keypair)
- `last_used_derivation_index` added to the `accounts` table, because we need
to keep track of the highest index used for derivations made within
the same keypair
When community owners accepted pending requests manually, the requests would be
declined in that process if they didn't fulfill the required
token permission criteria.
We no longer want this to reject those requests automatically;
instead, owners have to reject the requests manually.
When a community permission is edited, we need to revalidate
the token criteria against the existing member list, as members might
no longer fulfill the requirements.
This commit runs the checks in a goroutine after the permission has
been updated.
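A minimal sketch of that flow, assuming hypothetical `PermissionChecker`, `MembersOf`, `MeetsTokenCriteria` and `FlagIneligible` names rather than the actual status-go API:

```go
package communities

import "log"

// Illustrative types; the real status-go structures differ.
type Member struct{ ID string }

type PermissionChecker interface {
	MembersOf(communityID string) ([]Member, error)
	MeetsTokenCriteria(communityID string, m Member) (bool, error)
	FlagIneligible(communityID string, m Member)
}

// revalidateMembersAsync runs the token-criteria checks in a goroutine so
// that editing a permission returns immediately to the caller.
func revalidateMembersAsync(pc PermissionChecker, communityID string) {
	go func() {
		members, err := pc.MembersOf(communityID)
		if err != nil {
			log.Printf("loading members failed: %v", err)
			return
		}
		for _, m := range members {
			ok, err := pc.MeetsTokenCriteria(communityID, m)
			if err != nil {
				log.Printf("criteria check failed for %s: %v", m.ID, err)
				continue
			}
			if !ok {
				// Member no longer fulfills the requirements.
				pc.FlagIneligible(communityID, m)
			}
		}
	}()
}
```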
This adds checks to `HandleCommunityRequestToJoin` and
`AcceptRequestToJoinCommunity` that ensure a given user's revealed
wallet addresses own the token funds required by a community.
When a community has token permissions of type `BECOME_MEMBER`, the
following happens when the owner receives a request:
1. Upon verifying the wallet addresses provided by the requester, the owner
node accumulates all token funds related to the given wallets that
match the token criteria in the configured permissions (see the sketch
after this list)
2. If the requester does not meet the necessary requirements, the
request to join will be declined. If the requester does have the
funds, they'll either be automatically accepted to the community, or
enter the next stage where an owner needs to manually accept the
request.
3. If the community does not automatically accept users, then the funds
check will happen again when the owner tries to manually accept the
request. If the necessary funds do not exist at this stage, the
request will be declined
4. Upon accepting, whether automatically or manually, the owner adds the
requester's wallet addresses to the `CommunityDescription`, such that
they can be retrieved later when doing periodic checks or when
permissions have changed.
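A minimal sketch of the funds accumulation and check from steps 1 and 3; all names here (`TokenCriteria`, `BalanceFetcher`, `walletsMeetCriteria`) are illustrative, not the actual status-go API:

```go
package communities

import "math/big"

// Illustrative structures; the real status-go permission model differs.
type TokenCriteria struct {
	ContractAddress string
	MinAmount       *big.Int // minimum amount in the token's smallest unit
}

// BalanceFetcher returns the on-chain balance of a token for one wallet.
type BalanceFetcher func(wallet, contract string) (*big.Int, error)

// walletsMeetCriteria accumulates the balances of all revealed wallet
// addresses per token and checks the total against the configured minimum.
func walletsMeetCriteria(wallets []string, criteria []TokenCriteria, fetch BalanceFetcher) (bool, error) {
	for _, c := range criteria {
		total := new(big.Int)
		for _, w := range wallets {
			balance, err := fetch(w, c.ContractAddress)
			if err != nil {
				return false, err
			}
			total.Add(total, balance)
		}
		if total.Cmp(c.MinAmount) < 0 {
			return false, nil // requester does not hold enough of this token
		}
	}
	return true, nil
}
```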
We need to store the `decimals` of a given token when creating community
permissions so that we can use them later for calculations when
checking funds for given wallet addresses.
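A rough sketch of how the stored decimals might be used: converting the human-readable minimum amount from the permission into the token's smallest unit before comparing it against raw on-chain balances (the helper name is illustrative):

```go
package communities

import "math/big"

// toSmallestUnit scales a human-readable amount (e.g. 2.5) by 10^decimals so
// it can be compared against raw on-chain balances.
// Example: toSmallestUnit(big.NewFloat(2.5), 18) -> 2500000000000000000.
func toSmallestUnit(amount *big.Float, decimals uint64) *big.Int {
	exp := new(big.Int).Exp(big.NewInt(10), new(big.Int).SetUint64(decimals), nil)
	scaled := new(big.Float).Mul(amount, new(big.Float).SetInt(exp))
	result, _ := scaled.Int(nil)
	return result
}
```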
This commit extends `CommunityRequestToJoin` with `RevealedAddresses`, which represent wallet addresses and signatures provided by the sender to prove to a community owner that the sender owns those wallet addresses.
**Note: This only works with keystore files managed by status-go**
At a high level, the following happens:
1. The user instructs Status to send a request to join a community. By adding a password hash to the instruction, Status will try to unlock the user's keystore and verify each wallet account.
2. For every verified wallet account, a signature is created for the following payload, using each wallet's private key (a sketch of this signing step follows below)
``` keccak256(chatkey + communityID + requestToJoinID) ``` A map of walletAddress->signature is then attached to the community request to join, which will be sent to the community owner
3. The owner node receives the request and, if the community requires users to hold tokens to become a member, it will check and verify whether the given wallet addresses are indeed owned by the sender. If any signature provided in the request cannot be recovered, the request is immediately declined by the owner.
4. The verified addresses are then added to the owner node's database such that, once the request is to be accepted, the addresses can be used to check on chain whether they own the necessary funds to fulfill the community's permissions
The checking of required funds is **not** part of this commit. It will be added in a follow-up commit.
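A rough sketch of the signing (step 2) and the owner-side recovery (step 3) using go-ethereum's crypto package; parameter types and function names are illustrative, and the real status-go code paths differ:

```go
package communities

import (
	"bytes"
	"crypto/ecdsa"

	"github.com/ethereum/go-ethereum/crypto"
)

// signRevealedAddress signs keccak256(chatKey + communityID + requestToJoinID)
// with a wallet's private key.
func signRevealedAddress(walletKey *ecdsa.PrivateKey, chatKey, communityID, requestToJoinID []byte) ([]byte, error) {
	payload := crypto.Keccak256(chatKey, communityID, requestToJoinID)
	return crypto.Sign(payload, walletKey)
}

// verifyRevealedAddress is what the owner node conceptually does: recover the
// public key from the signature and check that it matches the claimed wallet address.
func verifyRevealedAddress(claimedAddress, chatKey, communityID, requestToJoinID, signature []byte) (bool, error) {
	payload := crypto.Keccak256(chatKey, communityID, requestToJoinID)
	pubKey, err := crypto.SigToPub(payload, signature)
	if err != nil {
		return false, err
	}
	recovered := crypto.PubkeyToAddress(*pubKey)
	return bytes.Equal(recovered.Bytes(), claimedAddress), nil
}
```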
Also, make AccountManager a dependency of Messenger.
This is needed for community token permissions as we'll need a way to access wallet accounts
and sign messages when sending requests to join a community.
The APIs have been mostly taken from GethStatusBackend and personal service.
- completely replace social links on save
- respect the order of items and also the URL when comparing
Rationale: for MVP, we'll want the user to be able to add several links
of the same type, and adjust/preserve their order by drag'n'drop
Needed for https://github.com/status-im/status-desktop/issues/9777
The `Edit()` method on `Community` merely updates "primitive" values
that live inside a community description. For any data that is more complex,
we typically have dedicated methods.
Because `Edit()` was expecting `CommunityTokensMetadata`, it would
override it with empty data every time we edited a community.
This is because we typically don't update that kind of data as part
of `Edit()`.
In addition, `CommunityTokensMetadata` is append-only anyway,
so there wouldn't be any other way to update that field other than
adding new items to it, which is done in a dedicated method.
The `type` column is now set to the appropriate value for all rows. Before this
change, accounts generated from a keypair created by importing a seed phrase
had the value `generated` for `type`.
Accordingly, the account-generation function now sets the `type`
based on the passed derive-from address.
Community tokens have some metadata (image, description) which must be kept in waku (CommunityDescription).
Add CommunityTokenMetadata message to communities.proto.
Add []CommunityTokenMetadata to CommunityDescription.
Issue #9545
prefixes. Changed primary keys and API methods.
Fixed tests and added new ones.
Fixed saved addresses and transaction tests to use a ':memory:' sqlite
DB instead of a tmp file to speed up testing by hundreds of times.
Fixes #8599
* feat: refactor activity center endpoints
* fix: restore activity center tests using new endpoints
* feat: Remove accepted flag from activity center endpoints
* feat: Activity Center review fixes
Changes applied here introduce backing up keycards data to waku and fetching it from waku.
Information about received keycards data is sent via the `sync.from.waku.keycards` signal.
Adds a new column named `deleted` to the table `activity_center_notifications`.
Related PR in Mobile https://github.com/status-im/status-mobile/pull/15106 for a lot more details of the feature.
Why? Relying on the `dismissed` column for soft deletion is no longer viable because the mobile & desktop clients should display dismissed notifications (sometimes), hence the need for a new column to truly represent soft deletion.
* feat: add fetching notifications by activity center group
* feat: add api for counting notifications by activity center group
* chore: activity center query code refactor
* feat: add endpoint returning ActivityCenterType(s) by ActivityCenterGroup
* chore: Remove activityGroup from status-go level
* feat: add endpoint for counting notifications with different conditions
A migration was added out-of-order, which meant that clients that
had already run the migrations after it would skip it.
This commit re-adds the migration so it's run, tested against an empty
account and one that had already migrated.
* chore: upgrade go-waku to v0.5
* chore: add println and logs to check what's being stored in the enr, and preemptively delete the multiaddr field (#3219)
* feat: add wakuv2 test (#3218)
* Outgoing contact requests
* Test fix
* Test fix
* Fixes
* Bugfixes
* Bugfixes
* Almost there
* Removed the activity center notification
* Test update
* Almost ready
* Fixes
* Fixes
* feat: Add seen/unseen activity center state
* feat: ActivityCenterState for grouping ActivityCenter unread message count and seen state
* feat: always use messenger's addActivityCenterNotification & add state to the response
* Remove unused activity center endpoints from the API and fix test
* fix media-server sleep wake up issue
We now use waku v2, and hence messenger was nil.
Since it was nil, the logic responsible for triggering app state events was not firing, and hence the media server would become unresponsive after a sleep event.
This commit fixes that.
Co-authored-by: Andrea Maria Piana <andrea.maria.piana@gmail.com>
This commit fixes an issue where when accepting a contact request
the other end would display an extra notification.
It also changes WaitOnResponse to collect results. This should make
tests less flaky, since sometimes messages are processed in different
batches.
Now we need to be exact about what we expect from the response (i.e.
use == instead of >, otherwise the same behavior applies).
This uncovered a couple of issues with messenger.Merge, so I have moved
the struct to use a map based collection instead of an array.
Motivated by: https://github.com/status-im/status-desktop/pull/9084
While we don't request preview data from Tenor, because the URL itself contains the GIF image, we still need to validate the URL in the preview request. This is done using a HEAD request, where we expect the response status code to be 200 OK.
At the same time, we add the content type to the preview response.
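A small sketch of that validation, assuming a hypothetical `validateGifURL` helper:

```go
package urls

import (
	"fmt"
	"net/http"
)

// validateGifURL issues a HEAD request, checks for 200 OK and returns the
// Content-Type so it can be included in the preview response.
func validateGifURL(url string) (contentType string, err error) {
	resp, err := http.Head(url)
	if err != nil {
		return "", err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return "", fmt.Errorf("unexpected status %d for %s", resp.StatusCode, url)
	}
	return resp.Header.Get("Content-Type"), nil
}
```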
In general, any time a piece of state is updated in the backend, that
should be propagated to the client through signals.
In this case, when a request was accepted, the client wasn't notified,
requiring them to re-fetch the accepted requests and causing
inconsistent state between status-go and client.
This commit refactors the discord import tool such that,
instead of loading all data to be imported into memory at
once, it will now perform the import on a per file basis.
This reduces the memory pressure for the node performing
the import and seems to improve its performance as well.
* fix: Accepting or dismissing contact request should mark AC notification as read
* fix: Accepting or declining community request should mark AC notification as read
* fix: Read notification status for identity verification requests
* fix: Save whole notification object while handling contact requests
* fix: Remove unused functions from activity_center_persistence
* fix: Use Accepted and Dismissed flags for AC notifications with CTA
* fix: Mark notification as unread when we handle accepted verification request
* fix: Replace Warn with Error on fail to save notification, test fixes
* fix: Fix conditions for fetching AC notifications from the db
* fix: Review fixes for err name and conditions array
There were cases where this caused a crash, as handling magnetlinks would try to close
an already closed task channel.
See https://github.com/status-im/status-desktop/issues/8996 for more information.
This commit extends the task struct such that it can be marked as cancelled and safely
read and written by multiple goroutines.
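A minimal sketch of such a cancellable task, with illustrative field and method names:

```go
package communities

import "sync"

// task wraps a channel that may be touched by multiple goroutines; the
// cancelled flag prevents closing an already closed channel.
type task struct {
	mu        sync.Mutex
	cancelled bool
	ch        chan struct{}
}

// cancel closes the channel exactly once.
func (t *task) cancel() {
	t.mu.Lock()
	defer t.mu.Unlock()
	if t.cancelled {
		return
	}
	t.cancelled = true
	close(t.ch)
}

// isCancelled can be checked safely from any goroutine.
func (t *task) isCancelled() bool {
	t.mu.Lock()
	defer t.mu.Unlock()
	return t.cancelled
}
```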
This introduces an additional constraint on archive generation: the payload + signature size of all partitioned messages that go into an archive should not exceed a certain
threshold.
This is to ensure that archives won't get too big when they are later read into memory.
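A rough sketch of how such a size cap could be applied when partitioning messages into archives; the threshold value and type names are illustrative:

```go
package communities

// maxArchiveSizeInBytes caps payload + signature size per archive so archives
// stay small enough to be read into memory later. The constant is illustrative.
const maxArchiveSizeInBytes = 30 * 1024 * 1024

type wakuMessage struct {
	Payload []byte
	Sig     []byte
}

// splitIntoArchives greedily packs messages into archives without exceeding
// the size threshold. A single oversized message still gets its own archive.
func splitIntoArchives(msgs []wakuMessage) [][]wakuMessage {
	var archives [][]wakuMessage
	var current []wakuMessage
	size := 0
	for _, m := range msgs {
		msgSize := len(m.Payload) + len(m.Sig)
		if size+msgSize > maxArchiveSizeInBytes && len(current) > 0 {
			archives = append(archives, current)
			current = nil
			size = 0
		}
		current = append(current, m)
		size += msgSize
	}
	if len(current) > 0 {
		archives = append(archives, current)
	}
	return archives
}
```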
Summary
=======
- [x] Changes endpoint ActivityCenterNotificationsBy to support fetching
multiple types of notification in a single query.
- [x] Adds endpoint UnreadAndAcceptedActivityCenterNotificationsCount to
allow the mobile client to fetch the count of unread & accepted
notifications.
- [x] Add `golangci-lint` to Nix shell. This was possible since PR
https://github.com/status-im/status-go/pull/3087 was merged.
Notes
=====
- If you'd like to understand why these changes are needed, please see
the mobile PR https://github.com/status-im/status-mobile/pull/14785,
or issue https://github.com/status-im/status-mobile/issues/14712
- All changes should be completely backwards compatible, and there
should be no impact for the desktop app.
- The mobile client has been already tested using this branch.
This adds the functionality that history archives continue to be imported
in case the import has been interrupted the last time the app/client
was running.
This typically happens when users don't wait for an ongoing import to finish,
which sometimes can take a while. Users then close the app/kill the client
which leaves the database in a state where there's downloaded archives that
haven't been fully imported.
Prior to this change, the node had to wait until it received a new,
previously unseen magnetlink before it would process imports again.
This can take several days.
Now, it will check on startup if there are any archives left to be imported
and resumes the import from there.
Instead of loading the entire torrent file into memory when trying
to extract archive messages, we now only read the chunks that are
necessary to decode an individual archive, and then process the
extracted messages in chunks.
This doesn't introduce a max cap of allowed memory yet, since the
chunk size depends entirely on the size of the archive, but this
will be done soon.
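A small sketch of reading only one archive's byte range out of the torrent data file, assuming an index provides the offset and size (names are illustrative):

```go
package communities

import (
	"io"
	"os"
)

// readArchiveChunk reads only the bytes of a single archive out of the large
// torrent data file, instead of loading the whole file into memory.
// The offset and size would come from the archive index.
func readArchiveChunk(dataFilePath string, offset, size int64) ([]byte, error) {
	f, err := os.Open(dataFilePath)
	if err != nil {
		return nil, err
	}
	defer f.Close()

	buf := make([]byte, size)
	if _, err := io.ReadFull(io.NewSectionReader(f, offset, size), buf); err != nil {
		return nil, err
	}
	return buf, nil
}
```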
Because `QuotedMessage` doesn't include imported message data,
some of the author information in imported messages is lost in
frontends.
This commit adds a `discordMessage` (soon to be replaced by `importedMessage`)
to `QuotedMessage`, although only hydrated with a subset of data,
namely the author display name and avatar URL, as those are the only ones
needed by the front-end at the moment.
* feat(ActivityCenter): Add missing AC notifications for verification requests
* fix(Contacts): Fix updates for trusted and untrustworthy statuses
* feat(ActivityCenter): Add test for trusted verification request and AC notifications
* feat(Contacts): Trusted and untrustworthy statuses should be unknown for the verified side
Changes applied here introduce backing up profile data (display name and identity
images) to waku and fetching them from waku. Information about this data is sent
to the client as a separate signal, `sync.from.waku.profile`.
A new signal, `sync.from.waku.progress`, is introduced, which will be used to notify the client
about the progress of fetching data from waku.
When discovery fails to be seeded with bootstrap/fallback nodes, it
never recovers.
This commit changes the behavior so that status-go retries fetching
bootnodes, and restarts discovery when that happens.
This is done so that when members join a community by being accepted by
the community owner, they also receive the most up-to-date magnetlink
along with it.
This commit makes a few changes to the community history archive
download routine to make it more robust:
1. Prior to this commit, even when there were no archives to be
downloaded, we were still trying to extract messages from archive
data.
2. Logs have been improved as they were sometimes showing confusing
information
3. We now handle interruption of ongoing download + data import much
better in case of multiple magnetlinks being processed at roughly the
same time.
4. We now keep track of which archive has been successfully imported
into the database. Without this, Status would consider any downloaded
archives as "done" even though they haven't actually been imported
into the database yet. This way Status should be able to pick up its
work where it left off the last time, in case a user closes the app or
another magnetlink interrupts the ongoing process.
This creates a smoother experience for users when they leave a community since they can see the exact same messages they had before without having to rely on the mailserver.
In order to give clients more insights about archive messages being
processed, we're adding this additional signal that informs clients when
the import of downloaded history archive messages has started.
updates
When channels were removed from communities by the community owner,
community members would not remove the chats from their own databases.
See https://github.com/status-im/status-desktop/issues/8000 for more
info.
This commit ensures that nodes delete removed chats from their local
chats table and also deregister the corresponding transport filters.
We don't allow multiple channels with the same name in communities.
Discord allows for multiple channels with the same name (living in
different categories), so this is an error case in our import tool.
This commit improves the user facing error message of this scenario.
We've been limiting the number of errors emitted to clients
to reduce payload pressure and also because we won't be
rendering more than 3 error items in the UI.
We want to let actual errors through (as opposed to warnings), even if
the limit was reached.
* feat(ActivityCenter): Add community request AC notification
* feat(ActivityCenter): Add CommunityID to AC notification
* feat(ActivityCenter): Add membership status for community membership AC notifications
* feat(ActivityCenter): Add tests for community notifications and fix naming
* Add notification for kicked from community action
* feat(ActivityCenter): Fix for missing notification objects for tests
Prior to this commit we had a `CreateHistoryArchiveTorrent()` API which
takes a `startDate`, an `endDate` and a `partition` to create a bunch of
message archives, given a certain time range.
The function expects the messages to live in the database, which means,
all messages that need to be archived have to be saved there at some
point.
This turns out to be an issue when importing communities from third-party
services, where there are sometimes several thousand messages,
including attachment payloads, that have to be saved to the database
first.
There are only two options to get the messages into the database:
1. Make one write operation with all messages - this is slow, takes a long
time and blocks the database until done
2. Create message chunks and perform multiple write operations - this is
also slow and takes long, but makes the database a bit more responsive as
it's many smaller operations instead of one big one
Option 2) turned out to not be feasible either, as inserting
even a single such message can sometimes take up to 10 seconds
(depending on payload).
Which brings me to the third option.
**A third option** is to not store those imported messages as waku
messages in the database, only to query them again later to create the
archives, but instead to create the archives right away from all the
messages that have been loaded into memory.
This is significantly faster and doesn't block the database.
To make this possible, this commit introduces
a `CreateHistoryArchiveTorrentFromMessages()` API, and
a `CreateHistoryArchiveTorrentFromDB()` API which can be used for
different use cases.
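Roughly, the split between the two entry points could look like this; the types and function names below are simplified illustrations only, not the actual status-go signatures:

```go
package communities

import "time"

// Minimal illustrative types; the real status-go manager and message types differ.
type Message struct{ Payload []byte }

type MessageStore interface {
	MessagesByTimeRange(communityID []byte, start, end time.Time) ([]*Message, error)
}

// ArchiveBuilder stands in for the shared archive/torrent creation logic.
type ArchiveBuilder func(communityID []byte, msgs []*Message, start, end time.Time, partition time.Duration) ([]string, error)

// createFromDB loads messages from the store first (the original, slower path).
func createFromDB(store MessageStore, build ArchiveBuilder, communityID []byte, start, end time.Time, partition time.Duration) ([]string, error) {
	msgs, err := store.MessagesByTimeRange(communityID, start, end)
	if err != nil {
		return nil, err
	}
	return build(communityID, msgs, start, end, partition)
}

// createFromMessages archives messages that are already in memory
// (e.g. a third-party import) without writing them to the database first.
func createFromMessages(build ArchiveBuilder, communityID []byte, msgs []*Message, start, end time.Time, partition time.Duration) ([]string, error) {
	return build(communityID, msgs, start, end, partition)
}
```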
settings
Turns out `UpdateCommunitySettings()` has never worked. Two parameters
were in the wrong order, causing the SQL statement to never find the row
it was supposed to update.
When fetching torrent info after receiving a magnet link,
it can happen that the request times out.
We want to retry downloading the data at least once more
before giving up.
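A tiny sketch of the retry-once behaviour, with illustrative names and timeout:

```go
package communities

import (
	"errors"
	"time"
)

// errTorrentInfoTimeout and the one-minute timeout are illustrative.
var errTorrentInfoTimeout = errors.New("fetching torrent info timed out")

// fetchTorrentInfoWithRetry retries the fetch once more after a timeout
// before giving up, and returns any other error immediately.
func fetchTorrentInfoWithRetry(fetch func(timeout time.Duration) error) error {
	timeout := time.Minute
	if err := fetch(timeout); err != nil {
		if errors.Is(err, errTorrentInfoTimeout) {
			return fetch(timeout) // retry exactly once
		}
		return err
	}
	return nil
}
```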
The default logger writes to `geth.log`, which makes debugging
the archive protocol pretty hard.
This adds an additional logger that logs to stdout, while keeping
the default logger intact for production.
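A possible shape of such an additional logger using zap, which status-go already depends on; the package, function name and configuration here are illustrative:

```go
package logutils

import (
	"os"

	"go.uber.org/zap"
	"go.uber.org/zap/zapcore"
)

// newStdoutLogger builds an extra zap logger that writes to stdout, leaving
// the default file-based logger (geth.log) untouched.
func newStdoutLogger() *zap.Logger {
	core := zapcore.NewCore(
		zapcore.NewConsoleEncoder(zap.NewDevelopmentEncoderConfig()),
		zapcore.Lock(os.Stdout),
		zap.DebugLevel,
	)
	return zap.New(core)
}
```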
Main changes:
- Extend saved addresses DB with sync info: sync timestamp, update timestamp
and soft removed flag
- Create custom protobuf message payload to sync saved addresses
- Clean up saved addresses on each messenger start, by deleting
older soft-removed entries
- Sync all saved addresses on Messenger.SyncDevices calls
- Sync particular changes to saved addresses
- Add SavedAddressManager instance to messenger
- Note, can't find a clean way to pass the SavedAddressManager to the
messenger, so we create another one
- Add tests for sync and new DB API
Closes: #7229
needed
Unfortunately, this one slipped through when introducing helper
functions to retrieve messages.
Sometimes, queries need an additional empty `cursor` value.
- added `SpectateCommunity` endpoint; it is meant to be used in
scenarios where we want to "Go to public Community" and see its
content without joining
- added `spectated` field to `Community`; it means we are observing the
community and its chats but are not members
Use case:
https://github.com/status-im/status-desktop/issues/7072#issuecomment-1246560885
When initially creating settings, the properties 'profile_pictures_show_to'
and 'profile_pictures_visibility' are initialized according to the provided Settings object.
This adds a new `DiscordMessageAttachment` type which is part of
`DiscordMessage`. Along with that type, there's also a new database
table for `discord_message_attachments` and corresponding persistence
APIs.
This commit also changes how chat messages are retrieved.
Here's why:
`DiscordMessage` can have multiple `DiscordMessageAttachment`.
A chat message can have a `DiscordMessage`.
Because we're `LEFT JOIN`'ing the discord message attachments into the
chat messages, there's a possibility of multiple rows per message.
Hence, this commit ensures we collect queried discord message
attachments on chat messages.
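A simplified sketch of collecting the LEFT JOINed rows back into one message per ID; the row and message types here are illustrative, not status-go's actual persistence types:

```go
package protocol

// Illustrative types; status-go's message and attachment structs differ.
type attachment struct {
	ID        string
	MessageID string
	URL       string
}

type message struct {
	ID          string
	Attachments []attachment
}

// collectRows folds the multiple rows produced by LEFT JOINing
// discord_message_attachments onto chat messages back into one message per ID.
func collectRows(rows []struct {
	MessageID    string
	AttachmentID string
	URL          string
}) []*message {
	byID := map[string]*message{}
	var ordered []*message
	for _, r := range rows {
		m, ok := byID[r.MessageID]
		if !ok {
			m = &message{ID: r.MessageID}
			byID[r.MessageID] = m
			ordered = append(ordered, m)
		}
		if r.AttachmentID != "" { // NULL from the LEFT JOIN when there is no attachment
			m.Attachments = append(m.Attachments, attachment{ID: r.AttachmentID, MessageID: r.MessageID, URL: r.URL})
		}
	}
	return ordered
}
```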
Usually, message IDs are generated from their payload and signature, and
receiving nodes calculate them based on the same data as well.
There's no ID attached to messages in-flight.
This turns out to be a bit of a problem for messages that are being
imported from third party systems like discord, as the conversion
and saving of such messages and handling of their possible assets and
attachments are done in separate steps, which changes the message
payloads after their IDs have been generated.
Hence, we're introducing a `ThirdPartyID` property to `common.Message`
and `protobuf.WakuMessage` so receiving nodes of such messages (via the
archive protocol primarily) can easily detect third party/imported
messages and give them special treatment.
This might look like a weird requirement at first glance.
The reason this is needed, is because some message signals require
admin rights to take effect (e.g. PinMessage).
When messages are imported from third-party services,
translated to status messages, signed by the community, and eventually distributed
via the archive protocol, we need to ensure that messages signed
by the community itself are treated as having admin privileges as well,
so they can be correctly replayed into the database.
Restore the old, previously renamed 1662447680_add_keypairs_table.up.sql
file, while keeping the current one for those who have already migrated to the
new one. The extra migration is a no-op and preserves consistency in
the history of user data states.
This adds a new `DownloadingHistoryArchivesFinished` signal to the
family of community archive signals. It's emitted when all archives
to be downloaded have been downloaded and handled.
Shelving this for the moment; if it becomes a problem in the future, we can address it then. For now I've just left the refactoring I already did.
`FirstMessageTimestamp` enables members of the community to determine whether
there are any messages they can fetch on the community channel (chat).
`FirstMessageTimestamp` is advertised by the admin for each community chat
through `CommunityDescription`. It assumes the admin is online frequently
enough to capture the first channel message.
For existing communities, the admin determines the first message timestamp by
finding the oldest chat message in its local database.
task: status-im/status-desktop#6731
Add image_payload column to chats table.
Add Base64Image to chat struct.
Add ImageChange event to propagate image.
Change EditChat API - use CroppedImage.
Process and crop image in EditGroupChat.
This is so that we can control whether we want to publish the community
when it, or its categories and channels, are created.
This is needed for the discord import so that we can create communities,
channels and categories without publishing the community and having it
show up in UIs too early.