* feat: Marking Mentions and Replies AC notifications as read also marks the corresponding message as seen
* feat: Marking a message as seen marks the corresponding notification as read (if there is one)
* chore: make messenger activity center test less flaky
* Update VERSION
* feat: Add profile showcase messaging part with encrypted data
* feat: Separate profile showcase categories to provide ability to store custom data
* fix: review fixes
* feat: move profile showcase out of contact data
* fix: create index on contact id for profile tables
* chore: remove logger from link preview
This specifies that if a user is not both a community member and a
chat member, they can't post, react or pin messages in that chat (see the sketch below).
Notes:
- also fix&cleanup associated failing tests.
- refactor Community.CanPost() to reflect the new requirement.
- grant code is not fully implemented and is to be removed later.
Fixes https://github.com/status-im/status-desktop/issues/11915
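A minimal sketch of the membership requirement above, with hypothetical type and field names (the real `Community.CanPost()` in status-go takes different arguments):

```go
package community

import "errors"

// ErrNotMember is returned when the membership requirement is not met.
var ErrNotMember = errors.New("user is not a community and chat member")

type Community struct {
	members map[string]bool // community members keyed by public key
}

type Chat struct {
	members map[string]bool // chat members keyed by public key
}

// CanPost allows posting, reacting and pinning only when the user is both a
// community member and a member of the target chat.
func (c *Community) CanPost(chat *Chat, pubKey string) error {
	if !c.members[pubKey] || !chat.members[pubKey] {
		return ErrNotMember
	}
	return nil
}
```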
- use protected topics for communities
- associate chats to pubsub topics and populate these depending on whether the chat belongs to a community or not (see the sketch after this list)
- mailserver functions should be aware of pubsub topics
- generate private key for pubsub topic protection when creating a community
- add shard cluster and index to communities
- setup shards for existing communities
- distribute pubsubtopic password
- fix: do not send requests to join and cancel requests on the protected topic
- fix: undefined shard values for backward compatibility
- refactor: use shard message in protobuffers
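A hedged sketch of how a chat could map to a pubsub topic based on its community's shard; the naming follows the Waku static-sharding convention, but the exact helpers in status-go may differ:

```go
package shard

import "fmt"

// DefaultPubsubTopic is used for chats that don't belong to a sharded community.
const DefaultPubsubTopic = "/waku/2/default-waku/proto"

type Shard struct {
	Cluster uint16
	Index   uint16
}

// PubsubTopic returns the protected topic for a community shard, falling back
// to the default relay topic when no shard is set (backward compatibility).
func PubsubTopic(s *Shard) string {
	if s == nil {
		return DefaultPubsubTopic
	}
	return fmt.Sprintf("/waku/2/rs/%d/%d", s.Cluster, s.Index)
}
```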
New contracts and contract Go functions.
Adjust the owner & master token deployment flow.
Create the deployment signature.
Add a CommunityTokens API for handling the signer pubkey.
Issue #11954
* sync preferred name;
remove settings.usernames
* update account name when handling settings.preferred_name from a backup message
* fix Error:Field validation for 'KeycardPairingDataFile' failed on the 'required' tag
* bump version
* rebase
refactor: associate chats to pubsub topics and populate these depending on whether the chat belongs to a community or not
refactor: add pubsub topic to mailserver batches
chore: ensure default relay messages continue working as they should
refactor: mailserver functions should be aware of pubsub topics
fix: use []byte for communityIDs
The only place where appDB is used in the wallet is activity,
which refers to the `keycards_accounts` table. So a temporary
`keycards_accounts` table is created in the wallet DB and updated
before each activity query.
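An illustrative sketch of that approach, assuming simplified column names (the real `keycards_accounts` schema in status-go may differ):

```go
package activity

import "database/sql"

// refreshKeycardsAccounts copies the app DB's keycards_accounts rows into the
// temporary table in the wallet DB; it is meant to run before each activity query.
func refreshKeycardsAccounts(appDB, walletDB *sql.DB) error {
	rows, err := appDB.Query("SELECT account_address, key_uid FROM keycards_accounts")
	if err != nil {
		return err
	}
	defer rows.Close()

	tx, err := walletDB.Begin()
	if err != nil {
		return err
	}
	defer tx.Rollback() // no-op if the transaction was committed

	if _, err := tx.Exec("DELETE FROM keycards_accounts"); err != nil {
		return err
	}
	for rows.Next() {
		var address, keyUID string
		if err := rows.Scan(&address, &keyUID); err != nil {
			return err
		}
		if _, err := tx.Exec("INSERT INTO keycards_accounts (account_address, key_uid) VALUES (?, ?)", address, keyUID); err != nil {
			return err
		}
	}
	return tx.Commit()
}
```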
If a message is sent with only one image, no album is generated (there is no albumID). The notification handling code then used the wrong ID, because it assumed it had to use the AlbumID as the message ID.
- distribute ratchet keys at both community and channel levels
- use explicit `HashRatchetGroupID` in the encryption layer, instead of
inheriting `groupID` from `CommunityID`
- populate `HashRatchetGroupID` with `CommunityID+ChannelID` for
channels, and `CommunityID` for the whole community (see the sketch below)
- hydrate channels with members; channel members are now a subset of
community members
- include channel permissions in periodic permissions check
closes: status-im/status-desktop#10998
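A short sketch of the group-ID derivation described above; the actual encryption-layer types in status-go are richer:

```go
package encryption

// HashRatchetGroupID scopes ratchet keys: community-wide keys use just the
// community ID, while channel keys append the channel ID so every channel
// gets its own key material.
func HashRatchetGroupID(communityID []byte, channelID string) []byte {
	groupID := make([]byte, 0, len(communityID)+len(channelID))
	groupID = append(groupID, communityID...)
	groupID = append(groupID, channelID...)
	return groupID
}
```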
This component decouples key distribution from the Messenger, enhancing
code maintainability, extensibility and testability.
It also alleviates the need to impact all methods potentially affecting
encryption keys.
Moreover, it allows key distribution inspection for integration tests.
part of: status-im/status-desktop#10998
* chore: make the owner without the community private key behave like an admin
* Add test for the owner without community key
* chore: refactor Community fn names related to the roles
If the user followed the onboarding flow to recover their account using a seed phrase or keycard,
then the `ProcessBackedupMessages` property of the node config JSON object should be set to
`true`; otherwise it should be set to `false` or omitted.
* feat: add api to remove private key and separate owner from private key ownership
For https://github.com/status-im/status-desktop/issues/11475
* feat: introduce IsControlNode for Community
* feat: remove community private key from syncing
* feat: add IsControlNode flag to Community json serialisation
* Update protocol/protobuf/pairing.proto
Co-authored-by: Jonathan Rainville <rainville.jonathan@gmail.com>
---------
Co-authored-by: Jonathan Rainville <rainville.jonathan@gmail.com>
* mute and unmute all community chats when community mute status changes
* unmute community when at least one channel is unmuted
* fix: save community, extend the function to save muted state and mute duration
chore:
- add CommunityEventsMessage
- refactor community_admin_event to accept a list of events and patch a CommunityDescription
- save/read community events into/from database
- publish and handle community events message
- fixed admin category tests
- rename AdminEvent to Events or CommunityEvents
This is the second step of improvements over keypairs/keycards/accounts.
- `SyncKeycardAction` protobuf removed
- `SyncKeypair` protobuf is used for syncing keycards state as well as for all
keycards related changes
- `last_update_clock` column removed from the `keypairs` table because, as with
accounts, any keycard-related change is actually a change made to the related
keypair, so a keypair's clock keeps the clock of the last change
- `position` column added to the `keypairs` table, needed to display keycards in
the same order across devices
This is the first step of improvements over keypairs/keycards/accounts.
- `SyncKeypairFull` protobuf removed
- `SyncKeypair` protobuf is used for syncing all but the watch-only accounts
- `SyncAccount` is used only for syncing watch-only accounts
- related keycards are synced together with a keypair
- on any keypair change (whether it's just the keypair name or a change made to an
account which belongs to that keypair), the entire keypair is synced, including related keycards
- on any watch-only account related change, that account is synced with all its details
Sometimes confirmations for raw messages are received before the record
is actually saved in the database.
In this case, the code now preserves the Sent status.
It happens that an envelope is sent before it's tracked, resulting in
long delays before the envelope is marked as sent.
This commit changes the behavior of the code so that order is now
irrelevant.
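A minimal sketch (hypothetical names) of making the order irrelevant: a confirmation that arrives first is recorded, and the later insert of the raw message keeps the already-confirmed Sent status instead of resetting it.

```go
package transport

type MessageStatus int

const (
	StatusPending MessageStatus = iota
	StatusSent
)

type rawMessageStore struct {
	statuses map[string]MessageStatus // keyed by message ID
}

func newRawMessageStore() *rawMessageStore {
	return &rawMessageStore{statuses: map[string]MessageStatus{}}
}

// MarkSent is safe to call before the raw message has been saved.
func (s *rawMessageStore) MarkSent(id string) {
	s.statuses[id] = StatusSent
}

// SaveRawMessage preserves an earlier Sent confirmation rather than
// overwriting it with Pending.
func (s *rawMessageStore) SaveRawMessage(id string) {
	if s.statuses[id] == StatusSent {
		return
	}
	s.statuses[id] = StatusPending
}
```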
There are two scenarios in which we leave a community:
we either get kicked or we leave ourselves.
In case of leaving ourselves it's fine to unsubscribe from further
community updates because we deliberately chose to leave.
In case of being kicked however, this is different.
Say I'm kicked from a community because its token permissions have
changed, in this case we don't want clients to manually re-subscribe to
the community to get informed when there were further changes.
status-go should rather not unsubscribe if we know for sure we've been
kicked by someone else.
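A rough sketch of the intended behaviour with illustrative names (not the actual Messenger API): only a deliberate leave drops the subscription, while being kicked keeps it so later permission changes still reach the client.

```go
package protocol

type LeaveReason int

const (
	LeftVoluntarily LeaveReason = iota
	Kicked
)

type Messenger struct {
	subscriptions map[string]bool // community ID -> subscribed
}

func (m *Messenger) handleCommunityLeft(communityID string, reason LeaveReason) {
	if reason == LeftVoluntarily {
		delete(m.subscriptions, communityID)
		return
	}
	// When kicked, keep the subscription so further community updates
	// (e.g. relaxed token permissions) are still received without a
	// manual re-subscribe.
}
```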
fix flaky test: TestRetrieveBlockedContact
resolve conflict when rebasing onto origin/develop
Feat/sync activity center notification (#3581)
* feat: sync activity center notification
* add test
* fix lint issue
* fix failed test
* addressed feedback from sale
* fix failed test
* addressed feedback from ilmotta
go generate ./protocol/migrations/sqlite/...
feat: add updated_at for syncing activity center notification
Add support for unfurling a wider range of websites. Most code changes are
related to the implementation of a new Unfurler, an OEmbedUnfurler, which is
necessary to get metadata for Reddit URLs using oEmbed, since Reddit does not
support OpenGraph meta tags. The new unfurler will also be useful for other
websites, like Twitter. Also the user agent was changed, and now more websites
consider status-go reasonably human.
Related to issue https://github.com/status-im/status-mobile/issues/15918
Example hostnames that are now unfurleable: reddit.com, open.spotify.com,
music.youtube.com
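A rough sketch of an oEmbed-based unfurler; the endpoint field is a placeholder and the real OEmbedUnfurler in status-go is structured differently. The http.Client is injected so tests can swap its Transport and stay deterministic.

```go
package linkpreview

import (
	"encoding/json"
	"net/http"
	"net/url"
)

// oEmbedResponse holds the subset of the oEmbed JSON fields we care about.
type oEmbedResponse struct {
	Title        string `json:"title"`
	ProviderName string `json:"provider_name"`
	ThumbnailURL string `json:"thumbnail_url"`
}

type OEmbedUnfurler struct {
	client   *http.Client
	endpoint string // the provider's documented oEmbed endpoint
}

// Unfurl asks the provider's oEmbed endpoint for metadata about pageURL.
func (u OEmbedUnfurler) Unfurl(pageURL string) (*oEmbedResponse, error) {
	reqURL := u.endpoint + "?format=json&url=" + url.QueryEscape(pageURL)
	resp, err := u.client.Get(reqURL)
	if err != nil {
		return nil, err
	}
	defer resp.Body.Close()

	var preview oEmbedResponse
	if err := json.NewDecoder(resp.Body).Decode(&preview); err != nil {
		return nil, err
	}
	return &preview, nil
}
```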
Other improvements:
- Better error handling, especially because I wasn't wrapping errors correctly.
I also removed the unnecessary custom error UnfurlErr.
- I made tests truly deterministic by parameterizing the http.Client instance
and by customizing its Transport field (except for some failing conditions
where it's even good to hit the real servers).
* fix(community): stop re-joining comm when receiving a sync community msg
Fixes an issue with chats being reset. Since joining a community resaves the chats with the synced default values, it resets the state of the chats, losing the unread messages, the muted state and more.
The solution is to block the re-joining of the community. In the case of the sync, we catch that error and just continue on.
* fix(import): fix HandleImport not saving the chat
Doesn't change much, but it could have caused issues in the future; since we might have modified the chat, we make sure to save it
Also adds a test
* fix tests
- old `accounts` table is moved/mapped to `keypairs` and `keypairs_accounts`
- `keycards` table has foreign key which refers to `keypairs.key_uid`
- `Keypair` introduced as a new type
- api endpoints updated according to this change
This commit does a few things:
1) Extend create/import account endpoint to get wallet config, some of
which has been moved to the backend
2) Set up a loop for retrieving balances every 10 minutes, caching the
balances
3) Return information about which checks are not passing when trying to
join a token gated community
4) Add tests to the token gated communities
5) Fixes an issue with addresses not matching when checking for
permissions
The move of the wallet to a background task is not yet complete; I need
to publish a signal, and most likely I will disable it before merging
for now, as it's currently not used by desktop/mobile, but the PR was
getting too big. A sketch of the balance-refresh loop from point 2 follows.
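The names here are illustrative and not the actual wallet service API:

```go
package wallet

import (
	"context"
	"sync"
	"time"
)

type balanceCache struct {
	mu       sync.RWMutex
	balances map[string]string // address -> balance as a decimal string
}

func (c *balanceCache) refresh(fetch func() map[string]string) {
	fresh := fetch()
	c.mu.Lock()
	c.balances = fresh
	c.mu.Unlock()
}

// startBalanceLoop refreshes the cached balances every 10 minutes until ctx is done.
func startBalanceLoop(ctx context.Context, c *balanceCache, fetch func() map[string]string) {
	go func() {
		ticker := time.NewTicker(10 * time.Minute)
		defer ticker.Stop()
		c.refresh(fetch) // initial fill
		for {
			select {
			case <-ctx.Done():
				return
			case <-ticker.C:
				c.refresh(fetch)
			}
		}
	}()
}
```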
This is the initial implementation for the new URL unfurling requirements. The
most important one is that only the message sender will pay the privacy cost for
unfurling and extracting metadata from websites. Once the message is sent, the
unfurled data will be stored at the protocol level and receivers will just
profit and happily decode the metadata to render it.
Further development of this URL unfurling capability will be mostly guided by
issues created on clients. For the moment in status-mobile:
https://github.com/status-im/status-mobile/labels/url-preview
- https://github.com/status-im/status-mobile/issues/15918
- https://github.com/status-im/status-mobile/issues/15917
- https://github.com/status-im/status-mobile/issues/15910
- https://github.com/status-im/status-mobile/issues/15909
- https://github.com/status-im/status-mobile/issues/15908
- https://github.com/status-im/status-mobile/issues/15906
- https://github.com/status-im/status-mobile/issues/15905
### Terminology
In the code, I've tried to stick to the word "unfurl URL" to really mean the
process of extracting metadata from a website, sort of lower level. I use "link
preview" to mean a higher level structure which is enriched by unfurled data.
"link preview" is also how designers refer to it.
### User flows
1. Carol needs to see link previews while typing in the chat input field. Notice
from the diagram nothing is persisted and that status-go endpoints are
essentially stateless.
```
#+begin_src plantuml :results verbatim
Client->>Server: Call wakuext_getTextURLs
Server-->>Client: Normalized URLs
Client->>Client: Render cached unfurled URLs
Client->>Server: Unfurl non-cached URLs.\nCall wakuext_unfurlURLs
Server->>Website: Fetch metadata
Website-->>Server: Metadata (thumbnail URL, title, etc)
Server->>Website: Fetch thumbnail
Server->>Website: Fetch favicon
Website-->>Server: Favicon bytes
Website-->>Server: Thumbnail bytes
Server->>Server: Decode & process images
Server-->>Client: Unfurled data (thumbnail data URI, etc)
#+end_src
```
```
,------. ,------. ,-------.
|Client| |Server| |Website|
`--+---' `--+---' `---+---'
| Call wakuext_getTextURLs | |
| ---------------------------------------> |
| | |
| Normalized URLs | |
| <- - - - - - - - - - - - - - - - - - - - |
| | |
|----. | |
| | Render cached unfurled URLs | |
|<---' | |
| | |
| Unfurl non-cached URLs. | |
| Call wakuext_unfurlURLs | |
| ---------------------------------------> |
| | |
| | Fetch metadata |
| | ------------------------------------>
| | |
| | Metadata (thumbnail URL, title, etc)|
| | <- - - - - - - - - - - - - - - - - -
| | |
| | Fetch thumbnail |
| | ------------------------------------>
| | |
| | Fetch favicon |
| | ------------------------------------>
| | |
| | Favicon bytes |
| | <- - - - - - - - - - - - - - - - - -
| | |
| | Thumbnail bytes |
| | <- - - - - - - - - - - - - - - - - -
| | |
| |----. |
| | | Decode & process images |
| |<---' |
| | |
| Unfurled data (thumbnail data URI, etc)| |
| <- - - - - - - - - - - - - - - - - - - - |
,--+---. ,--+---. ,---+---.
|Client| |Server| |Website|
`------' `------' `-------'
```
2. Carol sends the text message with link previews in the RPC request
wakuext_sendChatMessages. status-go assumes the link previews are good
because it can't and shouldn't attempt to re-unfurl them.
```
#+begin_src plantuml :results verbatim
Client->>Server: Call wakuext_sendChatMessages
Server->>Server: Transform link previews to\nbe proto-marshalled
Server->DB: Write link previews serialized as JSON
Server-->>Client: Updated message response
#+end_src
```
```
,------. ,------. ,--.
|Client| |Server| |DB|
`--+---' `--+---' `+-'
| Call wakuext_sendChatMessages| |
| -----------------------------> |
| | |
| |----. |
| | | Transform link previews to |
| |<---' be proto-marshalled |
| | |
| | |
| | Write link previews serialized as JSON|
| | -------------------------------------->
| | |
| Updated message response | |
| <- - - - - - - - - - - - - - - |
,--+---. ,--+---. ,+-.
|Client| |Server| |DB|
`------' `------' `--'
```
3. The message was sent over waku and persisted locally in Carol's device. She
should now see the link previews in the chat history. There can be many link
previews shared by other chat members, therefore it is important to serve the
assets via the media server to avoid overloading the ReactNative bridge with
lots of big JSON payloads containing base64 encoded data URIs (maybe this
concern is meaningless for desktop). When a client is rendering messages with
link previews, they will have the field linkPreviews, and the thumbnail URL
will point to the local media server.
```
#+begin_src plantuml :results verbatim
Client->>Server: GET /link-preview/thumbnail (media server)
Server->>DB: Read from user_messages.unfurled_links
Server->Server: Unmarshal JSON
Server-->>Client: HTTP Content-Type: image/jpeg/etc
#+end_src
```
```
,------. ,------. ,--.
|Client| |Server| |DB|
`--+---' `--+---' `+-'
| GET /link-preview/thumbnail (media server)| |
| ------------------------------------------> |
| | |
| | Read from user_messages.unfurled_links|
| | -------------------------------------->
| | |
| |----. |
| | | Unmarshal JSON |
| |<---' |
| | |
| HTTP Content-Type: image/jpeg/etc | |
| <- - - - - - - - - - - - - - - - - - - - - |
,--+---. ,--+---. ,+-.
|Client| |Server| |DB|
`------' `------' `--'
```
### Some limitations of the current implementation
The following points will become separate issues in status-go that I'll work on
over the next couple weeks. In no order of importance:
- Improve how multiple links are fetched; retries on failure and testing how
unfurling behaves around the timeout limits (deterministically, not by making
real HTTP calls as I did). https://github.com/status-im/status-go/issues/3498
- Unfurl favicons and store them in the protobuf too.
- For this PR, I added unfurling support only for websites with OpenGraph
https://ogp.me/ meta tags. Other unfurlers will be implemented on demand. The
next one will probably be for oEmbed https://oembed.com/, the protocol
supported by YouTube, for example.
- Resize and/or compress thumbnails (and favicons). Oftentimes, thumbnails are
huge for the purposes of link previews. There is already support for
compressing JPEGs in status-go, but I prefer to work with compression in a
separate PR because I'd like to also solve the problem for PNGs (probably
convert them to JPEGs, plus compress them). This would be a safe choice for
thumbnails, favicons not so much because transparency is desirable.
- Editing messages is not yet supported.
- I haven't coded any artificial limit on the number of previews or on the size
of the thumbnail payload. This will be done in a separate issue. I have heard
the ideal solution may be to split messages into smaller chunks of ~125 KiB
because of libp2p, but that might be too complicated at this stage of the
product (?).
- Link preview deletion.
- For the moment, OpenGraph metadata is extracted by requesting data for the
English language (falling back to whatever is available). In the future, we'll
want to unfurl by respecting the user's local device language. Some websites,
like GoDaddy, are already localized based on the device's IP, but many aren't.
- The website's description text should be limited by a certain number of
characters, especially because it's outside our control. Exactly how much has
not been decided yet, so it'll be done separately.
- URL normalization can be tricky, so I implemented only the basics to help with
caching. For example, the URLs https://status.im and HTTPS://status.im are
considered identical. Also, a URL is considered valid for unfurling if its TLD
exists according to publicsuffix.EffectiveTLDPlusOne (see the sketch below). This was essential,
otherwise the default Go url.Parse approach would consider many invalid URLs
valid, and thus the server would waste resources trying to unfurl URLs that
can't be unfurled.
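A minimal sketch of the normalization and validity check just described; the real helpers in status-go may differ:

```go
package linkpreview

import (
	"net/url"
	"strings"

	"golang.org/x/net/publicsuffix"
)

// normalizeURL lower-cases the scheme and host so that
// "HTTPS://status.im" and "https://status.im" share one cache entry.
func normalizeURL(rawURL string) (string, error) {
	u, err := url.Parse(rawURL)
	if err != nil {
		return "", err
	}
	u.Scheme = strings.ToLower(u.Scheme)
	u.Host = strings.ToLower(u.Host)
	return u.String(), nil
}

// isUnfurleable rejects hosts whose TLD is unknown, so url.Parse's lenient
// behaviour doesn't make the server try to unfurl garbage.
func isUnfurleable(rawURL string) bool {
	u, err := url.Parse(rawURL)
	if err != nil {
		return false
	}
	_, err = publicsuffix.EffectiveTLDPlusOne(u.Hostname())
	return err == nil
}
```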
### Other requirements
- If the message is edited, the link previews should reflect the edited text,
not the original one. This has been aligned with the design team as well.
- If the website's thumbnail or the favicon can't be fetched, just ignore them.
The only mandatory piece of metadata is the website's title and URL.
- Link previews in clients should be generated in near real-time, that is, as
the user types, previews are updated. In mobile this performs very well, and
it's what other clients like WhatsApp, Telegram, and Facebook do.
### Decisions
- While the user is typing in the input field, the client is constantly (debounced)
asking status-go to parse the text and extract normalized URLs and then the
client checks if they're already in its in-memory cache. If they are, no RPC
call is made. I chose this approach to achieve the best possible performance
in mobile and avoid the whole RPC overhead, since the chat experience is
already not smooth enough. The mobile client uses URLs as cache keys in a
hashmap, i.e. if the key is present, it means the preview is readily available
(naive, but good enough for now). This decision also gave me more flexibility
to find the best UX at this stage of the feature.
- Due to the requirement that users should be able to see independent loading
indicators for each link preview, when status-go can't unfurl a URL, it
doesn't return it in the response.
- As an initial implementation, I added the BLOB column unfurled_links to the
user_messages table. The preview data is then serialized as JSON before being
stored in this column. I felt that creating a separate table and the related
code for this initial PR would be inconvenient. Is that reasonable to you?
Once things stabilize I can create a proper table if we want to avoid this
kind of solution with serialized columns.
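An illustrative sketch of the serialized-column approach (link previews as JSON in the `user_messages.unfurled_links` BLOB); the struct fields and helper names are assumptions, not the exact status-go code:

```go
package persistence

import (
	"database/sql"
	"encoding/json"
)

// LinkPreview is a simplified stand-in for the real preview struct.
type LinkPreview struct {
	URL          string `json:"url"`
	Title        string `json:"title"`
	ThumbnailURL string `json:"thumbnailUrl"`
}

func saveLinkPreviews(db *sql.DB, messageID string, previews []LinkPreview) error {
	blob, err := json.Marshal(previews)
	if err != nil {
		return err
	}
	_, err = db.Exec("UPDATE user_messages SET unfurled_links = ? WHERE id = ?", blob, messageID)
	return err
}

func loadLinkPreviews(db *sql.DB, messageID string) ([]LinkPreview, error) {
	var blob []byte
	if err := db.QueryRow("SELECT unfurled_links FROM user_messages WHERE id = ?", messageID).Scan(&blob); err != nil {
		return nil, err
	}
	if len(blob) == 0 {
		return nil, nil
	}
	var previews []LinkPreview
	if err := json.Unmarshal(blob, &previews); err != nil {
		return nil, err
	}
	return previews, nil
}
```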
* sync local deleted messages
* rebase
* add REPLACE
* fix lint
* defer rows.Close() / rename function
* add local pair test
* replace unused clock with _
There were a couple of issues in how we handle pinned messages:
1) The clock of the message was only checked when saving, meaning that the
client could potentially receive updates that were not to be
processed (see the sketch after this list).
2) We relied on the client to generate a notification for a pinned
message by sending a normal message through the wire. This PR changes
the behavior so that the notification is generated locally, either in
response to a network event or a client event.
3) When deleting a message, we pull all the replies/pinned notifications
and send them over to the client so they know that those messages
need updating.
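A short sketch of the clock comparison from point 1, with illustrative types:

```go
package protocol

type PinMessage struct {
	MessageID string
	Pinned    bool
	Clock     uint64
}

// shouldApplyPin drops updates that are older than the state we already hold,
// so stale pin/unpin messages never reach the client.
func shouldApplyPin(existing *PinMessage, incoming PinMessage) bool {
	if existing == nil {
		return true
	}
	return incoming.Clock > existing.Clock
}
```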
- setting clock when adding/updating accounts
- syncing/backing up accounts with the time they were actually updated, instead
of the time when dispatching is done, as it was before
Changes applied here introduce:
- improvements to sync wallet accounts among devices (including all account types)
- backing up wallet accounts to and fetching them from waku (information about received
wallet accounts is sent to the client via the `waku.backedup.wallet-account` signal)
* Community request to join changes
* Fix read state for request to join notification
* Bring back deleted notification when updated with response
* Update Request timeout to 7 days
* Update VERSION
* fix(image-album): make sure to delete all images part of an album
* test(delete): add test that deletes a message part of an album
* test(delete): add test where the signal to delete images arrives afterwards;
also adds the handling of deleteForMe
* fix(delete): add album deletion handling for deleteForMe
This adds an additional check for collectibles when community
permissions are validated.
Specifically, this uses OpenSea to request all NFTs for a given
owner wallet and a list of contract addresses (collectibles).
This adds checks to `HandleCommunityRequestToJoin` and
`AcceptRequestToJoinCommunity` that ensure a given user's revealed
wallet addresses own the token funds required by a community.
When a community has token permissions of type `BECOME_MEMBER`, the
following happens when the owner receives a request:
1. Upon verifying provided wallet addresses by the requester, the owner
node accumulates all token funds related to the given wallets that
match the token criteria in the configured permissions
2. If the requester does not meet the necessary requirements, the
request to join will be declined. If the requester does have the
funds, they'll either be automatically accepted into the community, or
enter the next stage where an owner needs to manually accept the
request.
3. If the community does not automatically accept users, then the funds
check will happen again when the owner tries to manually accept the
request (see the sketch after this list). If the necessary funds do not
exist at this stage, the request will be declined
4. Upon accepting, whether automatically or manually, the owner adds the
requester's wallet addresses to the `CommunityDescription`, such that
they can be retrieved later when doing periodic checks or when
permissions have changed.
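A simplified sketch of the funds check from steps 1-3; the types are illustrative, and the real permission checker in status-go supports more criteria:

```go
package communities

import "math/big"

// TokenCriteria is a stand-in for one token requirement of a BECOME_MEMBER permission.
type TokenCriteria struct {
	ContractAddress string
	MinAmount       *big.Int
}

// hasRequiredFunds expects balances accumulated across all of the requester's
// revealed wallet addresses, keyed by contract address, and returns true only
// if every criterion is satisfied.
func hasRequiredFunds(criteria []TokenCriteria, balances map[string]*big.Int) bool {
	for _, c := range criteria {
		total, ok := balances[c.ContractAddress]
		if !ok || total.Cmp(c.MinAmount) < 0 {
			return false
		}
	}
	return true
}
```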
This commit extends `CommunityRequestToJoin` with `RevealedAddresses`, which represent wallet addresses and signatures provided by the sender, to prove to a community owner the ownership of those wallet addresses.
**Note: This only works with keystore files managed by status-go**
At a high level, the following happens:
1. The user instructs Status to send a request to join a community. By adding a password hash to the instruction, Status will try to unlock the user's keystore and verify each wallet account.
2. For every verified wallet account, a signature is created over the following payload, using each wallet's private key (see the sketch below)
``` keccak256(chatkey + communityID + requestToJoinID) ``` A map of walletAddress->signature is then attached to the community request to join, which will be sent to the community owner
3. The owner node receives the request, and if the community requires users to hold tokens to become a member, it will check and verify whether the given wallet addresses are indeed owned by the sender. If any signature provided by the request cannot be recovered, the request is immediately declined by the owner.
4. The verified addresses are then added to the owner node's database such that, once the request should be accepted, the addresses can be used to check on chain whether they own the necessary funds to fulfill the community's permissions
The checking of required funds is **not** part of this commit. It will be added in a follow-up commit.
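A hedged sketch of the signature from step 2, using go-ethereum's crypto helpers; the surrounding plumbing in status-go differs:

```go
package communities

import (
	"crypto/ecdsa"

	"github.com/ethereum/go-ethereum/crypto"
)

// signRevealedAddress signs keccak256(chatkey + communityID + requestToJoinID)
// with one wallet's private key; the owner later recovers the address from the
// signature to verify ownership.
func signRevealedAddress(walletKey *ecdsa.PrivateKey, chatKey, communityID, requestToJoinID []byte) ([]byte, error) {
	digest := crypto.Keccak256(chatKey, communityID, requestToJoinID)
	return crypto.Sign(digest, walletKey)
}
```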
Also, make AccountManager a dependency of Messenger.
This is needed for community token permissions as we'll need a way to access wallet accounts
and sign messages when sending requests to join a community.
The APIs have been mostly taken from GethStatusBackend and personal service.
* Outgoing contact requests
* Test fix
* Test fix
* Fixes
* Bugfixes
* Bugfixes
* Almost there
* Removed the activity center notification
* Test update
* Almost ready
* Fixes
* Fixes
* feat: Add seen/unseen activity center state
* feat: ActivityCenterState for grouping ActivityCenter unread messages cnt and seen state
* feat: always use messenger's addActivityCenterNotification & add state to the response
* Remove unused activity center endpoints from api and fix test
* fix media-server sleep wake up issue
We now use waku v2, and hence the messenger was nil.
Since it was nil, the logic in place responsible for triggering app state events was not firing, and hence the media server would become unresponsive after a sleep event.
This commit fixes that.
Co-Authored-By: Andrea Maria Piana <andrea.maria.piana@gmail.com>
---------
Co-authored-by: Andrea Maria Piana <andrea.maria.piana@gmail.com>
This commit fixes an issue where when accepting a contact request
the other end would display an extra notification.
It also changes WaitOnResponse to collect results. This should make
tests less flaky, since sometimes messages are processed in different
batches.
Now we need to be thoroughly exact about what we expect from the response (i.e.
use == instead of >, otherwise the same behavior applies).
This uncovered a couple of issues with messenger.Merge, so I have moved
the struct to use a map-based collection instead of an array.
* fix: Accepting or dismissing contact request should mark AC notification as read
* fix: Accepting or declining community request should mark AC notification as read
* fix: Read notification status for identity verification requests
* fix: Save whole notification object while handling contact requests
* fix: Remove unused functions from activity_center_persistence
* fix: Use Accepted and Dismissed flags for AC notifications with CTA
* fix: Mark notification as unread when we handle accepted verification request
* fix: Replace Warn with Error on fail to save notification, test fixes
* fix: Fix conditions for fetching AC notifications from the db
* fix: Review fixes for err name and conditions array
This adds the functionality that history archives continue to be imported
in case the import has been interrupted the last time the app/client
was running.
This typically happens when users don't wait for an ongoing import to finish,
which sometimes can take a while. Users then close the app/kill the client
which leaves the database in a state where there's downloaded archives that
haven't been fully imported.
Prior to this change, the node had to wait until it received a new
magnetlink that it hadn't seen before, before it would process imports again.
This could take several days.
Now, it will check on startup if there are any archives left to be imported
and resumes the import from there.
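A rough sketch of the startup resume with hypothetical names; the real archive manager in status-go is more involved:

```go
package communities

// archivePersistence is an illustrative interface over the archives table.
type archivePersistence interface {
	DownloadedButNotImportedArchives() ([]string, error)
}

type ArchiveManager struct {
	persistence   archivePersistence
	importArchive func(id string) error
}

// resumePendingImports runs once on startup: any archive that was downloaded
// but never fully imported is processed again, instead of waiting for a new,
// previously unseen magnetlink.
func (m *ArchiveManager) resumePendingImports() error {
	pending, err := m.persistence.DownloadedButNotImportedArchives()
	if err != nil {
		return err
	}
	for _, id := range pending {
		if err := m.importArchive(id); err != nil {
			return err
		}
	}
	return nil
}
```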
Because `QuotedMessage` doesn't include imported message data,
some of the author information in imported messages is lost in
frontends.
This commit adds a `discordMessage` (soon to be replaced by `importedMessage`)
to `QuotedMessage`, although only hydrated with a subset of data,
namely the author display name and avatar URL, as those are the only ones
needed by the front-end at the moment.
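A sketch of the resulting shape with illustrative field names; the quoted message carries only the subset of imported-message data the front-end needs today:

```go
package common

type DiscordMessageAuthor struct {
	Name      string `json:"name"`
	AvatarURL string `json:"avatarUrl"`
}

// QuotedDiscordMessage is hydrated with only the fields the front-end needs.
type QuotedDiscordMessage struct {
	Author DiscordMessageAuthor `json:"author"`
}

type QuotedMessage struct {
	ID             string                `json:"id"`
	Text           string                `json:"text"`
	DiscordMessage *QuotedDiscordMessage `json:"discordMessage,omitempty"`
}
```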