Prior to this commit a control node would add the revealed addresses to
the member struct on the community description, which exposes all those
addresses to the public.
We don't want that. Revealed addresses are exclusively shared with
control nodes and should stay there (although, they might be privately
shared among token masters, see
https://github.com/status-im/status-desktop/issues/11610).
In this commit, we no longer add the revealed addresses to the community
description. The addresses are already stored in the requestToJoin
database table so we can take them from there if we need them.
Closes: https://github.com/status-im/status-desktop/issues/11573
This is a bigger change in how community membership requests are handled
among admins, token masters, owners, and control nodes.
Prior to this commit, all privileged users, also known as
`EventSenders`, were able to accept and reject community membership
requests and those changes would be applied by all users.
This commit changes this behaviour such that:
1. EventSenders can make a decision (accept, reject), but merely forward
their decision to the control node, which ultimately has to confirm it
2. EventSenders no longer add members to, or remove members from,
communities
3. When an event sender has signaled a decision, the membership request
enters a pending state (acceptedPending or rejectedPending); see the
sketch after this list
4. Once a decision has been made by one event sender, no other event
sender can override it
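A minimal sketch of the pending-state guard described in points 3 and 4. The state names and the `RequestToJoin` struct below are simplified assumptions for illustration; they do not mirror the exact types in status-go.
```
package main

import (
	"errors"
	"fmt"
)

// Hypothetical request-to-join states, mirroring the pending states
// introduced by this change (names are illustrative only).
type RequestToJoinState int

const (
	RequestToJoinStatePending RequestToJoinState = iota
	RequestToJoinStateAcceptedPending
	RequestToJoinStateRejectedPending
	RequestToJoinStateAccepted
	RequestToJoinStateRejected
)

type RequestToJoin struct {
	ID    string
	State RequestToJoinState
}

var ErrAlreadyDecided = errors.New("another event sender already signaled a decision")

// signalDecision is what an event sender (admin/token master/owner without
// the community key) would do: it only moves the request into a pending
// state; the control node later confirms it.
func signalDecision(r *RequestToJoin, accept bool) error {
	if r.State != RequestToJoinStatePending {
		// Point 4: a decision signaled by one event sender cannot be
		// overridden by another one.
		return ErrAlreadyDecided
	}
	if accept {
		r.State = RequestToJoinStateAcceptedPending
	} else {
		r.State = RequestToJoinStateRejectedPending
	}
	return nil
}

func main() {
	r := &RequestToJoin{ID: "0xabc", State: RequestToJoinStatePending}
	fmt.Println(signalDecision(r, true))  // <nil>, request is now acceptedPending
	fmt.Println(signalDecision(r, false)) // error: already decided
}
```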
This implementation is covered with a bunch of tests:
- Ensure that decision made by event sender is shared with other event
senders
- `testAcceptMemberRequestToJoinResponseSharedWithOtherEventSenders()`
- `testRejectMemberRequestToJoinResponseSharedWithOtherEventSenders()`
- Ensure membership request stays pending until the control node has
confirmed the decision made by event senders
- `testAcceptMemberRequestToJoinNotConfirmedByControlNode()`
- `testRejectMemberRequestToJoinNotConfirmedByControlNode()`
- Ensure that decision made by event sender cannot be overridden by other
event senders
- `testEventSenderCannotOverrideRequestToJoinState()`
These test cases live in three test suites for different event sender
types respectively
- `OwnerWithoutCommunityKeyCommunityEventsSuite`
- `TokenMasterCommunityEventsSuite`
- `AdminCommunityEventsSuite`
In addition to the changes mentioned above, there's also a smaller
change that ensures membership requests do *not* attach revealed wallet
addresses when the requests are sent to event senders (in addition to
control nodes).
Requests sent to a control node will still include revealed addresses, as
the control node needs them to verify token permissions.
This commit does not yet handle the case of event senders attempting to
kick and ban members.
Similar to accepting and rejecting membership requests, kicking and
banning need a new pending state. However, we don't track such state in
local databases yet, so those two cases will be handled in a future commit
to keep this one from growing larger.
* feat: proposal for collecting community metrics
https://github.com/status-im/status-desktop/issues/11152
* feat: collecting community message metrics with test
* feat: implement both strategies for fetching community metrics
* fix: review fixes
* fix: calc counts for timestamps
If a message is sent with only one image, no album is generated (there is no albumID). The notification handling code then didn't use the right ID, because it thought it had to use the AlbumID as the message ID.
- distribute ratchet keys at both community and channel levels
- use explicit `HashRatchetGroupID` in the encryption layer, instead of
inheriting `groupID` from `CommunityID`
- populate `HashRatchetGroupID` with `CommunityID+ChannelID` for
channels, and `CommunityID` for the whole community (see the sketch
below)
- hydrate channels with members; channel members are now subset of
community members
- include channel permissions in periodic permissions check
closes: status-im/status-desktop#10998
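A minimal sketch of the group ID composition described above; the function name and the plain byte concatenation are assumptions for illustration, not the exact status-go implementation.
```
package main

import "fmt"

// hashRatchetGroupID composes the encryption group ID as described above:
// CommunityID+ChannelID for a channel, and just CommunityID for the whole
// community. The byte layout here is an illustrative assumption.
func hashRatchetGroupID(communityID, channelID []byte) []byte {
	id := make([]byte, 0, len(communityID)+len(channelID))
	id = append(id, communityID...)
	id = append(id, channelID...) // empty for community-wide keys
	return id
}

func main() {
	community := []byte{0x01, 0x02}
	channel := []byte{0xaa}
	fmt.Printf("community-level: %x\n", hashRatchetGroupID(community, nil))
	fmt.Printf("channel-level:   %x\n", hashRatchetGroupID(community, channel))
}
```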
This component decouples key distribution from the Messenger, enhancing
code maintainability, extensibility and testability.
It also alleviates the need to impact all methods potentially affecting
encryption keys.
Moreover, it allows key distribution inspection for integration tests.
part of: status-im/status-desktop#10998
**This is a breaking change!**
Prior to this commit we had `AddCommunityToken(token *communities.CommunityToken,
croppedImage CroppedImage)`, which we used to
1. add a `CommunityToken` to the user's database and
2. to create a `CommunityTokenMetadata` from it which is then added to
the community's `CommunityDescription` and published to its members
However, I've then discovered that we need to separate these two things,
such that we can deploy a community token, then add it to the database
only for tracking purposes, **then** add it to the community description
(and propagate to members) once we know that the deploy tx indeed went
through.
To implement this, this commit introduces a new API
`SaveCommunityToken(token *communities.CommunityToken, croppedImage
CroppedImage)` which adds the token to the database only and doesn't
touch the community description.
The `AddCommunityToken` API is then changed such that it's exclusively used
for adding an already saved `CommunityToken` to the community
description so it can be published to members. Hence, the signature is
now `AddCommunityToken(communityID string, chainID int, address
string)`, which makes this a breaking change.
Clients that used `AddCommunityToken()` before now need to ensure that
they first call `SaveCommunityToken()` as `AddCommunityToken()` will
fail otherwise.
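A minimal sketch of the new two-step flow from a caller's perspective; the types, the API handle and the deploy helper are placeholders, only the method names `SaveCommunityToken` and `AddCommunityToken` come from the text above.
```
package main

import "fmt"

// Placeholder types standing in for status-go's communities.CommunityToken,
// CroppedImage and the token API.
type CommunityToken struct {
	CommunityID string
	ChainID     int
	Address     string
}
type CroppedImage struct{ ImagePath string }

type tokensAPI struct{}

func (tokensAPI) SaveCommunityToken(t *CommunityToken, img CroppedImage) error {
	fmt.Println("saved to local DB only:", t.Address)
	return nil
}
func (tokensAPI) AddCommunityToken(communityID string, chainID int, address string) error {
	fmt.Println("published to community description:", address)
	return nil
}
func waitForDeployTx(t *CommunityToken) error { return nil } // stand-in for tx confirmation

func main() {
	api := tokensAPI{}
	token := &CommunityToken{CommunityID: "0xc0ffee", ChainID: 1, Address: "0xdead"}

	// 1. Track the token locally; the community description is untouched.
	_ = api.SaveCommunityToken(token, CroppedImage{})
	// 2. Deploy the contract and wait until the tx went through (placeholder).
	_ = waitForDeployTx(token)
	// 3. Only now publish it to members via the community description.
	_ = api.AddCommunityToken(token.CommunityID, token.ChainID, token.Address)
}
```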
* chore: make the owner without the community private key behave like an admin
* Add test for the owner without community key
* chore: refactor Community fn names related to the roles
If the user followed the onboarding flow to recover their account using a seed phrase or keycard,
then the `ProcessBackedupMessages` property of the node config JSON object should be set to
`true`; otherwise it should be set to `false` or omitted.
- Fixed redundant permissions check. If community is set to auto-accept,
then permissions would be checked twice, in
`HandleCommunityRequestToJoin` and `AcceptRequestToJoinCommunity`.
Mitigated it by returning from `HandleCommunityRequestToJoin` immediately
in case of auto-accept.
- Extracted `accountsSatisfyPermissionsToJoin` to remove code
duplication and simplify the logic.
* feat: add api to remove private key and separate owner from private key ownership
For https://github.com/status-im/status-desktop/issues/11475
* feat: introduce IsControlNode for Community
* feat: remove community private key from syncing
* feat: add IsControlNode flag to Community json serialisation
* Update protocol/protobuf/pairing.proto
Co-authored-by: Jonathan Rainville <rainville.jonathan@gmail.com>
---------
Co-authored-by: Jonathan Rainville <rainville.jonathan@gmail.com>
* mute and unmute all community chats when community mute status changes
* unmute community when at least one channel is unmuted
* fix: save community, extend the function to save muted state and mute duration
chore:
- add CommunityEventsMessage
- refactor community_admin_event to accept a list of events and patch a CommunityDescription
- save/read community events into/from database
- publish and handle community events message
- fixed admin category tests
- rename AdminEvent to Events or CommunityEvents
Adds airdropAddress to the request to join params and an is_airdrop_address flag in the communities_requests_to_join_revealed_addresses table.
This airdropAddress is used by the owner to know which address to use when airdropping.
I have encountered a crash in the app after syncing; it looks like a
community has been created without a Config.
This crashes the app after the user logs in.
This commit prevents the app from crashing, but does not fix the
underlying issue (that's something I will have to investigate).
Added tests to validate the behavior.
Check community exists
* feat: don't remove sent mutual state messages on accepting a CR
* fix: don't send mutual state message for a new contact
* chore: move mutual state messages to `addContact`
* fix: use one chat for mutual state messages and contact requests
* fix: change `added` mutual state update messages to `accepted`
* feat: Use different content type for each mutual state event system message
* chore: use constants for mutual event system messages test, review fixes
* chore: fix tests related to local contacts map
Modify the API to also handle ERC20 tokens.
Modify the community_tokens table: keep supply as a string, since a string is easily convertible to bigint.BigInt.
Use bigint.BigInt for supply functions and fields.
Issue #11129
This is the second step of improvements over keypairs/keycards/accounts.
- `SyncKeycardAction` protobuf removed
- `SyncKeypair` protobuf is used for syncing keycards state as well as for all
keycards related changes
- `last_update_clock` column removed from the `keypairs` table because, as with
accounts, any keycard related change is actually a change made on a related
keypair, thus a keypair's clock keeps the clock of the last change
- `position` column added to the `keypairs` table, needed to display keycards in
the same order across devices
This is the first step of improvements over keypairs/keycards/accounts.
- `SyncKeypairFull` protobuf removed
- `SyncKeypair` protobuf is used for syncing all but the watch only accounts
- `SyncAccount` is used only for syncing watch only accounts
- related keycards are synced together with a keypair
- on any keypair change (either just a keypair name change or any change made over an
account which belongs to that keypair) the entire keypair is synced, including related keycards
- on any watch only account related change, that account is synced with all its details
This commit extends the `AddCommunityToken` API to also expect an
optional `CroppedImage`, which will be used instead of the `ImageBase64`
path provided by `CommunityToken`, to calculate the actual base64
encoded image.
* feat(share-links): Add protobuf and encode/decode url data methods
* feat(new-links-format): Adds generators for new links format
* feat: add parsing for new links format
* feat: add messenger-level pubkey serialization and tests
* feat: fix and test CreateCommunityURLWithChatKey
* feat: impl and test parseCommunityURLWithChatKey
* feat: fix and test CreateCommunityURLWithData
* feat: impl and test parseCommunityURLWithData (not working)
* feat: UrlDataResponse as response share urls api
* feat: impl& tested ShareCommunityChannelURLWithChatKey
* feat: impl & tested ParseCommunityChannelURLWithChatKey
* fix: bring urls to new format
* feat: add regexp for community channel urls
* feat: impl & test contact urls with chatKey, Ens and data
* fix: encodeDataURL/encodeDataURL patch from Samyoul
* fix: fix unmarshalling protobufs
* fix: fix minor issues, temporary comment TestParseUserURLWithENS
* fix: allow url to contain extra `#` in the signature
* fix: check signatures with SigToPub
* chore: lint fixes
* fix: encode the signature
* feat: Check provided channelID is Uuid
* fix(share-community-url): Remove if community encrypted scope
* fix: review fixes
* fix: use proto.Unmarshal instead of json.Marshal
* feat(share-urls): Adds TagsIndices to community data
* feat: support tag indices to community url data
---------
Co-authored-by: Boris Melnik <borismelnik@status.im>
When deleting a message for me, the image url wasn't preserved,
resulting in the image disappearing on the client side.
This commit adds the processing of returned messages so that the image
is preserved.
- Add ERC20 contract
- Add decimals field to community_tokens db table
- Adjusting API to handle assets deployment
- Add decimals field to CommunityTokenMetadata
Issue #10987
The nameserver is passed by the OS on creation/restore; this commit adds the
ability to pass it at login time.
We don't want to store it on disk since that's bound to change, and
currently there's a bug in Go that prevents getting the DNS from the
system on Android.
There's only one scenario in which a `RevealedAccount` will have an
empty `ChainIDs` list attached to it:
When the community in question requires users to satisfy certain
criteria to join, and the user's wallet does not own the necessary funds
on any of the supported chains.
If there are **no** permissions to join on the community, then we want
to reveal all (selected) accounts with all supported chainIDs.
This is necessary so that, once the community *does* become
permissioned, it'll have address + chain information from all joined
members.
Closes: https://github.com/status-im/status-desktop/issues/11255
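A minimal sketch of the rule described above, with simplified types; the actual permission and funds checks in status-go are more involved.
```
package main

import "fmt"

// Simplified stand-in for the real revealed account type.
type RevealedAccount struct {
	Address  string
	ChainIDs []uint64
}

// chainIDsToReveal applies the rule described above: with no permissions to
// join, reveal every selected account with all supported chain IDs; with
// permissions, only the chains where the account holds matching funds
// (which may legitimately be an empty list).
func chainIDsToReveal(hasPermissionsToJoin bool, supported []uint64, chainsWithFunds []uint64) []uint64 {
	if !hasPermissionsToJoin {
		return supported
	}
	return chainsWithFunds
}

func main() {
	supported := []uint64{1, 10, 42161}
	acc := RevealedAccount{Address: "0xabc"}

	acc.ChainIDs = chainIDsToReveal(false, supported, nil)
	fmt.Println("open community:", acc.ChainIDs) // all supported chains

	acc.ChainIDs = chainIDsToReveal(true, supported, nil)
	fmt.Println("gated, no funds:", acc.ChainIDs) // empty list
}
```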
This commit adds new tables to the database and APIs in `Messenger` and
communities `Manager` to store `CheckChannelPermissionsResponse`s.
The responses are stored whenever channel permissions have been checked.
The reason we're doing this is so that clients can retrieve the last
known channel permission state before waiting for onchain checks to
finish.
Sometimes a confirmation for a raw message is received before the record
is actually saved in the database.
In this case, the code will preserve the Sent status.
Improve `RequestToJoinCommunity` to accept `Addresses` in the request. If `Addresses` is not empty, only the selected addresses are passed to the owner; the others are ignored.
It does not validate that the addresses in the slice are part of the user's wallet; those not part of the wallet are just ignored.
This API is used to get a permission status of all channels of a given
community.
Clients can use this API to get the provided information for all
community channels with a single RPC call instead of doing one call
for each channel separately.
Similar to `CheckPermissionToJoin()` we now get
a `CheckChannelPermissions()` API.
It will rely on the same `PermissionResponse` types, but gives
information about both `ViewOnlyPermissions` and
`ViewAndPostPermissions`.
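A hedged sketch of what the response shape could look like; only the names `ViewOnlyPermissions` and `ViewAndPostPermissions` come from the text above, everything else is an assumption for illustration.
```
package main

import (
	"encoding/json"
	"fmt"
)

// Hedged sketch of a per-channel permission check result.
type CheckPermissionsResult struct {
	Satisfied bool `json:"satisfied"`
}

type CheckChannelPermissionsResponse struct {
	ViewOnlyPermissions    CheckPermissionsResult `json:"viewOnlyPermissions"`
	ViewAndPostPermissions CheckPermissionsResult `json:"viewAndPostPermissions"`
}

func main() {
	resp := CheckChannelPermissionsResponse{
		ViewOnlyPermissions:    CheckPermissionsResult{Satisfied: true},
		ViewAndPostPermissions: CheckPermissionsResult{Satisfied: false},
	}
	out, _ := json.MarshalIndent(resp, "", "  ")
	fmt.Println(string(out))
}
```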
This seems to be a bug that was introduced when two features, admin
permissions and "always reveal wallet accounts", were merged.
We need to make sure we **first** check the revealed accounts and only
**then** do we perform permission checks on them. Otherwise we can run
into scenarios where fake addresses are used and users will be accepted
to the community.
Turns out that, when we return with an error instead of
a non-satisfied check permissions response, we can run into cases where
members that should be kicked are not kicked.
Change smart contract with new API.
Update gas amount for deployment.
Add Burn() and EstimateBurn() functions.
Add RemainingSupply() functions.
Issue #10816
It happens that an envelope is sent before it's tracked, resulting in
long delays before the envelope is marked as sent.
This commit changes the behavior of the code so that order is now
irrelevant.
When we reply to our own message with an image, we didn't set the URL of
the image of the message, which resulted in the image not being
displayed correctly.
This commit does a few things:
- Adds a migration that adds chainids to communities_request_to_join_revealed_addresses
- Removes RevealedAddress in favor of RevealedAccount which is now a struct that contains the revealed address, as well as the signature and a list of chain IDs on which to check for user funds
- Changes the logic of sending requests to join a community, such that after creating address signatures, the user node will also check which of the addresses has funds on which networks for the community's token permissions, and add the chain IDs to the RevealedAccount
- Updates checkPermissionToJoin() such that only relevant chain IDs are used when checking the user's funds. Chain IDs are retrieved from RevealedAccounts and matched against token permission criteria chain IDs
There are two scenarios in which we're leaving a community:
we either get kicked or we leave ourselves.
In the case of leaving ourselves it's fine to unsubscribe from further
community updates because we deliberately chose to leave.
In the case of being kicked, however, this is different.
Say I'm kicked from a community because its token permissions have
changed; in this case we don't want clients to manually re-subscribe to
the community to get informed when there are further changes.
status-go should rather not unsubscribe if we know for sure we've been
kicked by someone else.
fix flaky test: TestRetrieveBlockedContact
resolve conflict when rebasing on origin/develop
Feat/sync activity center notification (#3581)
* feat: sync activity center notification
* add test
* fix lint issue
* fix failed test
* addressed feedback from sale
* fix failed test
* addressed feedback from ilmotta
go generate ./protocol/migrations/sqlite/...
feat: add updated_at for syncing activity center notification
* feat: add mutual state update system message
* feat: send mutual state update on accepting CR
* feat: send mutual state update when removing a contact
* fix: don't send MutualStateUpdateMessage over wire
* fix: mutual state update message text fixed
* fix: new clock to ensure system message after CR and add chat to the response
* feat: add AC notification for contact removal
* feat: replace "sent" mutual state system message with "added"
Extended the migration process with a generic way of applying custom
migration code on top of the SQL files. The implementation provides
a safer way to run GO code along with the SQL migrations and possibility
of rolling back the changes in case of failure to keep the database
consistent.
This custom Go migration is needed to extract the status from
the JSON blob receipt and store it in the transfers table.
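A minimal sketch of the general pattern described above (Go code run alongside SQL migrations with rollback on failure); all names here are hypothetical and do not reflect the actual hook added in status-go or the migrate library.
```
package main

import (
	"database/sql"
	"fmt"
)

// customStep pairs an SQL migration (identified by file name) with Go code
// that runs in the same transaction; all names are illustrative.
type customStep struct {
	AfterFile string
	Run       func(tx *sql.Tx) error
}

// runWithCustomSteps applies SQL migrations and, after the matching file,
// runs the associated Go code. If either part fails, the transaction is
// rolled back so the database stays consistent.
func runWithCustomSteps(db *sql.DB, files []string, applySQL func(*sql.Tx, string) error, steps []customStep) error {
	for _, f := range files {
		tx, err := db.Begin()
		if err != nil {
			return err
		}
		if err := applySQL(tx, f); err != nil {
			_ = tx.Rollback()
			return fmt.Errorf("sql migration %s: %w", f, err)
		}
		for _, s := range steps {
			if s.AfterFile != f {
				continue
			}
			if err := s.Run(tx); err != nil {
				_ = tx.Rollback()
				return fmt.Errorf("custom migration after %s: %w", f, err)
			}
		}
		if err := tx.Commit(); err != nil {
			return err
		}
	}
	return nil
}

func main() {} // illustration only
```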
Other changes:
- Add NULL DB value tracking to JSONBlob helper
- Index status column on transfers table
- Remove unnecessary panic calls
- Move log_parser to wallet's common package and use to extract token
identity from the logs
Notes:
- there is already an index on transfers table, sqlite creates one for
each unique constraint therefore add only status to a new index
- the planned refactoring and improvements to the database have been
postponed due to time constraints. Got the time to migrate the data
though, extracting it can be done later for a more efficient
implementation
Update status-desktop #10746
* chore(upgradeSQLCipher): Upgrading SQLCipher to version 5.4.5
Changes:
### github.com/mutecomm/go-sqlcipher
1. The improved crypto algorithms from go-sqlcipher v3 are merged in v4
Tags:
v4.4.2-status.1 - merge `burn_stack` improvement
v4.4.2-status.2 - merge `SHA1` improvement
v4.4.2-status.4 - merge `AES` improvement
2. Fixed `go-sqlcipher` to support v3 database in compatibility mode (`sqlcipher` already supports this) (Tag: v4.4.2-status.3)
3. Upgrade `sqlcipher` to v5.4.5 (Tag: v4.5.4-status.1)
### github.com/status-im/migrate/v4
1. Upgrade `go-sqlcipher` version in `github.com/status-im/migrate/v4`
### status-go
1. Upgrade `go-sqlcipher` and `migrate` modules in status-go
2. Configure the DB connections to open the DB in v3 compatibility mode
* chore(upgradeSQLCipher): Use sqlcipher v3 configuration to encrypt a plain text database
* chore(upgradeSQLCipher): Scanning NULL BLOB value should return nil
Fixing failing tests: TestSyncDeviceSuite/TestPairingSyncDeviceClientAsReceiver; TestSyncDeviceSuite/TestPairingSyncDeviceClientAsSender
Considering the following configuration:
1. Table with BLOB column has 1 NULL value
2. Query the value
3. Rows.Scan(&dest sql.NullString)
Expected: dest.Valid == false; dest.String == nil
Actual: dest.Valid == true; dest.String == ""
* chore: Bump go-sqlcipher version to include NULL BLOB fix
Add support for unfurling a wider range of websites. Most code changes are
related to the implementation of a new Unfurler, an OEmbedUnfurler, which is
necessary to get metadata for Reddit URLs using oEmbed, since Reddit does not
support OpenGraph meta tags. The new unfurler will also be useful for other
websites, like Twitter. Also the user agent was changed, and now more websites
consider status-go reasonably human.
Related to issue https://github.com/status-im/status-mobile/issues/15918
Example hostnames that are now unfurleable: reddit.com, open.spotify.com,
music.youtube.com
Other improvements:
- Better error handling, especially because I wasn't wrapping errors correctly.
I also removed the unnecessary custom error UnfurlErr.
- I made tests truly deterministic by parameterizing the http.Client instance
and by customizing its Transport field (except for some failing conditions
where it's even good to hit the real servers).
This commit adds LoginAccount endpoint.
This makes it consistent with CreateAccount and RestoreAccount as they
use similar config.
The notable difference with the previous endpoint is the API, which is
the same as CreateAccount/RestoreAccount, and the fact that it will
override your networks configuration.
Storing them in the config is now not needed anymore, as that's always
driven from the backend, and we won't allow custom networks in the new
wallet.
Fixes an issue where if a group chat was first received from a non-contact, and later received from a contact, it still wouldn't save it as active.
That's because we checked if we were **newly** added instead of just if we were added. That meant that in the case I described above, the chat would then never have the chance to be set active.
* fix(community): stop re-joining comm when receiving a sync community msg
Fixes an issue with chats being reset. Since joining a community resaves the chats with the synced default value, it resets the state of the chats, losing the unread messages, the muted state and more.
The solution is to block the re-joining of the community. In the case of the sync, we catch that error and just continue on.
* fix(import): fix HandleImport not saving the chat
Doesn't change much, but it could have caused issues in the future, so since we might have modified the chat, we make sure to save them
Also adds a test
* fix tests
- old `accounts` table is moved/mapped to `keypairs` and `keypairs_accounts`
- `keycards` table has foreign key which refers to `keypairs.key_uid`
- `Keypair` introduced as a new type
- api endpoints updated according to this change
* fix: create a CR on contact sync with received CR state
* fix: create a CR on contact sync with sent CR state
* Review fixes
* Fix: ignore own contact installation or syncing
This commit does a few things:
1) Extend create/import account endpoint to get wallet config, some of
which has been moved to the backend
2) Set up a loop for retrieving balances every 10 minutes, caching the
balances
3) Return information about which checks are not passing when trying to
join a token gated community
4) Add tests to the token gated communities
5) Fixes an issue with addresses not matching when checking for
permissions
The move to the wallet as a background task is not yet complete: I need
to publish a signal, and most likely I will disable it before merging
for now, as it's currently not used by desktop/mobile, but the PR was
getting too big.
This is the initial implementation for the new URL unfurling requirements. The
most important one is that only the message sender will pay the privacy cost for
unfurling and extracting metadata from websites. Once the message is sent, the
unfurled data will be stored at the protocol level and receivers will just
profit and happily decode the metadata to render it.
Further development of this URL unfurling capability will be mostly guided by
issues created on clients. For the moment in status-mobile:
https://github.com/status-im/status-mobile/labels/url-preview
- https://github.com/status-im/status-mobile/issues/15918
- https://github.com/status-im/status-mobile/issues/15917
- https://github.com/status-im/status-mobile/issues/15910
- https://github.com/status-im/status-mobile/issues/15909
- https://github.com/status-im/status-mobile/issues/15908
- https://github.com/status-im/status-mobile/issues/15906
- https://github.com/status-im/status-mobile/issues/15905
### Terminology
In the code, I've tried to stick to the word "unfurl URL" to really mean the
process of extracting metadata from a website, sort of lower level. I use "link
preview" to mean a higher level structure which is enriched by unfurled data.
"link preview" is also how designers refer to it.
### User flows
1. Carol needs to see link previews while typing in the chat input field. Notice
from the diagram that nothing is persisted and that status-go endpoints are
essentially stateless.
```
#+begin_src plantuml :results verbatim
Client->>Server: Call wakuext_getTextURLs
Server-->>Client: Normalized URLs
Client->>Client: Render cached unfurled URLs
Client->>Server: Unfurl non-cached URLs.\nCall wakuext_unfurlURLs
Server->>Website: Fetch metadata
Website-->>Server: Metadata (thumbnail URL, title, etc)
Server->>Website: Fetch thumbnail
Server->>Website: Fetch favicon
Website-->>Server: Favicon bytes
Website-->>Server: Thumbnail bytes
Server->>Server: Decode & process images
Server-->>Client: Unfurled data (thumbnail data URI, etc)
#+end_src
```
```
,------. ,------. ,-------.
|Client| |Server| |Website|
`--+---' `--+---' `---+---'
| Call wakuext_getTextURLs | |
| ---------------------------------------> |
| | |
| Normalized URLs | |
| <- - - - - - - - - - - - - - - - - - - - |
| | |
|----. | |
| | Render cached unfurled URLs | |
|<---' | |
| | |
| Unfurl non-cached URLs. | |
| Call wakuext_unfurlURLs | |
| ---------------------------------------> |
| | |
| | Fetch metadata |
| | ------------------------------------>
| | |
| | Metadata (thumbnail URL, title, etc)|
| | <- - - - - - - - - - - - - - - - - -
| | |
| | Fetch thumbnail |
| | ------------------------------------>
| | |
| | Fetch favicon |
| | ------------------------------------>
| | |
| | Favicon bytes |
| | <- - - - - - - - - - - - - - - - - -
| | |
| | Thumbnail bytes |
| | <- - - - - - - - - - - - - - - - - -
| | |
| |----. |
| | | Decode & process images |
| |<---' |
| | |
| Unfurled data (thumbnail data URI, etc)| |
| <- - - - - - - - - - - - - - - - - - - - |
,--+---. ,--+---. ,---+---.
|Client| |Server| |Website|
`------' `------' `-------'
```
2. Carol sends the text message with link previews in the RPC request
wakuext_sendChatMessages. status-go assumes the link previews are good
because it can't and shouldn't attempt to re-unfurl them.
```
#+begin_src plantuml :results verbatim
Client->>Server: Call wakuext_sendChatMessages
Server->>Server: Transform link previews to\nbe proto-marshalled
Server->DB: Write link previews serialized as JSON
Server-->>Client: Updated message response
#+end_src
```
```
,------. ,------. ,--.
|Client| |Server| |DB|
`--+---' `--+---' `+-'
| Call wakuext_sendChatMessages| |
| -----------------------------> |
| | |
| |----. |
| | | Transform link previews to |
| |<---' be proto-marshalled |
| | |
| | |
| | Write link previews serialized as JSON|
| | -------------------------------------->
| | |
| Updated message response | |
| <- - - - - - - - - - - - - - - |
,--+---. ,--+---. ,+-.
|Client| |Server| |DB|
`------' `------' `--'
```
3. The message was sent over waku and persisted locally in Carol's device. She
should now see the link previews in the chat history. There can be many link
previews shared by other chat members, therefore it is important to serve the
assets via the media server to avoid overloading the ReactNative bridge with
lots of big JSON payloads containing base64 encoded data URIs (maybe this
concern is meaningless for desktop). When a client is rendering messages with
link previews, they will have the field linkPreviews, and the thumbnail URL
will point to the local media server.
```
#+begin_src plantuml :results verbatim
Client->>Server: GET /link-preview/thumbnail (media server)
Server->>DB: Read from user_messages.unfurled_links
Server->Server: Unmarshal JSON
Server-->>Client: HTTP Content-Type: image/jpeg/etc
#+end_src
```
```
,------. ,------. ,--.
|Client| |Server| |DB|
`--+---' `--+---' `+-'
| GET /link-preview/thumbnail (media server)| |
| ------------------------------------------> |
| | |
| | Read from user_messages.unfurled_links|
| | -------------------------------------->
| | |
| |----. |
| | | Unmarshal JSON |
| |<---' |
| | |
| HTTP Content-Type: image/jpeg/etc | |
| <- - - - - - - - - - - - - - - - - - - - - |
,--+---. ,--+---. ,+-.
|Client| |Server| |DB|
`------' `------' `--'
```
### Some limitations of the current implementation
The following points will become separate issues in status-go that I'll work on
over the next couple weeks. In no order of importance:
- Improve how multiple links are fetched; retries on failure and testing how
unfurling behaves around the timeout limits (deterministically, not by making
real HTTP calls as I did). https://github.com/status-im/status-go/issues/3498
- Unfurl favicons and store them in the protobuf too.
- For this PR, I added unfurling support only for websites with OpenGraph
https://ogp.me/ meta tags. Other unfurlers will be implemented on demand. The
next one will probably be for oEmbed https://oembed.com/, the protocol
supported by YouTube, for example.
- Resize and/or compress thumbnails (and favicons). Often times, thumbnails are
huge for the purposes of link previews. There is already support for
compressing JPEGs in status-go, but I prefer to work with compression in a
separate PR because I'd like to also solve the problem for PNGs (probably
convert them to JPEGs, plus compress them). This would be a safe choice for
thumbnails, favicons not so much because transparency is desirable.
- Editing messages is not yet supported.
- I haven't coded any artificial limit on the number of previews or on the size
of the thumbnail payload. This will be done in a separate issue. I have heard
the ideal solution may be to split messages into smaller chunks of ~125 KiB
because of libp2p, but that might be too complicated at this stage of the
product (?).
- Link preview deletion.
- For the moment, OpenGraph metadata is extracted by requesting data for the
English language (and fallback to whatever is available). In the future, we'll
want to unfurl by respecting the user's local device language. Some websites,
like GoDaddy, are already localized based on the device's IP, but many aren't.
- The website's description text should be limited by a certain number of
characters, especially because it's outside our control. Exactly how much has
not been decided yet, so it'll be done separately.
- URL normalization can be tricky, so I implemented only the basics to help with
caching. For example, the url https://status.im and HTTPS://status.im are
considered identical. Also, a URL is considered valid for unfurling if its TLD
exists according to publicsuffix.EffectiveTLDPlusOne. This was essential,
otherwise the default Go url.Parse approach would consider many invalid URLs
valid, and thus the server would waste resources trying to unfurl the
unfurleable.
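A minimal sketch of the kind of normalization/validity check described in the last point, using golang.org/x/net/publicsuffix; it is not the exact unfurler code.
```
package main

import (
	"fmt"
	"net/url"
	"strings"

	"golang.org/x/net/publicsuffix"
)

// normalizeURL lower-cases the scheme/host and rejects hostnames for which
// an eTLD+1 cannot be derived, so that obviously invalid URLs are never
// unfurled. This mirrors the idea described above, not the exact code.
func normalizeURL(raw string) (string, error) {
	u, err := url.Parse(strings.TrimSpace(raw))
	if err != nil {
		return "", err
	}
	u.Scheme = strings.ToLower(u.Scheme)
	u.Host = strings.ToLower(u.Host)
	if _, err := publicsuffix.EffectiveTLDPlusOne(u.Hostname()); err != nil {
		return "", fmt.Errorf("not unfurleable: %w", err)
	}
	return u.String(), nil
}

func main() {
	fmt.Println(normalizeURL("HTTPS://status.im")) // https://status.im <nil>
	fmt.Println(normalizeURL("http://localhost"))  // error: cannot derive eTLD+1
}
```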
### Other requirements
- If the message is edited, the link previews should reflect the edited text,
not the original one. This has been aligned with the design team as well.
- If the website's thumbnail or the favicon can't be fetched, just ignore them.
The only mandatory piece of metadata is the website's title and URL.
- Link previews in clients should be generated in near real-time, that is, as
the user types, previews are updated. In mobile this performs very well, and
it's what other clients like WhatsApp, Telegram, and Facebook do.
### Decisions
- While the user typing in the input field, the client is constantly (debounced)
asking status-go to parse the text and extract normalized URLs and then the
client checks if they're already in its in-memory cache. If they are, no RPC
call is made. I chose this approach to achieve the best possible performance
in mobile and avoid the whole RPC overhead, since the chat experience is
already not smooth enough. The mobile client uses URLs as cache keys in a
hashmap, i.e. if the key is present, it means the preview is readily available
(naive, but good enough for now). This decision also gave me more flexibility
to find the best UX at this stage of the feature.
- Due to the requirement that users should be able to see independent loading
indicators for each link preview, when status-go can't unfurl a URL, it
doesn't return it in the response.
- As an initial implementation, I added the BLOB column unfurled_links to the
user_messages table. The preview data is then serialized as JSON before being
stored in this column. I felt that creating a separate table and the related
code for this initial PR would be inconvenient. Is that reasonable to you?
Once things stabilize I can create a proper table if we want to avoid this
kind of solution with serialized columns.
* fix: unable to reset password for newly created account using CreateAccountAndLogin
* remove unnecessary print
* add TestCreateAccountAndLogin
* update TestCreateAccountAndLogin
* bump version
* sync local deleted messages
* rebase
* add REPLACE
* fix lint
* defer rows.Close() / rename function
* add local pair test
* replace unused clock with _
This commit renames few api endpoints:
- old `AddMigratedKeyPairOrAddAccountsIfKeyPairIsAdded` renamed to `AddKeycardOrAddAccountsIfKeycardIsAdded`
- old `GetAllMigratedKeyPairs` renamed to `GetAllKnownKeycardsGroupedByKeyUID`
- old `GetMigratedKeyPairByKeyUID` renamed to `GetKeycardByKeyUID`
- old `DeleteKeypair` renamed to `DeleteAllKeycardsWithKeyUID`
* fix(mentions): deleting or editing a mention should remove the mention
* test(edit): add a test for mentions in edits
* test(delete): add test for deleting a message with a mention
* fix mobile mention issue #15616
* add state != nil
* clear previous text when clear mentions
* fix: after selected mention user, and type @ not working
* bump version
This commit replaces `os.MkdirTemp` with `t.TempDir` in tests. The
directory created by `t.TempDir` is automatically removed when the test
and all its subtests complete.
Prior to this commit, a temporary directory created using `os.MkdirTemp`
needed to be removed manually by calling `os.RemoveAll`, which is omitted
in some tests. The error handling boilerplate, e.g.
	defer func() {
		if err := os.RemoveAll(dir); err != nil {
			t.Fatal(err)
		}
	}()
is also tedious, but `t.TempDir` handles this for us nicely.
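For contrast, a minimal sketch of the replacement pattern inside a test; the test and variable names are illustrative.
```
package foo_test

import "testing"

func TestUsesTempDir(t *testing.T) {
	dir := t.TempDir() // removed automatically when the test and its subtests complete
	_ = dir            // use dir as the working directory for the test
}
```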
Reference: https://pkg.go.dev/testing#T.TempDir
Signed-off-by: Eng Zer Jun <engzerjun@gmail.com>
There were a couple of issues on how we handle pinned messages:
1) The clock of the message was only checked when saving, meaning that the
client could potentially receive updates that were not to be
processed.
2) We relied on the client to generate a notification for a pinned
message by sending a normal message through the wire. This PR changes
the behavior so that the notification is generated locally, either in
response to a network event or a client event.
3) When deleting a message, we pull all the replies/pinned notifications
and send them over to the client so they know that those messages
need updating.
* fix: Contact requests flows
fix: pending CR notification
fix: use CR as message with provided text
* fix: Remove legacy default contact request
* fix: Add test case sending CR after removal from contacts
* fix: refactor contact request tests to have common steps
* fix: activate chat on sender side after receiveing the CR
* chore: Return defaultContactRequestID function
* fix: force activate chat on the receiver's side
* fix: ensure AC notification's name for CR notification matches contact's primary name
- setting clock when adding/updating accounts
- syncing/backing up accounts with the time they actually were updated instead
the time when dispatching is done as it was before
Changes applied here introduce:
- improvements to sync wallet accounts among devices (including all account types)
- backing up wallet accounts to and fetch them from waku (an information about received
wallet accounts is sent via `waku.backedup.wallet-account` signal to a client)
* Community request to join changes
* Fix read state for request to join notification
* Bring back deleted notification when updated with response
* Update Request timeout to 7 days
* Update VERSION
* fix(image-album): make sure to delete all images part of an album
* test(delete): add test that deletes a message part of an album
* test(delete): add test where the signal to delete images is after
also adds the handling of deleteForMe
* fix(delete): add album deletion handling for deleteForMe
- Display name is now backed up only as a part of the `protobuf.BackedUpProfile` message;
it is not backed up via `protobuf.SyncSetting` any more (this refers only to backing up to
and fetching data from waku; regular syncing (among devices) remains unchanged)
- When saving the display name fetched from waku, the clock was previously set to the current
time of that operation, which was incorrect; now we're using the clock from the
backed up message (`SaveSyncDisplayName` function)
This adds an additional check for collectibles when community
permissions are validated.
Specifically this uses opensea to request all NFTs given an
owner wallet and a list of contract addresses (collectibles).
- `keypair_name` added to the `accounts` table; all accounts derived from the
same master key have the same keypair name, and no two keypairs share
the same keypair name (the keypair name is unique per keypair)
- `last_used_derivation_index` added to the `accounts` table, because we need
to maintain the highest index used for the derivations made within
the same keypair
When community owners accept pending requests manually, the requests would be
declined in that process if they don't fulfill the required
token permission criteria.
We don't want this to automatically reject those requests anymore;
instead, owners have to manually reject them.
When a community permission is edited, we need to revalidate
the token criteria with the existing member list, as members might
no longer fulfill the requirements.
This commit runs the checks in a go routine after the permission has
been updated.
This adds checks to `HandleCommunityRequestToJoin` and
`AcceptRequestToJoinCommunity` that ensure a given user's revealed
wallet addresses own the token funds required by a community.
When a community has token permissions of type `BECOME_MEMBER`, the
following happens when the owner receives a request:
1. Upon verifying provided wallet addresses by the requester, the owner
node accumulates all token funds related to the given wallets that
match the token criteria in the configured permissions
2. If the requester does not meet the necessary requirements, the
request to join will be declined. If the requester does have the
funds, they'll either be automatically accepted to the community, or
enter the next stage where an owner needs to manually accept the
request.
3. If the community does not automatically accept users, then the funds
check will happen again when the owner tries to manually accept the
request (see the sketch after this list). If the necessary funds do not
exist at this stage, the request will be declined
4. Upon accepting, whether automatically or manually, the owner adds the
requester's wallet addresses to the `CommunityDescription`, such that
they can be retrieved later when doing periodic checks or when
permissions have changed.
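A minimal sketch of the accumulation step from point 1, with simplified types; the real check also handles multiple chains, ERC-721 criteria, decimals, and so on.
```
package main

import (
	"fmt"
	"math/big"
)

// Simplified token criterion: a minimum balance of a given contract.
type tokenCriterion struct {
	Contract  string
	MinAmount *big.Int
}

// balanceOf is a placeholder for the on-chain balance lookup.
type balanceOf func(wallet, contract string) *big.Int

// walletsSatisfyCriterion accumulates the balances of all revealed wallets
// for the criterion's contract and compares the sum against the minimum,
// mirroring the accumulation described in point 1.
func walletsSatisfyCriterion(wallets []string, c tokenCriterion, balance balanceOf) bool {
	total := new(big.Int)
	for _, w := range wallets {
		total.Add(total, balance(w, c.Contract))
	}
	return total.Cmp(c.MinAmount) >= 0
}

func main() {
	balances := map[string]*big.Int{
		"0xaaa": big.NewInt(4),
		"0xbbb": big.NewInt(8),
	}
	lookup := func(wallet, contract string) *big.Int {
		if b, ok := balances[wallet]; ok {
			return b
		}
		return big.NewInt(0)
	}
	criterion := tokenCriterion{Contract: "0xtoken", MinAmount: big.NewInt(10)}
	fmt.Println(walletsSatisfyCriterion([]string{"0xaaa", "0xbbb"}, criterion, lookup)) // true
}
```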
We need to store the `decimals` of a given token when creating community
permissions so that we can use it later on to do calculations when
checking funds for given wallet addresses.
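A small illustration of why `decimals` is needed for those calculations; the variable names and values are illustrative only.
```
package main

import (
	"fmt"
	"math/big"
)

func main() {
	// A permission requiring "2.5" tokens of an ERC-20 with 18 decimals has to
	// be compared against raw on-chain balances, so the human-readable amount
	// is scaled by 10^decimals first.
	decimals := 18
	required, _ := new(big.Float).SetString("2.5")
	scale := new(big.Float).SetInt(new(big.Int).Exp(big.NewInt(10), big.NewInt(int64(decimals)), nil))
	requiredRaw, _ := new(big.Float).Mul(required, scale).Int(nil)

	balanceRaw, _ := new(big.Int).SetString("3000000000000000000", 10) // 3 tokens
	fmt.Println(balanceRaw.Cmp(requiredRaw) >= 0)                      // true
}
```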
This commit extends `CommunityRequestToJoin` with `RevealedAddresses`, which represent wallet addresses and signatures provided by the sender, to prove to a community owner the ownership of those wallet addresses.
**Note: This only works with keystore files managed by status-go**
At a high level, the following happens:
1. The user instructs Status to send a request to join a community. By adding a password hash to the instruction, Status will try to unlock the user's keystore and verify each wallet account.
2. For every verified wallet account, a signature is created for the payload `keccak256(chatkey + communityID + requestToJoinID)`, using each wallet's private key (see the sketch below). A map of walletAddress->signature is then attached to the community request to join, which will be sent to the community owner.
3. The owner node receives the request and, if the community requires users to hold tokens to become a member, it will check and verify whether the given wallet addresses are indeed owned by the sender. If any signature provided by the request cannot be recovered, the request is immediately declined by the owner.
4. The verified addresses are then added to the owner node's database such that, once the request is to be accepted, the addresses can be used to check on chain whether they own the necessary funds to fulfill the community's permissions.
The checking of required funds is **not** part of this commit. It will be added in a follow-up commit.
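A minimal sketch of the signature step from point 2, using go-ethereum's crypto package; how the three payload parts are concatenated here is an assumption for illustration, and the exact encoding in status-go may differ.
```
package main

import (
	"fmt"

	"github.com/ethereum/go-ethereum/crypto"
)

func main() {
	// Wallet key; in status-go this comes from the unlocked keystore.
	walletKey, _ := crypto.GenerateKey()

	// Payload as described in point 2 (concatenation is illustrative).
	chatKey := []byte("chat-public-key")
	communityID := []byte("community-id")
	requestToJoinID := []byte("request-to-join-id")
	digest := crypto.Keccak256(chatKey, communityID, requestToJoinID)

	// Signature attached to the revealed address.
	signature, _ := crypto.Sign(digest, walletKey)

	// The owner node recovers the signer and compares it with the revealed
	// wallet address; a mismatch means the request is declined.
	recoveredPub, _ := crypto.SigToPub(digest, signature)
	fmt.Println(crypto.PubkeyToAddress(*recoveredPub) == crypto.PubkeyToAddress(walletKey.PublicKey))
}
```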
Also, make AccountManager a dependency of Messenger.
This is needed for community token permissions as we'll need a way to access wallet accounts
and sign messages when sending requests to join a community.
The APIs have been mostly taken from GethStatusBackend and personal service.
- completely replace social links on save
- respect the order of items and also the URL when comparing
Rationale: for MVP, we'll want the user to be able to add several links
of the same type, and adjust/preserve their order by drag'n'drop
Needed for https://github.com/status-im/status-desktop/issues/9777
The `Edit()` method on `Community` merely updates "primitive" values
that live inside a community description. For any data that is more complex,
we typically have dedicated methods.
Because `Edit()` was expecting `CommunityTokensMetadata`, it would
override it with empty data every time we would edit a community.
This is because we typically don't update that kind of data as part
of `Edit()`.
In addition, `CommunityTokensMetadata` is append-only anyways,
so there wouldn't be any other way to update that field, other than
adding new items to it, which is done in a dedicated method.
The `type` column is set to the appropriate value for all rows. Before this change,
accounts which were generated from a keypair created by importing a seed phrase
had the `generated` value for `type`.
In line with the above, the function for generating an account sets the `type`
based on the passed derive-from address.
Community tokens have some metadata (image, description) which must be kept in waku (CommunityDescription).
Add a CommunityTokenMetadata message to communities.proto.
Add []CommunityTokenMetadata to CommunityDescription.
Issue #9545
prefixes. Changed primary keys and API methods.
Fixed tests and added new ones.
Fixed saved addresses and transaction tests to use the ':memory:' sqlite
DB instead of a tmp file to speed up testing by hundreds of times.
Fixes #8599
* feat: refactor activity center endpoints
* fix: restore activity center tests using new endpoints
* feat: Remove from activity center endpoints accepted flag
* feat: Activity Center review fixes
Changes applied here introduce backing up keycards data to and fetching it from waku.
Information about received keycards data is sent via the `sync.from.waku.keycards` signal.
Adds a new column named `deleted` to the table `activity_center_notifications`.
Related PR in Mobile https://github.com/status-im/status-mobile/pull/15106 for a lot more details of the feature.
Why? Relying on the `dismissed` column for soft deletion is no longer viable because the mobile & desktop clients should display dismissed notifications (sometimes), hence the need for a new column to truly represent soft deletion.
* feat: add fetching notifications by activity center group
* feat: add api for counting notifications by activity center group
* chore: activity center query code refactor
* feat: add endpoint returning ActivityCenterType(s) by ActivityCenterGroup
* chore: Remove activityGroup from status-go level
* feat: add endpoint for counting notifications with different conditions
A migration was added out of order, which meant that for clients who
had already run the migrations after it, it would be skipped.
This commit re-adds the migration so it's run, tested against an empty
account and one that had already migrated.
* chore: upgrade go-waku to v0.5
* chore: add println and logs to check what's being stored in the enr, and preemptively delete the multiaddr field (#3219)
* feat: add wakuv2 test (#3218)
* Outgoing contact requests
* Test fix
* Test fix
* Fixes
* Bugfixes
* Bugfixes
* Almost there
* Removed the activity center notification
* Test update
* Almost ready
* Fixes
* Fixes
* feat: Add seen/unseen activity center state
* feat: ActivityCenterState for grouping ActivityCenter unread messages cnt and seen state
* feat: always use messenger's addActivityCenterNotification & add state to the response
* Remove unused activity center endpoints form api and fix test
* fix media-server sleep wake up issue
we now use waku v2 and hence messenger was nil.
Since it was nil, the logic in place responsible for triggering app state events was not firing, and hence the media server would become unresponsive after a sleep event.
This commit fixes that.
Co-Authored-By: Andrea Maria Piana <andrea.maria.piana@gmail.com>
---------
Co-authored-by: Andrea Maria Piana <andrea.maria.piana@gmail.com>
This commit fixes an issue where when accepting a contact request
the other end would display an extra notification.
It also changes WaitOnResponse to collect results. This should make
tests less flaky, since sometimes messages are processed in different
batches.
Now we need to be more exact about what we expect from the response (i.e.
use == instead of >, otherwise the same behavior applies).
This uncovered a couple of issues with messenger.Merge, so I have moved
the struct to use a map based collection instead of an array.
Motivated by: https://github.com/status-im/status-desktop/pull/9084
While we don't request preview data from Tenor because the url itself will contain the GIF image, we still need to validate the url in the preview request. This is done using a HEAD request where we expect the response status code to be 200 OK.
At the same time we will add the content type to the preview response.
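A minimal sketch of the validation described above; the timeout and the illustrative URL are assumptions, not the exact status-go code.
```
package main

import (
	"fmt"
	"net/http"
	"time"
)

// validateGIFURL issues a HEAD request and requires a 200 OK, returning the
// Content-Type so it can be included in the preview response.
func validateGIFURL(url string) (string, error) {
	client := &http.Client{Timeout: 5 * time.Second}
	resp, err := client.Head(url)
	if err != nil {
		return "", err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return "", fmt.Errorf("unexpected status: %s", resp.Status)
	}
	return resp.Header.Get("Content-Type"), nil
}

func main() {
	contentType, err := validateGIFURL("https://media.tenor.com/example.gif") // illustrative URL
	fmt.Println(contentType, err)
}
```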
In general, any time a piece of state is updated in the backend, that
should be propagated to the client through signals.
In this case, when a request was accepted, the client wasn't notified,
requiring them to re-fetch the accepted requests and causing
inconsistent state between status-go and client.
This commit refactors the discord import tool such that,
instead of loading all data to be imported into memory at
once, it will now perform the import on a per file basis.
This reduces the memory pressure on the node performing
the import and seems to increase its performance as well.
* fix: Accepting or dismissing contact request should mark AC notification as read
* fix: Accepting or declining community request should mark AC notification as read
* fix: Read notification status fo identity verification requests
* fix: Save whole notification object while handling contact requests
* fix: Remove unused functions from activity_center_persistence
* fix: Use Accepted and Dismissed flags for AC notifications with CTA
* fix: Mark notification as unread when we handle accepted verification request
* fix: Replace Warn with Error on fail to save notification, test fixes
* fix: Fix conditions for fetching AC notifications from the db
* fix: Review fixes for err name and conditions array
There were cases where this caused a crash, as handling magnetlinks would try to close
an already closed task channel.
See https://github.com/status-im/status-desktop/issues/8996 for more information.
This commit extends the task struct such that it can be marked as cancelled and safely
read and written by multiple go routines.
This introduces an additional constraint to archive generation, in which the payload + signature size of all partitioned messages that go into an archive should not exceed a certain
threshold.
This is to ensure that archives won't get too big when they are later read into memory.
Summary
=======
- [x] Changes endpoint ActivityCenterNotificationsBy to support fetching
multiple types of notification in a single query.
- [x] Adds endpoint UnreadAndAcceptedActivityCenterNotificationsCount to
allow the mobile client to fetch the count of unread & accepted
notifications.
- [x] Add `golangci-lint` to Nix shell. This was possible since PR
https://github.com/status-im/status-go/pull/3087 was merged.
Notes
=====
- If you'd like to understand why these changes are needed, please see
the mobile PR https://github.com/status-im/status-mobile/pull/14785,
or issue https://github.com/status-im/status-mobile/issues/14712
- All changes should be completely backwards compatible, and there
should be no impact for the desktop app.
- The mobile client has been already tested using this branch.
This adds the functionality that history archives continue to be imported
in case the import has been interrupted the last time the app/client
was running.
This typically happens when users don't wait for an ongoing import to finish,
which sometimes can take a while. Users then close the app/kill the client
which leaves the database in a state where there's downloaded archives that
haven't been fully imported.
Prior to this change, the node will have to wait until it receives a new
magnetlink that it hasn't seen before, until it processes imports again.
This can take several days.
Now, it will check on startup if there are any archives left to be imported
and resumes the import from there.
Instead of loading the entire torrent file into memory when trying
to extract archive messages, we now only read the chunks that are
necessary to decode any individual archive and then process
extracted messages in chunks.
This doesn't introduce a max cap of allowed memory yet, since the
chunk size depends entirely on the size of the archive, but this
will be done soon.
Because `QuotedMessage` doesn't include imported message data,
some of the author information in imported messages is lost in
frontends.
This commit adds a `discordMessage` (soon replaced by `importedMessage`)
to `QuotedMessage`, although only hydrated with a subset of data,
namely author display name and avatar URL, as those are the only ones
needed by front-end atm.
* feat(ActivityCenter): Add missing AC notifications for verification requests
* fix(Contacts): Fix updates for trusted and untrustworthy statuses
* feat(ActivityCenter): Add test for trusted verification request and AC notifications
* feat(Contacts): Trusted and untrustworthy statuses should be unknown for the verified side
Changes applied here introduce backing up profile data (display name and identity
images) to waku and fetching it from waku. Information about those data is sent
as a separate signal to a client via the `sync.from.waku.profile` signal.
New signal `sync.from.waku.progress` is introduced which will be used to notify a client
about the progress of fetching data from waku.
When discovery fails to be seeded with bootstrap/fallback nodes, it
never recovers.
This commit changes the behavior so that status-go retries fetching
bootnodes, and restarts discovery when that happens.
This is done so that when members join a community by being accepted by
the community owner, they also receive the most up-to-date magnetlink
along with it.
This commit makes a few changes to the community history archive
download routine to make it more robust:
1. Prior to this commit, even when there were no archives to be
downloaded, we were still trying to extract messages from archive
data.
2. Logs have been improved as they were sometimes showing confusing
information
3. We now handle interruption of ongoing download + data import much
better in case of multiple magnetlinks being processed in roughly the
same time.
4. We now keep track of which archive has been successfully imported
into the database. Without this, Status would consider any downloaded
archives as "done" even though they haven't actually been imported
into the database yet. This way Status should be able to pick up its
work where it left off the last time, in case a user closes the app, or
another magnetlink interrupts the ongoing process.
This creates a smoother experience for users when they leave a community since they can see the exact same messages they had before without having to rely on the mailserver.
In order to give clients more insights about archive messages being
processed, we're adding this additional signal that informs clients when
the import of downloaded history archive messages has started.
When channels were removed from communities by the community owner,
community members would not remove the chats in their own databases.
See https://github.com/status-im/status-desktop/issues/8000 for more
info.
This commit ensures that nodes delete removed chats from their local
chats table and also deregister the corresponding transport filters.
We don't allow multiple channels with the same name in communities.
Discord allows for multiple channels with the same name (living in
different categories), so this is an error case in our import tool.
This commit improves the user facing error message of this scenario.
We've been limiting the amount of errors being emitted to clients
to reduce payload pressure and also due to the fact that we won't be
rendering more than 3 error items in the UI.
We want to let actual errors through (as opposed to warnings), even if
the limit was reached.
* feat(ActivityCenter): Add community request AC notification
* feat(ActivityCenter): Add CommunityID to AC notification
* feat(ActivityCenter): Add membership status for community membership AC notifications
* feat(ActivityCenter): Add tests for community notifications and fix naming
* Add notification for kicked from community action
* feat(ActivityCenter): Fix for missing notification objects for tests
Prior to this commit we had a `CreateHistoryArchiveTorrent()` API which
takes a `startDate`, an `endDate` and a `partition` to create a bunch of
message archives, given a certain time range.
The function expects the messages to live in the database, which means,
all messages that need to be archived have to be saved there at some
point.
This turns out to be an issue when importing communities from third
party services, where, sometimes, there are several thousands of messages,
including attachment payloads, that have to be saved to the database
first.
There are only two options to get the messages into the database:
1. Make one write operation with all messages - this is slow, takes a long
time and blocks the database until done
2. Create message chunks and perform multiple write operations - this is
also slow, takes a long time, but makes the database a bit more responsive as
it's many smaller operations instead of one big one
Option 2) turned out to not be super feasible either as sometimes,
inserting even a single such message can take up to 10 seconds
(depending on payload)
Which brings me to the third option.
**A third option** is to not store those imported messages as waku
messages in the database, just to later query them again to create the
archives, but instead create the archives right away from all the
messages that have been loaded into memory.
This is significantly faster and doesn't block the database.
To make this possible, this commit introduces
a `CreateHistoryArchiveTorrentFromMessages()` API, and
a `CreateHistoryArchiveTorrentFromDB()` API which can be used for
different use cases.
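A hedged sketch of how the two code paths could be selected by a caller; the parameter lists are assumptions based on the description above (only the time range and partition are mentioned for the original API) and the bodies are placeholders.
```
package main

import (
	"fmt"
	"time"
)

// Placeholder message type; the real one is status-go's waku/common message.
type Message struct{ ID string }

// Stand-ins for the two APIs named above; signatures are assumptions.
func CreateHistoryArchiveTorrentFromDB(communityID string, start, end time.Time, partition time.Duration) error {
	fmt.Println("archiving messages queried from the database")
	return nil
}

func CreateHistoryArchiveTorrentFromMessages(communityID string, msgs []Message, start, end time.Time, partition time.Duration) error {
	fmt.Println("archiving", len(msgs), "in-memory messages without touching the database")
	return nil
}

func main() {
	start := time.Now().Add(-30 * 24 * time.Hour)
	end := time.Now()
	partition := 7 * 24 * time.Hour

	// Regular path: messages already live in the database.
	_ = CreateHistoryArchiveTorrentFromDB("community-id", start, end, partition)

	// Import path: archive directly from the messages loaded into memory.
	imported := []Message{{ID: "1"}, {ID: "2"}}
	_ = CreateHistoryArchiveTorrentFromMessages("community-id", imported, start, end, partition)
}
```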
Turns out `UpdateCommunitySettings()` has never worked. Two parameters
were in the wrong order, causing the SQL statement to never find the row
it has to update.
When fetching torrent info after receiving a magnet link,
it can happen that the request times out.
We want to retry downloading the data again at least once more
before giving up
The default logger writes to `geth.log`, which makes debugging
the archive protocol pretty hard.
This adds an additional logger that logs to stdout, while keeping
the default logger intact for production.
Main changes:
- Extend saved addresses DB with sync info: sync timestamp, update timestamp
and soft removed flag
- Create custom protobuf message payload to sync saved addresses
- Cleanup saved addresses on each start of messenger, by deleting
soft removed older entries
- Sync all saved addresses on Messenger.SyncDevices calls
- Sync particular changes to saved addresses
- Add SavedAddressManager instance to messenger
- Note, can't find a clean way to pass the SavedAddressManager to the
messenger, so we create another one
- Add tests for sync and new DB API
Closes: #7229
Unfortunately, this one slipped through when introducing helper
functions to retrieve messages.
Sometimes, queries need an additional empty `cursor` value.
- added `SpectateCommunity` endpoint, it is supposed to be used in
scenarios where we want to "Go to public Community" and see its
content without joining
- added `spectated` field to `Community`, it means we are observing the
community and its chats but we are not members
Use case:
https://github.com/status-im/status-desktop/issues/7072#issuecomment-1246560885
When initially creating settings, the properties 'profile_pictures_show_to'
and 'profile_pictures_visibility' are initialized according to the provided Setting object.
This adds a new `DiscordMessageAttachment` type which is part of
`DiscordMessage`. Along with that type, there's also a new database
table for `discord_message_attachments` and corresponding persistence
APIs.
This commit also changes how chat messages are retrieved.
Here's why:
`DiscordMessage` can have multiple `DiscordMessageAttachment`.
A chat message can have a `DiscordMessage`.
Because we're `LEFT JOIN`'ing the discord message attachments into the
chat messages, there's a possibility of multiple rows per message.
Hence, this commit ensures we collect queried discord message
attachments on chat messages.
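A sketch of that row-collection pattern, with simplified table and column names (assumptions, not the actual schema):

```go
package persistence

import "database/sql"

type DiscordMessageAttachment struct {
	ID        string
	MessageID string
	URL       string
}

type Message struct {
	ID          string
	Text        string
	Attachments []*DiscordMessageAttachment
}

// messagesWithAttachments collapses the multiple rows produced by the
// LEFT JOIN back into one message carrying a slice of attachments.
func messagesWithAttachments(db *sql.DB, chatID string) ([]*Message, error) {
	rows, err := db.Query(`
		SELECT m.id, m.text, a.id, a.url
		FROM user_messages m
		LEFT JOIN discord_message_attachments a ON a.discord_message_id = m.discord_message_id
		WHERE m.chat_id = ?`, chatID)
	if err != nil {
		return nil, err
	}
	defer rows.Close()

	byID := map[string]*Message{}
	var ordered []*Message
	for rows.Next() {
		var msgID, text string
		var attID, attURL sql.NullString // NULL when a message has no attachments
		if err := rows.Scan(&msgID, &text, &attID, &attURL); err != nil {
			return nil, err
		}
		msg, ok := byID[msgID]
		if !ok {
			msg = &Message{ID: msgID, Text: text}
			byID[msgID] = msg
			ordered = append(ordered, msg)
		}
		if attID.Valid {
			msg.Attachments = append(msg.Attachments, &DiscordMessageAttachment{
				ID: attID.String, MessageID: msgID, URL: attURL.String,
			})
		}
	}
	return ordered, rows.Err()
}
```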
Usually, message IDs are generated from their payload and signature, and
receiving nodes calculate them based on the same data as well.
There's no ID attached to messages in-flight.
This turns out to be a bit of a problem for messages that are being
imported from third party systems like discord, as the conversion
and saving of such messages and handling of their possible assets and
attachments are done in separate steps, which changes the message
payloads after their IDs have been generated.
Hence, we're introducing a `ThirdPartyID` property to `common.Message`
and `protobuf.WakuMessage` so receiving nodes of such messages (via the
archive protocol primarily) can easily detect third party/imported
messages and give them special treatment.
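A minimal, hypothetical sketch of how a receiving node might detect such messages; the surrounding type and handling are assumptions, not the exact status-go logic:

```go
package protocol

// message stands in for common.Message; only the field relevant here is shown.
type message struct {
	ID           string
	ThirdPartyID string // set only for messages imported from third-party systems
}

// isImported reports whether a message came from a third-party import (e.g.
// Discord via the archive protocol) and should therefore get special
// treatment instead of the usual payload/signature-derived ID handling.
func isImported(m *message) bool {
	return m.ThirdPartyID != ""
}
```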
This might look like a weird requirement at first glance.
The reason this is needed, is because some message signals require
admin rights to take effect (e.g. PinMessage).
When messages are imported from third-party services,
translated to status messages, signed by the community, and eventually distributed
via the archive protocol, we need to ensure that messages signed
by the community itself are treated as having admin privileges as well,
so they can be correctly replayed into the database.
Restore the old, previously renamed, 1662447680_add_keypairs_table.up.sql
file while keeping the current one for those who already migrated to the
new one. The extra migration is a no-op and keeps the user data state
history consistent.
This adds a new `DownloadingHistoryArchivesFinished` signal to the
family of community archive signals. It's emitted when all archives that
need to be downloaded have been downloaded and handled.
Shelving this for the moment; if it becomes a problem in the future, we can address it then. For now I've just kept the refactoring I already did.
`FirstMessageTimestamp` enables members of the community to determine if
there are any messages they can fetch on the community channel (chat).
`FirstMessageTimestamp` is advertised by the admin for each community chat
through `CommunityDescription`. It assumes the admin is online frequently
enough to capture the first channel message.
For existing communities, the admin determines the first message timestamp
by finding the oldest chat message in its local database.
task: status-im/status-desktop#6731
Add image_payload column to chats table.
Add Base64Image to chat struct.
Add ImageChange event to propagate image.
Change EditChat API - use CroppedImage.
Process and crop image in EditGroupChat.
This is so that we can control whether we want to publish the community
when it, or its categories and channels, are created.
This is needed for the discord import so that we can create communities,
channels and categories without publishing the community and having it
show up in UIs too early.
This commit ensures we're relying on `chat.DeletedAtClockValue` instead
of `chat.Active` to know whether or not we need to remove the chat from
paired devices.
Because we were relying on `Active != true`, we ended up with a serious
bug that would result in deactivating all chats on paired devices.
The reason the chats would disappear on paired devices is because, when
setting up a new device by importing a seed phrase, chances are this
device will receive `HandleBackUp` signals (which originate from other
devices with the same account that backed up contacts etc).
When backups are handled, we create chats for every contact that's part
of the backup signal. Those chats are set to `Active = false` because
the signal handling shouldn't cause those chats to show up in the UI.
However, because those are set to `Active = false`, the next time the
user tries to sync from this device, all those chats are considered as
"removed", hence sending "chat removed" signals when syncing (which then
causes those chats to disappear on all paired devices).
We need to rely on `DeletedAtClockValue` to know whether a chat was
indeed removed and only then emit such a signal.
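A minimal sketch of the intended check; the field names come from the description above, the rest is an assumption:

```go
package protocol

// chat is a stand-in for the relevant fields of the real chat type.
type chat struct {
	ID                  string
	Active              bool
	DeletedAtClockValue uint64
}

// shouldSyncChatRemoval captures the fix: only treat a chat as removed (and
// emit a "chat removed" signal when syncing) if it was explicitly deleted,
// not merely inactive, as is the case for chats created while handling backups.
func shouldSyncChatRemoval(c *chat) bool {
	return c.DeletedAtClockValue > 0
}
```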
This adds a new `discord_messages` table and extends the persistence
APIs such that `MessagesByID` and `MessageByID` will return user
messages that include their discord message payload.
It also adds APIs to save individual discord messages.
This introduces a new table to store discord message authors.
The main reason this table is being introduced is so that we don't have
to duplicate discord message author information in the `user_messages`
table when importing discord communities (ongoing work).
In addition to the table there are also two new APIs on the messenger
persistence layer (which are later used in the import logic):
- `HasDiscordMessageAuthor`
- `SaveDiscordMessageAuthor`
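A sketch of how the import logic might use these two APIs to avoid duplicating author data; the signatures and fields below are assumptions:

```go
package protocol

// DiscordMessageAuthor mirrors the kind of author data stored in the new
// table; the exact fields are assumptions.
type DiscordMessageAuthor struct {
	ID        string
	Name      string
	AvatarURL string
}

// authorPersistence is a stand-in for the messenger persistence layer,
// exposing the two new APIs mentioned above.
type authorPersistence interface {
	HasDiscordMessageAuthor(id string) (bool, error)
	SaveDiscordMessageAuthor(author *DiscordMessageAuthor) error
}

// saveAuthorOnce saves a discord author only if it hasn't been stored yet,
// so author information isn't duplicated per message during import.
func saveAuthorOnce(p authorPersistence, author *DiscordMessageAuthor) error {
	exists, err := p.HasDiscordMessageAuthor(author.ID)
	if err != nil {
		return err
	}
	if exists {
		return nil
	}
	return p.SaveDiscordMessageAuthor(author)
}
```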
Closes #2759
There are two types of errors "non-critical" (or "warning") and "critical".
These map to error codes `1` and `2` respectively.
The current error codes are a left-over of initial experimenting.
As part of the new Discord <-> Status Community Import functionality,
we're adding an API that extracts all discord categories and channels
from a previously exported discord export file.
These APIs can be used in clients to show the user what categories and
channels will be imported later on.
There are two APIs:
1. `Messenger.ExtractDiscordCategoriesAndChannels(filesToimport
[]string) (*MessengerResponse, map[string]*discord.ImportError)`
This takes a list of exported discord export (JSON) files (typically one per
channel), reads them, and extracts the categories and channels into
dedicated data structures (`[]DiscordChannel` and `[]DiscordCategory`)
It also returns the oldest message timestamp found in all extracted
channels.
The API is synchronous and returns the extracted data as
a `*MessengerResponse`. This allows the API to be made available via
status-go's RPC interface.
The error case is a `map[string]*discord.ImportError` where each key
is a file path of a JSON file that we tried to extract data from, and
the value a `discord.ImportError` which holds an error message and an
error code, allowing for distinguishing between "critical" errors and
"non-critical" errors.
2. `Messenger.RequestExtractDiscordCategoriesAndChannels(filesToImport
[]string)`
This is the asynchronous counterpart to
`ExtractDiscordCategoriesAndChannels`. The reason this API has been
added is because discord servers can have a lot of message and
channel data, which causes `ExtractDiscordCategoriesAndChannels` to
block the thread for too long, making apps potentially feel like they
are stuck.
This API runs inside a goroutine, eventually calls
`ExtractDiscordCategoriesAndChannels`, and then emits a newly
introduced `DiscordCategoriesAndChannelsExtractedSignal` that clients
can react to.
Failure of extraction has to be determined by the
`discord.ImportErrors` emitted by the signal.
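For illustration, a sketch of how a client of the synchronous API might separate warnings from critical errors; the stand-in types and field names here are assumptions:

```go
package discordimport

import "fmt"

// Minimal stand-ins for the types referenced above; the field names on
// ImportError are assumptions.
type MessengerResponse struct{}

type ImportError struct {
	Code    int
	Message string
}

type extractor interface {
	ExtractDiscordCategoriesAndChannels(filesToImport []string) (*MessengerResponse, map[string]*ImportError)
}

const errorCodeCritical = 2 // per the warning/critical convention mentioned earlier

// extractAndReport runs the synchronous API and reports whether any critical
// error occurred, so a client could decide whether to continue the import.
func extractAndReport(m extractor, files []string) (*MessengerResponse, bool) {
	response, importErrors := m.ExtractDiscordCategoriesAndChannels(files)
	critical := false
	for file, importErr := range importErrors {
		fmt.Printf("%s: %s (code %d)\n", file, importErr.Message, importErr.Code)
		if importErr.Code == errorCodeCritical {
			critical = true
		}
	}
	return response, critical
}
```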
**A note about exported discord history files**
We expect users to export their discord histories via the
[DiscordChatExporter](https://github.com/Tyrrrz/DiscordChatExporter/wiki/GUI%2C-CLI-and-Formats-explained#exportguild)
tool. The tool allows exporting the data in different formats, such as
JSON, HTML and CSV.
We expect users to have their data exported as JSON.
Closes: https://github.com/status-im/status-desktop/issues/6690
This introduces a flag to configure whether `Messenger.CreateCommunity`
should create a default channel.
As discussed in https://github.com/status-im/status-go/issues/2758, this
is needed for the upcoming functionality to import discord communities.
Closes #2758
This commit introduces a few changes regarding users accessing
communities:
While the APIs still exist, community invites should no longer be
used, instead communities should merely be "shared".
Sharing a community to users allows users to "join" the community,
which in reality makes them request access to that community.
This means, users have to request access to any community, even if
the community has permissions set to NO_MEMBERSHIP
The only difference between ON_REQUEST and NO_MEMBERSHIP is that
ON_REQUEST communities require manual approval by the owner/admin
to access the community, while NO_MEMBERSHIP communities accept requests
automatically (as soon as the owner/admin receives the request).
This also implies that users are no longer optimistically added to the
member list of communities, but only after they have been accepted.
This introduces a bit of a message ping-pong for users to know that
someone is now part of a community.
fix: add verification request to response
fix: code review
add missing functions and simplify timestamp usage
fix: sync verification requests
feat: add endpoint to fetch all received verification requests
feat: add signal when trusting verification request
Co-authored-by: Jonathan Rainville <rainville.jonathan@gmail.com>
If a contact update or a legacy contact request is sent, we create a
notification for the user in the activity center so it can be replied
to.
If at a later date a new contact request is received from the same user,
this will replace it, so the proper message can be displayed.
This introduces a simple garbage collection which checks for all soft
deleted bookmarks (`removed = 1`) that have been marked for garbage
collection more than 30 days before the time of bootstrapping the
messenger.
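A minimal sketch of what this garbage collection could look like, assuming simplified column names:

```go
package browser

import (
	"database/sql"
	"time"
)

// garbageCollectRemovedBookmarks deletes soft-removed bookmarks whose
// deleted_at timestamp is older than 30 days, run at messenger start.
// Column names follow the description above but are simplified assumptions.
func garbageCollectRemovedBookmarks(db *sql.DB) error {
	threshold := time.Now().AddDate(0, 0, -30).Unix()
	_, err := db.Exec(
		"DELETE FROM bookmarks WHERE removed = 1 AND deleted_at != 0 AND deleted_at <= ?",
		threshold,
	)
	return err
}
```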
Closes #2705
These APIs are being introduced to address #2706 and #2704, provided
that clients will move to using these APIs instead of the currently
provided equivalent APIs in the browser service.
The `bookmarks` table is being extended with a `deleted_at` field which
can later be used for garbage collection, as "removing" a bookmark is
merely soft deletion and doesn't actually remove the data from the
database.
In addition to those APIs adding and soft deleting bookmark entries,
they also automatically perform a sync operation to ensure that
bookmarks are synced in real-time (#2704).
Closes #2706, #2704
This commit introduces a new `clock` field in the
`communities_settings` table so that it can be leveraged for syncing
community settings across devices.
It also extends existing `syncCommunity` APIs to generate
`SyncCommunitySettings` as well, avoiding sending additional sync messages
for community settings.
When editing communities however, we still sync community settings
explicitly as we aren't syncing the community itself in that case.
* Added function to get preferred network IP
Also did some refactoring on the server package to make it a lot more reusable
* Added server.Option and simplified handler funcs
* Added serial number deterministically generated from pk
* Debugging TLS server connection
* Implemented configurable server ip
When accessing over the network, the server needs to listen on the network port and not localhost or 127.0.0.1. Also, the cert can now have a dedicated IP.
* Refactor of URL funcs to use the url package
* Removed redundant Options pattern in favour of config param
* Added full server test using GetOutboundIP
* Remove references and usage of Server.port
The application does not need to set the port, we rely on the net.Listener to pick a port.
* Version bump
* Added ToECDSA func and improved cert testing
* Added error check in test
* Split Server types, embedding raw Server funcs into specialised server types
* localhost
* Implemented DNS and IP based cert gen
iOS doesn't allow restricted IP addresses to be used in a valid TLS cert
* Replace listener handling with original port store
Also added handlers as a parameter of the Server
Add banner image as a special `IdentityImage` beside "thumbnail" and "large"
Banner input cropped image processing
- Resize to keep within the limits of `BannerDim`
- Encode to match the file size limits defined for the banner
- Don't scale up. This can be done efficiently in the UI
Changes to `images` module
- Refactor `EncodeToBestSize` as `EncodeToLimits` to accept arbitrary dimensions
and allow for custom size
- Define `DimensionLimits` for banner not to exceed 450 KB and a rough estimate
for the ideal size
This allows to store community admin settings that are meant to be propagated
to community members (as opposed to the already existing
`CommunitySettings` which are considered local to every account).
The first setting introduced as part of this commit is one that enables
community admins to configure whether or not members of the community
are allowed to pin messages in community channels.
Prior to this commit, this was not restricted at all on the protocol
level and only enforced by clients via UI (e.g. members don't see an
option to pin messages, although they could).
This config setting now ensures that:
1. If turned off, members cannot send a pin message
2. If turned off, pin messages from members are not handled/processed
This is needed by https://github.com/status-im/status-desktop/issues/5662
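A minimal sketch of how such a check could look; the field and helper names are assumptions, not the actual status-go types:

```go
package communities

// communityDescription is a stand-in holding the new admin setting; the real
// type, field and helper names in status-go may differ.
type communityDescription struct {
	PinMessageAllMembersEnabled bool
	admins                      map[string]bool
}

func (c *communityDescription) isAdmin(memberID string) bool {
	return c.admins[memberID]
}

// canPinMessage enforces the setting on the protocol level: when pinning by
// all members is turned off, only admins may send a pin message, and pin
// messages from regular members are not processed.
func canPinMessage(c *communityDescription, memberID string) bool {
	if c.PinMessageAllMembersEnabled {
		return true
	}
	return c.isAdmin(memberID)
}
```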
This introduces the ability for status nodes to handle community
history archive magnetlinks. To make this work, a few things are needed:
1. A new database table has been introduced to store message archive
hashes. This is necessary so status nodes can determine whether or
not they need to download a certain archive
2. The messenger's `handleRetrievedMessages()` has been extended to take
magnetlink messages into account
3. New APIs were added to download torrent data given a magnetlink and
also to extract messages from downloaded archives, which are then
later fed to `handleRetrievedMessages`
Closes #2568
Previously, we turned archive support off by default, which means
community member nodes won't try to download torrent data when they
receive a magnetlink.
This commit ensures that communities joined in the future will have
this setting on, and also all existing communities that the node
isn't admin of will have this feature enabled.
* feat: add colorId utility
it returns the color id for a given pubkey
* feat: populate Account with colorHash and colorId
accounts displayed to users on the login page should display the colorHash and
the avatar fallback color (aka colorId)
This introduces logic needed to:
- Create WakuMessageArchives and indices from stored waku messages
- Write history archive torrent data to disk and create a .torrent file from
that
- Seed and unseed history archive torrents as necessary
- Starting/stopping the torrent client
- Enabling/disabling community history support for individual components
and starting/stopping the routine intervals accordingly
This does not yet handle magnet links (#2568)
Closes #2567
This is needed so that when they are bundled into archives, receiving
nodes can still verify the message payloads using their signatures.
This commit introduces a new `waku_messages` table and APIs to store
such messages. The waku message payload is stored for any message whose
topic matches any of the chats of communities the node is an admin of.
Closes #2566
* Sync Settings
* Added valueHandlers and Database singleton
Some issues remain; we need a way to compare the incoming sql.DB to check if the connection is to a different file or not. Maybe make a singleton instance per filename
* Added functionality to check the sqlite filename
* Refactor of Database.SaveSyncSettings to be used as a handler
* Implemented interface for setting sync protobuf factories
* Refactored and completed adhoc send setting sync
* Tidying up
* Immutability refactor
* Refactor settings into dedicated package
* Breakout structs
* Tidy up
* Refactor of bulk settings sync
* Bug fixes
* Addressing feedback
* Fix code dropped during rebase
* Fix for db closed
* Fix for node config related crashes
* Provisional fix for type assertion - issue 2
* Adding robust type assertion checks
* Partial fix for null literal db storage and json encoding
* Fix for passively handling nil sql.DB, and checking if elem has len and if len is 0
* Added test for preferred name behaviour
* Adding saved sync settings to MessengerResponse
* Completed granular initial sync and clock from network on save
* add Settings to isEmpty
* Refactor of protobufs, partially done
* Added syncSetting receiver handling, some bug fixes
* Fix for sticker packs
* Implement inactive flag on sync protobuf factory
* Refactor of types and structs
* Added SettingField.CanSync functionality
* Addressing rebase artifact
* Refactor of Setting SELECT queries
* Refactor of string return queries
* VERSION bump and migration index bump
* Deactivate Sync Settings
* Deactivated preferred_name and send_status_updates
Co-authored-by: Andrea Maria Piana <andrea.maria.piana@gmail.com>
These are used to store settings for individual communities a
user is part of, either as member or as owner.
This includes whether or not the community history archive protocol
is enabled.
This adds a new `CommunitySettings` type and adds
a migration script that introduces a new `communities_settings`
table.
It also extends the `MessengerResponse` type to include
`CommunitySettings` which are honored when communities are being
added, edited, joined or left.
Lastly, this adds a new RPC API to retrieve the settings.
Closes #2564
This commit introduces a new `TorrentConfig` as per #2563 with dedicated
default values and new options/flags for the status-go CLI.
Since it's part of `NodeConfig`, which is stored per user in the
database, this commit also adds a migration script that adds a new
`torrent_config` table and database APIs to insert and retrieve the
torrent config.
Closes #2563
This commit adds the special magnetlink channel that is used to
distribute community archive magnetlinks to community members.
The channel shouldn't be visible in the UI, so instead of creating an
actual channel instance, it just sets up the filters.
Closes #2565
This commit enables mailserver cycle logic by default and makes a few
changes:
1) Nodes are graylisted instead of being blacklisted for a set amount of
time. The reason is that if we blacklist, any cut in connectivity
might result in long delays before reconnecting, especially on spotty
connections.
2) Fixes an issue on the devp2p server, whereby the node would not
connect to one of the static nodes since all the connection slots
were filled. The fix is a bit inelegant: it always connects to
static nodes, ignoring maxpeers, but it's tricky to get it to work
since the code is clearly not written to select a specific node.
3) Adds support for pinned mailservers
4) Adds retries to mailserver requests. It uses a closure for now; I
think we should eventually have a channel etc., but I'd leave that for
later.
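A minimal sketch of the closure-based retry idea from point 4 (parameter names and the backoff are assumptions):

```go
package mailservers

import "time"

// requestWithRetries re-invokes the request logic, captured in a closure,
// a limited number of times before giving up.
func requestWithRetries(send func() error, attempts int, backoff time.Duration) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = send(); err == nil {
			return nil
		}
		time.Sleep(backoff)
	}
	return err
}
```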
Prior to this commit, when a community owner leaves her community,
status-go would just set the `joined` flag to `false` without updating the
members list. This results in wrong data as the number of members doesn't
match reality.
This commit removes the owner of the community from the members list
when she's leaving the community.