Similar to `CheckPermissionToJoin()` we now get
a `CheckChannelPermissions()` API.
It will rely on the same `PermissionResponse` types, but gives
information about both `ViewOnlyPermissions` and
`ViewAndPostPermissions`.
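For illustration, a minimal sketch of the response shape, assuming the existing `PermissionResponse` type from `CheckPermissionToJoin()`; everything except the two field names above is an assumption, not the actual status-go definition:

```go
package communities

// PermissionResponse is the per-permission result already used by
// CheckPermissionToJoin() (fields assumed for illustration).
type PermissionResponse struct {
	Satisfied bool `json:"satisfied"`
}

// CheckChannelPermissionsResponse reports both permission classes for a channel.
type CheckChannelPermissionsResponse struct {
	ViewOnlyPermissions    PermissionResponse `json:"viewOnlyPermissions"`
	ViewAndPostPermissions PermissionResponse `json:"viewAndPostPermissions"`
}
```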
This commit does a few things:
- Adds a migration that adds chain IDs to communities_request_to_join_revealed_addresses
- Removes RevealedAddress in favor of RevealedAccount, which is now a struct that contains the revealed address as well as the signature and a list of chain IDs on which to check for user funds
- Changes the logic of sending requests to join a community, such that after creating address signatures, the user node also checks which of the addresses has funds on which networks for the community's token permissions, and adds those chain IDs to the RevealedAccount
- Updates `CheckPermissionToJoin()` such that only relevant chain IDs are used when checking the user's funds. Chain IDs are retrieved from RevealedAccounts and matched against token permission criteria chain IDs
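A minimal sketch of the `RevealedAccount` shape described above; the package, field names and JSON tags are assumptions, not the exact status-go definition:

```go
package protocol

// RevealedAccount replaces the old RevealedAddress field.
type RevealedAccount struct {
	Address   string   `json:"address"`   // revealed wallet address
	Signature []byte   `json:"signature"` // proves ownership of the address
	ChainIDs  []uint64 `json:"chainIds"`  // networks on which to check for the user's funds
}
```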
* chore(upgradeSQLCipher): Upgrading SQLCipher to version 5.4.5
Changes:
### github.com/mutecomm/go-sqlcipher
1. The improved crypto algorithms from go-sqlcipher v3 are merged into v4
Tags:
v4.4.2-status.1 - merge `burn_stack` improvement
v4.4.2-status.2 - merge `SHA1` improvement
v4.4.2-status.4 - merge `AES` improvement
2. Fixed `go-sqlcipher` to support v3 database in compatibility mode (`sqlcipher` already supports this) (Tag: v4.4.2-status.3)
3. Upgrade `sqlcipher` to v5.4.5 (Tag: v4.5.4-status.1)
### github.com/status-im/migrate/v4
1. Upgrade `go-sqlcipher` version in `github.com/status-im/migrate/v4`
### status-go
1. Upgrade `go-sqlcipher` and `migrate` modules in status-go
2. Configure the DB connections to open the DB in v3 compatibility mode
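For illustration, a minimal sketch of opening a database in SQLCipher v3 compatibility mode with the go-sqlcipher driver; the key handling and exact pragmas status-go uses may differ:

```go
package main

import (
	"database/sql"
	"fmt"

	_ "github.com/mutecomm/go-sqlcipher/v4" // registers the "sqlite3" driver
)

// openV3Compatible opens an encrypted database and asks SQLCipher 4.x to use
// the v3 defaults (KDF iterations, page size, HMAC algorithm).
func openV3Compatible(path, key string) (*sql.DB, error) {
	db, err := sql.Open("sqlite3", path)
	if err != nil {
		return nil, err
	}
	// The key must be set before any other statement touches the database.
	if _, err := db.Exec(fmt.Sprintf("PRAGMA key = '%s';", key)); err != nil {
		return nil, err
	}
	if _, err := db.Exec("PRAGMA cipher_compatibility = 3;"); err != nil {
		return nil, err
	}
	return db, nil
}
```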
* chore(upgradeSQLCipher): Use sqlcipher v3 configuration to encrypt a plain text database
* chore(upgradeSQLCipher): Scanning NULL BLOB value should return nil
Fixing failing tests: TestSyncDeviceSuite/TestPairingSyncDeviceClientAsReceiver; TestSyncDeviceSuite/TestPairingSyncDeviceClientAsSender
Considering the following configuration:
1. Table with BLOB column has 1 NULL value
2. Query the value
3. `Rows.Scan(&dest)` where `dest` is a `sql.NullString`
Expected: dest.Valid == false (NULL value)
Actual: dest.Valid == true; dest.String == ""
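A small reproduction of that behaviour, assuming an already opened `*sql.DB` and an illustrative table `t(blob_col BLOB)` containing a single NULL row:

```go
package main

import (
	"database/sql"
	"fmt"
)

// scanNullBlob scans a NULL BLOB into a sql.NullString.
func scanNullBlob(db *sql.DB) error {
	var dest sql.NullString
	if err := db.QueryRow("SELECT blob_col FROM t LIMIT 1").Scan(&dest); err != nil {
		return err
	}
	// Fixed driver: dest.Valid == false for the NULL value.
	// Broken driver: dest.Valid == true and dest.String == "".
	fmt.Printf("valid=%v string=%q\n", dest.Valid, dest.String)
	return nil
}
```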
* chore: Bump go-sqlcipher version to include NULL BLOB fix
This commit does a few things:
1) Extend create/import account endpoint to get wallet config, some of
which has been moved to the backend
2) Set up a loop for retrieving balances every 10 minutes, caching the
balances
3) Return information about which checks are not passing when trying to
join a token gated community
4) Add tests to the token gated communities
5) Fix an issue with addresses not matching when checking for
permissions
The move of the wallet work to a background task is not yet complete: I still
need to publish a signal, and I will most likely disable it before merging
for now, as it's currently not used by desktop/mobile and the PR was
getting too big.
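Point 2 above describes a periodic refresh; a minimal sketch of such a loop, assuming a hypothetical `fetchAndCacheBalances` helper and a stop channel (not the actual status-go code):

```go
package wallet

import "time"

// fetchAndCacheBalances is a placeholder for the real balance fetch and
// cache step; its name is an assumption.
func fetchAndCacheBalances() { /* query chains, store results */ }

// startBalanceLoop refreshes cached balances every 10 minutes until the
// stop channel is closed.
func startBalanceLoop(stop <-chan struct{}) {
	go func() {
		ticker := time.NewTicker(10 * time.Minute)
		defer ticker.Stop()
		fetchAndCacheBalances() // initial fetch before the first tick
		for {
			select {
			case <-ticker.C:
				fetchAndCacheBalances()
			case <-stop:
				return
			}
		}
	}()
}
```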
This commit refactors the discord import tool such that,
instead of loading all data to be imported into memory at
once, it now performs the import on a per-file basis.
This reduces the memory pressure on the node performing
the import and seems to improve its performance as well.
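A minimal sketch of the per-file flow, assuming a hypothetical `importSingleFile` helper rather than the actual import code:

```go
package discord

import "os"

// importFiles processes one export file at a time so that only a single
// file's messages are held in memory at once.
func importFiles(paths []string) error {
	for _, p := range paths {
		f, err := os.Open(p)
		if err != nil {
			return err
		}
		err = importSingleFile(f)
		f.Close()
		if err != nil {
			return err
		}
	}
	return nil
}

// importSingleFile decodes one export file and imports its messages and
// attachments (placeholder).
func importSingleFile(f *os.File) error {
	return nil
}
```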
Prior to this commit we had a `CreateHistoryArchiveTorrent()` API which
takes a `startDate`, an `endDate` and a `partition` to create a bunch of
message archives, given a certain time range.
The function expects the messages to live in the database, which means
all messages that need to be archived have to be saved there at some
point.
This turns out to be an issue when importing communities from third
party services, where there are sometimes several thousand messages,
including attachment payloads, that have to be saved to the database
first.
There are only two options to get the messages into the database:
1. Make one write operation with all messages - this is slow, takes a long
time and blocks the database until done
2. Create message chunks and perform multiple write operations - this is
also slow and takes a long time, but makes the database a bit more responsive
as it's many smaller operations instead of one big one
Option 2) turned out not to be super feasible either, as sometimes
inserting even a single such message can take up to 10 seconds
(depending on the payload).
Which brings me to the third option.
**A third option** is to not store those imported messages as waku
messages in the database, only to later query them again to create the
archives, but instead to create the archives right away from all the
messages that have been loaded into memory.
This is significantly faster and doesn't block the database.
To make this possible, this commit introduces
a `CreateHistoryArchiveTorrentFromMessages()` API, and
a `CreateHistoryArchiveTorrentFromDB()` API which can be used for
different use cases.
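A hedged sketch of how the two entry points might relate; the types and helper names are assumptions based on the description above, not the actual signatures:

```go
package communities

import "time"

// WakuMessage stands in for the real status-go message type.
type WakuMessage struct {
	Topic     []byte
	Payload   []byte
	Timestamp uint64
}

// createHistoryArchiveTorrent is the shared partitioning/archiving logic
// (placeholder).
func createHistoryArchiveTorrent(communityID []byte, msgs []*WakuMessage, start, end time.Time, partition time.Duration) error {
	return nil
}

// loadMessagesFromDB is a placeholder for the database query used by the
// ...FromDB variant.
func loadMessagesFromDB(topics [][]byte, start, end time.Time) ([]*WakuMessage, error) {
	return nil, nil
}

// CreateHistoryArchiveTorrentFromDB keeps the old behaviour: query the
// messages from the database, then build the archives.
func CreateHistoryArchiveTorrentFromDB(communityID []byte, topics [][]byte, start, end time.Time, partition time.Duration) error {
	msgs, err := loadMessagesFromDB(topics, start, end)
	if err != nil {
		return err
	}
	return createHistoryArchiveTorrent(communityID, msgs, start, end, partition)
}

// CreateHistoryArchiveTorrentFromMessages builds the archives directly from
// messages already held in memory, avoiding the database round trip.
func CreateHistoryArchiveTorrentFromMessages(communityID []byte, msgs []*WakuMessage, start, end time.Time, partition time.Duration) error {
	return createHistoryArchiveTorrent(communityID, msgs, start, end, partition)
}
```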
This is so that we can control whether we want to publish the community
when it, or its categories and channels, are created.
This is needed for the discord import so that we can create communities,
channels and categories without publishing the community and having it
show up in UIs too early.
Add the banner image as a special `IdentityImage` alongside "thumbnail" and "large"
Banner input cropped image processing:
- Resize to keep within the limits of `BannerDim`
- Encode to match the file size limits defined for the banner
- Don't scale up. This can be done efficiently in the UI
Changes to `images` module
- Refactor `EncodeToBestSize` as `EncodeToLimits` to accept arbitrary dimensions
and allow for custom sizes
- Define `DimensionLimits` for the banner: a hard limit of 450 KB plus a rough
estimate of the ideal size
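A hedged sketch of those banner limits; apart from the 450 KB ceiling, the numbers and function names are assumptions:

```go
package images

// DimensionLimits bounds the encoded image size.
type DimensionLimits struct {
	Ideal int // rough target size in bytes (assumed)
	Max   int // hard upper bound in bytes
}

// BannerDimensionLimits returns the limits used when encoding the banner:
// aim for the ideal size, never exceed 450 KB.
func BannerDimensionLimits() DimensionLimits {
	return DimensionLimits{
		Ideal: 300 * 1024, // assumption; only the 450 KB ceiling comes from the text
		Max:   450 * 1024,
	}
}
```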
This introduces the ability for status nodes to handle community
history archive magnetlinks. To make this work, a few things are needed:
1. A new database table has been introduced to store message archive
hashes. This is necessary so status nodes can determine whether or
not they need to download a certain archive
2. The messenger's `handleRetrievedMessages()` has been extended to take
magnetlink messages into account
3. New APIs were added to download torrent data given a magnetlink and
also to extract messages from downloaded archives, which are then
later fed to `handleRetrievedMessages`
Closes #2568
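A hedged sketch of the bookkeeping from point 1; the table and column names are assumptions, not the actual migration:

```go
package communities

import "database/sql"

// Assumed table for tracking which message archives a node already knows about.
const createArchiveHashesTable = `
CREATE TABLE IF NOT EXISTS community_message_archive_hashes (
    community_id TEXT NOT NULL,
    hash         TEXT NOT NULL,
    PRIMARY KEY (community_id, hash)
);`

// hasMessageArchive reports whether a given archive hash is already known,
// so the node can skip downloading that archive again.
func hasMessageArchive(db *sql.DB, communityID, hash string) (bool, error) {
	var count int
	err := db.QueryRow(
		`SELECT COUNT(1) FROM community_message_archive_hashes WHERE community_id = ? AND hash = ?`,
		communityID, hash,
	).Scan(&count)
	return count > 0, err
}
```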
This introduces logic needed to:
- Create WakuMessageArchives and indices from stored waku messages
- Write history archive torrent data to disk and create a .torrent file from
that
- Seed and unseed history archive torrents as necessary
- Start/stop the torrent client
- Enable/disable community history support for individual components
and start/stop the routine intervals accordingly
This does not yet handle magnet links (#2568)
Closes #2567
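A rough sketch of the enable/disable wiring described above; `TorrentClient` and `ArchiveManager` are stand-ins for the real status-go types, not their actual definitions:

```go
package communities

// TorrentClient is an assumed minimal interface over the torrent library.
type TorrentClient interface {
	Start() error
	UnseedAll()
	Stop()
}

type ArchiveManager struct {
	torrentClient TorrentClient
	stop          chan struct{}
}

// EnableHistoryArchiveSupport starts the torrent client and the periodic
// archive-creation routine.
func (m *ArchiveManager) EnableHistoryArchiveSupport() error {
	if err := m.torrentClient.Start(); err != nil {
		return err
	}
	m.stop = make(chan struct{})
	go m.runArchiveInterval(m.stop)
	return nil
}

// DisableHistoryArchiveSupport stops the routine, unseeds the archives and
// shuts the torrent client down.
func (m *ArchiveManager) DisableHistoryArchiveSupport() {
	close(m.stop)
	m.torrentClient.UnseedAll()
	m.torrentClient.Stop()
}

// runArchiveInterval periodically creates and seeds new archives until stop
// is closed (placeholder).
func (m *ArchiveManager) runArchiveInterval(stop <-chan struct{}) { <-stop }
```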
This is needed so that when they are bundled into archives, receiving
nodes can still verify the message payloads using their signatures.
This commit introduces a new `waku_messages` table and APIs to store
such messages. The waku message payload is stored for any message whose
topic matches any of the admin communities' chats.
Closes #2566
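A hedged sketch of the `waku_messages` storage; the column names are assumptions based on the commit message, not the real migration:

```go
package communities

// Assumed schema: keep the raw payload and signature so archived copies of
// the message remain verifiable later.
const createWakuMessagesTable = `
CREATE TABLE IF NOT EXISTS waku_messages (
    hash      TEXT PRIMARY KEY,  -- message hash
    topic     TEXT NOT NULL,     -- must match one of the admin communities' chat topics
    payload   BLOB,              -- original waku payload, kept verbatim
    sig       BLOB,              -- signature, so receiving nodes can verify archived copies
    timestamp INTEGER
);`
```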
* feat: Add edit communities
Allow Communities to be edited, including display name, description, color, membership, and permissions.
* Added EditCommunity request type
* Fix lint errors
* Allow editing community without changing image
Previously, retaining an existing community image was not possible: the existing community image path had to be provided in the `editCommunity` RPC call to keep the image, but once the image has been processed by status-go it is encoded as a base64 string, so the original file path cannot be recovered from that string.
This commit allows the original image to be retained by passing an empty string for the image field in the RPC call.
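A sketch of what the `EditCommunity` request type might look like; the field names are loosely inferred from the description above and are assumptions, not the exact status-go definition:

```go
package requests

// EditCommunity carries the editable community fields. An empty Image
// string means "keep the existing community image" (see the note above).
type EditCommunity struct {
	CommunityID string
	Name        string
	Description string
	Color       string
	Image       string // "" retains the current image
	Membership  int
}
```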
* Don't change permissions. Fixed clock updating
Co-authored-by: Volodymyr Kozieiev <vkjr.sp@gmail.com>