Fixed logic to respect specified NFT quantities. Previously, holding one
NFT sufficed, regardless of the required count.
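A minimal sketch of the corrected comparison; the names below are illustrative, not the actual status-go identifiers:

```go
// ownedCount is how many tokens of the required collectible the member
// holds; requiredAmount comes from the token permission criterion.
func nftCriterionSatisfied(ownedCount, requiredAmount uint64) bool {
	// The old behaviour was effectively ownedCount > 0, ignoring the
	// required count; the fix compares against the full quantity.
	return ownedCount >= requiredAmount
}
```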
fixes: status-im/status-desktop#15122
This is an optimisation: instead of fetching the collectibles' owners for each
member, they are fetched once, before iterating over the members.
This should significantly reduce the number of queries to providers.
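A hedged sketch of the change; fetchCollectibleOwners and the surrounding types are placeholders, not the real status-go API:

```go
// checkMembers fetches the owners list a single time and reuses it for
// every member, instead of re-querying the provider inside the loop.
func checkMembers(contract string, members []string,
	fetchCollectibleOwners func(contract string) (map[string]bool, error),
) (map[string]bool, error) {
	owners, err := fetchCollectibleOwners(contract) // one provider query
	if err != nil {
		return nil, err
	}
	satisfied := make(map[string]bool, len(members))
	for _, member := range members {
		satisfied[member] = owners[member] // pure map lookup, no query
	}
	return satisfied, nil
}
```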
closes: status-im/status-desktop#14914
I've renamed TorrentManager to ArchiveManager, ArchiveManager to ArchiveFileManager, TorrentContract to ArchiveService, and ArchiveContract to ArchiveFileService. I've also renamed all init functions and struct fields to the appropriate archive-centric naming.
Instead of using the TorrentContract interface, the field is now explicitly declared as *TorrentManager. This removes all the random type casting used in the tests. I also removed all usages of buildTorrentConfig(), as we now build the test suite with the torrent config.
AmountInWei holds the amount in wei-like units.
The Amount field becomes deprecated because it kept the value as a float string.
Comparison (in case of Decimals == 5):
Amount (deprecated) = "1.2"
AmountInWei = "120000"
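A minimal conversion sketch using exact big-number arithmetic (toWeiLike is an illustrative name, not the actual helper):

```go
import (
	"fmt"
	"math/big"
)

// toWeiLike converts a decimal amount string such as "1.2" into its
// wei-like integer representation ("120000" for decimals == 5).
func toWeiLike(amount string, decimals uint) (string, error) {
	r, ok := new(big.Rat).SetString(amount) // parses "1.2" exactly
	if !ok {
		return "", fmt.Errorf("invalid amount: %q", amount)
	}
	mul := new(big.Int).Exp(big.NewInt(10), big.NewInt(int64(decimals)), nil)
	r.Mul(r, new(big.Rat).SetInt(mul))
	if !r.IsInt() {
		return "", fmt.Errorf("amount %q has more than %d decimals", amount, decimals)
	}
	return r.Num().String(), nil
}
```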
Issue #11588
This commit changes the format of the encryption ID to be based on three
things:
1) The group ID
2) The timestamp
3) The actual key
Previously this was based solely on the timestamp and the group ID, which
might lead to conflicts. Moreover, the ID was a uint32, so it would wrap
around periodically.
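For illustration, the derivation could hash the three inputs together; the exact scheme below is an assumption, not taken from the source:

```go
import (
	"crypto/sha256"
	"encoding/binary"
)

// encryptionID derives an ID from the group ID, the timestamp and the
// key material itself, so two keys for the same group and timestamp
// still get distinct IDs. Assumed scheme, for illustration only.
func encryptionID(groupID []byte, timestamp uint64, key []byte) []byte {
	ts := make([]byte, 8)
	binary.BigEndian.PutUint64(ts, timestamp)
	h := sha256.New()
	h.Write(groupID)
	h.Write(ts)
	h.Write(key)
	return h.Sum(nil)
}
```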
The migration is a bit tricky, so we did two things: first, we cleared the
cache of keys, which is easier than migrating; second, we set the new
hash_id field to the concatenation of group_id and key_id.
This might lead to some duplication in case keys are re-received, but it
should not have an impact on the correctness of the code.
I have added two tests covering compatibility between old and new clients,
as this should not be a breaking change.
This commit also adds a new message to rekey in a single go, instead of
having to send multiple messages.
Added an interface for initializing the DB, which is implemented for
appdatabase and walletdatabase (TBD for the multiaccounts DB).
Unified DB initialization for all tests using helpers and the new interface.
Reduced sqlcipher KDF iterations to 1 for all tests.
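A minimal sketch of what such an interface could look like; the name and method signature are assumptions:

```go
import "database/sql"

// DBInitializer is a hypothetical shape for the shared interface;
// appdatabase and walletdatabase would each provide an implementation.
type DBInitializer interface {
	Initialize(db *sql.DB) error
}
```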
chore:
- add CommunityEventsMessage
- refactor community_admin_event to accept a list of events and patch a CommunityDescription
- save/read community events into/from database
- publish and handle community events message
- fix admin category tests
- rename AdminEvent to Events or CommunityEvents
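A hedged sketch of the patching flow described above; all types and fields are assumptions:

```go
// CommunityEventsMessage carries a list of events that, applied in
// order, patch a CommunityDescription.
type CommunityEventsMessage struct {
	CommunityID []byte
	Events      []CommunityEvent
}

// CommunityEvent is anything that knows how to mutate a description.
type CommunityEvent interface {
	Apply(*CommunityDescription)
}

// CommunityDescription is stubbed here; the real type is far richer.
type CommunityDescription struct{}

// patchDescription applies each event in order to the description.
func patchDescription(desc *CommunityDescription, msg *CommunityEventsMessage) {
	for _, ev := range msg.Events {
		ev.Apply(desc)
	}
}
```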
This API is used to get the permission status of all channels of a given
community.
Clients can use this API to get that information for all community
channels with a single RPC call instead of doing one call for each channel
separately.
Similar to `CheckPermissionToJoin()` we now get
a `CheckChannelPermissions()` API.
It will rely on the same `PermissionResponse` types, but gives
information about both `ViewOnlyPermissions` and
`ViewAndPostPermissions`.
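A hedged sketch of a possible response shape; apart from PermissionResponse and the two permission kinds named above, the names are assumptions:

```go
// PermissionResponse is the existing type also used by
// CheckPermissionToJoin(); stubbed here for illustration.
type PermissionResponse struct {
	Satisfied bool
}

// Per-channel result, combining both permission checks.
type CheckChannelPermissionsResponse struct {
	ViewOnlyPermissions    *PermissionResponse
	ViewAndPostPermissions *PermissionResponse
}

// Whole-community result, keyed by chat ID (assumption).
type CheckAllChannelsPermissionsResponse struct {
	Channels map[string]*CheckChannelPermissionsResponse
}
```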
This commit does a few things:
- Adds a migration that adds chain IDs to communities_request_to_join_revealed_addresses
- Removes RevealedAddress in favor of RevealedAccount, which is now a struct that contains the revealed address as well as the signature and a list of chain IDs on which to check for user funds (see the sketch below)
- Changes the logic of sending requests to join a community, such that after creating address signatures, the user node also checks which of the addresses has funds on which networks for the community's token permissions, and adds those chain IDs to the RevealedAccount
- Updates checkPermissionToJoin() such that only the relevant chain IDs are used when checking the user's funds. Chain IDs are retrieved from RevealedAccounts and matched against the token permission criteria's chain IDs
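Based on the description above, RevealedAccount could look roughly like this; exact field names and types are assumptions:

```go
// RevealedAccount replaces the bare revealed address: it keeps the
// address together with its ownership signature and the chain IDs on
// which the account holds funds relevant to the token permissions.
type RevealedAccount struct {
	Address   string
	Signature []byte
	ChainIDs  []uint64
}
```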
* chore(upgradeSQLCipher): Upgrading SQLCipher to version 4.5.4
Changes:
### github.com/mutecomm/go-sqlcipher
1. The improved crypto algorithms from go-sqlcipher v3 are merged into v4
Tags:
v4.4.2-status.1 - merge `burn_stack` improvement
v4.4.2-status.2 - merge `SHA1` improvement
v4.4.2-status.4 - merge `AES` improvement
2. Fixed `go-sqlcipher` to support v3 databases in compatibility mode (`sqlcipher` already supports this) (Tag: v4.4.2-status.3)
3. Upgraded `sqlcipher` to v4.5.4 (Tag: v4.5.4-status.1)
### github.com/status-im/migrate/v4
1. Upgrade `go-sqlcipher` version in `github.com/status-im/migrate/v4`
### status-go
1. Upgrade `go-sqlcipher` and `migrate` modules in status-go
2. Configure the DB connections to open the DB in v3 compatibility mode
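For illustration, SQLCipher exposes v3 compatibility via the cipher_compatibility pragma; the driver usage below is a sketch, not the exact status-go code:

```go
import (
	"database/sql"
	"fmt"

	_ "github.com/mutecomm/go-sqlcipher/v4" // registers the "sqlite3" driver
)

// openV3Compatible opens an encrypted database and pins SQLCipher to
// its v3 settings so databases created with v3 keep working.
func openV3Compatible(path, key string) (*sql.DB, error) {
	db, err := sql.Open("sqlite3", path)
	if err != nil {
		return nil, err
	}
	if _, err := db.Exec(fmt.Sprintf("PRAGMA key = '%s'", key)); err != nil {
		return nil, err
	}
	// Use v3-era KDF, HMAC and page-size defaults.
	if _, err := db.Exec("PRAGMA cipher_compatibility = 3"); err != nil {
		return nil, err
	}
	return db, nil
}
```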
* chore(upgradeSQLCipher): Use sqlcipher v3 configuration to encrypt a plain text database
* chore(upgradeSQLCipher): Scanning a NULL BLOB value should return nil
Fixes the failing tests: TestSyncDeviceSuite/TestPairingSyncDeviceClientAsReceiver; TestSyncDeviceSuite/TestPairingSyncDeviceClientAsSender
Consider the following scenario:
1. A table with a BLOB column holds one NULL value
2. The value is queried
3. The value is scanned into a sql.NullString via rows.Scan(&dest)
Expected: dest.Valid == false
Actual: dest.Valid == true; dest.String == ""
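A minimal test sketch reproducing the scenario; the table name and driver setup are illustrative:

```go
import (
	"database/sql"
	"testing"

	_ "github.com/mutecomm/go-sqlcipher/v4" // registers the "sqlite3" driver
)

func TestScanNullBlob(t *testing.T) {
	db, err := sql.Open("sqlite3", ":memory:")
	if err != nil {
		t.Fatal(err)
	}
	defer db.Close()

	if _, err := db.Exec("CREATE TABLE t (data BLOB)"); err != nil {
		t.Fatal(err)
	}
	if _, err := db.Exec("INSERT INTO t VALUES (NULL)"); err != nil {
		t.Fatal(err)
	}

	var dest sql.NullString
	if err := db.QueryRow("SELECT data FROM t").Scan(&dest); err != nil {
		t.Fatal(err)
	}
	// A NULL BLOB must scan as invalid, not as an empty string.
	if dest.Valid {
		t.Fatalf("expected dest.Valid == false, got true (String=%q)", dest.String)
	}
}
```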
* chore: Bump go-sqlcipher version to include NULL BLOB fix
This commit does a few things:
1) Extends the create/import account endpoints to take a wallet config,
some of which has been moved to the backend
2) Sets up a loop that retrieves balances every 10 minutes and caches
them (see the sketch after this list)
3) Returns information about which checks are not passing when trying to
join a token gated community
4) Adds tests for token gated communities
5) Fixes an issue with addresses not matching when checking for
permissions
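A hedged sketch of the balance loop; the 10-minute interval comes from the list above, everything else is an assumption:

```go
import (
	"context"
	"time"
)

// watchBalances periodically fetches balances and stores them in the
// cache until the context is cancelled. fetchAndCacheBalances is a
// hypothetical helper.
func watchBalances(ctx context.Context, fetchAndCacheBalances func(context.Context) error) {
	ticker := time.NewTicker(10 * time.Minute)
	defer ticker.Stop()
	for {
		select {
		case <-ctx.Done():
			return
		case <-ticker.C:
			if err := fetchAndCacheBalances(ctx); err != nil {
				continue // log and retry on the next tick
			}
		}
	}
}
```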
The move of the wallet work to a background task is not yet complete; I
still need to publish a signal, and I will most likely disable it before
merging for now, as it's currently not used by desktop/mobile, but the PR
was getting too big.
This commit refactors the Discord import tool so that,
instead of loading all data to be imported into memory at
once, it now performs the import on a per-file basis.
This reduces memory pressure on the node performing
the import and seems to improve its performance as well.
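Conceptually the change looks like this; the function names are placeholders:

```go
// importDiscordExport processes one export file at a time, so only a
// single file's messages and attachments are held in memory at once.
func importDiscordExport(files []string, importFile func(path string) error) error {
	for _, path := range files {
		if err := importFile(path); err != nil {
			return err
		}
	}
	return nil
}
```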
Prior to this commit we had a `CreateHistoryArchiveTorrent()` API which
takes a `startDate`, an `endDate` and a `partition` to create a bunch of
message archives for a given time range.
The function expects the messages to live in the database, which means
all messages that need to be archived have to be saved there at some
point.
This turns out to be an issue when importing communities from third-party
services, where there are sometimes several thousand messages, including
attachment payloads, that have to be saved to the database first.
There are only two options to get the messages into the database:
1. Make one write operation with all messages - this is slow, takes a
long time and blocks the database until done
2. Create message chunks and perform multiple write operations - this is
also slow and takes long, but makes the database a bit more responsive,
as it's many smaller operations instead of one big one
Option 2 turned out not to be feasible either, as inserting even a single
such message can sometimes take up to 10 seconds (depending on the
payload).
Which brings me to the third option.
**A third option** is to not store those imported messages as waku
messages in the database, only to query them again later to create the
archives, but instead to create the archives right away from the
messages that have been loaded into memory.
This is significantly faster and doesn't block the database.
To make this possible, this commit introduces
a `CreateHistoryArchiveTorrentFromMessages()` API and
a `CreateHistoryArchiveTorrentFromDB()` API, which can be used for the
different use cases.
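The split could look roughly like this; the receiver, the Message type and the parameter lists are assumptions built from the names in this commit:

```go
import "time"

// Stubs for illustration; the real types live in the protocol packages.
type Manager struct{}
type Message struct{}

// From in-memory messages, e.g. during a third-party import: build the
// archives directly, skipping the database entirely.
func (m *Manager) CreateHistoryArchiveTorrentFromMessages(
	communityID string, messages []*Message,
	startDate, endDate time.Time, partition time.Duration,
) error {
	// partition the in-memory messages and write the archive torrent
	return nil
}

// From messages previously saved to the database (the original path):
// query the stored messages for the time range, then build archives.
func (m *Manager) CreateHistoryArchiveTorrentFromDB(
	communityID string,
	startDate, endDate time.Time, partition time.Duration,
) error {
	// load messages for [startDate, endDate] from the DB, then archive
	return nil
}
```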