import pkg/serde
import std/os
import std/sequtils
import std/importutils
import pkg/asynctest
import pkg/json_rpc/rpcclient except `%`, `%*`, toJson
import pkg/json_rpc/rpcserver except `%`, `%*`, toJson
# Assumed import: chronicles provides the `trace` logging calls used in the
# polling tests below; harmless if it is already pulled in transitively.
import pkg/chronicles
import ethers/provider
import ethers/providers/jsonrpc/subscriptions
import ../../examples
import ./rpc_mock
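
# Tests for JsonRpcSubscriptions: construction over HTTP and websocket
# clients, delivery of new-block and log events, unsubscribing, and the
# recovery behaviour of HTTP polling subscriptions when filters disappear.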

suite "JsonRpcSubscriptions":
test "can be instantiated with an http client":
|
|
|
|
let client = newRpcHttpClient()
|
|
|
|
let subscriptions = JsonRpcSubscriptions.new(client)
|
|
|
|
check not isNil subscriptions
|
|
|
|
|
|
|
|
test "can be instantiated with a websocket client":
|
|
|
|
let client = newRpcWebSocketClient()
|
|
|
|
let subscriptions = JsonRpcSubscriptions.new(client)
|
|
|
|
check not isNil subscriptions
|
|
|
|
|
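
# Shared tests, written once and instantiated below for both the websocket
# and the HTTP polling transport, so both implementations are held to the
# same expectations.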
template subscriptionTests(subscriptions, client) =

  test "subscribes to new blocks":
    var latestBlock: Block
    proc callback(blck: Block) =
      latestBlock = blck
    let subscription = await subscriptions.subscribeBlocks(callback)
    discard await client.call("evm_mine", newJArray())
    check eventually latestBlock.number.isSome
    check latestBlock.hash.isSome
    check latestBlock.timestamp > 0.u256
    await subscriptions.unsubscribe(subscription)
test "stops listening to new blocks when unsubscribed":
|
|
|
|
var count = 0
|
2023-06-29 09:59:48 +02:00
|
|
|
proc callback(blck: Block) =
|
2023-06-27 16:16:31 +02:00
|
|
|
inc count
|
|
|
|
let subscription = await subscriptions.subscribeBlocks(callback)
|
|
|
|
discard await client.call("evm_mine", newJArray())
|
|
|
|
check eventually count > 0
|
2023-06-28 11:02:21 +02:00
|
|
|
await subscriptions.unsubscribe(subscription)
|
2023-06-27 16:40:29 +02:00
|
|
|
count = 0
|
2023-06-27 16:16:31 +02:00
|
|
|
discard await client.call("evm_mine", newJArray())
|
|
|
|
await sleepAsync(100.millis)
|
2023-06-27 16:40:29 +02:00
|
|
|
check count == 0
|
|
|
|
|
|
|
|
test "stops listening to new blocks when provider is closed":
|
|
|
|
var count = 0
|
2023-06-29 09:59:48 +02:00
|
|
|
proc callback(blck: Block) =
|
2023-06-27 16:40:29 +02:00
|
|
|
inc count
|
2023-06-29 10:23:14 +02:00
|
|
|
discard await subscriptions.subscribeBlocks(callback)
|
2023-06-27 16:40:29 +02:00
|
|
|
discard await client.call("evm_mine", newJArray())
|
|
|
|
check eventually count > 0
|
|
|
|
await subscriptions.close()
|
|
|
|
count = 0
|
|
|
|
discard await client.call("evm_mine", newJArray())
|
|
|
|
await sleepAsync(100.millis)
|
|
|
|
check count == 0
|
2023-06-27 16:16:31 +02:00
|
|
|
|
2023-06-27 14:25:27 +02:00
|
|
|
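
# Run the shared subscription tests over a websocket connection to the node
# given by ETHERS_TEST_PROVIDER (defaults to localhost:8545).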
suite "Web socket subscriptions":
|
|
|
|
|
|
|
|
var subscriptions: JsonRpcSubscriptions
|
|
|
|
var client: RpcWebSocketClient
|
|
|
|
|
|
|
|
setup:
|
|
|
|
client = newRpcWebSocketClient()
|
await client.connect("ws://" & getEnv("ETHERS_TEST_PROVIDER", "localhost:8545"))
|
2023-06-27 14:25:27 +02:00
|
|
|
subscriptions = JsonRpcSubscriptions.new(client)
|
Upgrade to `nim-json-rpc` v0.4.2 and chronos v4 (#64)
* Add json de/serialization lib from codex to handle conversions
json-rpc now requires nim-json-serialization to convert types to/from json. Use the nim-json-serialization signatures to call the json serialization lib from nim-codex (should be moved to its own lib)
* Add ethers implementation for setMethodHandler
Was removed in json-rpc
* More json conversion updates
* Fix json_rpc.call returning JsonString instead of JsonNode
* Update exceptions
Use {.async: (raises: [...].} where needed
Annotate provider with {.push raises:[].}
Format signatures
* Start fixing tests (mainly conversion fixes)
* rename sender to `from`, update json error logging, add more conversions
* Refactor exceptions for providers and signers, fix more tests
- signer procs raise SignerError, provider procs raise ProviderError
- WalletError now inherits from SignerError
- move wallet module under signers
- create jsonrpo moudle under signers
- bump nim-json-rpc for null-handling fixes
- All jsonrpc provider tests passing, still need to fix others
* remove raises from async annotation for dynamic dispatch
- removes async: raises from getAddress and signTransaction because derived JsonRpcSigner methods were not being used when dynamically dispatched. Once `raises` was removed from the async annotation, the dynamic dispatch worked again. This is only the case for getAddress and signTransaction.
- add gcsafe annotation to wallet.provider so that it matches the base method
* Catch EstimateGasError before ProviderError
EstimateGasError is now a ProviderError (it is a SignerError, and SignerError is a ProviderError), so EstimateGasErrors were not being caught
* clean up - all tests passing
* support nim 2.0
* lock in chronos version
* Add serde options to the json util, along with tests
next step is to:
1. change back any ethers var names that were changed for serialization purposes, eg `from` and `type`
2. move the json util to its own lib
* bump json-rpc to 0.4.0 and fix test
* fix: specify raises for getAddress and sendTransaction
Fixes issue where getAddress and sendTransaction could not be found for MockSigner in tests. The problem was that the async: raises update had not been applied to the MockSigner.
* handle exceptions during jsonrpc init
There are too many exceptions to catch individually, including chronos raising CatchableError exceptions in await expansion. There are also many other errors captured inside of the new proc with CatchableError. Instead of making it more complicated and harder to read, I think sticking with excepting CatchableError inside of convertError is a sensible solution
* cleanup
* deserialize key defaults to serialize key
* Add more tests for OptIn/OptOut/Strict modes, fix logic
* use nim-serde instead of json util
Allows aliasing of de/serialized fields, so revert changes of sender to `from` and transactionType to `type`
* Move hash* shim to its own module
* address PR feedback
- add comments to hashes shim
- remove .catch from callback condition
- derive SignerError from EthersError instead of ProviderError. This allows Providers and Signers to be separate, as Ledger does it, to isolate functionality. Some signer functions now raise both ProviderError and SignerError
- Update reverts to check for SignerError
- Update ERC-20 method comment
* rename subscriptions.init > subscriptions.start
2024-02-19 16:50:46 +11:00
|
|
|
subscriptions.start()
|
2023-06-27 14:25:27 +02:00
|
|
|
|

  teardown:
    await subscriptions.close()
    await client.close()

  subscriptionTests(subscriptions, client)
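
# Run the same shared tests over plain HTTP, where events are picked up by
# polling installed filters at the configured pollingInterval.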
suite "HTTP polling subscriptions":

  var subscriptions: JsonRpcSubscriptions
  var client: RpcHttpClient

  setup:
    client = newRpcHttpClient()
await client.connect("http://" & getEnv("ETHERS_TEST_PROVIDER", "localhost:8545"))
|
2023-06-27 14:33:14 +02:00
|
|
|
subscriptions = JsonRpcSubscriptions.new(client,
|
|
|
|
pollingInterval = 100.millis)
|
Upgrade to `nim-json-rpc` v0.4.2 and chronos v4 (#64)
* Add json de/serialization lib from codex to handle conversions
json-rpc now requires nim-json-serialization to convert types to/from json. Use the nim-json-serialization signatures to call the json serialization lib from nim-codex (should be moved to its own lib)
* Add ethers implementation for setMethodHandler
Was removed in json-rpc
* More json conversion updates
* Fix json_rpc.call returning JsonString instead of JsonNode
* Update exceptions
Use {.async: (raises: [...].} where needed
Annotate provider with {.push raises:[].}
Format signatures
* Start fixing tests (mainly conversion fixes)
* rename sender to `from`, update json error logging, add more conversions
* Refactor exceptions for providers and signers, fix more tests
- signer procs raise SignerError, provider procs raise ProviderError
- WalletError now inherits from SignerError
- move wallet module under signers
- create jsonrpo moudle under signers
- bump nim-json-rpc for null-handling fixes
- All jsonrpc provider tests passing, still need to fix others
* remove raises from async annotation for dynamic dispatch
- removes async: raises from getAddress and signTransaction because derived JsonRpcSigner methods were not being used when dynamically dispatched. Once `raises` was removed from the async annotation, the dynamic dispatch worked again. This is only the case for getAddress and signTransaction.
- add gcsafe annotation to wallet.provider so that it matches the base method
* Catch EstimateGasError before ProviderError
EstimateGasError is now a ProviderError (it is a SignerError, and SignerError is a ProviderError), so EstimateGasErrors were not being caught
* clean up - all tests passing
* support nim 2.0
* lock in chronos version
* Add serde options to the json util, along with tests
next step is to:
1. change back any ethers var names that were changed for serialization purposes, eg `from` and `type`
2. move the json util to its own lib
* bump json-rpc to 0.4.0 and fix test
* fix: specify raises for getAddress and sendTransaction
Fixes issue where getAddress and sendTransaction could not be found for MockSigner in tests. The problem was that the async: raises update had not been applied to the MockSigner.
* handle exceptions during jsonrpc init
There are too many exceptions to catch individually, including chronos raising CatchableError exceptions in await expansion. There are also many other errors captured inside of the new proc with CatchableError. Instead of making it more complicated and harder to read, I think sticking with excepting CatchableError inside of convertError is a sensible solution
* cleanup
* deserialize key defaults to serialize key
* Add more tests for OptIn/OptOut/Strict modes, fix logic
* use nim-serde instead of json util
Allows aliasing of de/serialized fields, so revert changes of sender to `from` and transactionType to `type`
* Move hash* shim to its own module
* address PR feedback
- add comments to hashes shim
- remove .catch from callback condition
- derive SignerError from EthersError instead of ProviderError. This allows Providers and Signers to be separate, as Ledger does it, to isolate functionality. Some signer functions now raise both ProviderError and SignerError
- Update reverts to check for SignerError
- Update ERC-20 method comment
* rename subscriptions.init > subscriptions.start
2024-02-19 16:50:46 +11:00
|
|
|
subscriptions.start()
|
2023-06-27 14:10:12 +02:00
|
|
|
|

  teardown:
    await subscriptions.close()
    await client.close()

  subscriptionTests(subscriptions, client)
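
# These tests use MockRpcHttpServer so that "filter not found" responses and
# dropped connections can be triggered deliberately. privateAccess exposes
# the private logFilters and subscriptionMapping fields of
# PollingSubscriptions, letting the tests observe filter recreation directly.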
suite "HTTP polling subscriptions - filter not found":

  var subscriptions: PollingSubscriptions
  var client: RpcHttpClient
  var mockServer: MockRpcHttpServer
  var url: string

  privateAccess(PollingSubscriptions)

  setup:
    mockServer = MockRpcHttpServer.new()
    mockServer.start()

    client = newRpcHttpClient()
    url = "http://" & $mockServer.localAddress()[0]
    await client.connect(url)

    subscriptions = PollingSubscriptions(
      JsonRpcSubscriptions.new(
        client,
        pollingInterval = 1.millis))
    subscriptions.start()

  teardown:
    await subscriptions.close()
    await client.close()
    await mockServer.stop()
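
  # A failed poll (here simulated by pointing the client at an unreachable
  # port) must not kill the polling loop: once the connection works again,
  # filter changes are delivered to the callback.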
test "polling loop is kept alive after disconnection":
|
|
|
|
let log = Log(blockNumber: 999999.u256)
|
|
|
|
|
|
|
|
var latestBlock = 0.u256
|
|
|
|
proc callback(log: Log) =
|
|
|
|
trace "GOT LOG IN CALLBACK", number = log.blockNumber
|
|
|
|
latestBlock = log.blockNumber
|
|
|
|
|
|
|
|
let filter = EventFilter(address: Address.example, topics: @[array[32, byte].example])
|
|
|
|
let emptyHandler = proc(log: Log) = discard
|
|
|
|
|
|
|
|
let id = await subscriptions.subscribeLogs(filter, callback)
|
|
|
|
# simulate failed requests by connecting to an invalid port
|
|
|
|
await client.connect("http://127.0.0.1:1")
|
|
|
|
await sleepAsync(5.millis) # wait for polling loop
|
|
|
|
trace "STARTING MOCK SERVER"
|
|
|
|
await client.connect(url)
|
|
|
|
trace "ADDING FILTER CHANGE"
|
|
|
|
mockServer.addFilterChange(id, log)
|
|
|
|
|
|
|
|
check eventually(latestBlock == log.blockNumber, timeout=200)
|
|
|
|
|
|
|
|
await subscriptions.unsubscribe(id)
|
|
|
|
|
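
  # When a poll answers with "filter not found" (simulated by invalidating
  # the filter on the mock server), a fresh filter is expected to be created
  # and the original subscription id remapped to it.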
  test "filter not found error recreates log filter":
    let filter = EventFilter(address: Address.example, topics: @[array[32, byte].example])
    let emptyHandler = proc(log: Log) = discard

    check subscriptions.logFilters.len == 0
    check subscriptions.subscriptionMapping.len == 0

    let id = await subscriptions.subscribeLogs(filter, emptyHandler)

    check subscriptions.logFilters[id] == filter
    check subscriptions.subscriptionMapping[id] == id
    check subscriptions.logFilters.len == 1
    check subscriptions.subscriptionMapping.len == 1

    mockServer.invalidateFilter(id)

    check eventually subscriptions.subscriptionMapping[id] != id
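
  # Unsubscribing must keep working with the id that was handed out
  # originally, even after the underlying filter has been recreated.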
  test "recreated log filter can be still unsubscribed using the original id":
    let filter = EventFilter(address: Address.example, topics: @[array[32, byte].example])
    let emptyHandler = proc(log: Log) = discard

    let id = await subscriptions.subscribeLogs(filter, emptyHandler)
    mockServer.invalidateFilter(id)
    check eventually subscriptions.subscriptionMapping[id] != id

    await subscriptions.unsubscribe(id)

    check not subscriptions.logFilters.hasKey id
    check not subscriptions.subscriptionMapping.hasKey id
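
  # Block filters are expected to behave like log filters: an invalidated
  # filter is recreated, and the original id can still be used to unsubscribe.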
test "filter not found error recreates block filter":
|
|
|
|
let emptyHandler = proc(blck: Block) = discard
|
|
|
|
|
|
|
|
check subscriptions.subscriptionMapping.len == 0
|
|
|
|
let id = await subscriptions.subscribeBlocks(emptyHandler)
|
|
|
|
check subscriptions.subscriptionMapping[id] == id
|
|
|
|
|
|
|
|
mockServer.invalidateFilter(id)
|
|
|
|
|
|
|
|
check eventually subscriptions.subscriptionMapping[id] != id
|
|
|
|
|
|
|
|
test "recreated block filter can be still unsubscribed using the original id":
|
|
|
|
let emptyHandler = proc(blck: Block) = discard
|
|
|
|
let id = await subscriptions.subscribeBlocks(emptyHandler)
|
|
|
|
mockServer.invalidateFilter(id)
|
|
|
|
check eventually subscriptions.subscriptionMapping[id] != id
|
|
|
|
|
|
|
|
await subscriptions.unsubscribe(id)
|
|
|
|
|
fix: modify unsubscribe cleanup routine and tests (#84)
* fix: modify unsubscribe cleanup routine
Ignore exceptions (other than CancelledError) if uninstallation of the filter fails. If it's the last step in the subscription cleanup, then filter changes for this filter will no longer be polled so if the filter continues to live on in geth for whatever reason, then it doesn't matter.
This includes a number of fixes:
- `CancelledError` is now caught inside of `getChanges`. This was causing conditions during `subscriptions.close`, where the `CancelledError` would get consumed by the `except CatchableError`, if there was an ongoing `poll` happening at the time of close.
- After creating a new filter inside of `getChanges`, the new filter is polled for changes before returning.
- `getChanges` also does not swallow `CatchableError` by returning an empty array, and instead re-raises the error if it is not `filter not found`.
- The tests were simplified by accessing the private fields of `PollingSubscriptions`. That way, there wasn't a race condition for the `newFilterId` counter inside of the mock.
- The `MockRpcHttpServer` was simplified by keeping track of the active filters only, and invalidation simply removes the filter. The tests then only needed to rely on the fact that the filter id changed in the mapping.
- Because of the above changes, we no longer needed to sleep inside of the tests, so the sleeps were removed, and the polling interval could be changed to 1ms, which not only makes the tests faster, but would further highlight any race conditions if present.
* docs: rpc custom port documentation
---------
Co-authored-by: Adam Uhlíř <adam@uhlir.dev>
2024-10-25 14:58:45 +11:00
|
|
|
check not subscriptions.subscriptionMapping.hasKey id
|