refactor: multinode integration test refactor (#662)

* refactor multi node test suite

Refactor the multinode test suite into the marketplace test suite.
- An arbitrary number of nodes can be started with each test: clients, providers, validators
- Hardhat can also be started locally with each test, usually for the purpose of saving and inspecting its log file.
- Log files for all nodes can be persisted on disk, with configuration at the test level
- Log files, if persisted (as specified in the test), will be uploaded as a CI artifact
- Node config is specified at the test level instead of the suite level
- Node/Hardhat process starting/stopping is now async, and runs much faster
- Per-node config includes:
  - simulating proof failures
  - logging to file
  - log level
  - log topics
  - storage quota
  - debug (print logs to stdout)
- Tests find the next available ports when starting nodes, as closing ports on Windows can lag
- Hardhat is no longer required to be running prior to starting the integration tests (as long as Hardhat is configured to run in the tests).
  - If Hardhat is already running, a snapshot will be taken and reverted before and after each test, respectively (see the snapshot/revert sketch after this description).
  - If Hardhat is not already running and is configured to run at the test level, a Hardhat process will be spawned and torn down before and after each test, respectively.

* additional logging for debug purposes

* address PR feedback

- fix spelling
- revert change from catching ProviderError to SignerError -- this should be handled more consistently in the Market abstraction, and will be handled in another PR.
- remove method label from raiseAssert
- remove unused import

* Use API instead of command exec to test for free port

Use the chronos `createStreamServer` API to test for a free port by binding the localhost address and port. Use `ServerFlags.ReuseAddr` to enable reuse of the same IP/port across multiple test runs (see the port-probing sketch after this description).

* clean up

* remove upraises annotations from tests

* Update tests to work with updated erasure coding slot sizes

* update dataset size, nodes, tolerance to match valid ec params

Integration tests now have valid dataset sizes (blocks), tolerances, and numbers of nodes, to work with valid erasure coding params. These values are validated when requesting storage.
Print the REST API failure message (via doAssert) when a REST API call fails (e.g. the REST API may validate some erasure coding params).
All integration tests pass when the async `clock.now` changes are reverted.

* don't use async clock for now

* fix workflow

* move integration logs upload to the reusable workflow

---------

Co-authored-by: Dmitriy Ryajov <dryajov@gmail.com>
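
A minimal sketch of the port probing described above, assuming the callback-based `createStreamServer` overload from chronos; the helper name `nextFreePort` and the probing loop are illustrative, not the suite's exact code:

import pkg/chronos

proc nextFreePort(startPort: int): Future[int] {.async.} =
  proc onClient(server: StreamServer, transp: StreamTransport) {.async.} =
    # No connections are expected while probing; close any that arrive.
    await transp.closeWait()

  var port = startPort
  while true:
    let address = initTAddress("127.0.0.1:" & $port)
    try:
      # ReuseAddr lets repeated test runs rebind a port that may still be in TIME_WAIT.
      let server = createStreamServer(address, onClient, {ServerFlags.ReuseAddr})
      await server.closeWait()
      return port    # binding succeeded, so the port is free
    except TransportOsError:
      inc port       # port is taken (or still closing), try the next one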
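The snapshot/revert cycle against an already-running Hardhat node boils down to two JSON-RPC calls, `evm_snapshot` and `evm_revert`. A minimal, synchronous sketch follows; the suite drives this through its Ethereum provider, so the URL, port, and helper name here are illustrative assumptions:

import std/[httpclient, json]

proc rpc(client: HttpClient, call: string, params: JsonNode): JsonNode =
  # Plain JSON-RPC request against a local Hardhat node (assumed at port 8545).
  let body = %*{"jsonrpc": "2.0", "id": 1, "method": call}
  body["params"] = params
  let response = client.post("http://127.0.0.1:8545", body = $body)
  parseJson(response.body)["result"]

let client = newHttpClient()
client.headers = newHttpHeaders({"Content-Type": "application/json"})

let snapshotId = client.rpc("evm_snapshot", newJArray()).getStr  # before each test
# ... exercise the chain during the test ...
discard client.rpc("evm_revert", %*[snapshotId])                 # after each test

The Nim source of the Hardhat process helper follows.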
import pkg/questionable
import pkg/questionable/results
import pkg/confutils
import pkg/chronicles
import pkg/chronos
import pkg/chronos/asyncproc
import pkg/stew/io2
import std/os
import std/sets
import std/sequtils
import std/strutils
import pkg/codex/conf
import pkg/codex/utils/trackedfutures
import ./codexclient
import ./nodeprocess

export codexclient
export chronicles

logScope:
  topics = "integration testing hardhat process"
  nodeName = "hardhat"

type
  HardhatProcess* = ref object of NodeProcess
    ## Spawns and controls a local Hardhat (Ethereum development node)
    ## process for the integration tests.
    logFile: ?IoHandle

# NodeProcess overrides: where to find the Hardhat executable, how to run it,
# and which output line signals that the node is ready.

method workingDir(node: HardhatProcess): string =
  return currentSourcePath() / ".." / ".." / ".." / "vendor" / "codex-contracts-eth"

method executable(node: HardhatProcess): string =
  return "node_modules" / ".bin" / "hardhat"

method startedOutput(node: HardhatProcess): string =
  return "Started HTTP and WebSocket JSON-RPC server at"

method processOptions(node: HardhatProcess): set[AsyncProcessOption] =
  return {}

method outputLineEndings(node: HardhatProcess): string =
  return "\n"
method logFileContains*(hardhat: HardhatProcess, text: string): bool =
  ## Returns true if the hardhat log file (when logging to file is enabled)
  ## contains `text`.
  without fileHandle =? hardhat.logFile:
    raiseAssert "failed to open hardhat log file, aborting"

  without fileSize =? fileHandle.getFileSize:
    raiseAssert "failed to get current hardhat log file size, aborting"

  if checkFileSize(fileSize).isErr:
    raiseAssert "file size too big for nim indexing"

  var data = ""
  data.setLen(fileSize)

  without bytesRead =? readFile(fileHandle, data.toOpenArray(0, len(data) - 1)):
    raiseAssert "unable to read hardhat log, aborting"

  return data.contains(text)
proc openLogFile(node: HardhatProcess, logFilePath: string): IoHandle =
  let logFileHandle = openFile(
    logFilePath,
    {OpenFlags.Write, OpenFlags.Create, OpenFlags.Truncate}
  )

  without fileHandle =? logFileHandle:
    fatal "failed to open log file",
      path = logFilePath,
      errorCode = $logFileHandle.error
    raiseAssert "failed to open log file, aborting"

  return fileHandle
method start*(node: HardhatProcess) {.async.} =
  let poptions = node.processOptions + {AsyncProcessOption.StdErrToStdOut}

  trace "starting node",
    args = node.arguments,
    executable = node.executable,
    workingDir = node.workingDir,
    processOptions = poptions

  try:
    node.process = await startProcess(
      node.executable,
      node.workingDir,
      @["node", "--export", "deployment-localhost.json"].concat(node.arguments),
      options = poptions,
      stdoutHandle = AsyncProcess.Pipe
    )
  except CancelledError as error:
    raise error
  except CatchableError as e:
    error "failed to start hardhat process", error = e.msg

proc startNode*(
  _: type HardhatProcess,
  args: seq[string],
  debug: string | bool = false,
  name: string
): Future[HardhatProcess] {.async.} =
  ## Starts a Hardhat node with the specified arguments.
  ## Set debug to 'true' to see output of the node.

  # A `--log-file=<path>` argument is intercepted here rather than passed to
  # Hardhat: the file is opened below and receives the node's captured output.
  var logFilePath = ""

  var arguments = newSeq[string]()
  for arg in args:
    if arg.contains "--log-file=":
      logFilePath = arg.split("=")[1]
    else:
      arguments.add arg

  trace "starting hardhat node", arguments

  let hardhat = HardhatProcess(
    arguments: arguments,
    debug: ($debug != "false"),
    trackedFutures: TrackedFutures.new(),
    name: "hardhat"
  )

  await hardhat.start()

  if logFilePath != "":
    hardhat.logFile = some hardhat.openLogFile(logFilePath)

  return hardhat
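# Hypothetical usage of `startNode` (the argument values are illustrative; in
# the test suite they come from the test-level Hardhat config):
#
#   let hardhat = await HardhatProcess.startNode(
#     @["--port", "8545", "--log-file=" & getTempDir() / "hardhat.log"],
#     debug = false,
#     name = "hardhat")
#   # ... run tests against the node ...
#   await hardhat.stop()
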
method onOutputLineCaptured(node: HardhatProcess, line: string) =
  without logFile =? node.logFile:
    return

  if error =? logFile.writeFile(line & "\n").errorOption:
    error "failed to write to hardhat log file", errorCode = $error
    discard logFile.closeFile()
    node.logFile = none IoHandle

method stop*(node: HardhatProcess) {.async.} =
  # terminate the process
  await procCall NodeProcess(node).stop()

  if logFile =? node.logFile:
    trace "closing hardhat log file"
    discard logFile.closeFile()

method removeDataDir*(node: HardhatProcess) =
  discard