Updates the docs

benbierens 2023-09-21 14:39:41 +02:00
parent 01b2ff2181
commit 418daf1e3f
8 changed files with 146 additions and 53 deletions

CONTRIBUTINGPLUGINS.MD (new file, 86 lines)

@@ -0,0 +1,86 @@
# Distributed System Tests for Nim-Codex
## Contributing plugins
The testing framework was created for testing Codex. However, it's been designed such that other distributed/containerized projects can 'easily' be added. In order to add your project to the framework you must:
1. Create a library assembly in the project plugins folder.
1. It must contain a type that implements the `IProjectPlugin` interface from the `Core` assembly.
1. If your plugin wants to expose any specific methods or objects to the code using the framework (the tests and tools), it must implement extensions for the `CoreInterface` type.
## Constructors & Tools
Your implementation of `IProjectPlugin` must have a public constructor with a single argument of type `IPluginTools`, for example:
```C#
public class MyPlugin : IProjectPlugin
{
public MyPlugin(IPluginTools tools)
{
...
}
...
}
```
`IPluginTools` provides your plugin access to all framework functionality, such as logging, tracked file management, container lifecycle management, and a means to create HTTP clients for containers (without having to figure out addresses manually).
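To give a feel for how those tools are reached, here is a minimal sketch. Only `CreateWorkflow()` and `CreateHttp()` are taken from this document; their exact signatures, and what you do with the objects they return, are project-specific assumptions and are not shown in detail.
```C#
public class MyPlugin : IProjectPlugin
{
    private readonly IPluginTools tools;

    public MyPlugin(IPluginTools tools)
    {
        // Keep the tools around: logging, tracked files, container workflows,
        // and HTTP clients are all reached through this one object.
        this.tools = tools;
    }

    public void DeploySomething()
    {
        // Container lifecycle management. What you start (recipes, startup
        // configuration) is up to your project.
        var workflow = tools.CreateWorkflow();
        // ... use the workflow to start your project's containers ...
    }

    public void AccessSomething()
    {
        // HTTP clients for deployed containers, without resolving addresses yourself.
        var http = tools.CreateHttp();
        // ... wrap the client in your project's own accessor type ...
    }

    public void Announce() { }      // covered in the next section
    public void Decommission() { }
}
```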
## Plugin Interfaces
The `IProjectPlugin` interface requires the implementation of two methods.
1. `Announce` - It is considered polite to use the logging functionality provided by the `IPluginTools` to announce that your plugin has been loaded. You may also want to log some manner of version information at this time if applicable.
1. `Decommission` - Should your plugin have any active system resources, free them in this method.
There are a few optional interfaces your plugin may choose to implement. The framework will automatically use these interfaces. (A sketch combining them follows this list.)
1. `IHasLogPrefix` - Implementing this interface allows you to provide a string which will be prepended to all log statements made by your plugin.
1. `IHasMetadata` - This allows you to provide metadata in the form of key/value pairs. This metadata can be accessed by code that uses your plugin.
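Putting this together, a plugin implementing both the required and the optional interfaces might look roughly as follows. The member names required by `IHasLogPrefix` and `IHasMetadata` (here `LogPrefix` and `AddMetadata`) and the `GetLog()` logging accessor are assumptions; check the interface definitions in the `Core` assembly for the exact signatures.
```C#
public class MyPlugin : IProjectPlugin, IHasLogPrefix, IHasMetadata
{
    private readonly IPluginTools tools;

    public MyPlugin(IPluginTools tools)
    {
        this.tools = tools;
    }

    // IHasLogPrefix: prepended to every log statement made by this plugin.
    public string LogPrefix => "(MyPlugin) ";

    public void Announce()
    {
        // 'GetLog()' / 'Log()' are assumptions; use whatever logging accessor
        // IPluginTools actually exposes.
        tools.GetLog().Log("Loaded MyPlugin, version 1.2.3");
    }

    // IHasMetadata: key/value pairs exposed to code that uses the plugin.
    // The exact shape of this member is an assumption.
    public void AddMetadata(IAddMetadata metadata)
    {
        metadata.Add("myplugin-version", "1.2.3");
    }

    public void Decommission()
    {
        // Free any active system resources held by the plugin here.
    }
}
```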
## Core Interface
Any functionality your plugin wants to expose to code which uses the framework will have to be added on to the `CoreInterface` type. You can accomplish this by using C# extension methods. The framework provides a `GetPlugin` method to access your plugin instance from the `CoreInterface` type:
```C#
public static class CoreInterfaceExtensions
{
public static MyPluginReturnType DoSomethingCool(this CoreInterface ci, string someArgument)
{
return Plugin(ci).SomethingCool(someArgument);
}
private static MyPlugin Plugin(CoreInterface ci)
{
return ci.GetPlugin<MyPlugin>();
}
}
```
While technically you can build whatever you like on top of the `CoreInterface` and your own plugin types, I recommend that you follow the approach explained below.
## Deploying, Wrapping, and Starting
When building a plugin, it is important to make as few assumptions as possible about how it will be used by whoever is going to use the framework. For this reason, I recommend you expose three kinds of methods using your `CoreInterface` extensions:
1. Deploy - This kind of method should deploy your project, creating and configuring containers as needed and returning containers as a result. If your project requires additional information, you can create a new class type to contain both it and the containers created.
1. Wrap - This kind of method should, when given the previously mentioned container information, create some kind of convenient accessor or interactor object. This object should abstract away, for example, the details of your project's REST API, allowing users of your plugin to write their code using a set of methods and types that nicely model your project's domain.
1. Start - This kind of method does both: it simply calls a Deploy method, then a Wrap method, and returns the result.
Here's an example:
```C#
public static class CoreInterfaceExtensions
{
public static RunningContainers DeployMyProject(this CoreInterface ci, string someArgument)
{
// `RunningContainers` is a framework type. It contains all necessary information about a deployed container. It is serializable.
// Should you need to return any additional information, create a new type that contains it as well as the container information. Make sure it is serializable.
return Plugin(ci).DeployMyProjectContainer(someArgument); // <-- This method should use the `PluginTools.CreateWorkflow()` tool to deploy a container with a configuration that matches someArgument.
}
public static IMyProjectNode WrapMyProjectContainer(this CoreInterface ci, RunningContainers container)
{
return Plugin(ci).WrapMyContainerProject(container); // <-- This method will probably use the `PluginTools.CreateHttp()` tool to create an HTTP client for the container, then wrap it in an object that
// represents the API of your project.
}
public static IMyProjectNode StartMyProject(this CoreInterface ci, string someArgument)
{
// Start is now nothing more than a convenience method, combining the previous two.
var rc = ci.DeployMyProject(someArgument);
return WrapMyProjectContainer(ci, rc);
}
}
```
The primary reason to decouple the deploying and wrapping functionalities is that some use cases require these steps to be performed by separate applications, at different moments in time. For this reason, whatever is returned by the deploy methods should be serializable. After deserialization at some later time, it should then be valid input for the wrap method. The Codex continuous tests system is a clear example of this use case: the `CodexNetDeployer` tool uses deploy methods to create Codex nodes. Then it writes the returned objects to a JSON file. Some time later, the `CodexContinousTests` application uses this JSON file to reconstruct the objects created by the deploy methods. It then uses the wrap methods to create accessors and interactors, which are used for testing.
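As a sketch of that round trip (the record type, file layout, and use of `System.Text.Json` are illustrative assumptions; the framework and the Codex tools may serialize their deployment objects differently):
```C#
using System.IO;
using System.Text.Json;

// Hypothetical deployment type: the framework's container information plus any
// extra data your wrap method will need later. Everything in it must serialize.
public record MyProjectDeployment(RunningContainers Containers, string SomeArgument);

public static class DeploymentFile
{
    // Written by the deploying application (the CodexNetDeployer role).
    public static void Save(string path, MyProjectDeployment deployment)
        => File.WriteAllText(path, JsonSerializer.Serialize(deployment));

    // Read later, possibly by a different application, which then calls the wrap method.
    public static MyProjectDeployment Load(string path)
        => JsonSerializer.Deserialize<MyProjectDeployment>(File.ReadAllText(path))!;
}
```
The only requirement the framework imposes here is that whatever your deploy methods return survives this kind of round trip.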

CONTRIBUTINGTESTS.md

@@ -1,24 +1,40 @@
# Distributed System Tests for Nim-Codex
## Contributing tests
Do you want to write some tests for Codex using this distributed test setup? Great! Here's what you do.
Do you want to write some tests using this distributed test setup? Great! Here's what you do.
1. Create a branch. Name it something descriptive, but start it with `tests/` please. [Example: `tests/data-redundancy`.]
1. Checkout your branch, and decide if your tests will be 'short' tests (minutes to hours), or 'long' tests (hours to days), or both! Create a folder for your tests in the matching folders (`/Tests`, `/LongTests`) and don't worry: You can always move your tests later if you like. [Example, short: `/Tests/DataRedundancy/`, long: `/LongTests/DataRedundancy/`]
1. Create one or more code files in your new folder, and write some tests! Here are some tips to help you get started. You can always take a look at the example tests found in [`/Tests/BasicTests/ExampleTests.cs`](/Tests/BasicTests/ExampleTests.cs)
1. Set up a standard NUnit test fixture.
1. Inherit from `DistTest` or `AutoBootstrapDistTest`.
1. When using `DistTest`:
1. You must start your own Codex bootstrap node. You can use `SetupCodexBootstrapNode(...)` for this.
1. When you start other Codex nodes with `SetupCodexNodes(...)` you can pass the bootstrap node by adding the `.WithBootstrapNode(...)` option.
1. When using `AutoBootstrapDistTest`:
1. The test-infra creates the bootstrap node for you, and automatically passes it to each Codex node you create in your tests. Handy for keeping your tests clean and to-the-point.
1. When using the auto-bootstrap, you have no control over the bootstrap node from your tests. You can't (for example) shut it down during the course of the test. If you need this level of control for your scenario, use the `DistTest` instead.
1. You can generate files of random test data by calling `GenerateTestFile(...)`.
1. If your test needs a long time to run, add the `[UseLongTimeouts]` function attribute. This will greatly increase maximum time-out values for operations such as uploading and downloading files.
1. You can enable access to the Codex node metrics by adding the option `.EnableMetrics()`. Enabling metrics will make the test-infra download and save all Codex metrics in case of a test failure. (The metrics are stored as CSV, in the same location as the test log file.)
1. You can enable access to the blockchain marketplace by adding the option `.EnableMarketplace(...)`.
1. Enabling metrics and/or enabling the marketplace takes extra resources from the test-infra and increases the time needed during Codex node setup. Please don't enable these features unless your tests need them.
1. Tip: Codex nodes can be named. Use the option `WithName(...)` and make reading your test logs a little nicer!
1. Tip: Commit often.
1. Once you're happy with your tests, please create a pull-request and ask (another) Codex core contributor to review your changes.
1. Checkout your branch.
1. Create a new assembly in the `/Tests` folder. This can be an NUnit test assembly or simply a console app.
1. Add Project references to `Core`, as well as any project plugin you'll be using.
1. Write tests! Use existing tests for inspiration.
## Tips for writing tests for Codex
### Transient tests
1. Add new code files to `Tests/CodexTests`
1. Inherit from `CodexDistTest` or `AutoBootstrapDistTest`.
1. When using `CodexDistTest`:
1. You must start your own Codex bootstrap node. You can use `AddCodex(...)` for this.
1. When you start other Codex nodes with `AddCodex(...)` you can pass the bootstrap node by adding the `.WithBootstrapNode(...)` option.
1. When using `AutoBootstrapDistTest`:
1. The test-infra creates the bootstrap node for you, and automatically passes it to each Codex node you create in your tests. Handy for keeping your tests clean and to-the-point.
1. When using the auto-bootstrap, you have no control over the bootstrap node from your tests. You can't (for example) shut it down during the course of the test. If you need this level of control for your scenario, use the `CodexDistTest` instead.
1. If your test needs a long time to run, add the `[UseLongTimeouts]` function attribute. This will greatly increase maximum time-out values for operations such as uploading and downloading files. (A minimal test sketch follows this list.)
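A minimal sketch of such a test follows. The namespace, fixture, and node names are made up; `AddCodex`, `WithName`, `WithBootstrapNode`, and `GenerateTestFile` are the calls described above, and the `MB()` size literal is assumed to exist alongside the `GB()` literal used elsewhere in these docs.
```C#
using NUnit.Framework;

namespace CodexTests.DataRedundancy
{
    [TestFixture]
    public class ExampleTransientTest : CodexDistTest
    {
        [Test]
        public void TwoNodesCanBeBootstrapped()
        {
            // With CodexDistTest you bring your own bootstrap node.
            var bootstrap = AddCodex(s => s.WithName("bootstrap"));

            var node = AddCodex(s =>
            {
                s.WithName("node");
                s.WithBootstrapNode(bootstrap);
            });

            // Random test data, generated and tracked by the framework.
            var file = GenerateTestFile(10.MB());

            // Uploads, downloads, and assertions go here; the exact accessor
            // methods on the node objects are not covered by this document.
        }
    }
}
```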
### Continuous tests
1. Add new code files to `Tests/CodexContinousTests/Tests`
1. Inherit from `ContinuousTest`.
1. Define one or more methods and decorate them with the `[TestMoment(...)]` attribute.
1. The TestMoment takes a number of seconds as argument. Each moment will be executed by the continuous test runner, applying the given seconds as delay. (Non-cumulative: two moments at T:10 will be executed one after another without delay; in that case, the order of execution should not be depended upon.) See the sketch after this list.
1. Continuous tests automatically receive access to the Codex nodes that the tests are being run against.
1. Additionally, Continuous tests can start their own transient Codex nodes and bootstrap them against the persistent nodes.
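For example (names are illustrative, the `TestMoment` argument is the number of seconds described above, and any abstract members that `ContinuousTest` may require are not shown):
```C#
namespace ContinuousTests.Tests
{
    public class MyContinuousTest : ContinuousTest
    {
        [TestMoment(0)]
        public void StartOfTest()
        {
            // Runs immediately. Continuous tests automatically have access to the
            // persistent Codex nodes that the run targets.
        }

        [TestMoment(30)]
        public void ThirtySecondsLater()
        {
            // Runs roughly 30 seconds after the start of this test.
        }
    }
}
```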
### Tips for either type of test
1. You can generate files of random test data by calling `GenerateTestFile(...)`. (A sketch combining some of these tips follows this list.)
1. You can enable access to the Codex node metrics by adding the option `.EnableMetrics()`. Enabling metrics will make the test-infra download and save all Codex metrics in case of a test failure. (The metrics are stored as CSV, in the same location as the test log file.)
1. You can enable access to the blockchain marketplace by adding the option `.EnableMarketplace(...)`.
1. Enabling metrics and/or enabling the marketplace takes extra resources from the test-infra and increases the time needed during Codex node setup. Please don't enable these features unless your tests need them.
1. Tip: Codex nodes can be named. Use the option `WithName(...)` and make reading your test logs a little nicer!
1. Tip: Commit often.
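A small sketch combining some of these tips, to be placed inside a test fixture like the one sketched earlier. The statement-style setup lambda follows the existing example tests; node and test names are made up, and `MB()` is assumed alongside the documented `GB()` literal.
```C#
[Test]
public void NamedNodeWithMetrics()
{
    var node = AddCodex(s =>
    {
        // A named node makes the test log much easier to read.
        s.WithName("my-named-node");

        // Metrics are downloaded and stored as CSV if the test fails.
        s.EnableMetrics();
    });

    // Random test data, generated and tracked by the framework.
    var file = GenerateTestFile(10.MB());
}
```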
## Don't forget
1. Once you're happy with your tests, please create a pull-request and ask a Codex core contributor to review your changes.

GethDeployment.cs

@@ -5,9 +5,9 @@ namespace GethPlugin
{
public class GethDeployment : IHasContainer
{
public GethDeployment(RunningContainer runningContainer, Port discoveryPort, Port httpPort, Port wsPort, AllGethAccounts allAccounts, string pubKey)
public GethDeployment(RunningContainer container, Port discoveryPort, Port httpPort, Port wsPort, AllGethAccounts allAccounts, string pubKey)
{
Container = runningContainer;
Container = container;
DiscoveryPort = discoveryPort;
HttpPort = httpPort;
WsPort = wsPort;

README.md

@@ -1,37 +1,30 @@
# Distributed System Tests for Nim-Codex
Using a common dotnet unit-test framework and a few other libraries, this project allows you to write tests that use multiple Codex node instances in various configurations to test the distributed system in a controlled, reproducible environment.
Nim-Codex: https://github.com/codex-storage/nim-codex
Dotnet: v6.0
Dotnet: v7.0
Kubernetes: v1.25.4
Dotnet-kubernetes SDK: v10.1.4 https://github.com/kubernetes-client/csharp
Nethereum: v4.14.0
## Tests
Tests are divided into two assemblies: `/Tests` and `/LongTests`.
`/Tests` is to be used for tests that take several minutes to hours to execute.
`/LongTests` is to be used for tests that take hours to days to execute.
## Tests/CodexTests and Tests/CodexLongTests
These are test assemblies that use NUnit3 to perform tests against transient Codex nodes.
TODO: All tests will eventually be running as part of a dedicated CI pipeline and kubernetes cluster. Currently, we're developing these tests and the infra-code to support them by running the whole thing locally.
## Tests/ContinousTests
A console application that runs tests in an endless loop against a persistent deployment of Codex nodes.
## Configuration
Test execution can be configured using the following environment variables.
| Variable | Description | Default |
|----------------|------------------------------------------------------------------------------------------------------------------------------------------------------|---------------------|
| KUBECONFIG | Optional path (abs or rel) to kubeconfig YAML file. When null, uses system default (docker-desktop) kubeconfig if available. | (null) |
| LOGPATH | Path (abs or rel) where log files will be saved. | "CodexTestLogs" |
| LOGDEBUG | When "true", enables additional test-runner debug log output. | "false" |
| DATAFILEPATH | Path (abs or rel) where temporary test data files will be saved. | "TestDataFiles" |
| LOGLEVEL | Codex log-level. (case-insensitive) | "Trace" |
| RUNNERLOCATION | Use "ExternalToCluster" when the test app is running outside of the k8s cluster. Use "InternalToCluster" when tests are run from inside a pod/container. | "ExternalToCluster" |
## Tools/CodexNetDeployer
A console application that can deploy Codex nodes.
## Test logs
Because tests potentially take a long time to run, logging is in place to help you investigate failures afterwards. Should a test fail, all Codex terminal output (as well as metrics if they have been enabled) will be downloaded and stored along with a detailed, step-by-step log of the test. If something's gone wrong and you're here to discover the details, head for the logs.
## How to contribute a plugin
If you want to add support for your project to the testing framework, follow the steps [HERE](/CONTRIBUTINGPLUGINS.MD)
## How to contribute tests
An important goal of the test infra is to provide a simple, accessible way for developers to write their tests. If you want to contribute tests for Codex, please follow the steps [HERE](/CONTRIBUTINGTESTS.md).
If you want to contribute tests, please follow the steps [HERE](/CONTRIBUTINGTESTS.md).
## Run the tests on your machine
Creating tests is much easier when you can debug them on your local system. This is possible, but requires some set-up. If you want to be able to run the tests on your local system, follow the steps [HERE](/docs/LOCALSETUP.md). Please note that tests which require explicit node locations cannot be executed locally. (Well, you could comment out the location statements and then it would probably work. But that might impact the validity/usefulness of the test.)

ExampleTests.cs

@@ -45,7 +45,10 @@ namespace Tests.BasicTests
}
[Test]
public void MarketplaceExample()
[Combinatorial]
public void MarketplaceExample(
[Values(true, false)] bool isValidator,
[Values(true, false)] bool simulateProofFailure)
{
var sellerInitialBalance = 234.TestTokens();
var buyerInitialBalance = 1000.TestTokens();
@@ -54,9 +57,12 @@ namespace Tests.BasicTests
var geth = Ci.StartGethNode(s => s.IsMiner().WithName("disttest-geth"));
var contracts = Ci.StartCodexContracts(geth);
var seller = AddCodex(s => s
.WithStorageQuota(11.GB())
.EnableMarketplace(geth, contracts, initialEth: 10.Eth(), initialTokens: sellerInitialBalance));
var seller = AddCodex(s =>
{
s.WithStorageQuota(11.GB());
s.EnableMarketplace(geth, contracts, initialEth: 10.Eth(), initialTokens: sellerInitialBalance, isValidator);
if (simulateProofFailure) s.WithSimulateProofFailures(3);
});
AssertBalance(geth, contracts, seller, Is.EqualTo(sellerInitialBalance));
seller.Marketplace.MakeStorageAvailable(

(binary image: 537 KiB before and after)

(binary image: previous version not shown, new version 451 KiB)

docs/LOCALSETUP.md

@@ -4,18 +4,10 @@
These steps will help you set up everything you need to run and debug the tests on your local system.
### Installing the requirements.
1. Install dotnet v6.0 or newer. (If you install a newer version, consider updating the .csproj files by replacing all mention of `net6.0` with your version.)
1. Install dotnet v7.0 or newer. (If you install a newer version, consider updating the .csproj files by replacing all mention of `net7.0` with your version.)
1. Set up a nice C# IDE or plugin for your current IDE.
1. Install docker desktop.
1. In the docker-desktop settings, enable kubernetes. (This might take a few minutes.)
### Configure to taste.
The tests should run as-is, but you can change the configuration. The items below explain what you can change and how.
1. Open `DistTestCore/Configuration.cs`.
1. `k8sNamespace` defines the Kubernetes namespace the tests will use. All Kubernetes resources used during the test will be created in it. At the beginning of a test run and at the end of each test, the namespace and all resources in it will be deleted.
1. `kubeConfigFile`. If you are using the Kubernetes cluster created in docker desktop, this field should be set to null. If you wish to use a different cluster, set this field to the path (absolute or relative) of your KubeConfig file.
1. `LogConfig(path, debugEnabled)`. Path defines the path (absolute or relative) where the tests logs will be saved. The debug flag allows you to enable additional logging. This is mostly useful when something's wrong with the test infra.
1. `FileManagerFolder` defines the path (absolute or relative) where the test infra will generate and store test data files. This folder will be deleted at the end of every test run.
### Running the tests
Most IDEs will let you run individual tests or test fixtures straight from the code file. If you want to run all the tests, you can use `dotnet test`. You can control which tests to run by specifying which folder of tests to run. `dotnet test Tests` will run only the tests in `/Tests` and exclude the long tests.
Most IDEs will let you run individual tests or test fixtures straight from the code file. If you want to run all the tests, you can use `dotnet test`. You can control which tests to run by specifying which folder of tests to run. `dotnet test Tests/CodexTests` will run only the tests in `/Tests/CodexTests` and exclude the long tests.