Merge branch 'master' into feature/bot-upgrade

# Conflicts:
#	Tests/CodexTests/BasicTests/ExampleTests.cs
This commit is contained in:
Ben 2024-03-27 15:25:58 +01:00
commit 330552563c
No known key found for this signature in database
GPG Key ID: 541B9D8C9F1426A1
51 changed files with 1813 additions and 780 deletions


@ -104,7 +104,7 @@ jobs:
uses: actions/upload-artifact@v4
with:
name: digests-${{ matrix.target.arch }}
path: /tmp/digests
if-no-files-found: error
retention-days: 1


@ -1,10 +1,26 @@
# Distributed System Tests for Nim-Codex
## Contributing plugins
The testing framework was created for testing Codex. However, it's been designed such that other containerized projects can 'easily' be added.
In this file, you'll see 'users' (in quotes) mentioned once or twice. This refers to code/projects/tests which end up making use of your plugin. 'Users' come in many shapes and sizes and tend to have many different use-cases in mind. Please consider this when reading this document and writing your plugin.
## Checklist
Your application must pass this checklist to be compatible with the framework:
- It runs in a docker container.
- It can be configured via environment variables. (You may need to create a docker image which contains a shell script to pass some env-vars as CLI arguments to your application. Container command overrides do work, but are not equally reliable across container platforms. When in doubt: use env-vars!)
- It has network interaction:
- It exposes one or more APIs via one or more ports, OR
- It makes calls to other services. (OR both.)
If your application's use-cases rely primarily on shell interaction, this framework might not be for you. The framework allows you to execute commands in containers AND read stdout/stderr responses. However, its focus during development has always been webservice API interactions.
## Steps
In order to add your project to the framework you must:
1. Create a library assembly in the project plugins folder.
1. It must contain a type that implements the `IProjectPlugin` interface from the `Core` assembly.
1. If your plugin wants to expose any specific methods or objects to 'users', it must implement extensions for the `CoreInterface` type.
1. If your plugin wants to run containers of its own project, it must provide a recipe.
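Put together, the steps above amount to a small skeleton. The sketch below is illustrative: the class and namespace names are made up, and only `IProjectPlugin` and `IPluginTools` come from the `Core` assembly.
```C#
using Core;

namespace MyProjectPlugin
{
    public class MyProjectPlugin : IProjectPlugin
    {
        private readonly IPluginTools tools;

        // The framework constructs the plugin, passing in the plugin tools.
        public MyProjectPlugin(IPluginTools tools)
        {
            this.tools = tools;
        }

        public void Announce()
        {
            tools.GetLog().Log("Loaded MyProject plugin.");
        }

        public void Decommission()
        {
            // Free resources not managed by the framework, if any.
        }
    }
}
```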
## Constructors & Tools
Your implementation of `IProjectPlugin` must have a public constructor with a single argument of type `IPluginTools`, for example:
@ -20,19 +36,34 @@ Your implementation of `IProjectPlugin` must have a public constructor with a si
}
```
`IPluginTools` provides your plugin access to all framework functionality, such as logging, tracked file management, container lifecycle management, and a means to create HTTP clients for containers. (Without having to figure out addresses manually.)
## Plugin Interfaces
The `IProjectPlugin` interface requires the implementation of two methods.
1. `Announce` - It is considered polite to use the logging functionality provided by the `IPluginTools` to announce that your plugin has been loaded. You may also want to log some manner of version and/or configuration information at this time if applicable.
1. `Decommission` - Should your plugin have any active system resources, free them in this method. Please note that resources managed by the framework (such as running containers and tracked data files) do *not* need to be manually disposed in this method. `Decommission` is to be used for resources not managed by the framework.
There are a few optional interfaces your plugin may choose to implement. The framework will automatically use these interfaces.
1. `IHasLogPrefix` - Implementing this interface allows you to provide a string which will be prepended to all log statements made by your plugin. A polite thing to do.
1. `IHasMetadata` - This allows you to provide metadata in the form of key/value pairs. This metadata can be accessed by 'users' of your plugin. Often this data finds its way into log files and container-descriptors in order to help track versions/tests/deployments, etc.
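For example, a plugin opting into both optional interfaces could look roughly like this. (The exact member signatures are assumptions here; check the `Core` assembly for the real definitions.)
```C#
public class MyProjectPlugin : IProjectPlugin, IHasLogPrefix, IHasMetadata
{
    // Prepended to every log statement made by this plugin.
    public string LogPrefix => "[MyProject] ";

    // Key/value pairs made available to 'users' of the plugin.
    public void AddMetadata(IAddMetadata metadata)
    {
        metadata.Add("myproject-version", "1.2.3");
    }

    // ... IProjectPlugin members omitted ...
}
```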
## IPluginTools
`IPluginTools` provides your plugin access to all framework functionality, such as logging, tracked file management, container lifecycle management, and a means to create HTTP clients to make calls to containers. (Figuring out addresses and ports for containers is handled by the framework.)
It is possible and allowed for your plugin to depend on and use other plugins. (For example, maybe your project wants to interact with Ethereum and wants to use the GethPlugin to talk to a Geth node.) `IPluginTools` is *not* what is used for accessing the functionality of other plugins. See the 'Core Interface' section below.
```C#
ILog GetLog();
IHttp CreateHttp(Action<HttpClient> onClientCreated);
IHttp CreateHttp(Action<HttpClient> onClientCreated, ITimeSet timeSet);
IHttp CreateHttp();
IFileManager GetFileManager();
```
The plugin tools provide:
1. `Workflow` - This tool allows you to start and stop containers using "container recipes". (More on those below.) It also allows you to execute commands inside a container, access stdout/stderr, detect crashes, and access pod deployment information. The workflow tool also lets you inspect the locations available in the cluster, and decide where you want to run containers. (More on that below as well.)
1. `Log` - Good logging is priceless. Use this tool to get a log object handle, and write useful debug/info/error statements.
1. `Http` - This tool gives you a convenient way to access a standard dotnet HttpClient, and takes care of timeouts and retries (in accordance with the config). Additionally, it combos nicely with container objects created by `Workflow`, such that you never have to spend any time figuring out the addresses and ports of your containers.
1. `FileManager` - Lets you use tracked temporary files. Even if the 'user' tests/application start crashing, the framework will make sure these are cleaned up.
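Inside a plugin, using these tools might look like this sketch. (Only `GetLog`, `CreateWorkflow`, and `GetFileManager` are framework members; the rest is commentary.)
```C#
var log = tools.GetLog();
log.Log("Deploying MyProject...");

// Start containers from a recipe, execute commands, pick locations:
var workflow = tools.CreateWorkflow();

// Tracked temporary files, cleaned up by the framework even if
// 'user' tests or applications crash:
var fileManager = tools.GetFileManager();
```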
## Core Interface
Any functionality your plugin wants to expose to 'users' will have to be added on to the `CoreInterface` type. You can accomplish this by using C# extension methods. The framework provides a `GetPlugin` method to access your plugin instance from the `CoreInterface` type:
```C#
public static class CoreInterfaceExtensions
{
@ -48,12 +79,14 @@ Any functionality your plugin wants to expose to code which uses the framework w
}
```
If your plugin wants to access the functionality exposed by other plugins, then you can pass the argument `CoreInterface ci` to your plugin code in order to do so. (For example, if you want to start a Geth node, the Geth plugin adds `IGethNode StartGethNode(this CoreInterface ci, Action<IGethSetup> setup)` to the core interface.) Don't forget you'll need to add a project reference to each plugin project you wish to use.
While technically you can build whatever you like on top of the `CoreInterface` and your own plugin types, I recommend that you follow the approach explained below.
## Deploying, Wrapping, and Starting
When building a plugin, it is important to make as few assumptions as possible about how it will be used by whoever is going to use the framework. For this reason, I recommend you expose three kinds of methods using your `CoreInterface` extensions:
1. Deploy - This kind of method should deploy your project, creating and configuring containers as needed and returning container objects as a result. If your project requires additional information, you can create a new class type to contain both it and the container objects created.
1. Wrap - This kind of method should, when given the previously mentioned container information, create some kind of convenient accessor or interactor object. This object should abstract away for example details of a REST API of your project, allowing users of your plugin to write their code using a set of methods and types that nicely model your project's domain. (For example, if my project has a REST API call that allows users to fetch some state information, the object returned by Wrap should have a convenient method to call that API and receive that state information.)
1. Start - This kind of method does both, simply calling a Deploy method first, then a Wrap method, and returns the result.
Here's an example:
@ -69,8 +102,8 @@ public static class CoreInterfaceExtensions
public static IMyProjectNode WrapMyProjectContainer(this CoreInterface ci, RunningContainers container)
{
return Plugin(ci).WrapMyContainerProject(container); // <-- This method probably will use `PluginTools.CreateHttp()` to create an HTTP client for the container, then wrap it in an object that
// represents the API of your project, in this case 'IMyProjectNode'.
}
public static IMyProjectNode StartMyProject(this CoreInterface ci, string someArgument)
@ -82,5 +115,44 @@ public static class CoreInterfaceExtensions
}
```
Should your deploy methods not return framework types like `RunningContainers`, please make sure that your custom types are serializable. (Decorate them with the `[Serializable]` attribute.) Tools have been built using this framework which rely on the ability to serialize and store deployment information for later use. Please don't break this possibility. (Consider using the `SerializeGate` type to help ensure compatibility.)
The primary reason to decouple the deploying and wrapping functionalities is that some use cases require these steps to be performed by separate applications, at different moments in time. For this reason, whatever is returned by the deploy methods should be serializable. After deserialization at some later time, it should then be valid input for the wrap methods. The Codex continuous tests system is a clear example of this use case: The `CodexNetDeployer` tool uses deploy methods to create Codex nodes. Then it writes the returned objects to a JSON file. Some time later, the `CodexContinuousTests` application uses this JSON file to reconstruct the objects created by the deploy methods. It then uses the wrap methods to create accessors and interactors, which are used for testing.
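A deploy result that follows this guidance could be shaped like this (the class name and extra field are invented for the example):
```C#
[Serializable]
public class MyProjectDeployment
{
    // Framework type describing the started containers.
    public RunningContainers[] Containers { get; set; }

    // Any extra information the wrap methods will need later,
    // surviving the serialize/deserialize round-trip.
    public string NetworkId { get; set; }
}
```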
## Container Recipes
In order to run a container of your application, the framework needs to know how to create that container. Think of a container recipe as being similar to a docker-compose.yaml file: You specify the docker image, ports, environment variables, persistent volumes, and secrets. However, container recipes are code. This allows you to add conditional behaviour to how your container is constructed. For example: The 'user' of your plugin specifies in their call input that they want to run your application in a certain mode. This would cause your container recipe to set certain environment variables, which cause the application to behave in the requested way.
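As a sketch of that conditional behaviour (the recipe base type and method names here are approximations of the idea, not the framework's exact recipe API):
```C#
public class MyProjectContainerRecipe : ContainerRecipeFactory
{
    protected override void Initialize(StartupConfig startupConfig)
    {
        var setup = startupConfig.Get<MyProjectSetup>();

        // Conditional behaviour: the 'user' asked for a certain mode,
        // so the recipe sets the env-vars that enable it.
        if (setup.FastMode)
        {
            AddEnvVar("MYPROJECT_FASTMODE", "1");
        }
    }
}
```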
### Addresses and ports
In a docker-compose.yaml file, it is perfectly normal to specify which ports should be exposed on your container. However, in the framework there's more to consider. When your application container starts, who knows on what kind of machine it runs, and what other processes it's sharing space with? Well, Kubernetes knows. Therefore, it is recommended that container recipes *do not* specify exact port numbers. The framework allows container recipes to declare "a port" without specifying its port number. This allows the framework and Kubernetes to figure out which ports are available when it's time to deploy. In order to find out which port numbers were assigned post-deployment, you can look up the port by tag (which is just an identifying string). When you specify a port to be mapped in your container recipe, you must specify:
1. `Tag` - An identifier.
1. `Internal` or `External` - Whether this port should be accessible only inside the cluster (for other containers (k8s: "ClusterIP")) or outside the cluster as well (for external tools/applications (k8s: "NodePort")).
1. `Protocol` - TCP or UDP. Using both protocols on the same port is not universally supported across container engines, and is therefore not supported by the framework.
If your application wants to listen for incoming traffic from inside its container, be sure to bind it to address "0.0.0.0".
Reminder: If you don't want to worry about addresses, and internal or external ports, you don't have to! The container objects returned by the `workflow` plugin tool have a method called `GetAddress`. Given a port tag, it returns an address object. The `Http` plugin tool can use that address object to set up connections.
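Tying addresses and ports together, a sketch (the recipe call is an approximation; `GetAddress` is the method mentioned above):
```C#
// In the container recipe: declare "a port" by tag only.
// The framework and Kubernetes assign the actual number at deploy time.
AddInternalPort("api");

// Post-deployment, given a container object from the `workflow` tool:
var address = container.GetAddress("api");

// The `Http` tool can use the address object to set up a connection.
var http = tools.CreateHttp();
```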
## Locations
The framework is designed to allow you to control instances of your application in multiple (physical) locations. It accomplishes this by using kubernetes, and the ability to deploy containers to specific hosts (nodes) inside a kubernetes cluster. Since Kubernetes allows you to build clusters cross-site, this framework in theory enables you to deploy and interact with containers running anywhere.
The `workflow` plugin tool provides you a list of all available locations in the cluster. When starting a container, you are able to pick one of those locations. If no location is selected, one will be chosen by kubernetes. Locations can be chosen explicitly by kubernetes node name, or, they can be picked from the array of available locations.
Example:
```C#
{
var location = Ci.GetKnownLocations().Get("kbnode_euwest_paris1");
var codex = Ci.StartCodexNode(s => s.At(location));
}
```
In this example, 'Ci' is an instance of the core interface. The CodexPlugin exposes a function 'StartCodexNode', which allows its user to specify a location. This location is then passed to the `workflow` tool when the Codex plugin starts its container.
The available locations array guarantees that each entry corresponds to a different kubernetes host.
```C#
{
var knownLocations = Ci.GetKnownLocations();
// I don't care where exactly, as long as they are different locations.
var codexAtZero = Ci.StartCodexNode(s => s.At(knownLocations.Get(0)));
var codexAtOne = Ci.StartCodexNode(s => s.At(knownLocations.Get(1)));
}
```

Framework/Core/Endpoint.cs Normal file

@ -0,0 +1,193 @@
using Logging;
using Newtonsoft.Json;
using Serialization = Newtonsoft.Json.Serialization;
using System.Net.Http.Headers;
using System.Net.Http.Json;
using Utils;
namespace Core
{
public interface IEndpoint
{
string HttpGetString(string route);
T HttpGetJson<T>(string route);
TResponse HttpPostJson<TRequest, TResponse>(string route, TRequest body);
string HttpPostJson<TRequest>(string route, TRequest body);
TResponse HttpPostString<TResponse>(string route, string body);
string HttpPostStream(string route, Stream stream);
Stream HttpGetStream(string route);
T Deserialize<T>(string json);
}
internal class Endpoint : IEndpoint
{
private readonly ILog log;
private readonly IHttp http;
private readonly Address address;
private readonly string baseUrl;
private readonly string? logAlias;
public Endpoint(ILog log, IHttp http, Address address, string baseUrl, string? logAlias)
{
this.log = log;
this.http = http;
this.address = address;
this.baseUrl = baseUrl;
this.logAlias = logAlias;
}
public string HttpGetString(string route)
{
return http.OnClient(client =>
{
return GetString(client, route);
}, $"HTTP-GET:{route}");
}
public T HttpGetJson<T>(string route)
{
return http.OnClient(client =>
{
var json = GetString(client, route);
return Deserialize<T>(json);
}, $"HTTP-GET:{route}");
}
public TResponse HttpPostJson<TRequest, TResponse>(string route, TRequest body)
{
return http.OnClient(client =>
{
var response = PostJson(client, route, body);
var json = Time.Wait(response.Content.ReadAsStringAsync());
if (!response.IsSuccessStatusCode)
{
throw new HttpRequestException(json);
}
Log(GetUrl() + route, json);
return Deserialize<TResponse>(json);
}, $"HTTP-POST-JSON: {route}");
}
public string HttpPostJson<TRequest>(string route, TRequest body)
{
return http.OnClient(client =>
{
var response = PostJson(client, route, body);
return Time.Wait(response.Content.ReadAsStringAsync());
}, $"HTTP-POST-JSON: {route}");
}
public TResponse HttpPostString<TResponse>(string route, string body)
{
return http.OnClient(client =>
{
var response = PostJsonString(client, route, body);
if (response == null) throw new Exception("Received no response.");
var result = Deserialize<TResponse>(response);
if (result == null) throw new Exception("Failed to deserialize response");
return result;
}, $"HTTP-POST-JSON: {route}");
}
public string HttpPostStream(string route, Stream stream)
{
return http.OnClient(client =>
{
var url = GetUrl() + route;
Log(url, "~ STREAM ~");
var content = new StreamContent(stream);
content.Headers.ContentType = new MediaTypeHeaderValue("application/octet-stream");
var response = Time.Wait(client.PostAsync(url, content));
var str = Time.Wait(response.Content.ReadAsStringAsync());
Log(url, str);
return str;
}, $"HTTP-POST-STREAM: {route}");
}
public Stream HttpGetStream(string route)
{
return http.OnClient(client =>
{
var url = GetUrl() + route;
Log(url, "~ STREAM ~");
return Time.Wait(client.GetStreamAsync(url));
}, $"HTTP-GET-STREAM: {route}");
}
public T Deserialize<T>(string json)
{
var errors = new List<string>();
var deserialized = JsonConvert.DeserializeObject<T>(json, new JsonSerializerSettings()
{
Error = delegate (object? sender, Serialization.ErrorEventArgs args)
{
if (args.CurrentObject == args.ErrorContext.OriginalObject)
{
errors.Add($"""
Member: '{args.ErrorContext.Member?.ToString() ?? "<null>"}'
Path: {args.ErrorContext.Path}
Error: {args.ErrorContext.Error.Message}
""");
args.ErrorContext.Handled = true;
}
}
});
if (errors.Count > 0)
{
throw new JsonSerializationException($"Failed to deserialize JSON '{json}' with exception(s): \n{string.Join("\n", errors)}");
}
else if (deserialized == null)
{
throw new JsonSerializationException($"Failed to deserialize JSON '{json}': resulting deserialized object is null");
}
return deserialized;
}
private string GetString(HttpClient client, string route)
{
var url = GetUrl() + route;
Log(url, "");
var result = Time.Wait(client.GetAsync(url));
var str = Time.Wait(result.Content.ReadAsStringAsync());
Log(url, str);
return str;
}
private HttpResponseMessage PostJson<TRequest>(HttpClient client, string route, TRequest body)
{
var url = GetUrl() + route;
using var content = JsonContent.Create(body);
Log(url, JsonConvert.SerializeObject(body));
return Time.Wait(client.PostAsync(url, content));
}
private string PostJsonString(HttpClient client, string route, string body)
{
var url = GetUrl() + route;
Log(url, body);
var content = new StringContent(body);
content.Headers.ContentType = MediaTypeHeaderValue.Parse("application/json");
var result = Time.Wait(client.PostAsync(url, content));
var str = Time.Wait(result.Content.ReadAsStringAsync());
Log(url, str);
return str;
}
private string GetUrl()
{
return $"{address.Host}:{address.Port}{baseUrl}";
}
private void Log(string url, string message)
{
if (logAlias != null)
{
log.Debug($"({logAlias})({url}) = '{message}'", 3);
}
else
{
log.Debug($"({url}) = '{message}'", 3);
}
}
}
}


@ -1,22 +1,13 @@
using Logging;
using Utils;
namespace Core
{
public interface IHttp
{
T OnClient<T>(Func<HttpClient, T> action);
T OnClient<T>(Func<HttpClient, T> action, string description);
IEndpoint CreateEndpoint(Address address, string baseUrl, string? logAlias = null);
}
internal class Http : IHttp
@ -24,185 +15,44 @@ namespace Core
private static readonly object httpLock = new object();
private readonly ILog log;
private readonly ITimeSet timeSet;
private readonly Action<HttpClient> onClientCreated;
internal Http(ILog log, ITimeSet timeSet)
: this(log, timeSet, DoNothing)
{
}
internal Http(ILog log, ITimeSet timeSet, Action<HttpClient> onClientCreated)
{
this.log = log;
this.timeSet = timeSet;
this.onClientCreated = onClientCreated;
}
public T OnClient<T>(Func<HttpClient, T> action)
{
return OnClient(action, GetDescription());
}
public T OnClient<T>(Func<HttpClient, T> action, string description)
{
var client = GetClient();
return LockRetry(() =>
{
return action(client);
}, description);
}
public IEndpoint CreateEndpoint(Address address, string baseUrl, string? logAlias = null)
{
return new Endpoint(log, this, address, baseUrl, logAlias);
}
private string GetDescription()
{
// todo: check this:
return DebugStack.GetCallerName(skipFrames: 2);
}
private T LockRetry<T>(Func<T> operation, string description)


@ -1,7 +1,6 @@
using FileUtils;
using KubernetesWorkflow;
using Logging;
namespace Core
{
@ -22,9 +21,9 @@ namespace Core
public interface IHttpFactoryTool
{
IHttp CreateHttp(Action<HttpClient> onClientCreated);
IHttp CreateHttp(Action<HttpClient> onClientCreated, ITimeSet timeSet);
IHttp CreateHttp();
}
public interface IFileTool
@ -37,11 +36,11 @@ namespace Core
private readonly ITimeSet timeSet;
private readonly WorkflowCreator workflowCreator;
private readonly IFileManager fileManager;
private readonly LogPrefixer log;
internal PluginTools(ILog log, WorkflowCreator workflowCreator, string fileManagerRootFolder, ITimeSet timeSet)
{
this.log = new LogPrefixer(log);
this.workflowCreator = workflowCreator;
this.timeSet = timeSet;
fileManager = new FileManager(log, fileManagerRootFolder);
@ -49,22 +48,22 @@ namespace Core
public void ApplyLogPrefix(string prefix)
{
log.Prefix = prefix;
}
public IHttp CreateHttp(Action<HttpClient> onClientCreated)
{
return CreateHttp(onClientCreated, timeSet);
}
public IHttp CreateHttp(Action<HttpClient> onClientCreated, ITimeSet ts)
{
return new Http(log, ts, onClientCreated);
}
public IHttp CreateHttp()
{
return new Http(log, timeSet);
}
public IStartupWorkflow CreateWorkflow(string? namespaceOverride = null)


@ -45,7 +45,7 @@ namespace FileUtils
{
var sw = Stopwatch.Begin(log);
var result = GenerateRandomFile(size, label);
sw.End($"Generated file {result.Describe()}.");
return result;
}


@ -3,14 +3,21 @@
public class LogPrefixer : ILog
{
private readonly ILog backingLog;
public LogPrefixer(ILog backingLog)
{
this.backingLog = backingLog;
}
public LogPrefixer(ILog backingLog, string prefix)
{
this.backingLog = backingLog;
Prefix = prefix;
}
public string Prefix { get; set; } = string.Empty;
public LogFile CreateSubfile(string ext = "log")
{
return backingLog.CreateSubfile(ext);
@ -18,17 +25,17 @@
public void Debug(string message = "", int skipFrames = 0)
{
backingLog.Debug(Prefix + message, skipFrames);
}
public void Error(string message)
{
backingLog.Error(Prefix + message);
}
public void Log(string message)
{
backingLog.Log(Prefix + message);
}
public void AddStringReplace(string from, string to)


@ -4,7 +4,7 @@
{
public delegate void CacheClearedEvent();
private const int MaxEntries = 1024 * 1024 * 5;
private readonly Dictionary<ulong, BlockTimeEntry> entries = new Dictionary<ulong, BlockTimeEntry>();
public event CacheClearedEvent? OnCacheCleared;


@ -2,6 +2,9 @@
{
public class ByteSize
{
public static readonly ByteSize Zero = new ByteSize(0);
public const double DefaultSecondsPerMB = 10.0;
public ByteSize(long sizeInBytes)
{
if (sizeInBytes < 0) throw new ArgumentException("Cannot create ByteSize object with size less than 0. Was: " + sizeInBytes);
@ -10,7 +13,6 @@
public long SizeInBytes { get; }
public long ToMB()
{


@ -0,0 +1,92 @@
using Core;
using KubernetesWorkflow.Types;
using Logging;
using System.Security.Cryptography;
using System.Text;
namespace CodexPlugin
{
public class ApiChecker
{
// <INSERT-OPENAPI-YAML-HASH>
private const string OpenApiYamlHash = "DC-90-1B-63-76-1B-92-01-05-65-33-DA-17-C2-34-83-E1-2E-6C-A9-04-4D-68-ED-96-43-F5-E5-6A-00-0F-5F";
private const string OpenApiFilePath = "/codex/openapi.yaml";
private const string DisableEnvironmentVariable = "CODEXPLUGIN_DISABLE_APICHECK";
private const bool Disable = false;
private const string Warning =
"Warning: CodexPlugin was unable to find the openapi.yaml file in the Codex container. Are you running an old version of Codex? " +
"Plugin will continue as normal, but API compatibility is not guaranteed!";
private const string Failure =
"Codex API compatibility check failed! " +
"openapi.yaml used by CodexPlugin does not match openapi.yaml in Codex container. Please update the openapi.yaml in " +
"'ProjectPlugins/CodexPlugin' and rebuild this project. If you wish to disable API compatibility checking, please set " +
$"the environment variable '{DisableEnvironmentVariable}' or set the disable bool in 'ProjectPlugins/CodexPlugin/ApiChecker.cs'.";
private static bool checkPassed = false;
private readonly IPluginTools pluginTools;
private readonly ILog log;
public ApiChecker(IPluginTools pluginTools)
{
this.pluginTools = pluginTools;
log = pluginTools.GetLog();
if (string.IsNullOrEmpty(OpenApiYamlHash)) throw new Exception("OpenAPI yaml hash was not inserted by pre-build trigger.");
}
public void CheckCompatibility(RunningContainers[] containers)
{
if (checkPassed) return;
Log("CodexPlugin is checking API compatibility...");
if (Disable || !string.IsNullOrEmpty(Environment.GetEnvironmentVariable(DisableEnvironmentVariable)))
{
Log("API compatibility checking has been disabled.");
checkPassed = true;
return;
}
var workflow = pluginTools.CreateWorkflow();
var container = containers.First().Containers.First();
var containerApi = workflow.ExecuteCommand(container, "cat", OpenApiFilePath);
if (string.IsNullOrEmpty(containerApi))
{
log.Error(Warning);
checkPassed = true;
return;
}
var containerHash = Hash(containerApi);
if (containerHash == OpenApiYamlHash)
{
Log("API compatibility check passed.");
checkPassed = true;
return;
}
log.Error(Failure);
throw new Exception(Failure);
}
private string Hash(string file)
{
var fileBytes = Encoding.ASCII.GetBytes(file
.Replace(Environment.NewLine, ""));
var sha = SHA256.Create();
var hash = sha.ComputeHash(fileBytes);
return BitConverter.ToString(hash);
}
private void Log(string msg)
{
log.Log(msg);
}
}
}
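For context on the compatibility check above: the hash baked into `OpenApiYamlHash` is produced by stripping newlines, taking ASCII bytes, hashing with SHA-256, and formatting via `BitConverter.ToString` (dash-separated uppercase hex). A minimal standalone sketch of that scheme (class and inputs here are illustrative, not from the repository):

```csharp
using System;
using System.Security.Cryptography;
using System.Text;

public static class HashSketch
{
    // Mirrors ApiChecker.Hash: strip newlines, ASCII-encode, SHA-256, dashed hex.
    public static string Hash(string file)
    {
        var fileBytes = Encoding.ASCII.GetBytes(file.Replace(Environment.NewLine, ""));
        using var sha = SHA256.Create();
        return BitConverter.ToString(sha.ComputeHash(fileBytes));
    }

    public static void Main()
    {
        var hash = Hash("openapi: 3.0.3\n");
        // 32 digest bytes -> 64 hex chars plus 31 dashes = 95 characters.
        Console.WriteLine(hash.Length);
    }
}
```

Because the newline stripping happens before hashing, the check is insensitive to trailing-newline differences between the bundled and in-container `openapi.yaml`, but any other byte change produces a different dashed-hex string.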

View File

@@ -1,4 +1,5 @@
using Core;
using CodexOpenApi;
using Core;
using KubernetesWorkflow;
using KubernetesWorkflow.Types;
using Utils;
@@ -8,6 +9,7 @@ namespace CodexPlugin
public class CodexAccess : ILogHandler
{
private readonly IPluginTools tools;
private readonly Mapper mapper = new Mapper();
private bool hasContainerCrashed;
public CodexAccess(IPluginTools tools, RunningContainer container, CrashWatcher crashWatcher)
@@ -23,77 +25,72 @@ namespace CodexPlugin
public RunningContainer Container { get; }
public CrashWatcher CrashWatcher { get; }
public CodexDebugResponse GetDebugInfo()
public DebugInfo GetDebugInfo()
{
return Http().HttpGetJson<CodexDebugResponse>("debug/info");
return mapper.Map(OnCodex(api => api.GetDebugInfoAsync()));
}
public CodexDebugPeerResponse GetDebugPeer(string peerId)
public DebugPeer GetDebugPeer(string peerId)
{
var http = Http();
var str = http.HttpGetString($"debug/peer/{peerId}");
// Cannot use openAPI: debug/peer endpoint is not specified there.
var endpoint = GetEndpoint();
var str = endpoint.HttpGetString($"debug/peer/{peerId}");
if (str.ToLowerInvariant() == "unable to find peer!")
{
return new CodexDebugPeerResponse
return new DebugPeer
{
IsPeerFound = false
};
}
var result = http.Deserialize<CodexDebugPeerResponse>(str);
var result = endpoint.Deserialize<DebugPeer>(str);
result.IsPeerFound = true;
return result;
}
public CodexDebugBlockExchangeResponse GetDebugBlockExchange()
public void ConnectToPeer(string peerId, string[] peerMultiAddresses)
{
return Http().HttpGetJson<CodexDebugBlockExchangeResponse>("debug/blockexchange");
}
public CodexDebugRepoStoreResponse[] GetDebugRepoStore()
{
return LongHttp().HttpGetJson<CodexDebugRepoStoreResponse[]>("debug/repostore");
}
public CodexDebugThresholdBreaches GetDebugThresholdBreaches()
{
return Http().HttpGetJson<CodexDebugThresholdBreaches>("debug/loop");
OnCodex(api =>
{
Time.Wait(api.ConnectPeerAsync(peerId, peerMultiAddresses));
return Task.FromResult(string.Empty);
});
}
public string UploadFile(FileStream fileStream)
{
return Http().HttpPostStream("data", fileStream);
return OnCodex(api => api.UploadAsync(fileStream));
}
public Stream DownloadFile(string contentId)
{
return Http().HttpGetStream("data/" + contentId + "/network");
var fileResponse = OnCodex(api => api.DownloadNetworkAsync(contentId));
if (fileResponse.StatusCode != 200) throw new Exception("Download failed with StatusCode: " + fileResponse.StatusCode);
return fileResponse.Stream;
}
public CodexLocalDataResponse[] LocalFiles()
public LocalDatasetList LocalFiles()
{
return Http().HttpGetJson<CodexLocalDataResponse[]>("data");
return mapper.Map(OnCodex(api => api.ListDataAsync()));
}
public CodexSalesAvailabilityResponse SalesAvailability(CodexSalesAvailabilityRequest request)
public StorageAvailability SalesAvailability(StorageAvailability request)
{
return Http().HttpPostJson<CodexSalesAvailabilityRequest, CodexSalesAvailabilityResponse>("sales/availability", request);
var body = mapper.Map(request);
var read = OnCodex<SalesAvailabilityREAD>(api => api.OfferStorageAsync(body));
return mapper.Map(read);
}
public string RequestStorage(CodexSalesRequestStorageRequest request, string contentId)
public string RequestStorage(StoragePurchaseRequest request)
{
return Http().HttpPostJson($"storage/request/{contentId}", request);
var body = mapper.Map(request);
return OnCodex<string>(api => api.CreateStorageRequestAsync(request.ContentId.Id, body));
}
public CodexStoragePurchase GetPurchaseStatus(string purchaseId)
public StoragePurchase GetPurchaseStatus(string purchaseId)
{
return Http().HttpGetJson<CodexStoragePurchase>($"storage/purchases/{purchaseId}");
}
public string ConnectToPeer(string peerId, string peerMultiAddress)
{
return Http().HttpGetString($"connect/{peerId}?addrs={peerMultiAddress}");
return mapper.Map(OnCodex(api => api.GetPurchaseAsync(purchaseId)));
}
public string GetName()
@@ -107,14 +104,24 @@ namespace CodexPlugin
return workflow.GetPodInfo(Container);
}
private IHttp Http()
private T OnCodex<T>(Func<CodexApi, Task<T>> action)
{
return tools.CreateHttp(GetAddress(), baseUrl: "/api/codex/v1", CheckContainerCrashed, Container.Name);
var address = GetAddress();
var result = tools.CreateHttp(CheckContainerCrashed)
.OnClient(client =>
{
var api = new CodexApi(client);
api.BaseUrl = $"{address.Host}:{address.Port}/api/codex/v1";
return Time.Wait(action(api));
});
return result;
}
private IHttp LongHttp()
private IEndpoint GetEndpoint()
{
return tools.CreateHttp(GetAddress(), baseUrl: "/api/codex/v1", CheckContainerCrashed, new LongTimeSet(), Container.Name);
return tools
.CreateHttp(CheckContainerCrashed)
.CreateEndpoint(GetAddress(), "/api/codex/v1/", Container.Name);
}
private Address GetAddress()

View File

@@ -1,195 +0,0 @@
using Newtonsoft.Json;
namespace CodexPlugin
{
public class CodexDebugResponse
{
public string id { get; set; } = string.Empty;
public string[] addrs { get; set; } = Array.Empty<string>();
public string repo { get; set; } = string.Empty;
public string spr { get; set; } = string.Empty;
public string[] announceAddresses { get; set; } = Array.Empty<string>();
public EnginePeerResponse[] enginePeers { get; set; } = Array.Empty<EnginePeerResponse>();
public SwitchPeerResponse[] switchPeers { get; set; } = Array.Empty<SwitchPeerResponse>();
public CodexDebugVersionResponse codex { get; set; } = new();
public CodexDebugTableResponse table { get; set; } = new();
}
public class CodexDebugFutures
{
public int futures { get; set; }
}
public class CodexDebugTableResponse
{
public CodexDebugTableNodeResponse localNode { get; set; } = new();
public CodexDebugTableNodeResponse[] nodes { get; set; } = Array.Empty<CodexDebugTableNodeResponse>();
}
public class CodexDebugTableNodeResponse
{
public string nodeId { get; set; } = string.Empty;
public string peerId { get; set; } = string.Empty;
public string record { get; set; } = string.Empty;
public string address { get; set; } = string.Empty;
public bool seen { get; set; }
}
public class EnginePeerResponse
{
public string peerId { get; set; } = string.Empty;
public EnginePeerContextResponse context { get; set; } = new();
}
public class EnginePeerContextResponse
{
public int blocks { get; set; } = 0;
public int peerWants { get; set; } = 0;
public int exchanged { get; set; } = 0;
public string lastExchange { get; set; } = string.Empty;
}
public class SwitchPeerResponse
{
public string peerId { get; set; } = string.Empty;
public string key { get; set; } = string.Empty;
}
public class CodexDebugVersionResponse
{
public string version { get; set; } = string.Empty;
public string revision { get; set; } = string.Empty;
public bool IsValid()
{
return !string.IsNullOrEmpty(version) && !string.IsNullOrEmpty(revision);
}
public override string ToString()
{
return JsonConvert.SerializeObject(this);
}
}
public class CodexDebugPeerResponse
{
public bool IsPeerFound { get; set; }
public string peerId { get; set; } = string.Empty;
public long seqNo { get; set; }
public string[] addresses { get; set; } = Array.Empty<string>();
}
public class CodexDebugThresholdBreaches
{
public string[] breaches { get; set; } = Array.Empty<string>();
}
public class CodexSalesAvailabilityRequest
{
public string size { get; set; } = string.Empty;
public string duration { get; set; } = string.Empty;
public string minPrice { get; set; } = string.Empty;
public string maxCollateral { get; set; } = string.Empty;
}
public class CodexSalesAvailabilityResponse
{
public string id { get; set; } = string.Empty;
public string size { get; set; } = string.Empty;
public string duration { get; set; } = string.Empty;
public string minPrice { get; set; } = string.Empty;
public string maxCollateral { get; set; } = string.Empty;
}
public class CodexSalesRequestStorageRequest
{
public string duration { get; set; } = string.Empty;
public string proofProbability { get; set; } = string.Empty;
public string reward { get; set; } = string.Empty;
public string collateral { get; set; } = string.Empty;
public string? expiry { get; set; }
public uint? nodes { get; set; }
public uint? tolerance { get; set; }
}
public class CodexStoragePurchase
{
public string state { get; set; } = string.Empty;
public string error { get; set; } = string.Empty;
}
public class CodexDebugBlockExchangeResponse
{
public CodexDebugBlockExchangeResponsePeer[] peers { get; set; } = Array.Empty<CodexDebugBlockExchangeResponsePeer>();
public int taskQueue { get; set; }
public int pendingBlocks { get; set; }
public override string ToString()
{
if (peers.Length == 0 && taskQueue == 0 && pendingBlocks == 0) return "all-empty";
return $"taskqueue: {taskQueue} pendingblocks: {pendingBlocks} peers: {string.Join(",", peers.Select(p => p.ToString()))}";
}
}
public class CodexDebugBlockExchangeResponsePeer
{
public string peerid { get; set; } = string.Empty;
public CodexDebugBlockExchangeResponsePeerHasBlock[] hasBlocks { get; set; } = Array.Empty<CodexDebugBlockExchangeResponsePeerHasBlock>();
public CodexDebugBlockExchangeResponsePeerWant[] wants { get; set; } = Array.Empty<CodexDebugBlockExchangeResponsePeerWant>();
public int exchanged { get; set; }
public override string ToString()
{
return $"(blocks:{hasBlocks.Length} wants:{wants.Length})";
}
}
public class CodexDebugBlockExchangeResponsePeerHasBlock
{
public string cid { get; set; } = string.Empty;
public bool have { get; set; }
public string price { get; set; } = string.Empty;
}
public class CodexDebugBlockExchangeResponsePeerWant
{
public string block { get; set; } = string.Empty;
public int priority { get; set; }
public bool cancel { get; set; }
public string wantType { get; set; } = string.Empty;
public bool sendDontHave { get; set; }
}
public class CodexDebugRepoStoreResponse
{
public string cid { get; set; } = string.Empty;
}
public class CodexLocalData
{
public CodexLocalData(ContentId cid, CodexLocalDataManifestResponse manifest)
{
Cid = cid;
Manifest = manifest;
}
public ContentId Cid { get; }
public CodexLocalDataManifestResponse Manifest { get; }
}
public class CodexLocalDataResponse
{
public string cid { get; set; } = string.Empty;
public CodexLocalDataManifestResponse manifest { get; set; } = new();
}
public class CodexLocalDataManifestResponse
{
public string rootHash { get; set; } = string.Empty;
public int originalBytes { get; set; }
public int blockSize { get; set; }
public bool @protected { get; set; }
}
}

View File

@@ -7,9 +7,7 @@ namespace CodexPlugin
{
public class CodexContainerRecipe : ContainerRecipeFactory
{
private readonly MarketplaceStarter marketplaceStarter = new MarketplaceStarter();
private const string DefaultDockerImage = "codexstorage/nim-codex:sha-e4ddb94-dist-tests";
private const string DefaultDockerImage = "codexstorage/nim-codex:sha-cd280d4-dist-tests";
public const string ApiPortTag = "codex_api_port";
public const string ListenPortTag = "codex_listen_port";
public const string MetricsPortTag = "codex_metrics_port";
@@ -106,13 +104,13 @@ namespace CodexPlugin
AddEnvVar("CODEX_ETH_PROVIDER", $"{wsAddress.Host.Replace("http://", "ws://")}:{wsAddress.Port}");
AddEnvVar("CODEX_MARKETPLACE_ADDRESS", marketplaceAddress);
var marketplaceSetup = config.MarketplaceConfig.MarketplaceSetup;
// Custom scripting in the Codex test image will write this variable to a private-key file,
// and pass the correct filename to Codex.
var mStart = marketplaceStarter.Start();
AddEnvVar("PRIV_KEY", mStart.PrivateKey);
Additional(mStart);
AddEnvVar("PRIV_KEY", marketplaceSetup.EthAccount.PrivateKey);
Additional(marketplaceSetup.EthAccount);
var marketplaceSetup = config.MarketplaceConfig.MarketplaceSetup;
SetCommandOverride(marketplaceSetup);
if (marketplaceSetup.IsValidator)
{

View File

@@ -31,14 +31,14 @@ namespace CodexPlugin
public class CodexInstance
{
public CodexInstance(RunningContainers containers, CodexDebugResponse info)
public CodexInstance(RunningContainers containers, DebugInfo info)
{
Containers = containers;
Info = info;
}
public RunningContainers Containers { get; }
public CodexDebugResponse Info { get; }
public DebugInfo Info { get; }
}
public class DeploymentMetadata

View File

@@ -12,16 +12,13 @@ namespace CodexPlugin
public interface ICodexNode : IHasContainer, IHasMetricsScrapeTarget, IHasEthAddress
{
string GetName();
CodexDebugResponse GetDebugInfo();
CodexDebugPeerResponse GetDebugPeer(string peerId);
// These debug methods are not available in master-line Codex. Use only for custom builds.
//CodexDebugBlockExchangeResponse GetDebugBlockExchange();
//CodexDebugRepoStoreResponse[] GetDebugRepoStore();
DebugInfo GetDebugInfo();
DebugPeer GetDebugPeer(string peerId);
ContentId UploadFile(TrackedFile file);
TrackedFile? DownloadContent(ContentId contentId, string fileLabel = "");
CodexLocalData[] LocalFiles();
LocalDatasetList LocalFiles();
void ConnectToPeer(ICodexNode node);
CodexDebugVersionResponse Version { get; }
DebugInfoVersion Version { get; }
IMarketplaceAccess Marketplace { get; }
CrashWatcher CrashWatcher { get; }
PodInfo GetPodInfo();
@@ -31,7 +28,6 @@ namespace CodexPlugin
public class CodexNode : ICodexNode
{
private const string SuccessfullyConnectedMessage = "Successfully connected to peer";
private const string UploadFailedMessage = "Unable to store block";
private readonly IPluginTools tools;
private readonly EthAddress? ethAddress;
@@ -44,7 +40,7 @@
CodexAccess = codexAccess;
Group = group;
Marketplace = marketplaceAccess;
Version = new CodexDebugVersionResponse();
Version = new DebugInfoVersion();
transferSpeeds = new TransferSpeeds();
}
@@ -53,8 +49,9 @@
public CrashWatcher CrashWatcher { get => CodexAccess.CrashWatcher; }
public CodexNodeGroup Group { get; }
public IMarketplaceAccess Marketplace { get; }
public CodexDebugVersionResponse Version { get; private set; }
public DebugInfoVersion Version { get; private set; }
public ITransferSpeeds TransferSpeeds { get => transferSpeeds; }
public IMetricsScrapeTarget MetricsScrapeTarget
{
get
@@ -62,6 +59,7 @@
return new MetricsScrapeTarget(CodexAccess.Container, CodexContainerRecipe.MetricsPortTag);
}
}
public EthAddress EthAddress
{
get
@@ -76,29 +74,19 @@
return CodexAccess.Container.Name;
}
public CodexDebugResponse GetDebugInfo()
public DebugInfo GetDebugInfo()
{
var debugInfo = CodexAccess.GetDebugInfo();
var known = string.Join(",", debugInfo.table.nodes.Select(n => n.peerId));
Log($"Got DebugInfo with id: '{debugInfo.id}'. This node knows: {known}");
var known = string.Join(",", debugInfo.Table.Nodes.Select(n => n.PeerId));
Log($"Got DebugInfo with id: '{debugInfo.Id}'. This node knows: {known}");
return debugInfo;
}
public CodexDebugPeerResponse GetDebugPeer(string peerId)
public DebugPeer GetDebugPeer(string peerId)
{
return CodexAccess.GetDebugPeer(peerId);
}
public CodexDebugBlockExchangeResponse GetDebugBlockExchange()
{
return CodexAccess.GetDebugBlockExchange();
}
public CodexDebugRepoStoreResponse[] GetDebugRepoStore()
{
return CodexAccess.GetDebugRepoStore();
}
public ContentId UploadFile(TrackedFile file)
{
using var fileStream = File.OpenRead(file.Filename);
@@ -131,9 +119,9 @@
return file;
}
public CodexLocalData[] LocalFiles()
public LocalDatasetList LocalFiles()
{
return CodexAccess.LocalFiles().Select(l => new CodexLocalData(new ContentId(l.cid), l.manifest)).ToArray();
return CodexAccess.LocalFiles();
}
public void ConnectToPeer(ICodexNode node)
@@ -142,9 +130,8 @@
Log($"Connecting to peer {peer.GetName()}...");
var peerInfo = node.GetDebugInfo();
var response = CodexAccess.ConnectToPeer(peerInfo.id, GetPeerMultiAddress(peer, peerInfo));
CodexAccess.ConnectToPeer(peerInfo.Id, GetPeerMultiAddresses(peer, peerInfo));
FrameworkAssert.That(response == SuccessfullyConnectedMessage, "Unable to connect codex nodes.");
Log($"Successfully connected to peer {peer.GetName()}.");
}
@@ -165,31 +152,30 @@
public void EnsureOnlineGetVersionResponse()
{
var debugInfo = Time.Retry(CodexAccess.GetDebugInfo, "ensure online");
var nodePeerId = debugInfo.id;
var nodePeerId = debugInfo.Id;
var nodeName = CodexAccess.Container.Name;
if (!debugInfo.codex.IsValid())
if (!debugInfo.Version.IsValid())
{
throw new Exception($"Invalid version information received from Codex node {GetName()}: {debugInfo.codex}");
throw new Exception($"Invalid version information received from Codex node {GetName()}: {debugInfo.Version}");
}
var log = tools.GetLog();
log.AddStringReplace(nodePeerId, nodeName);
log.AddStringReplace(debugInfo.table.localNode.nodeId, nodeName);
Version = debugInfo.codex;
log.AddStringReplace(debugInfo.Table.LocalNode.NodeId, nodeName);
Version = debugInfo.Version;
}
private string GetPeerMultiAddress(CodexNode peer, CodexDebugResponse peerInfo)
private string[] GetPeerMultiAddresses(CodexNode peer, DebugInfo peerInfo)
{
var multiAddress = peerInfo.addrs.First();
// Todo: Is there a case where First address in list is not the way?
// The peer we want to connect is in a different pod.
// We must replace the default IP with the pod IP in the multiAddress.
var workflow = tools.CreateWorkflow();
var podInfo = workflow.GetPodInfo(peer.Container);
return multiAddress.Replace("0.0.0.0", podInfo.Ip);
return peerInfo.Addrs.Select(a => a
.Replace("0.0.0.0", podInfo.Ip))
.ToArray();
}
private void DownloadToFile(string contentId, TrackedFile file)
@@ -212,24 +198,4 @@
tools.GetLog().Log($"{GetName()}: {msg}");
}
}
public class ContentId
{
public ContentId(string id)
{
Id = id;
}
public string Id { get; }
public override bool Equals(object? obj)
{
return obj is ContentId id && Id == id.Id;
}
public override int GetHashCode()
{
return HashCode.Combine(Id);
}
}
}

View File

@@ -35,9 +35,9 @@ namespace CodexPlugin
private EthAddress? GetEthAddress(CodexAccess access)
{
var mStart = access.Container.Recipe.Additionals.Get<MarketplaceStartResults>();
if (mStart == null) return null;
return mStart.EthAddress;
var ethAccount = access.Container.Recipe.Additionals.Get<EthAccount>();
if (ethAccount == null) return null;
return ethAccount.EthAddress;
}
public CrashWatcher CreateCrashWatcher(RunningContainer c)

View File

@@ -20,7 +20,7 @@ namespace CodexPlugin
this.starter = starter;
Containers = containers;
Nodes = containers.Containers().Select(c => CreateOnlineCodexNode(c, tools, codexNodeFactory)).ToArray();
Version = new CodexDebugVersionResponse();
Version = new DebugInfoVersion();
}
public ICodexNode this[int index]
@@ -41,7 +41,7 @@
public RunningContainers[] Containers { get; private set; }
public CodexNode[] Nodes { get; private set; }
public CodexDebugVersionResponse Version { get; private set; }
public DebugInfoVersion Version { get; private set; }
public IMetricsScrapeTarget[] ScrapeTargets => Nodes.Select(n => n.MetricsScrapeTarget).ToArray();
public IEnumerator<ICodexNode> GetEnumerator()
@@ -65,7 +65,7 @@
var versionResponses = Nodes.Select(n => n.Version);
var first = versionResponses.First();
if (!versionResponses.All(v => v.version == first.version && v.revision == first.revision))
if (!versionResponses.All(v => v.Version == first.Version && v.Revision == first.Revision))
{
throw new Exception("Inconsistent version information received from one or more Codex nodes: " +
string.Join(",", versionResponses.Select(v => v.ToString())));

View File

@@ -1,4 +1,4 @@
using Core;
using Core;
using KubernetesWorkflow.Types;
namespace CodexPlugin
@@ -52,8 +52,8 @@ namespace CodexPlugin
var mconfig = codexSetup.MarketplaceConfig;
foreach (var node in result)
{
mconfig.GethNode.SendEth(node, mconfig.InitialEth);
mconfig.CodexContracts.MintTestTokens(node, mconfig.InitialTokens);
mconfig.GethNode.SendEth(node, mconfig.MarketplaceSetup.InitialEth);
mconfig.CodexContracts.MintTestTokens(node, mconfig.MarketplaceSetup.InitialTestTokens);
}
}

View File

@@ -7,7 +7,23 @@
</PropertyGroup>
<ItemGroup>
<None Remove="openapi.yaml" />
</ItemGroup>
<ItemGroup>
<OpenApiReference Include="openapi.yaml" CodeGenerator="NSwagCSharp" Namespace="CodexOpenApi" ClassName="CodexApi" />
</ItemGroup>
<ItemGroup>
<PackageReference Include="Microsoft.Extensions.ApiDescription.Client" Version="7.0.2">
<PrivateAssets>all</PrivateAssets>
<IncludeAssets>runtime; build; native; contentfiles; analyzers; buildtransitive</IncludeAssets>
</PackageReference>
<PackageReference Include="Newtonsoft.Json" Version="13.0.3" />
<PackageReference Include="NSwag.ApiDescription.Client" Version="13.18.2">
<PrivateAssets>all</PrivateAssets>
<IncludeAssets>runtime; build; native; contentfiles; analyzers; buildtransitive</IncludeAssets>
</PackageReference>
</ItemGroup>
<ItemGroup>
@@ -18,4 +34,8 @@
<ProjectReference Include="..\MetricsPlugin\MetricsPlugin.csproj" />
</ItemGroup>
<Target Name="PreBuild" BeforeTargets="PreBuildEvent">
<Exec Command="dotnet run --project $(ProjectDir)\..\CodexPluginPrebuild" />
</Target>
</Project>

View File

@@ -17,8 +17,7 @@ namespace CodexPlugin
ICodexSetup WithBlockMaintenanceInterval(TimeSpan duration);
ICodexSetup WithBlockMaintenanceNumber(int numberOfBlocks);
ICodexSetup EnableMetrics();
ICodexSetup EnableMarketplace(IGethNode gethNode, ICodexContracts codexContracts, Ether initialEth, TestToken initialTokens);
ICodexSetup EnableMarketplace(IGethNode gethNode, ICodexContracts codexContracts, Ether initialEth, TestToken initialTokens, Action<IMarketplaceSetup> marketplaceSetup);
ICodexSetup EnableMarketplace(IGethNode gethNode, ICodexContracts codexContracts, Action<IMarketplaceSetup> marketplaceSetup);
/// <summary>
/// Provides an invalid proof every N proofs
/// </summary>
@@ -28,6 +27,8 @@ namespace CodexPlugin
public interface IMarketplaceSetup
{
IMarketplaceSetup WithInitial(Ether eth, TestToken tokens);
IMarketplaceSetup WithAccount(EthAccount account);
IMarketplaceSetup AsStorageNode();
IMarketplaceSetup AsValidator();
}
@@ -49,6 +50,7 @@ namespace CodexPlugin
public CodexLogLevel DiscV5 { get; set; }
public CodexLogLevel Libp2p { get; set; }
public CodexLogLevel ContractClock { get; set; } = CodexLogLevel.Warn;
public CodexLogLevel? BlockExchange { get; }
}
@@ -75,7 +77,7 @@ namespace CodexPlugin
public ICodexSetup WithBootstrapNode(ICodexNode node)
{
BootstrapSpr = node.GetDebugInfo().spr;
BootstrapSpr = node.GetDebugInfo().Spr;
return this;
}
@@ -122,17 +124,12 @@ namespace CodexPlugin
return this;
}
public ICodexSetup EnableMarketplace(IGethNode gethNode, ICodexContracts codexContracts, Ether initialEth, TestToken initialTokens)
{
return EnableMarketplace(gethNode, codexContracts, initialEth, initialTokens, s => { });
}
public ICodexSetup EnableMarketplace(IGethNode gethNode, ICodexContracts codexContracts, Ether initialEth, TestToken initialTokens, Action<IMarketplaceSetup> marketplaceSetup)
public ICodexSetup EnableMarketplace(IGethNode gethNode, ICodexContracts codexContracts, Action<IMarketplaceSetup> marketplaceSetup)
{
var ms = new MarketplaceSetup();
marketplaceSetup(ms);
MarketplaceConfig = new MarketplaceInitialConfig(ms, gethNode, codexContracts, initialEth, initialTokens);
MarketplaceConfig = new MarketplaceInitialConfig(ms, gethNode, codexContracts);
return this;
}
@@ -169,6 +166,9 @@
{
public bool IsStorageNode { get; private set; }
public bool IsValidator { get; private set; }
public Ether InitialEth { get; private set; } = 0.Eth();
public TestToken InitialTestTokens { get; private set; } = 0.TestTokens();
public EthAccount EthAccount { get; private set; } = EthAccount.GenerateNew();
public IMarketplaceSetup AsStorageNode()
{
@@ -182,14 +182,28 @@
return this;
}
public IMarketplaceSetup WithAccount(EthAccount account)
{
EthAccount = account;
return this;
}
public IMarketplaceSetup WithInitial(Ether eth, TestToken tokens)
{
InitialEth = eth;
InitialTestTokens = tokens;
return this;
}
public override string ToString()
{
var result = "[(clientNode)"; // When marketplace is enabled, being a clientNode is implicit.
result += IsStorageNode ? "(storageNode)" : "()";
result += IsValidator ? "(validator)" : "()";
result += "]";
result += IsValidator ? "(validator)" : "() ";
result += $"Address: '{EthAccount.EthAddress}' ";
result += $"InitialEth/TT({InitialEth.Eth}/{InitialTestTokens.Amount})";
result += "] ";
return result;
}
}
}

View File

@@ -9,11 +9,14 @@ namespace CodexPlugin
{
private readonly IPluginTools pluginTools;
private readonly CodexContainerRecipe recipe = new CodexContainerRecipe();
private CodexDebugVersionResponse? versionResponse;
private readonly ApiChecker apiChecker;
private DebugInfoVersion? versionResponse;
public CodexStarter(IPluginTools pluginTools)
{
this.pluginTools = pluginTools;
apiChecker = new ApiChecker(pluginTools);
}
public RunningContainers[] BringOnline(CodexSetup codexSetup)
@@ -25,6 +28,8 @@ namespace CodexPlugin
var containers = StartCodexContainers(startupConfig, codexSetup.NumberOfNodes, codexSetup.Location);
apiChecker.CheckCompatibility(containers);
foreach (var rc in containers)
{
var podInfo = GetPodInfo(rc);
@@ -62,13 +67,13 @@
public string GetCodexId()
{
if (versionResponse != null) return versionResponse.version;
if (versionResponse != null) return versionResponse.Version;
return recipe.Image;
}
public string GetCodexRevision()
{
if (versionResponse != null) return versionResponse.revision;
if (versionResponse != null) return versionResponse.Revision;
return "unknown";
}

View File

@@ -77,8 +77,7 @@ namespace CodexPlugin
level = $"{level};" +
$"{CustomTopics.DiscV5.ToString()!.ToLowerInvariant()}:{string.Join(",", discV5Topics)};" +
$"{CustomTopics.Libp2p.ToString()!.ToLowerInvariant()}:{string.Join(",", libp2pTopics)};" +
// Contract clock is always set to warn. It logs a trace every second.
$"{CodexLogLevel.Warn.ToString().ToLowerInvariant()}:{string.Join(",", contractClockTopics)}";
$"{CustomTopics.ContractClock.ToString().ToLowerInvariant()}:{string.Join(",", contractClockTopics)}";
if (CustomTopics.BlockExchange != null)
{

View File

@@ -0,0 +1,108 @@
using Newtonsoft.Json;
using Utils;
namespace CodexPlugin
{
public class DebugInfo
{
public string[] Addrs { get; set; } = Array.Empty<string>();
public string Spr { get; set; } = string.Empty;
public string Id { get; set; } = string.Empty;
public string[] AnnounceAddresses { get; set; } = Array.Empty<string>();
public DebugInfoVersion Version { get; set; } = new();
public DebugInfoTable Table { get; set; } = new();
}
public class DebugInfoVersion
{
public string Version { get; set; } = string.Empty;
public string Revision { get; set; } = string.Empty;
public bool IsValid()
{
return !string.IsNullOrEmpty(Version) && !string.IsNullOrEmpty(Revision);
}
public override string ToString()
{
return JsonConvert.SerializeObject(this);
}
}
public class DebugInfoTable
{
public DebugInfoTableNode LocalNode { get; set; } = new();
public DebugInfoTableNode[] Nodes { get; set; } = Array.Empty<DebugInfoTableNode>();
}
public class DebugInfoTableNode
{
public string NodeId { get; set; } = string.Empty;
public string PeerId { get; set; } = string.Empty;
public string Record { get; set; } = string.Empty;
public string Address { get; set; } = string.Empty;
public bool Seen { get; set; }
}
public class DebugPeer
{
public bool IsPeerFound { get; set; }
public string PeerId { get; set; } = string.Empty;
public string[] Addresses { get; set; } = Array.Empty<string>();
}
public class LocalDatasetList
{
public LocalDataset[] Content { get; set; } = Array.Empty<LocalDataset>();
}
public class LocalDataset
{
public ContentId Cid { get; set; } = new();
public Manifest Manifest { get; set; } = new();
}
public class Manifest
{
public string RootHash { get; set; } = string.Empty;
public ByteSize OriginalBytes { get; set; } = ByteSize.Zero;
public ByteSize BlockSize { get; set; } = ByteSize.Zero;
public bool Protected { get; set; }
}
public class SalesRequestStorageRequest
{
public string Duration { get; set; } = string.Empty;
public string ProofProbability { get; set; } = string.Empty;
public string Reward { get; set; } = string.Empty;
public string Collateral { get; set; } = string.Empty;
public string? Expiry { get; set; }
public uint? Nodes { get; set; }
public uint? Tolerance { get; set; }
}
public class ContentId
{
public ContentId()
{
Id = string.Empty;
}
public ContentId(string id)
{
Id = id;
}
public string Id { get; }
public override bool Equals(object? obj)
{
return obj is ContentId id && Id == id.Id;
}
public override int GetHashCode()
{
return HashCode.Combine(Id);
}
}
}
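The `ContentId` type above overrides `Equals` and `GetHashCode` so that two instances wrapping the same CID string compare equal (value semantics), which is what lets tests compare uploaded and listed CIDs directly. A quick sketch of that behaviour (demo class and CID string are illustrative):

```csharp
using System;

public class ContentIdDemo
{
    // Same shape as the ContentId in the diff: value equality on the Id string.
    public class ContentId
    {
        public ContentId(string id) { Id = id; }
        public string Id { get; }
        public override bool Equals(object? obj) => obj is ContentId id && Id == id.Id;
        public override int GetHashCode() => HashCode.Combine(Id);
    }

    public static void Main()
    {
        var a = new ContentId("zDvZRwzkvNpy4EexampleCid");
        var b = new ContentId("zDvZRwzkvNpy4EexampleCid");
        Console.WriteLine(a.Equals(b));                        // equal by value, not reference
        Console.WriteLine(a.GetHashCode() == b.GetHashCode()); // consistent with Equals
    }
}
```

Matching `GetHashCode` with `Equals` also makes `ContentId` safe to use as a dictionary or set key.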

View File

@@ -0,0 +1,172 @@
using CodexContractsPlugin;
using Newtonsoft.Json.Linq;
using System.Numerics;
using Utils;
namespace CodexPlugin
{
public class Mapper
{
public DebugInfo Map(CodexOpenApi.DebugInfo debugInfo)
{
return new DebugInfo
{
Id = debugInfo.Id,
Spr = debugInfo.Spr,
Addrs = debugInfo.Addrs.ToArray(),
AnnounceAddresses = JArray(debugInfo.AdditionalProperties, "announceAddresses").Select(x => x.ToString()).ToArray(),
Version = MapDebugInfoVersion(JObject(debugInfo.AdditionalProperties, "codex")),
Table = MapDebugInfoTable(JObject(debugInfo.AdditionalProperties, "table"))
};
}
public LocalDatasetList Map(CodexOpenApi.DataList dataList)
{
return new LocalDatasetList
{
Content = dataList.Content.Select(Map).ToArray()
};
}
public LocalDataset Map(CodexOpenApi.DataItem dataItem)
{
return new LocalDataset
{
Cid = new ContentId(dataItem.Cid),
Manifest = MapManifest(dataItem.Manifest)
};
}
public CodexOpenApi.SalesAvailabilityCREATE Map(StorageAvailability availability)
{
return new CodexOpenApi.SalesAvailabilityCREATE
{
Duration = ToDecInt(availability.MaxDuration.TotalSeconds),
MinPrice = ToDecInt(availability.MinPriceForTotalSpace),
MaxCollateral = ToDecInt(availability.MaxCollateral),
TotalSize = ToDecInt(availability.TotalSpace.SizeInBytes)
};
}
public CodexOpenApi.StorageRequestCreation Map(StoragePurchaseRequest purchase)
{
return new CodexOpenApi.StorageRequestCreation
{
Duration = ToDecInt(purchase.Duration.TotalSeconds),
ProofProbability = ToDecInt(purchase.ProofProbability),
Reward = ToDecInt(purchase.PricePerSlotPerSecond),
Collateral = ToDecInt(purchase.RequiredCollateral),
Expiry = ToDecInt(DateTimeOffset.UtcNow.ToUnixTimeSeconds() + purchase.Expiry.TotalSeconds),
Nodes = purchase.MinRequiredNumberOfNodes,
Tolerance = purchase.NodeFailureTolerance
};
}
public StoragePurchase Map(CodexOpenApi.Purchase purchase)
{
return new StoragePurchase
{
State = purchase.State,
Error = purchase.Error
};
}
public StorageAvailability Map(CodexOpenApi.SalesAvailabilityREAD read)
{
return new StorageAvailability(
totalSpace: new Utils.ByteSize(Convert.ToInt64(read.TotalSize)),
maxDuration: TimeSpan.FromSeconds(Convert.ToDouble(read.Duration)),
minPriceForTotalSpace: new TestToken(Convert.ToDecimal(read.MinPrice)),
maxCollateral: new TestToken(Convert.ToDecimal(read.MaxCollateral))
)
{
Id = read.Id
};
}
private DebugInfoVersion MapDebugInfoVersion(JObject obj)
{
return new DebugInfoVersion
{
Version = StringOrEmpty(obj, "version"),
Revision = StringOrEmpty(obj, "revision")
};
}
private DebugInfoTable MapDebugInfoTable(JObject obj)
{
return new DebugInfoTable
{
LocalNode = MapDebugInfoTableNode(obj.GetValue("localNode")),
Nodes = new DebugInfoTableNode[0]
};
}
private DebugInfoTableNode MapDebugInfoTableNode(JToken? token)
{
var obj = token as JObject;
if (obj == null) return new DebugInfoTableNode();
return new DebugInfoTableNode
{
Address = StringOrEmpty(obj, "address"),
NodeId = StringOrEmpty(obj, "nodeId"),
PeerId = StringOrEmpty(obj, "peerId"),
Record = StringOrEmpty(obj, "record"),
Seen = Bool(obj, "seen")
};
}
private Manifest MapManifest(CodexOpenApi.ManifestItem manifest)
{
return new Manifest
{
BlockSize = new ByteSize(Convert.ToInt64(manifest.BlockSize)),
OriginalBytes = new ByteSize(Convert.ToInt64(manifest.OriginalBytes)),
RootHash = manifest.RootHash,
Protected = manifest.Protected
};
}
private JArray JArray(IDictionary<string, object> map, string name)
{
return (JArray)map[name];
}
private JObject JObject(IDictionary<string, object> map, string name)
{
return (JObject)map[name];
}
private string StringOrEmpty(JObject obj, string name)
{
if (obj.TryGetValue(name, out var token))
{
var str = (string?)token;
if (!string.IsNullOrEmpty(str)) return str;
}
return string.Empty;
}
private bool Bool(JObject obj, string name)
{
if (obj.TryGetValue(name, out var token))
{
return (bool)token;
}
return false;
}
private string ToDecInt(double d)
{
var i = new BigInteger(d);
return i.ToString("D");
}
private string ToDecInt(TestToken t)
{
var i = new BigInteger(t.Amount);
return i.ToString("D");
}
}
}

View File

@ -7,7 +7,7 @@ namespace CodexPlugin
public interface IMarketplaceAccess
{
string MakeStorageAvailable(StorageAvailability availability);
StoragePurchaseContract RequestStorage(StoragePurchase purchase);
StoragePurchaseContract RequestStorage(StoragePurchaseRequest purchase);
}
public class MarketplaceAccess : IMarketplaceAccess
@ -21,14 +21,14 @@ namespace CodexPlugin
this.codexAccess = codexAccess;
}
public StoragePurchaseContract RequestStorage(StoragePurchase purchase)
public StoragePurchaseContract RequestStorage(StoragePurchaseRequest purchase)
{
purchase.Log(log);
var request = purchase.ToApiRequest();
var response = codexAccess.RequestStorage(request, purchase.ContentId.Id);
var response = codexAccess.RequestStorage(purchase);
if (response == "Purchasing not available" ||
if (string.IsNullOrEmpty(response) ||
response == "Purchasing not available" ||
response == "Expiry required" ||
response == "Expiry needs to be in future" ||
response == "Expiry has to be before the request's end (now + duration)")
@ -44,13 +44,12 @@ namespace CodexPlugin
public string MakeStorageAvailable(StorageAvailability availability)
{
availability.Log(log);
var request = availability.ToApiRequest();
var response = codexAccess.SalesAvailability(request);
var response = codexAccess.SalesAvailability(availability);
Log($"Storage successfully made available. Id: {response.id}");
Log($"Storage successfully made available. Id: {response.Id}");
return response.id;
return response.Id;
}
private void Log(string msg)
@ -67,7 +66,7 @@ namespace CodexPlugin
throw new NotImplementedException();
}
public StoragePurchaseContract RequestStorage(StoragePurchase purchase)
public StoragePurchaseContract RequestStorage(StoragePurchaseRequest purchase)
{
Unavailable();
throw new NotImplementedException();
@ -87,7 +86,7 @@ namespace CodexPlugin
private readonly TimeSpan gracePeriod = TimeSpan.FromSeconds(10);
private DateTime? contractStartUtc;
public StoragePurchaseContract(ILog log, CodexAccess codexAccess, string purchaseId, StoragePurchase purchase)
public StoragePurchaseContract(ILog log, CodexAccess codexAccess, string purchaseId, StoragePurchaseRequest purchase)
{
this.log = log;
this.codexAccess = codexAccess;
@ -96,7 +95,7 @@ namespace CodexPlugin
}
public string PurchaseId { get; }
public StoragePurchase Purchase { get; }
public StoragePurchaseRequest Purchase { get; }
public void WaitForStorageContractStarted()
{
@ -117,7 +116,7 @@ namespace CodexPlugin
WaitForStorageContractState(timeout, "finished");
}
public CodexStoragePurchase GetPurchaseStatus(string purchaseId)
public StoragePurchase GetPurchaseStatus(string purchaseId)
{
return codexAccess.GetPurchaseStatus(purchaseId);
}
@ -132,9 +131,9 @@ namespace CodexPlugin
{
var purchaseStatus = codexAccess.GetPurchaseStatus(PurchaseId);
var statusJson = JsonConvert.SerializeObject(purchaseStatus);
if (purchaseStatus != null && purchaseStatus.state != lastState)
if (purchaseStatus != null && purchaseStatus.State != lastState)
{
lastState = purchaseStatus.state;
lastState = purchaseStatus.State;
log.Debug("Purchase status: " + statusJson);
}

View File

@ -5,19 +5,15 @@ namespace CodexPlugin
{
public class MarketplaceInitialConfig
{
public MarketplaceInitialConfig(MarketplaceSetup marketplaceSetup, IGethNode gethNode, ICodexContracts codexContracts, Ether initialEth, TestToken initialTokens)
public MarketplaceInitialConfig(MarketplaceSetup marketplaceSetup, IGethNode gethNode, ICodexContracts codexContracts)
{
MarketplaceSetup = marketplaceSetup;
GethNode = gethNode;
CodexContracts = codexContracts;
InitialEth = initialEth;
InitialTokens = initialTokens;
}
public MarketplaceSetup MarketplaceSetup { get; }
public IGethNode GethNode { get; }
public ICodexContracts CodexContracts { get; }
public Ether InitialEth { get; }
public TestToken InitialTokens { get; }
}
}

View File

@ -1,17 +0,0 @@
using GethPlugin;
namespace CodexPlugin
{
[Serializable]
public class MarketplaceStartResults
{
public MarketplaceStartResults(EthAddress ethAddress, string privateKey)
{
EthAddress = ethAddress;
PrivateKey = privateKey;
}
public EthAddress EthAddress { get; }
public string PrivateKey { get; }
}
}

View File

@ -1,19 +0,0 @@
using GethPlugin;
using Nethereum.Hex.HexConvertors.Extensions;
using Nethereum.Web3.Accounts;
namespace CodexPlugin
{
public class MarketplaceStarter
{
public MarketplaceStartResults Start()
{
var ecKey = Nethereum.Signer.EthECKey.GenerateKey();
var privateKey = ecKey.GetPrivateKeyAsBytes().ToHex();
var account = new Account(privateKey);
var ethAddress = new EthAddress(account.Address);
return new MarketplaceStartResults(ethAddress, account.PrivateKey);
}
}
}

View File

@ -1,13 +1,12 @@
using CodexContractsPlugin;
using Logging;
using System.Numerics;
using Utils;
namespace CodexPlugin
{
public class StoragePurchase : MarketplaceType
public class StoragePurchaseRequest
{
public StoragePurchase(ContentId cid)
public StoragePurchaseRequest(ContentId cid)
{
ContentId = cid;
}
@ -21,20 +20,6 @@ namespace CodexPlugin
public TimeSpan Duration { get; set; }
public TimeSpan Expiry { get; set; }
public CodexSalesRequestStorageRequest ToApiRequest()
{
return new CodexSalesRequestStorageRequest
{
duration = ToDecInt(Duration.TotalSeconds),
proofProbability = ToDecInt(ProofProbability),
reward = ToDecInt(PricePerSlotPerSecond),
collateral = ToDecInt(RequiredCollateral),
expiry = ToDecInt(DateTimeOffset.UtcNow.ToUnixTimeSeconds() + Expiry.TotalSeconds),
nodes = MinRequiredNumberOfNodes,
tolerance = NodeFailureTolerance
};
}
public void Log(ILog log)
{
log.Log($"Requesting storage for: {ContentId.Id}... (" +
@ -48,7 +33,13 @@ namespace CodexPlugin
}
}
public class StorageAvailability : MarketplaceType
public class StoragePurchase
{
public string State { get; set; } = string.Empty;
public string Error { get; set; } = string.Empty;
}
public class StorageAvailability
{
public StorageAvailability(ByteSize totalSpace, TimeSpan maxDuration, TestToken minPriceForTotalSpace, TestToken maxCollateral)
{
@ -58,44 +49,19 @@ namespace CodexPlugin
MaxCollateral = maxCollateral;
}
public string Id { get; set; } = string.Empty;
public ByteSize TotalSpace { get; }
public TimeSpan MaxDuration { get; }
public TestToken MinPriceForTotalSpace { get; }
public TestToken MaxCollateral { get; }
public CodexSalesAvailabilityRequest ToApiRequest()
{
return new CodexSalesAvailabilityRequest
{
size = ToDecInt(TotalSpace.SizeInBytes),
duration = ToDecInt(MaxDuration.TotalSeconds),
maxCollateral = ToDecInt(MaxCollateral),
minPrice = ToDecInt(MinPriceForTotalSpace)
};
}
public void Log(ILog log)
{
log.Log($"Making storage available... (" +
$"size: {TotalSpace}, " +
$"totalSize: {TotalSpace}, " +
$"maxDuration: {Time.FormatDuration(MaxDuration)}, " +
$"minPriceForTotalSpace: {MinPriceForTotalSpace}, " +
$"maxCollateral: {MaxCollateral})");
}
}
public abstract class MarketplaceType
{
protected string ToDecInt(double d)
{
var i = new BigInteger(d);
return i.ToString("D");
}
protected string ToDecInt(TestToken t)
{
var i = new BigInteger(t.Amount);
return i.ToString("D");
}
}
}

View File

@ -0,0 +1,720 @@
openapi: 3.0.3
info:
version: 0.0.1
title: Codex API
description: "List of endpoints and interfaces available to Codex API users"
security:
- { }
components:
schemas:
MultiAddress:
type: string
description: Address of node as specified by the multi-address specification https://multiformats.io/multiaddr/
example: /ip4/127.0.0.1/tcp/8080
PeerId:
type: string
description: Peer Identity reference as specified at https://docs.libp2p.io/concepts/fundamentals/peers/
example: QmYyQSo1c1Ym7orWxLYvCrM2EmxFTANf8wXmmE7DWjhx5N
Id:
type: string
description: 32-bit identifier encoded as a hexadecimal string.
example: 0x...
BigInt:
type: string
description: Integer represented as decimal string
Cid:
type: string
description: Content Identifier as specified at https://github.com/multiformats/cid
example: QmYyQSo1c1Ym7orWxLYvCrM2EmxFTANf8wXmmE7DWjhx5N
SlotId:
type: string
description: Keccak hash of the abi encoded tuple (RequestId, slot index)
example: 268a781e0db3f7cf36b18e5f4fdb7f586ec9edd08e5500b17c0e518a769f114a
LogLevel:
type: string
description: "One of the log levels: TRACE, DEBUG, INFO, NOTICE, WARN, ERROR or FATAL"
example: DEBUG
EthereumAddress:
type: string
description: An Ethereum address
Reward:
type: string
description: The maximum amount of tokens the client is willing to pay per second per slot to hosts
Duration:
type: string
description: The duration of the request in seconds as decimal string
ProofProbability:
type: string
description: How often storage proofs are required as decimal string
Expiry:
type: string
description: A timestamp, as seconds since Unix epoch, at which this request expires if it has not found the requested number of nodes to host the data.
default: 10 minutes
ErasureParameters:
type: object
properties:
totalChunks:
type: number
PoRParameters:
description: Parameters for Proof of Retrievability
type: object
properties:
u:
type: string
publicKey:
type: string
name:
type: string
Content:
type: object
description: Parameters specifying the content
properties:
cid:
$ref: "#/components/schemas/Cid"
erasure:
$ref: "#/components/schemas/ErasureParameters"
por:
$ref: "#/components/schemas/PoRParameters"
DebugInfo:
type: object
properties:
id:
$ref: "#/components/schemas/PeerId"
addrs:
type: array
items:
$ref: "#/components/schemas/MultiAddress"
repo:
type: string
description: Path of the data repository where all node data is stored
spr:
type: string
description: Signed Peer Record to advertise DHT connection information
SalesAvailability:
type: object
properties:
id:
$ref: "#/components/schemas/Id"
totalSize:
type: string
description: Total size of availability's storage in bytes as decimal string
duration:
$ref: "#/components/schemas/Duration"
minPrice:
type: string
description: Minimum price to be paid (in amount of tokens) as decimal string
maxCollateral:
type: string
description: Maximum collateral the user is willing to pay per filled Slot (in amount of tokens) as decimal string
SalesAvailabilityREAD:
allOf:
- $ref: "#/components/schemas/SalesAvailability"
- type: object
properties:
freeSize:
type: string
description: Unused size of availability's storage in bytes as decimal string
SalesAvailabilityCREATE:
allOf:
- $ref: "#/components/schemas/SalesAvailability"
- required:
- totalSize
- minPrice
- maxCollateral
- duration
Slot:
type: object
properties:
id:
$ref: "#/components/schemas/SlotId"
request:
$ref: "#/components/schemas/StorageRequest"
slotIndex:
type: string
description: Slot Index as hexadecimal string
Reservation:
type: object
properties:
id:
$ref: "#/components/schemas/Id"
availabilityId:
$ref: "#/components/schemas/Id"
size:
$ref: "#/components/schemas/BigInt"
requestId:
$ref: "#/components/schemas/Id"
slotIndex:
type: string
description: Slot Index as hexadecimal string
StorageRequestCreation:
type: object
required:
- reward
- duration
- proofProbability
- collateral
- expiry
properties:
duration:
$ref: "#/components/schemas/Duration"
reward:
$ref: "#/components/schemas/Reward"
proofProbability:
$ref: "#/components/schemas/ProofProbability"
nodes:
type: number
description: Minimum number of nodes the content should be stored on
default: 1
tolerance:
type: number
description: Additional number of nodes on top of the `nodes` property that can be lost before pronouncing the content lost
default: 0
collateral:
type: string
description: Number as decimal string that represents how much collateral is asked from hosts that want to fill a slot
expiry:
type: string
description: Number as decimal string that represents expiry time of the request (in unix timestamp)
StorageAsk:
type: object
required:
- reward
properties:
slots:
type: number
description: Number of slots (i.e. hosts) that the Request wants the content spread over
slotSize:
type: string
description: Amount of storage per slot (in bytes) as decimal string
duration:
$ref: "#/components/schemas/Duration"
proofProbability:
$ref: "#/components/schemas/ProofProbability"
reward:
$ref: "#/components/schemas/Reward"
maxSlotLoss:
type: number
description: Maximum number of slots that can be lost without the data being considered lost
StorageRequest:
type: object
properties:
id:
type: string
description: Request ID
client:
$ref: "#/components/schemas/EthereumAddress"
ask:
$ref: "#/components/schemas/StorageAsk"
content:
$ref: "#/components/schemas/Content"
expiry:
$ref: "#/components/schemas/Expiry"
nonce:
type: string
description: Random data
Purchase:
type: object
properties:
state:
type: string
description: Description of the Request's state
error:
type: string
description: If the Request failed, the error message is presented here
request:
$ref: "#/components/schemas/StorageRequest"
DataList:
type: object
properties:
content:
type: array
items:
$ref: "#/components/schemas/DataItem"
DataItem:
type: object
properties:
cid:
$ref: "#/components/schemas/Cid"
manifest:
$ref: "#/components/schemas/ManifestItem"
ManifestItem:
type: object
properties:
rootHash:
$ref: "#/components/schemas/Cid"
description: "Root hash of the content"
originalBytes:
type: number
description: "Length of original content in bytes"
blockSize:
type: number
description: "Size of blocks"
protected:
type: boolean
description: "Indicates if content is protected by erasure-coding"
Space:
type: object
properties:
totalBlocks:
type: number
description: "Number of blocks stored by the node"
quotaMaxBytes:
type: number
description: "Maximum storage space used by the node"
quotaUsedBytes:
type: number
description: "Amount of storage space currently in use"
quotaReservedBytes:
type: number
description: "Amount of storage space reserved"
servers:
- url: "http://localhost:8080/api/codex/v1"
tags:
- name: Marketplace
description: Marketplace information and operations
- name: Data
description: Data operations
- name: Node
description: Node management
- name: Debug
description: Debugging configuration
paths:
"/connect/{peerId}":
get:
summary: "Connect to a peer"
description: |
If the `addrs` param is supplied, it will be used to dial the peer; otherwise the `peerId` is used
to invoke peer discovery. If discovery succeeds, the returned addresses will be used to dial.
tags: [ Node ]
operationId: connectPeer
parameters:
- in: path
name: peerId
required: true
schema:
$ref: "#/components/schemas/PeerId"
description: Peer that should be dialed.
- in: query
name: addrs
schema:
type: array
nullable: true
items:
$ref: "#/components/schemas/MultiAddress"
description: |
If supplied, it will be used to dial the peer.
The address has to target the listening address of the peer,
which is specified with the `--listen-addrs` CLI flag.
responses:
"200":
description: Successfully connected to peer
"400":
description: Peer either not found or was not possible to dial
"/data":
get:
summary: "Lists manifest CIDs stored locally in node."
tags: [ Data ]
operationId: listData
responses:
"200":
description: Retrieved list of content CIDs
content:
application/json:
schema:
$ref: "#/components/schemas/DataList"
"400":
description: Invalid CID is specified
"404":
description: Content specified by the CID is not found
"500":
description: Well it was bad-bad
post:
summary: "Upload a file in a streaming manner. Once finished, the file is stored in the node and can be retrieved by any node in the network using the returned CID."
tags: [ Data ]
operationId: upload
requestBody:
content:
application/octet-stream:
schema:
type: string
format: binary
responses:
"200":
description: CID of uploaded file
content:
text/plain:
schema:
type: string
"500":
description: Well it was bad-bad and the upload did not work out
"/data/{cid}":
get:
summary: "Download a file from the local node in a streaming manner. If the file is not available locally, a 404 is returned."
tags: [ Data ]
operationId: downloadLocal
parameters:
- in: path
name: cid
required: true
schema:
$ref: "#/components/schemas/Cid"
description: File to be downloaded.
responses:
"200":
description: Retrieved content specified by CID
content:
application/octet-stream:
schema:
type: string
format: binary
"400":
description: Invalid CID is specified
"404":
description: Content specified by the CID is unavailable locally
"500":
description: Well it was bad-bad
"/data/{cid}/network":
get:
summary: "Download a file from the network in a streaming manner. If the file is not available locally, it will be retrieved from other nodes in the network if able."
tags: [ Data ]
operationId: downloadNetwork
parameters:
- in: path
name: cid
required: true
schema:
$ref: "#/components/schemas/Cid"
description: "File to be downloaded."
responses:
"200":
description: Retrieved content specified by CID
content:
application/octet-stream:
schema:
type: string
format: binary
"400":
description: Invalid CID is specified
"404":
description: Content specified by the CID is not found
"500":
description: Well it was bad-bad
"/space":
get:
summary: "Gets a summary of the storage space allocation of the node."
tags: [ Data ]
operationId: space
responses:
"200":
description: "Summary of storage allocation"
content:
application/json:
schema:
$ref: "#/components/schemas/Space"
"500":
description: "It's not working as planned"
"/sales/slots":
get:
summary: "Returns active slots"
tags: [ Marketplace ]
operationId: getActiveSlots
responses:
"200":
description: Retrieved active slots
content:
application/json:
schema:
type: array
items:
$ref: "#/components/schemas/Slot"
"503":
description: Sales are unavailable
"/sales/slots/{slotId}":
get:
summary: "Returns active slot with id {slotId} for the host"
tags: [ Marketplace ]
operationId: getActiveSlotById
parameters:
- in: path
name: slotId
required: true
schema:
$ref: "#/components/schemas/Cid"
description: File to be downloaded.
responses:
"200":
description: Retrieved active slot
content:
application/json:
schema:
$ref: "#/components/schemas/Slot"
"400":
description: Invalid or missing SlotId
"404":
description: Host is not in an active sale for the slot
"503":
description: Sales are unavailable
"/sales/availability":
get:
summary: "Returns storage that is for sale"
tags: [ Marketplace ]
operationId: getOfferedStorage
responses:
"200":
description: Retrieved storage availabilities of the node
content:
application/json:
schema:
type: array
items:
$ref: "#/components/schemas/SalesAvailability"
"500":
description: Error getting unused availabilities
"503":
description: Sales are unavailable
post:
summary: "Offers storage for sale"
operationId: offerStorage
tags: [ Marketplace ]
requestBody:
content:
application/json:
schema:
$ref: "#/components/schemas/SalesAvailabilityCREATE"
responses:
"201":
description: Created storage availability
content:
application/json:
schema:
$ref: "#/components/schemas/SalesAvailabilityREAD"
"400":
description: Invalid data input
"422":
description: Not enough storage quota available on the node
"500":
description: Error reserving availability
"503":
description: Sales are unavailable
"/sales/availability/{id}":
patch:
summary: "Updates availability"
description: |
The new parameters will be only considered for new requests.
Existing Requests linked to this Availability will continue as is.
operationId: updateOfferedStorage
tags: [ Marketplace ]
parameters:
- in: path
name: id
required: true
schema:
type: string
description: ID of Availability
requestBody:
content:
application/json:
schema:
$ref: "#/components/schemas/SalesAvailability"
responses:
"204":
description: Availability successfully updated
"400":
description: Invalid data input
"404":
description: Availability not found
"422":
description: Not enough storage quota available on the node
"500":
description: Error reserving availability
"503":
description: Sales are unavailable
"/sales/availability/{id}/reservations":
patch:
summary: "Get availability's reservations"
description: Returns a list of Reservations for ongoing Storage Requests that the node hosts.
operationId: getReservations
tags: [ Marketplace ]
parameters:
- in: path
name: id
required: true
schema:
type: string
description: ID of Availability
responses:
"200":
description: Retrieved storage availabilities of the node
content:
application/json:
schema:
type: array
items:
$ref: "#/components/schemas/Reservation"
"400":
description: Invalid Availability ID
"404":
description: Availability not found
"500":
description: Error getting reservations
"503":
description: Sales are unavailable
"/storage/request/{cid}":
post:
summary: "Creates a new Request for storage"
tags: [ Marketplace ]
operationId: createStorageRequest
parameters:
- in: path
name: cid
required: true
schema:
$ref: "#/components/schemas/Cid"
description: CID of the uploaded data that should be stored
requestBody:
content:
application/json:
schema:
$ref: "#/components/schemas/StorageRequestCreation"
responses:
"200":
description: Returns the Request ID as decimal string
content:
text/plain:
schema:
type: string
"400":
description: Invalid or missing Request ID
"404":
description: Request ID not found
"503":
description: Purchasing is unavailable
"/storage/purchases":
get:
summary: "Returns list of purchase IDs"
tags: [ Marketplace ]
operationId: getPurchases
responses:
"200":
description: Gets all purchase IDs stored in node
content:
application/json:
schema:
type: array
items:
type: string
"503":
description: Purchasing is unavailable
"/storage/purchases/{id}":
get:
summary: "Returns purchase details"
tags: [ Marketplace ]
operationId: getPurchase
parameters:
- in: path
name: id
required: true
schema:
type: string
description: Hexadecimal ID of a Purchase
responses:
"200":
description: Purchase details
content:
application/json:
schema:
$ref: "#/components/schemas/Purchase"
"400":
description: Invalid or missing Purchase ID
"404":
description: Purchase not found
"503":
description: Purchasing is unavailable
"/debug/chronicles/loglevel":
post:
summary: "Set log level at run time"
tags: [ Debug ]
operationId: setDebugLogLevel
parameters:
- in: query
name: level
required: true
schema:
$ref: "#/components/schemas/LogLevel"
responses:
"200":
description: Log level successfully set
"400":
description: Invalid or missing log level
"500":
description: Well it was bad-bad
"/debug/info":
get:
summary: "Gets node information"
operationId: getDebugInfo
tags: [ Debug ]
responses:
"200":
description: Node's information
content:
application/json:
schema:
$ref: "#/components/schemas/DebugInfo"
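The upload and storage-request endpoints specified above can be exercised together from a shell. This is an illustrative sketch only: the base URL comes from the spec's `servers` entry, `example.bin` and all field values are made-up placeholders, and it assumes a Codex node is actually listening on that address.

```shell
# Upload a file; the node answers with the CID as plain text.
CID=$(curl -s -X POST "http://localhost:8080/api/codex/v1/data" \
  -H "Content-Type: application/octet-stream" \
  --data-binary @example.bin)

# Expiry is a unix timestamp that must lie in the future,
# before the request's end (now + duration).
EXPIRY=$(( $(date +%s) + 600 ))

# Create a storage request for the CID, matching StorageRequestCreation.
curl -s -X POST "http://localhost:8080/api/codex/v1/storage/request/$CID" \
  -H "Content-Type: application/json" \
  -d "{
    \"duration\": \"3600\",
    \"reward\": \"1\",
    \"proofProbability\": \"5\",
    \"collateral\": \"1\",
    \"expiry\": \"$EXPIRY\",
    \"nodes\": 3,
    \"tolerance\": 1
  }"
```

On success the second call returns the Request ID as a decimal string; the error strings matched in `MarketplaceAccess.RequestStorage` ("Expiry required", "Expiry needs to be in future", ...) are the plain-text bodies this endpoint returns for invalid input.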

View File

@ -0,0 +1,10 @@
<Project Sdk="Microsoft.NET.Sdk">
<PropertyGroup>
<OutputType>Exe</OutputType>
<TargetFramework>net7.0</TargetFramework>
<ImplicitUsings>enable</ImplicitUsings>
<Nullable>enable</Nullable>
</PropertyGroup>
</Project>

View File

@ -0,0 +1,61 @@
using System.Security.Cryptography;
using System.Text;
public static class Program
{
private const string OpenApiFile = "../CodexPlugin/openapi.yaml";
private const string Search = "<INSERT-OPENAPI-YAML-HASH>";
private const string TargetFile = "ApiChecker.cs";
public static void Main(string[] args)
{
Console.WriteLine("Injecting hash of 'openapi.yaml'...");
var hash = CreateHash();
// This hash is used to verify that the Codex docker image being used is compatible
// with the openapi.yaml being used by the Codex plugin.
// If the openapi.yaml files don't match, an exception is thrown.
SearchAndInject(hash);
// This program runs as the pre-build trigger for "CodexPlugin".
// You might be wondering why this work isn't done by a shell script.
// This is because this project is being run on many different platforms.
// (Mac, Unix, Win, but also desktop/cloud containers.)
// In order to not go insane trying to make a shell script that works in all possible cases,
// instead we use the one tool that's definitely installed in all platforms and locations
// when you're trying to run this plugin.
Console.WriteLine("Done!");
}
private static string CreateHash()
{
var file = File.ReadAllText(OpenApiFile);
var fileBytes = Encoding.ASCII.GetBytes(file
.Replace(Environment.NewLine, ""));
var sha = SHA256.Create();
var hash = sha.ComputeHash(fileBytes);
return BitConverter.ToString(hash);
}
private static void SearchAndInject(string hash)
{
var lines = File.ReadAllLines(TargetFile);
Inject(lines, hash);
File.WriteAllLines(TargetFile, lines);
}
private static void Inject(string[] lines, string hash)
{
for (var i = 0; i < lines.Length; i++)
{
if (lines[i].Contains(Search))
{
lines[i + 1] = $" private const string OpenApiYamlHash = \"{hash}\";";
return;
}
}
}
}
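The same hash value can be reproduced outside the build, which is handy when diagnosing a mismatch between the plugin's `openapi.yaml` and the one baked into a Codex image. A sketch, assuming `sha256sum` is available: newlines are stripped (as `file.Replace(Environment.NewLine, "")` does above; `\r` is included to cover Windows line endings), and the digest is formatted as uppercase hex pairs joined by dashes, mirroring `BitConverter.ToString`.

```shell
# Dashed, uppercase SHA-256 of openapi.yaml with newlines removed,
# matching BitConverter.ToString(SHA256.ComputeHash(...)) in the C# above.
tr -d '\r\n' < ../CodexPlugin/openapi.yaml \
  | sha256sum | cut -d' ' -f1 \
  | tr 'a-f' 'A-F' \
  | sed 's/../&-/g; s/-$//'
```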

View File

@ -0,0 +1,28 @@
using Nethereum.Hex.HexConvertors.Extensions;
using Nethereum.Web3.Accounts;
namespace GethPlugin
{
[Serializable]
public class EthAccount
{
public EthAccount(EthAddress ethAddress, string privateKey)
{
EthAddress = ethAddress;
PrivateKey = privateKey;
}
public EthAddress EthAddress { get; }
public string PrivateKey { get; }
public static EthAccount GenerateNew()
{
var ecKey = Nethereum.Signer.EthECKey.GenerateKey();
var privateKey = ecKey.GetPrivateKeyAsBytes().ToHex();
var account = new Account(privateKey);
var ethAddress = new EthAddress(account.Address);
return new EthAccount(ethAddress, account.PrivateKey);
}
}
}

View File

@ -5,6 +5,7 @@
EthAddress EthAddress { get; }
}
[Serializable]
public class EthAddress
{
public EthAddress(string address)

View File

@ -7,14 +7,16 @@ namespace MetricsPlugin
{
public class MetricsQuery
{
private readonly IHttp http;
private readonly IEndpoint endpoint;
private readonly ILog log;
public MetricsQuery(IPluginTools tools, RunningContainer runningContainer)
{
RunningContainer = runningContainer;
log = tools.GetLog();
http = tools.CreateHttp(RunningContainer.GetAddress(log, PrometheusContainerRecipe.PortTag), "api/v1");
endpoint = tools
.CreateHttp()
.CreateEndpoint(RunningContainer.GetAddress(log, PrometheusContainerRecipe.PortTag), "/api/v1/");
}
public RunningContainer RunningContainer { get; }
@ -51,7 +53,7 @@ namespace MetricsPlugin
public Metrics? GetAllMetricsForNode(IMetricsScrapeTarget target)
{
var response = http.HttpGetJson<PrometheusQueryResponse>($"query?query={GetInstanceStringForNode(target)}{GetQueryTimeRange()}");
var response = endpoint.HttpGetJson<PrometheusQueryResponse>($"query?query={GetInstanceStringForNode(target)}{GetQueryTimeRange()}");
if (response.status != "success") return null;
var result = MapResponseToMetrics(response);
Log(target, result);
@ -60,14 +62,14 @@ namespace MetricsPlugin
private PrometheusQueryResponse? GetLastOverTime(string metricName, string instanceString)
{
var response = http.HttpGetJson<PrometheusQueryResponse>($"query?query=last_over_time({metricName}{instanceString}{GetQueryTimeRange()})");
var response = endpoint.HttpGetJson<PrometheusQueryResponse>($"query?query=last_over_time({metricName}{instanceString}{GetQueryTimeRange()})");
if (response.status != "success") return null;
return response;
}
private PrometheusQueryResponse? GetAll(string metricName)
{
var response = http.HttpGetJson<PrometheusQueryResponse>($"query?query={metricName}{GetQueryTimeRange()}");
var response = endpoint.HttpGetJson<PrometheusQueryResponse>($"query?query={metricName}{GetQueryTimeRange()}");
if (response.status != "success") return null;
return response;
}

View File

@ -1,15 +1,22 @@
# Distributed System Tests for Nim-Codex
# Distributed System Tests
Using a common dotnet unit-test framework and a few other libraries, this project allows you to write tests that use multiple Codex node instances in various configurations to test the distributed system in a controlled, reproducible environment.
This project allows you to write tools and tests that control and interact with container-based applications to form a distributed system in a controlled, reproducible environment.
Nim-Codex: https://github.com/codex-storage/nim-codex
Dotnet: v7.0
Kubernetes: v1.25.4
Dotnet-kubernetes SDK: v10.1.4 https://github.com/kubernetes-client/csharp
Nethereum: v4.14.0
Currently, this project is mainly used for distributed testing of [Nim-Codex](https://github.com/codex-storage/nim-codex). However, its plugin structure allows other projects to be onboarded (relatively) easily. (See 'contribute a plugin'.)
## Tests/DistTestCore
Library with generic distributed-testing functionality. Uses NUnit3. Reference this project to build unit-test style scenarios: setup, run test, teardown. The DistTestCore responds to the following env-vars:
- `LOGPATH` = Path where log files will be written.
- `DATAFILEPATH` = Path where (temporary) data files will be stored.
- `ALWAYS_LOGS` = When set, DistTestCore will always download all container logs at the end of a test run. By default, logs are only downloaded on test failure.
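As a sketch of how these variables are typically combined (the paths and test filter below are illustrative, not taken from the repository):

```shell
# LOGPATH, DATAFILEPATH and ALWAYS_LOGS are the variables DistTestCore reads;
# the concrete values and the NUnit filter are made-up examples.
export LOGPATH=/tmp/disttest/logs
export DATAFILEPATH=/tmp/disttest/data
export ALWAYS_LOGS=1   # download all container logs even when the test passes

dotnet test Tests/CodexTests --filter "FullyQualifiedName~BasicTests"
```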
## Tests/CodexTests and Tests/CodexLongTests
These are test assemblies that use NUnit3 to perform tests against transient Codex nodes.
These are test assemblies that use DistTestCore to perform tests against transient Codex nodes.
Read more [HERE](/Tests/CodexTests/README.md)
## Tests/ContinuousTests

View File

@ -11,7 +11,7 @@ namespace CodexLongTests.BasicTests
{
var group = AddCodex(1000, s => s.EnableMetrics());
var nodeIds = group.Select(n => n.GetDebugInfo().id).ToArray();
var nodeIds = group.Select(n => n.GetDebugInfo().Id).ToArray();
Assert.That(nodeIds.Length, Is.EqualTo(nodeIds.Distinct().Count()),
"Not all created nodes provided a unique id.");
@ -24,7 +24,7 @@ namespace CodexLongTests.BasicTests
{
var n = AddCodex();
Assert.That(!string.IsNullOrEmpty(n.GetDebugInfo().id));
Assert.That(!string.IsNullOrEmpty(n.GetDebugInfo().Id));
}
}
}

View File

@ -1,63 +0,0 @@
using CodexPlugin;
using NUnit.Framework;
using Utils;
namespace CodexTests.BasicTests
{
[TestFixture]
public class BlockExchangeTests : CodexDistTest
{
[Test]
public void EmptyAfterExchange()
{
var bootstrap = AddCodex(s => s.WithName("bootstrap"));
var node = AddCodex(s => s.WithName("node").WithBootstrapNode(bootstrap));
AssertExchangeIsEmpty(bootstrap, node);
var file = GenerateTestFile(1.MB());
var cid = bootstrap.UploadFile(file);
node.DownloadContent(cid);
AssertExchangeIsEmpty(bootstrap, node);
}
[Test]
public void EmptyAfterExchangeWithBystander()
{
var bootstrap = AddCodex(s => s.WithName("bootstrap"));
var node = AddCodex(s => s.WithName("node").WithBootstrapNode(bootstrap));
var bystander = AddCodex(s => s.WithName("bystander").WithBootstrapNode(bootstrap));
AssertExchangeIsEmpty(bootstrap, node, bystander);
var file = GenerateTestFile(1.MB());
var cid = bootstrap.UploadFile(file);
node.DownloadContent(cid);
AssertExchangeIsEmpty(bootstrap, node, bystander);
}
private void AssertExchangeIsEmpty(params ICodexNode[] nodes)
{
foreach (var node in nodes)
{
// API Call not available in master-line Codex image.
//Time.Retry(() => AssertBlockExchangeIsEmpty(node), nameof(AssertExchangeIsEmpty));
}
}
//private void AssertBlockExchangeIsEmpty(ICodexNode node)
//{
// var msg = $"BlockExchange for {node.GetName()}: ";
// var response = node.GetDebugBlockExchange();
// foreach (var peer in response.peers)
// {
// var activeWants = peer.wants.Where(w => !w.cancel).ToArray();
// Assert.That(activeWants.Length, Is.EqualTo(0), msg + "thinks a peer has active wants.");
// }
// Assert.That(response.taskQueue, Is.EqualTo(0), msg + "has tasks in queue.");
// Assert.That(response.pendingBlocks, Is.EqualTo(0), msg + "has pending blocks.");
//}
}
}

View File

@ -21,7 +21,8 @@ namespace CodexTests.BasicTests
var group = AddCodex(5, o => o
.EnableMetrics()
.EnableMarketplace(geth, contract, 10.Eth(), 100000.TestTokens(), s => s
.EnableMarketplace(geth, contract, s => s
.WithInitial(10.Eth(), 100000.TestTokens())
.AsStorageNode()
.AsValidator())
.WithBlockTTL(TimeSpan.FromMinutes(5))
@ -103,7 +104,7 @@ namespace CodexTests.BasicTests
private void CheckRoutingTables(IEnumerable<ICodexNode> nodes)
{
var all = nodes.ToArray();
var allIds = all.Select(n => n.GetDebugInfo().table.localNode.nodeId).ToArray();
var allIds = all.Select(n => n.GetDebugInfo().Table.LocalNode.NodeId).ToArray();
var errors = all.Select(n => AreAllPresent(n, allIds)).Where(s => !string.IsNullOrEmpty(s)).ToArray();
@ -116,8 +117,8 @@ namespace CodexTests.BasicTests
private string AreAllPresent(ICodexNode n, string[] allIds)
{
var info = n.GetDebugInfo();
var known = info.table.nodes.Select(n => n.nodeId).ToArray();
var expected = allIds.Where(i => i != info.table.localNode.nodeId).ToArray();
var known = info.Table.Nodes.Select(n => n.NodeId).ToArray();
var expected = allIds.Where(i => i != info.Table.LocalNode.NodeId).ToArray();
if (!expected.All(ex => known.Contains(ex)))
{

View File

@ -25,9 +25,13 @@ namespace CodexTests.BasicTests
var seller = AddCodex(s => s
.WithName("Seller")
.WithLogLevel(CodexLogLevel.Trace, new CodexLogCustomTopics(CodexLogLevel.Error, CodexLogLevel.Error, CodexLogLevel.Warn))
.WithLogLevel(CodexLogLevel.Trace, new CodexLogCustomTopics(CodexLogLevel.Error, CodexLogLevel.Error, CodexLogLevel.Warn)
{
ContractClock = CodexLogLevel.Trace,
})
.WithStorageQuota(11.GB())
.EnableMarketplace(geth, contracts, initialEth: 10.Eth(), initialTokens: sellerInitialBalance, s => s
.EnableMarketplace(geth, contracts, m => m
.WithInitial(10.Eth(), sellerInitialBalance)
.AsStorageNode()
.AsValidator()));
@ -44,9 +48,10 @@ namespace CodexTests.BasicTests
var testFile = GenerateTestFile(fileSize);
var buyer = AddCodex(s => s
.WithName("Buyer")
.WithBootstrapNode(seller)
.EnableMarketplace(geth, contracts, initialEth: 10.Eth(), initialTokens: buyerInitialBalance));
.WithName("Buyer")
.WithBootstrapNode(seller)
.EnableMarketplace(geth, contracts, m => m
.WithInitial(10.Eth(), buyerInitialBalance)));
AssertBalance(contracts, buyer, Is.EqualTo(buyerInitialBalance));
@ -88,7 +93,7 @@ namespace CodexTests.BasicTests
var contentId = buyer.UploadFile(testFile);
var purchase = new StoragePurchase(contentId)
var purchase = new StoragePurchaseRequest(contentId)
{
PricePerSlotPerSecond = 2.TestTokens(),
RequiredCollateral = 10.TestTokens(),
@ -134,7 +139,7 @@ namespace CodexTests.BasicTests
Assert.That(discN, Is.LessThan(bootN));
}
private void AssertSlotFilledEvents(ICodexContracts contracts, StoragePurchase purchase, Request request, ICodexNode seller)
private void AssertSlotFilledEvents(ICodexContracts contracts, StoragePurchaseRequest purchase, Request request, ICodexNode seller)
{
// Expect 1 fulfilled event for the purchase.
var requestFulfilledEvents = contracts.GetRequestFulfilledEvents(GetTestRunTimeRange());
@ -152,7 +157,7 @@ namespace CodexTests.BasicTests
}
}
private void AssertStorageRequest(Request request, StoragePurchase purchase, ICodexContracts contracts, ICodexNode buyer)
private void AssertStorageRequest(Request request, StoragePurchaseRequest purchase, ICodexContracts contracts, ICodexNode buyer)
{
Assert.That(contracts.GetRequestState(request), Is.EqualTo(RequestState.Started));
Assert.That(request.ClientAddress, Is.EqualTo(buyer.EthAddress));

View File

@ -21,7 +21,8 @@ namespace CodexTests.DownloadConnectivityTests
{
var geth = Ci.StartGethNode(s => s.IsMiner());
var contracts = Ci.StartCodexContracts(geth);
AddCodex(2, s => s.EnableMarketplace(geth, contracts, 10.Eth(), 1000.TestTokens()));
AddCodex(2, s => s.EnableMarketplace(geth, contracts, m => m
.WithInitial(10.Eth(), 1000.TestTokens())));
AssertAllNodesConnected();
}

View File

@ -31,6 +31,8 @@ namespace CodexTests.Helpers
private void AssertFullyConnected(ICodexNode[] nodes)
{
Log($"Asserting '{implementation.Description()}' for nodes: '{string.Join(",", nodes.Select(n => n.GetName()))}'...");
Assert.That(nodes.Length, Is.GreaterThan(1));
var entries = CreateEntries(nodes);
var pairs = CreatePairs(entries);
@ -114,12 +116,12 @@ namespace CodexTests.Helpers
}
public ICodexNode Node { get; }
public CodexDebugResponse Response { get; }
public DebugInfo Response { get; }
public override string ToString()
{
if (Response == null || string.IsNullOrEmpty(Response.id)) return "UNKNOWN";
return Response.id;
if (Response == null || string.IsNullOrEmpty(Response.Id)) return "UNKNOWN";
return Response.Id;
}
}

View File

@ -26,12 +26,12 @@ namespace CodexTests.Helpers
public string ValidateEntry(Entry entry, Entry[] allEntries)
{
var result = string.Empty;
foreach (var peer in entry.Response.table.nodes)
foreach (var peer in entry.Response.Table.Nodes)
{
var expected = GetExpectedDiscoveryEndpoint(allEntries, peer);
if (expected != peer.address)
if (expected != peer.Address)
{
result += $"Node:{entry.Node.GetName()} has incorrect peer table entry. Was: '{peer.address}', expected: '{expected}'. ";
result += $"Node:{entry.Node.GetName()} has incorrect peer table entry. Was: '{peer.Address}', expected: '{expected}'. ";
}
}
return result;
@ -39,24 +39,24 @@ namespace CodexTests.Helpers
public PeerConnectionState Check(Entry from, Entry to)
{
var peerId = to.Response.id;
var peerId = to.Response.Id;
var response = from.Node.GetDebugPeer(peerId);
if (!response.IsPeerFound)
{
return PeerConnectionState.NoConnection;
}
if (!string.IsNullOrEmpty(response.peerId) && response.addresses.Any())
if (!string.IsNullOrEmpty(response.PeerId) && response.Addresses.Any())
{
return PeerConnectionState.Connection;
}
return PeerConnectionState.Unknown;
}
private static string GetExpectedDiscoveryEndpoint(Entry[] allEntries, CodexDebugTableNodeResponse node)
private static string GetExpectedDiscoveryEndpoint(Entry[] allEntries, DebugInfoTableNode node)
{
var peer = allEntries.SingleOrDefault(e => e.Response.table.localNode.peerId == node.peerId);
if (peer == null) return $"peerId: {node.peerId} is not known.";
var peer = allEntries.SingleOrDefault(e => e.Response.Table.LocalNode.PeerId == node.PeerId);
if (peer == null) return $"peerId: {node.PeerId} is not known.";
var container = peer.Node.Container;
var podInfo = peer.Node.GetPodInfo();

View File

@ -1,5 +1,4 @@
using CodexPlugin;
using NUnit.Framework;
using NUnit.Framework;
namespace CodexTests.PeerDiscoveryTests
{
@ -9,10 +8,10 @@ namespace CodexTests.PeerDiscoveryTests
[Test]
public void TwoLayersTest()
{
var root = Ci.StartCodexNode();
var l1Source = Ci.StartCodexNode(s => s.WithBootstrapNode(root));
var l1Node = Ci.StartCodexNode(s => s.WithBootstrapNode(root));
var l2Target = Ci.StartCodexNode(s => s.WithBootstrapNode(l1Node));
var root = AddCodex();
var l1Source = AddCodex(s => s.WithBootstrapNode(root));
var l1Node = AddCodex(s => s.WithBootstrapNode(root));
var l2Target = AddCodex(s => s.WithBootstrapNode(l1Node));
AssertAllNodesConnected();
}
@ -20,11 +19,11 @@ namespace CodexTests.PeerDiscoveryTests
[Test]
public void ThreeLayersTest()
{
var root = Ci.StartCodexNode();
var l1Source = Ci.StartCodexNode(s => s.WithBootstrapNode(root));
var l1Node = Ci.StartCodexNode(s => s.WithBootstrapNode(root));
var l2Node = Ci.StartCodexNode(s => s.WithBootstrapNode(l1Node));
var l3Target = Ci.StartCodexNode(s => s.WithBootstrapNode(l2Node));
var root = AddCodex();
var l1Source = AddCodex(s => s.WithBootstrapNode(root));
var l1Node = AddCodex(s => s.WithBootstrapNode(root));
var l2Node = AddCodex(s => s.WithBootstrapNode(l1Node));
var l3Target = AddCodex(s => s.WithBootstrapNode(l2Node));
AssertAllNodesConnected();
}
@ -34,10 +33,10 @@ namespace CodexTests.PeerDiscoveryTests
[TestCase(10)]
public void NodeChainTest(int chainLength)
{
var node = Ci.StartCodexNode();
var node = AddCodex();
for (var i = 1; i < chainLength; i++)
{
node = Ci.StartCodexNode(s => s.WithBootstrapNode(node));
node = AddCodex(s => s.WithBootstrapNode(node));
}
AssertAllNodesConnected();

View File

@ -31,7 +31,8 @@ namespace CodexTests.PeerDiscoveryTests
{
var geth = Ci.StartGethNode(s => s.IsMiner());
var contracts = Ci.StartCodexContracts(geth);
AddCodex(2, s => s.EnableMarketplace(geth, contracts, 10.Eth(), 1000.TestTokens()));
AddCodex(2, s => s.EnableMarketplace(geth, contracts, m => m
.WithInitial(10.Eth(), 1000.TestTokens())));
AssertAllNodesConnected();
}
@ -70,23 +71,23 @@ namespace CodexTests.PeerDiscoveryTests
}
}
private string AreAllPresent(CodexDebugResponse info, CodexDebugResponse[] allResponses)
private string AreAllPresent(DebugInfo info, DebugInfo[] allResponses)
{
var knownIds = info.table.nodes.Select(n => n.nodeId).ToArray();
var knownIds = info.Table.Nodes.Select(n => n.NodeId).ToArray();
var allOthers = GetAllOtherResponses(info, allResponses);
var expectedIds = allOthers.Select(i => i.table.localNode.nodeId).ToArray();
var expectedIds = allOthers.Select(i => i.Table.LocalNode.NodeId).ToArray();
if (!expectedIds.All(ex => knownIds.Contains(ex)))
{
return $"Node {info.id}: Not all of '{string.Join(",", expectedIds)}' were present in routing table: '{string.Join(",", knownIds)}'";
return $"Node {info.Id}: Not all of '{string.Join(",", expectedIds)}' were present in routing table: '{string.Join(",", knownIds)}'";
}
return string.Empty;
}
private CodexDebugResponse[] GetAllOtherResponses(CodexDebugResponse exclude, CodexDebugResponse[] allResponses)
private DebugInfo[] GetAllOtherResponses(DebugInfo exclude, DebugInfo[] allResponses)
{
return allResponses.Where(r => r.id != exclude.id).ToArray();
return allResponses.Where(r => r.Id != exclude.Id).ToArray();
}
}
}

View File

@ -10,11 +10,11 @@ The test runner will produce a folder named `CodexTestLogs` with all the test lo
## Overrides
The following environment variables allow you to override specific aspects of the behaviour of the tests.
| Variable | Description |
|------------------|-------------------------------------------------------------------------------------------------------------|
| RUNID | A pod-label 'runid' is added to each pod created during the tests. Use this to set the value of that label. |
| TESTID | Similar to RUNID, except the label is 'testid'. |
| CODEXDOCKERIMAGE | If set, this will be used instead of the default Codex docker image. |
| Variable | Description |
|------------------|----------------------------------------------------------------------------------------------------------------|
| DEPLOYID | A pod-label 'deployid' is added to each pod created during the tests. Use this to set the value of that label. |
| TESTID | Similar to DEPLOYID, except the label is 'testid'. |
| CODEXDOCKERIMAGE | If set, this will be used instead of the default Codex docker image. |
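A minimal shell sketch of how these overrides might be applied before a test run (the values and the image tag are assumptions for illustration):

```shell
# Illustrative values; the image tag is an assumption, not a pinned release.
export DEPLOYID="local-run-01"
export TESTID="peer-discovery"
export CODEXDOCKERIMAGE="codexstorage/nim-codex:latest"

echo "deployid=$DEPLOYID testid=$TESTID image=$CODEXDOCKERIMAGE"
# dotnet test Tests/CodexTests   # the tests read these from the environment
```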
## Using a local Codex repository
If you have a clone of the Codex git repository, and you want to run the tests using your local modifications, the following environment variable options are for you. Please note that any changes made in Codex's 'vendor' directory will be discarded during the build process.

View File

@ -14,6 +14,7 @@ namespace DistTestCore
kubeConfigFile = GetNullableEnvVarOrDefault("KUBECONFIG", null);
logPath = GetEnvVarOrDefault("LOGPATH", "CodexTestLogs");
dataFilesPath = GetEnvVarOrDefault("DATAFILEPATH", "TestDataFiles");
AlwaysDownloadContainerLogs = !string.IsNullOrEmpty(GetEnvVarOrDefault("ALWAYS_LOGS", ""));
}
public Configuration(string? kubeConfigFile, string logPath, string dataFilesPath)
@ -23,6 +24,8 @@ namespace DistTestCore
this.dataFilesPath = dataFilesPath;
}
public bool AlwaysDownloadContainerLogs { get; set; }
public KubernetesWorkflow.Configuration GetK8sConfiguration(ITimeSet timeSet, string k8sNamespace)
{
return GetK8sConfiguration(timeSet, new DoNothingK8sHooks(), k8sNamespace);

View File

@ -3,6 +3,7 @@ using DistTestCore.Logs;
using FileUtils;
using Logging;
using NUnit.Framework;
using NUnit.Framework.Interfaces;
using System.Reflection;
using Utils;
using Assert = NUnit.Framework.Assert;
@ -222,7 +223,7 @@ namespace DistTestCore
Log($"{result.StackTrace}");
}
if (result.Outcome.Status == NUnit.Framework.Interfaces.TestStatus.Failed)
if (result.Outcome.Status == TestStatus.Failed)
{
log.MarkAsFailed();
}
@ -251,21 +252,28 @@ namespace DistTestCore
private void IncludeLogsOnTestFailure(TestLifecycle lifecycle)
{
var result = TestContext.CurrentContext.Result;
if (result.Outcome.Status == NUnit.Framework.Interfaces.TestStatus.Failed)
var testStatus = TestContext.CurrentContext.Result.Outcome.Status;
if (testStatus == TestStatus.Failed)
{
fixtureLog.MarkAsFailed();
if (IsDownloadingLogsEnabled())
{
lifecycle.Log.Log("Downloading all container logs because of test failure...");
lifecycle.DownloadAllLogs();
}
else
{
lifecycle.Log.Log("Skipping download of all container logs due to [DontDownloadLogsOnFailure] attribute.");
}
}
if (ShouldDownloadAllLogs(testStatus))
{
lifecycle.Log.Log("Downloading all container logs...");
lifecycle.DownloadAllLogs();
}
}
private bool ShouldDownloadAllLogs(TestStatus testStatus)
{
if (configuration.AlwaysDownloadContainerLogs) return true;
if (testStatus == TestStatus.Failed)
{
return IsDownloadingLogsEnabled();
}
return false;
}
private string GetCurrentTestName()

View File

@ -41,10 +41,11 @@ namespace CodexNetDeployer
if (config.ShouldMakeStorageAvailable)
{
s.EnableMarketplace(gethNode, contracts, 100.Eth(), config.InitialTestTokens.TestTokens(), s =>
s.EnableMarketplace(gethNode, contracts, m =>
{
if (validatorsLeft > 0) s.AsValidator();
if (config.ShouldMakeStorageAvailable) s.AsStorageNode();
m.WithInitial(100.Eth(), config.InitialTestTokens.TestTokens());
if (validatorsLeft > 0) m.AsValidator();
if (config.ShouldMakeStorageAvailable) m.AsStorageNode();
});
}
@ -61,7 +62,7 @@ namespace CodexNetDeployer
});
var debugInfo = codexNode.GetDebugInfo();
if (!string.IsNullOrWhiteSpace(debugInfo.spr))
if (!string.IsNullOrWhiteSpace(debugInfo.Spr))
{
Console.Write("Online\t");

View File

@ -9,11 +9,11 @@ After the deployment has successfully finished, a `codex-deployment.json` file w
## Overrides
The arguments allow you to configure quite a bit, but not everything. Here are some environment variables the CodexNetDeployer will respond to. None of these are required.
| Variable | Description |
|------------------|--------------------------------------------------------------------------------------------------------------|
| RUNID | A pod-label 'runid' is added to each pod created during deployment. Use this to set the value of that label. |
| TESTID | Similar to RUNID, except the label is 'testid'. |
| CODEXDOCKERIMAGE | If set, this will be used instead of the default Codex docker image. |
| Variable | Description |
|------------------|----------------------------------------------------------------------------------------------------------------|
| DEPLOYID | A pod-label 'deployid' is added to each pod created during deployment. Use this to set the value of that label. |
| TESTID | Similar to DEPLOYID, except the label is 'testid'. |
| CODEXDOCKERIMAGE | If set, this will be used instead of the default Codex docker image. |
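For the deployer, the same style of environment setup applies; a hedged sketch (the deployer invocation and its arguments are assumptions, check the tool's own help output):

```shell
# Illustrative values; not required by the deployer.
export DEPLOYID="testnet-a"
export CODEXDOCKERIMAGE="codexstorage/nim-codex:latest"

echo "deploying with deployid=$DEPLOYID image=$CODEXDOCKERIMAGE"
# dotnet run --project Tools/CodexNetDeployer   # hypothetical invocation
```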
## Using a local Codex repository
If you have a clone of the Codex git repository, and you want to deploy a network using your local modifications, the following environment variable options are for you. Please note that any changes made in Codex's 'vendor' directory will be discarded during the build process.

View File

@ -59,6 +59,13 @@ Project("{9A19103F-16F7-4668-BE54-9A1E7A4F7556}") = "GethConnector", "Framework\
EndProject
Project("{9A19103F-16F7-4668-BE54-9A1E7A4F7556}") = "DiscordRewards", "Framework\DiscordRewards\DiscordRewards.csproj", "{B07820C4-309F-4454-BCC1-1D4902C9C67B}"
EndProject
Project("{9A19103F-16F7-4668-BE54-9A1E7A4F7556}") = "CodexPluginPrebuild", "ProjectPlugins\CodexPluginPrebuild\CodexPluginPrebuild.csproj", "{88C212E9-308A-46A4-BAAD-468E8EBD8EDF}"
EndProject
Project("{2150E333-8FDC-42A3-9474-1A3956D46DE8}") = "Solution Items", "Solution Items", "{FD6E81BA-93E8-4BB7-94F8-98C5427E02B0}"
ProjectSection(SolutionItems) = preProject
.editorconfig = .editorconfig
EndProjectSection
EndProject
Global
GlobalSection(SolutionConfigurationPlatforms) = preSolution
Debug|Any CPU = Debug|Any CPU
@ -161,6 +168,10 @@ Global
{B07820C4-309F-4454-BCC1-1D4902C9C67B}.Debug|Any CPU.Build.0 = Debug|Any CPU
{B07820C4-309F-4454-BCC1-1D4902C9C67B}.Release|Any CPU.ActiveCfg = Release|Any CPU
{B07820C4-309F-4454-BCC1-1D4902C9C67B}.Release|Any CPU.Build.0 = Release|Any CPU
{88C212E9-308A-46A4-BAAD-468E8EBD8EDF}.Debug|Any CPU.ActiveCfg = Debug|Any CPU
{88C212E9-308A-46A4-BAAD-468E8EBD8EDF}.Debug|Any CPU.Build.0 = Debug|Any CPU
{88C212E9-308A-46A4-BAAD-468E8EBD8EDF}.Release|Any CPU.ActiveCfg = Release|Any CPU
{88C212E9-308A-46A4-BAAD-468E8EBD8EDF}.Release|Any CPU.Build.0 = Release|Any CPU
EndGlobalSection
GlobalSection(SolutionProperties) = preSolution
HideSolutionNode = FALSE
@ -190,6 +201,7 @@ Global
{570C0DBE-0EF1-47B5-9A3B-E1F7895722A5} = {7591C5B3-D86E-4AE4-8ED2-B272D17FE7E3}
{F730DA73-1C92-4107-BCFB-D33759DAB0C3} = {81AE04BC-CBFA-4E6F-B039-8208E9AFAAE7}
{B07820C4-309F-4454-BCC1-1D4902C9C67B} = {81AE04BC-CBFA-4E6F-B039-8208E9AFAAE7}
{88C212E9-308A-46A4-BAAD-468E8EBD8EDF} = {8F1F1C2A-E313-4E0C-BE40-58FB0BA91124}
EndGlobalSection
GlobalSection(ExtensibilityGlobals) = postSolution
SolutionGuid = {237BF0AA-9EC4-4659-AD9A-65DEB974250C}