Initial commit

Bruno Skvorc 2019-05-22 15:53:21 +02:00
commit 765e9694fb
23 changed files with 1718 additions and 0 deletions

.gitignore vendored Normal file

@@ -0,0 +1,4 @@
docgen/.vuepress/dist
docgen/yarn.lock
docgen/README.md
docgen/node_modules/*

LICENSE-APACHEv2 Normal file

@@ -0,0 +1,205 @@
beacon_chain is licensed under the Apache License version 2
Copyright (c) 2018 Status Research & Development GmbH
-----------------------------------------------------
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS
APPENDIX: How to apply the Apache License to your work.
To apply the Apache License to your work, attach the following
boilerplate notice, with the fields enclosed by brackets "[]"
replaced with your own identifying information. (Don't include
the brackets!) The text should be enclosed in the appropriate
comment syntax for the file format. We also recommend that a
file or class name and description of purpose be included on the
same "printed page" as the copyright notice for easier
identification within third-party archives.
Copyright 2018 Status Research & Development GmbH
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.

LICENSE-MIT Normal file

@@ -0,0 +1,25 @@
beacon_chain is licensed under the MIT License
Copyright (c) 2018 Status Research & Development GmbH
-----------------------------------------------------
The MIT License (MIT)
Copyright (c) 2018 Status Research & Development GmbH
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.

README.md Normal file

@@ -0,0 +1,67 @@
# Nimbus Docs Suite
This is a documentation generator for the Nimbus libraries at [nimbus-libs.status.im](https://nimbus-libs.status.im). It auto-regenerates documentation from the master branch of every repo you want documented, while also supporting custom content and theming. It uses [Vuepress](https://v1.vuepress.vuejs.org) behind the scenes.
## Dependencies
You need:
- a fairly recent [version of NodeJS](https://github.com/nvm-sh/nvm) ([Windows Version](https://github.com/coreybutler/nvm-windows))
- [yarn](https://yarnpkg.com/en/)
- [vuepress globally installed](https://vuepress.vuejs.org/)
## Building
```bash
git clone https://github.com/status-im/nimbus-docs-suite
cd nimbus-docs-suite
cd docgen && yarn install
vuepress build
```
The results of the build process will be in `.vuepress/dist`.
## What's behind the build command
When you run `vuepress build`, the builder:
- reads `config.json` for the repos it should process.
- uses the information there to build the homepage by constructing library cards for each repo.
- for every repo with `update: true`, grabs its `README` file and strips the header and footer (everything above `Introduction` and below `Contributing`).
- rewrites image URLs to point at GitHub's raw file URLs.
- generates frontmatter from the data in `config.json` and combines it with the README to produce a homepage for each library.
- if a `Guides` folder exists in a library's subfolder, generates sidebar navigation from its contents.
The logic responsible for this is in a custom plugin in `.vuepress/docgen`.
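As an illustration, the image-rewriting step could be sketched like this (a hypothetical standalone version; `rewriteImageUrls` is an illustrative name, not the plugin's actual API):

```javascript
// Sketch: rewrite Markdown image links in a fetched README so they point
// at GitHub's raw-file URLs, mirroring the step described above.
function rewriteImageUrls(markdown, repoUrl) {
  const base = repoUrl.replace(/\/?$/, '/'); // ensure a single trailing slash
  return markdown.replace(/!\[([^\]]*)\]\(([^)]+)\)/g, (match, alt, src) =>
    `![${alt}](${base}raw/master/${src}?sanitize=true)`
  );
}

const out = rewriteImageUrls(
  'Intro ![logo](media/logo.svg) text',
  'https://github.com/status-im/nim-chronicles'
);
console.log(out);
```

A relative path like `media/logo.svg` in the README would otherwise 404 once the page is served from the docs site, so each image link is pointed back at the source repo.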
## Modifying for your use case
To generate docs in the same way for your own repos:
1. Modify `config.json` to contain the repos you want to process:
- `name`: a URL-friendly slug for the project; also used as the folder name where the library's docs are stored
- `label`: a human-readable label, used as the title at the top of the homepage
- `location`: the GitHub URL of the repo. Must be public; GitLab and private repo support are coming soon.
- `update`: when `false`, content is generated only from local Markdown files and no fetch from the online master branch is attempted
- `tags`: a JS array of tags applying to this library. Purely aesthetic for now: colored badges will appear next to the library's name on the homepage. Add tags to the `tags` object as desired.
- `description`: the description shown on the homepage
- `frontMatter`: frontmatter to generate, as key-value pairs. Values are the same as [documented in Vuepress](https://v1.vuepress.vuejs.org/guide/frontmatter.html).
2. Also in `config.json`, set up the start and end separators. These indicate where your README's body begins and ends, which is useful for keeping licensing information or CI badges out of the human-readable docs. `separators[0]` marks the start of the README body and `separators[1]` marks its end. `separators[2]` is a delimiter that lets you specify several alternative separators for the start and end, in case your READMEs aren't standardized across projects: each separator string is split on `separators[2]`, and the first resulting candidate found in a README is used.
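The "first variant found wins" behavior can be sketched as follows (a minimal illustration; `resolveSeparator` is a made-up name, not part of the generator):

```javascript
// Sketch: split a pipe-delimited separator string into variants and return
// the first variant that actually occurs in the README.
function resolveSeparator(readme, variants, delimiter) {
  return variants.split(delimiter).find((sep) => readme.includes(sep));
}

const separators = ["## Introduction|##Intro", "## Contributing", "|"];
const readme = "badges and CI noise\n##Intro\nthe real body\n## Contributing\nfooter";

const start = resolveSeparator(readme, separators[0], separators[2]);
const end = resolveSeparator(readme, separators[1], separators[2]);
const body = readme.split(start)[1].split(end)[0].trim();
console.log(start); // "##Intro", the variant present in this README
console.log(body);
```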
3. Modify styles in `.vuepress/styles` and theme configuration in `.vuepress/config.js` as desired. Use the Vuepress docs.
4. Run `vuepress build` inside `docgen`.
## Enhancing the docs further
To further enhance the docs, please consult the [Vuepress docs](https://v1.vuepress.vuejs.org) as underneath it's all just a [Vue](https://vuejs.org) app built by Vuepress.
## License
Licensed and distributed under either of
* MIT license: [LICENSE-MIT](LICENSE-MIT) or http://opensource.org/licenses/MIT
or
* Apache License, Version 2.0, ([LICENSE-APACHEv2](LICENSE-APACHEv2) or http://www.apache.org/licenses/LICENSE-2.0)
at your option. These files may not be copied, modified, or distributed except according to those terms.

@@ -0,0 +1,59 @@
const fs = require('fs');
let rawdata = fs.readFileSync('./config.json');
let configuration = JSON.parse(rawdata);
// Builds the navigation (sidebar)
let repos = configuration.repos;
var nav = [];
var sidebar = {};
for (let i = 0; i < repos.length; i++) {
let topLevel = {
text: repos[i].label,
link: "/lib/" + repos[i].name.replace(/\/?$/, '/')
}
nav.push(topLevel);
//sidebar[repos[i].name] = getSidebar(repos[i]);
}
module.exports = {
title: 'Nimbus Libraries',
description: 'Ethereum 2.0 utilities and more',
base: '/',
head: [
['link', { rel: 'icon', href: '/assets/img/logo.png' }]
],
markdown: {
lineNumbers: true
},
plugins: [
['container', {
type: 'right',
defaultTitle: '',
}],
['container', {
type: 'theorem',
before: info => `<div class="theorem"><p class="title">${info}</p>`,
after: '</div>',
}],
require("./docgen/plugin.js")
],
themeConfig: {
logo: '/assets/img/logo.png',
displayAllHeaders: true,
serviceWorker: {
updatePopup: true
},
nav: nav,
sidebar: ["/"]
}}
function getSidebar(repoObject) {
let sb = [""];
// TODO: for each file in Guides, push its filename
// TODO: build the API reference section?
return sb;
}

@@ -0,0 +1,113 @@
"use strict";
const request = require('request');
const fs = require('fs');
module.exports = {
ready () {
console.log("Initializing library docs fetching");
let rawdata = fs.readFileSync('config.json');
let configuration = JSON.parse(rawdata);
let repos = configuration.repos;
let mainReadme = fs.readFileSync('README.template', 'utf8');
console.log("Loaded main README template.");
let mainReadmeLibs = "";
let startSeparators = configuration.separators[0].split(configuration.separators[2]);
let endSeparators = configuration.separators[1].split(configuration.separators[2]);
for (let i = 0; i < repos.length; i++) {
console.log("Processing " + repos[i].label);
let tags = repos[i].tags;
mainReadmeLibs += "::: theorem <a href='/lib/"+repos[i].name.replace(/\/?$/, '/')+"'>"+repos[i].label+"</a>";
for (let tagIndex = 0; tagIndex < tags.length; tagIndex++) {
mainReadmeLibs += "<Badge text='"+tags[tagIndex]+"' ";
if (configuration.tags[tags[tagIndex]].type !== undefined) {
mainReadmeLibs += "type='"+configuration.tags[tags[tagIndex]].type+"'";
}
mainReadmeLibs += "/>"
}
mainReadmeLibs += "\n" + repos[i].description + "\n:::\n\n";
// Skip iteration if update is disabled, library is fully manual
if (repos[i].update === false) {
console.log("Skipping " + repos[i].label + " because it's set to manual.");
continue;
}
let repoPath = repos[i].location.replace(/\/?$/, '/');
let rawPath = repoPath.replace("https://github.com", "https://raw.githubusercontent.com");
let readmePath = rawPath + "master/README.md";
request.get(readmePath, function (error, response, body) {
if (!error && response.statusCode == 200) {
let content = body;
let ss, es;
for (let ssLen = 0; ssLen < startSeparators.length; ssLen++) {
if (content.indexOf(startSeparators[ssLen]) > -1) {
ss = startSeparators[ssLen];
break;
}
}
for (let esLen = 0; esLen < endSeparators.length; esLen++) {
if (content.indexOf(endSeparators[esLen]) > -1) {
es = endSeparators[esLen];
break;
}
}
// Fall back to the full content if no start separator was found
let readmeBody = content.split(ss)[1] || content;
readmeBody = "# " + repos[i].label + "\n\n" + readmeBody.split(es)[0];
console.log("Fixing images");
readmeBody = readmeBody.replace(/\!\[(.*)\]\((.*)\)/igm, function (match, g1, g2) {
return "![" + g1 + "](" + repos[i].location.replace(/\/?$/, '/')+"raw/master/" + g2 + "?sanitize=true)";
});
let frontMatter = "";
if (repos[i].frontMatter !== undefined) {
for (let key in repos[i].frontMatter) {
if (repos[i].frontMatter.hasOwnProperty(key)) {
frontMatter += key + ": " + repos[i].frontMatter[key] + "\n";
}
}
frontMatter = "---\n" + frontMatter + "---\n\n";
}
let finalFile = frontMatter + readmeBody;
var dir = './lib/'+repos[i].name;
if (!fs.existsSync(dir)){
fs.mkdirSync(dir);
}
console.log("Writing " + dir+"/README.md");
// fs.writeFileSync is synchronous and takes no callback; use try/catch
try {
fs.writeFileSync(dir + "/README.md", finalFile);
console.log("The file " + dir + "/README.md was saved!");
} catch (err) {
console.log(err);
}
}
});
}
console.log("Preparing to write new main README file");
mainReadme = mainReadme.replace("{{{libraries}}}", mainReadmeLibs);
try {
fs.writeFileSync("./README.md", mainReadme);
console.log("The main README.md file was saved!");
} catch (err) {
console.log(err);
}
}
}

(Three binary image files added, not shown: 24 KiB, 9.0 KiB, and 30 KiB.)

@@ -0,0 +1,33 @@
$mainOrange = #ff9c00
.navbar
background-color: $mainOrange
.site-name
color: white
.links
color: white
background-color: $mainOrange
a:hover, a.router-link-active {
border-bottom-color: white;
color: white;
}
.nav-dropdown
color: black
.theorem
margin 1rem 0
padding .1rem 1.5rem
border-bottom: 1px solid silver
.title
font-weight bold
font-size: x-large
a
color: black
text-decoration: underline
.custom-block
&.right
color transparentify($textColor, 0.4)
font-size 0.9rem
text-align right

@@ -0,0 +1,4 @@
$accentColor = #ff9c00
$textColor = #2c3e50
$borderColor = #eaecef
$codeBgColor = #282c34

docgen/README.template Normal file

@@ -0,0 +1,22 @@
---
home: true
heroImage: /assets/img/hero.png
heroText: Nimbus Libraries
tagline: Documentation for Nim libraries produced by the Nimbus team
actionText: Learn more
actionLink: /about
features:
- title: Performance
details: Leveraging Nim's performance makes these components significantly faster than their non-Nim counterparts
- title: Security
details: Originally designed to power the world's programmable money, Ethereum, these components have been built with security in mind from day 0
- title: Portability
details: Each library is independent - whether it's a cryptocurrency wallet or a mobile game, your project can easily make use of them.
footer: Dual Licensed - MIT & ApacheV2 | Copyright © 2019-present Nimbus Team
---
# Get Started
Use the search field in the header, or find a desired library in the sections below. Each library has its own documentation with a full API reference and guides. If you'd like to contribute to these docs with your own work or some corrections, please file issues or PRs [in the Github repo](https://github.com/status-im/nimbus-docs-suite).
{{{libraries}}}

docgen/about.md Normal file

@@ -0,0 +1,26 @@
---
sidebar: auto
---
# About
This documentation suite was created as a comprehensive guide for using the Nim libraries produced by the Nimbus team at Status.im.
## What is Nimbus?
[Nimbus](https://nimbus.status.im) is an Ethereum 2.0 client, but these libraries are designed to be used outside of that context too. If your project needs good cryptography or verbose logging output, these libraries should fit the bill nicely.
You do not need to be a Nimbus user or developer to make use of these libraries.
## Why not Nimdoc?
We actually do use Nimdoc for the API reference included in each library's documentation on this site. However, Nimdoc's template isn't the easiest to modify and it can produce some buggy results, so we use its JSON output to feed the API docs into this tome, and we use Vuepress for the rest of the functionality, like custom layouts, styling, SEO support, searchability, and of course - custom documentation support, like guides, tutorials, references, and more.
## Contributing
You can contribute to these docs by submitting issues or pull requests in the official repository at [status-im/nimbus-docs-suite](https://github.com/status-im/nimbus-docs-suite).
Keep in mind the following:
- the API reference is generated from individual libraries. Thus, if you notice a mistake in the API reference, submit a PR to the library in question fixing its docblock.
- the guides are curated and not everything that's written about the libraries will be included here.

docgen/config.json Normal file

@@ -0,0 +1,63 @@
{
"repos": [
{
"name": "nim-rlp",
"label": "Nim-RLP",
"location": "https://github.com/status-im/nim-rlp",
"update": true,
"frontMatter": {
"sidebar": "auto"
},
"tags": ["formatting", "encoding", "stable"],
"description": "A Nim implementation of the Recursive Length Prefix encoding (RLP) as specified in the Ethereum [Yellow Paper](https://ethereum.github.io/yellowpaper/paper.pdf) and [Wiki](https://github.com/ethereum/wiki/wiki/RLP)."
},
{
"name": "nim-chronicles",
"label": "Chronicles",
"location": "https://github.com/status-im/nim-chronicles",
"update": true,
"frontMatter": {
"sidebar": "auto"
},
"tags": ["logging", "stable"],
"description": "Chronicles is a library for structured logging. It adheres to the philosophy that log files shouldn't be based on formatted text strings, but rather on well-defined event records with arbitrary properties that are easy to read for both humans and machines."
},
{
"name": "nimcrypto",
"label": "Nimcrypto",
"location": "https://github.com/cheatfate/nimcrypto",
"update": false,
"tags": ["cryptography", "development"],
"description": "Nimcrypto is Nim's cryptographic library. It implements several popular cryptographic algorithms and their tests with some [examples](https://github.com/cheatfate/nimcrypto/tree/master/examples)."
},
{
"name": "nim-chronos",
"label": "Chronos",
"location": "https://github.com/status-im/nim-chronos",
"update": true,
"tags": ["async", "stable"],
"description": "Chronos is an efficient library for asynchronous programming and an alternative to Nim's asyncdispatch.",
"frontMatter": {
"sidebar": "auto"
}
}
],
"tags": {
"formatting": {
},
"encoding": {
},
"cryptography": {
},
"logging": {
},
"stable": {
},
"development": {
"type": "warn"
},
"async": {
}
},
"separators": ["## Introduction|##Intro", "## Contributing", "|"]
}
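For reference, the `frontMatter` key on each entry is serialized into a YAML frontmatter block at the top of that library's generated page. A minimal sketch of the transformation (the function name is illustrative):

```javascript
// Sketch: serialize a repo entry's frontMatter object into the YAML block
// that gets prepended to the generated README.
function buildFrontMatter(frontMatter) {
  if (frontMatter === undefined) return '';
  const lines = Object.keys(frontMatter)
    .map((key) => key + ': ' + frontMatter[key])
    .join('\n');
  return '---\n' + lines + '\n---\n\n';
}

console.log(buildFrontMatter({ sidebar: 'auto' }));
```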

@@ -0,0 +1,682 @@
---
sidebar: auto
---
# Chronicles
Chronicles is a library for structured logging. It adheres to the philosophy
that log files shouldn't be based on formatted text strings, but rather on
well-defined event records with arbitrary properties that are easy to read
for both humans and machines. Let's illustrate this with an example:
``` nim
import net, chronicles
socket.accept(...)
...
debug "Client PSK", psk = client.getPskIdentity
info "New incoming connection", remoteAddr = ip, remotePort = port
```
Here, `debug` and `info` are logging statements, corresponding to different
severity levels. You can think of their first argument as the name of a
particular event that happened during the execution of the program, while
the rest of the arguments are the properties of this event.
From these logging statements, Chronicles can be configured to produce log
output in various structured formats. The default format is called `textlines`
and it looks like this:
![textlines format example](https://github.com/status-im/nim-chronicles/raw/master/media/textlines.svg?sanitize=true)
Alternatively, you can use a multi-line format called `textblocks`:
![textblocks format example](https://github.com/status-im/nim-chronicles/raw/master/media/textblocks.svg?sanitize=true)
While these human-readable formats provide a more traditional and familiar
experience of using a logging library, the true power of Chronicles is
unlocked only after switching to the `JSON` format. Then, the same log output
will look like this:
![json format example](https://github.com/status-im/nim-chronicles/raw/master/media/json.svg?sanitize=true)
At first, switching to JSON may look like a daunting proposition, but
Chronicles provides a customized log tailing program called `chronicles-tail`
which is able to transform the JSON stream back into the familiar human-readable
form, while also providing additional advanced features such as on-the-fly
filtering, sound alerts and more.
The main advantage of using JSON logging is that this facilitates the storage
of the log records in specialized databases which are usually able to provide
search and filtering capabilities and allow you to compute various aggregated
metrics and time-series data from the accumulated logs.
Typical log storage choices for the above are open-source search engines such
as [ElasticSearch][1] or specialized providers such as [Loggly][2].
[1]: https://www.elastic.co/
[2]: https://www.loggly.com/
## Logging Scopes
In the introduction, we saw `debug` and `info` as examples for logging
statements. Other similar statements include `trace`, `notice`, `warn`, `error`
and `fatal`. All of these statements accept arbitrary key-value pairs.
As a short-cut, you are also allowed to specify only the name of a particular
variable and Chronicles will create a key with the same name (i.e. passing
a local variable named `foo` will be translated to the pair `foo = foo`).
A common practice enforced in other logging libraries is to associate
the logging records with the name of the component that produced them
or with a particular run-time property such as `RequestID`. Chronicles
provides two general-purpose facilities for assigning such properties
in an automated way:
### `logScope`
`logScope` can be used to introduce additional properties that will be
automatically attached to all logging statements in the current lexical
scope:
``` nim
logScope:
# Lexical properties are typically assigned to a constant:
topics = "rendering opengl"
# But you can also assign an expression that will be
# evaluated on every log statement:
memoryUsage = currentMemUsage()
proc renderFrame(...) =
inc frameCounter
logScope:
# You can add additional properties in any scope. Only logging
# statements that are in the same lexical scope will be affected:
frame = frameCounter
var t = startTimer()
debug "Frame started"
...
glFinish()
debug "Frame finished", totalPrimitives, frameTime = t.elapsed
```
A `logScope` is usually put near the top of a Nim module and used to
specify statically assigned properties such as message origin, component
name, etc. The special `topics` property demonstrated here is important
for the log filtering mechanism, which will be explained in more details
later. If present, this property will always appear first in the formatted
log output.
### `publicLogScope`
While a `logScope` affects only the current module, a `publicLogScope`
allows you to specify a set of custom properties that may affect your
entire program. For example, if you have an application running in a
server cluster, you may want to assign a property such as `serverId`
to every record. To achieve this, create a proxy logging module
importing `chronicles` and setting up a `publicLogScope`:
``` nim
# logging.nim
import chronicles
proc getServerId*(): int = 1  # stub; return this server's actual ID
publicLogScope:
serverId = getServerId()
```
Every other module importing the proxy module will be able to use the
entire Chronicles API and will be affected by the public scope.
In fact, you should not import `chronicles` from such modules, because
this will lead to ambiguous symbols such as `activeChroniclesScope` and
`activeChroniclesStream`.
Using Nim's `--import:` option may be a good way to enforce the use of
the proxy module in your entire program.
### `dynamicLogScope`
A `dynamicLogScope` is a construct accepting a block of code that can be
used to attach properties to all logging statements that will be executed
anywhere within the tree of calls originating from the said block. The key
difference with the lexically bound properties is that this includes
logging statements from other modules, which are not within the lexical
scope of the `dynamicLogScope` statement.
If you still find the distinction between lexical and dynamic scopes confusing,
reading the following explanation may help you:
http://wiki.c2.com/?DynamicScoping
A dynamic scope is usually used to track the reason why a particular
library function is being called (e.g. you are opening a file as a result
of a particular network request):
``` nim
proc onNewRequest(req: Request) =
inc reqID
info "request received", reqID, origin = req.remoteAddress
dynamicLogScope(reqID):
# All logging statements triggered before the current block returns
# will feature the reqID property. This includes logging statements
# from other modules.
handleRequest(req)
```
Just like regular log statements, `dynamicLogScope` accepts a list of arbitrary
key-value pairs. The use of `reqID` in the example above is a convenient short
form for specifying the pair `reqID = reqID`.
While the properties associated with lexical scopes are lazily evaluated as
previously demonstrated, all expressions at the beginning of a dynamic scope
will be eagerly evaluated before the block is entered.
## Compile-Time Configuration
Almost everything about Chronicles is configured at compile time, through the
mechanism of Nim's `-d:` flags. For example, you can completely remove all of
the code related to logging by simply setting `chronicles_enabled` to `off`:
```
nim c -d:chronicles_enabled=off myprogram.nim
```
Chronicles comes with a very reasonable default configuration, but let's look
at some of the other supported options:
### chronicles_sinks
Chronicles supports producing log records in multiple formats and writing
those to various destinations such as the std streams, the system's syslog
daemon, or to one or more log files.
The combination of a log format and one or more associated log destinations
is called a 'sink'. You can use the `chronicles_sinks` option to provide the
list of sinks that will be used in your program.
The sinks are specified as a comma-separated list of valid Nim expressions
that will be better illustrated by the following examples:
- `json`
Write JSON-records to stdout
- `json[file]`
Write JSON-records to a file in the current directory named after the
application itself.
- `textblocks[stdout,file(/var/log/myapp.log)]`
Use the 'textblocks' format and send the output both to stdout and
to a file with an absolute path /var/log/myapp.log
- `textlines[notimestamps,file(myapp.txt),syslog]`
Use the 'textlines' format, but don't include timestamps and write
both to a file named 'myapp.txt' with a relative path to the current
working directory and also to syslog.
- `textlines[nocolors],json[file(logs/myapp.json,truncate)]`
Send the output both in the 'textlines' format to stdout (but without
using colors) and to a JSON file named myapp.json in the relative
directory 'logs'. The myapp.json file will be truncated on each
program execution.
The built-in formats include `json`, `textlines` and `textblocks`, which
support options for specifying the use of colors and timestamps (for more
info see `chronicles_colors` and `chronicles_timestamps`).
The possible log destinations are `stdout`, `stderr`, `file` and `syslog`.
Please note that Chronicles also allows you to implement custom logging
formats through the use of the `customLogStream` facility.
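Putting one of the examples above into practice, a hypothetical compile command selecting two sinks could look like this (the program name is made up):

```
nim c -d:chronicles_sinks='textlines[stdout],json[file(logs/myapp.json)]' myprogram.nim
```

Note that the value usually needs to be quoted so the shell does not interpret the brackets.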
### chronicles_streams
While having multiple log sinks enables you to record the same stream of
events in multiple formats and destinations, `chronicles_streams` allows
you to define additional independent streams of events identified by their
name. In the code, each logging statement is associated with exactly one
log stream, which in turn has an associated list of sinks.
The syntax for defining streams closely resembles the syntax for defining
sinks:
- `textlog[textlines],transactions[json[file(transactions.json)]]`
This will create two streams, called `textlog` and `transactions`.
The former will be considered the default stream associated with unqualified
logging statements, but each of the streams will exist as a separate symbol
in the code, supporting the full set of logging operations:
``` nim
textlog.debug "about to create a transaction"
transactions.info "transaction created", buyer = alice, seller = bob
```
The streams created through `chronicles_streams` will be exported by the
`chronicles` module itself, but you can also introduce additional streams
in your own modules by using the helpers `logStream` and `customLogStream`.
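By analogy with the sinks, a hypothetical command defining the two streams from the example above might be (the program name is made up):

```
nim c -d:chronicles_streams='textlog[textlines],transactions[json[file(transactions.json)]]' myprogram.nim
```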
### chronicles_enabled_topics
All logging statements may be associated with a statically known list of
topics. Usually, this is done by specifying the `topics` property in a
particular `logScope`, but you can also specify it for individual log
statements.
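As a sketch, associating topics through a `logScope` might look like this (the module and topic names are made up for the example):

``` nim
# networking.nim
import chronicles

logScope:
  topics = "net p2p"

# This statement is now associated with the "net" and "p2p" topics:
info "connection established"
```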
You can use the `chronicles_enabled_topics` option to specify the list of
topics for which the logging statements should produce output. All other
logging statements will be erased at compile-time from the final code.
When the list includes multiple topics, any of them is considered a match.
> In both contexts, the list of topics is written as a comma- or space-separated
> string of case-sensitive topic names.
In the list of topics, you can also optionally provide a log level after the
topic, separated with a colon from the topic. If a log level is provided it will
overrule the `chronicles_log_level` setting. The log level can be defined as
`LogLevel` values or directly as the corresponding integer values.
e.g. `-d:chronicles_enabled_topics:MyTopic:DEBUG,AnotherTopic:5`
### chronicles_required_topics
Similar to `chronicles_enabled_topics`, but requires the logging statements
to have all of the topics specified in this list.
You cannot specify `chronicles_enabled_topics` and `chronicles_required_topics`
at the same time.
### chronicles_disabled_topics
The dual of `chronicles_enabled_topics`. This option specifies a black-list
of topics for which the associated logging statements should be erased from
the program.
Topics in `chronicles_disabled_topics` have precedence over the ones in
`chronicles_enabled_topics` or `chronicles_required_topics`.
### chronicles_log_level
This option can be used to erase, at compile time, all log statements that do
not match the specified minimum log level.
Possible values are `TRACE`, `DEBUG`, `INFO`, `NOTICE`, `WARN`, `ERROR`, `FATAL`,
and `NONE`. The default value is `DEBUG` in debug builds and `INFO` in
release mode.
### chronicles_runtime_filtering
This option enables the run-time filtering capabilities of Chronicles.
The run-time filtering is controlled through the procs `setLogLevel`
and `setTopicState`:
```nim
type LogLevel = enum
NONE, TRACE, DEBUG, INFO, NOTICE, WARN, ERROR, FATAL
proc setLogLevel*(level: LogLevel)
type TopicState = enum
Normal, Enabled, Required, Disabled
proc setTopicState*(name: string,
newState: TopicState,
logLevel = LogLevel.NONE): bool
```
The log levels available at runtime - and therefore to `setLogLevel()` - are
those greater than or equal to the one set at compile time by
`chronicles_log_level`.
It is also possible for a specific topic to overrule the global `LogLevel`, set
by `setLogLevel`, by setting the optional `logLevel` parameter in
`setTopicState` to a valid `LogLevel`.
The option is disabled by default because we recommend filtering the
log output in a tailing program. This allows you to still look at all
logged events in case this becomes necessary. Set the option to `on`
to enable it.
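A minimal usage sketch, assuming the program was compiled with `-d:chronicles_runtime_filtering=on`:

``` nim
import chronicles

# Raise the global minimum level at run time:
setLogLevel(LogLevel.NOTICE)

# Require the "crypto" topic and let it overrule the global level:
discard setTopicState("crypto", Required, LogLevel.DEBUG)
```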
### chronicles_timestamps
This option controls the use of timestamps in the log output.
Possible values are:
- `RfcTime` (used by default)
Chronicles will use the human-readable format specified in
RFC 3339: Date and Time on the Internet: Timestamps
https://tools.ietf.org/html/rfc3339
- `UnixTime`
Chronicles will write a single float value for the number
of seconds since the "Unix epoch"
https://en.wikipedia.org/wiki/Unix_time
- `None` or `NoTimestamps`
Chronicles will not include timestamps in the log output.
Please note that the timestamp format can also be specified
for individual sinks (see `chronicles_sinks`).
### chronicles_line_numbers
This option, disabled by default, enables the display of filename and line number
where each record was instantiated. It adds a property `file` to the output, for example:
```
file: example.nim:15
```
While `chronicles_line_numbers` sets the default option for all records, it is
also possible to control the same property in a lexical scope or for a particular
log statement with `chroniclesLineNumbers`, which can be either `true` or `false`.
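For example, enabling the property for a single module through a lexical scope might look like this sketch:

``` nim
import chronicles

logScope:
  chroniclesLineNumbers = true

# This record will include the `file` property:
info "tracing this module"
```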
### chronicles_colors
This option controls the default color scheme used by Chronicles for
its human-readable text formats when sent to the standard output streams.
Possible values are:
- `NativeColors` (used by default)
In this mode, Windows builds will produce output suitable for the console
application in older versions of Windows. On Unix-like systems, ANSI codes
are still used.
- `AnsiColors`
Output suitable for terminals supporting the standard ANSI escape codes:
https://en.wikipedia.org/wiki/ANSI_escape_code
This includes most terminal emulators on modern Unix-like systems,
Windows console replacements such as ConEmu, and the native Console
and PowerShell applications on Windows 10.
- `None` or `NoColors`
Chronicles will produce color-less output. Please note that this is the
default mode for sinks logging only to files or for sinks using the json
format.
Current known limitations:
- Chronicles will not try to detect if the standard outputs
of the program are being redirected to another program or a file.
It's typical for the colored output to be disabled in such circumstances.
([issue][ISSUE1])
[ISSUE1]: https://github.com/status-im/nim-chronicles/issues/1
### chronicles_indent
This option sets the desired number of spaces that Chronicles should
use as indentation in the `textblocks` format.
-----------------
All of the discussed options are case-insensitive and accept a number of
truthy and falsy values such as `on`, `off`, `true`, `false`, `0`, `1`,
`yes`, `no` or `none`.
## Working with `file` outputs
When a stream has `file` outputs, you may choose to provide the log file
location at run-time. Chronicles will create each log file lazily when the
first log record is written. This gives you a chance to modify the default
compile-time path associated with each file output by calling the `open`
proc on an `output` symbol associated with the stream:
``` nim
# my_program.nim
var config = loadConfiguration()
let success = defaultChroniclesStream.output.open(config.logFile, fmAppend)
info "APPLICATION STARTED"
```
Compiled with:
```
nim c -d:chronicles_sinks=textlines[file] my_program.nim
```
As you can see above, the default stream in Chronicles is called
`defaultChroniclesStream`. If the stream had multiple file outputs,
they would have been accessible separately as `outputs[0]`, `outputs[1]`
and so on. `output` is a simple short-cut referring to the first of them.
When the compile-time configuration doesn't specify a default file name for
a particular file output, Chronicles will use the following rules for picking
the default automatically:
1. The log file is created in the current working directory and its name
matches the name of the stream (plus a `.log` extension). The exception
for this rule is the default stream, for which the log file will be
assigned the name of the application binary.
2. If more than one unnamed file output exists for a given stream,
Chronicles will add an index such as `.2.log`, `.3.log` .. `.N.log`
to the final file name.
## Teaching Chronicles about your types
Chronicles can output log records in any of the formats supported by the Nim
[`serialization`](https://github.com/status-im/nim-serialization) package.
When you specify a named format such as `json`, Chronicles will expect that
your project also depends on the respective serialization package (e.g.
[`json_serialization`](https://github.com/status-im/nim-json-serialization)).
In the text formats (`textlines` and `textblocks`), Nim's standard `$`
operator will be used to convert the logged properties to strings.
### `formatIt`
You can instruct Chronicles to alter this default behavior for a particular
type by providing a `chronicles.formatIt` override:
``` nim
type Dollar = distinct int
chronicles.formatIt(Dollar): "$" & $(it.int)
```
The `formatIt` block can evaluate to any expression that will be then
subjected to the standard serialization logic described above.
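Continuing the `Dollar` example above, the override then applies to any logged property of that type (the statement below is illustrative):

``` nim
let price = Dollar(42)

# The `amount` property will be rendered through the `formatIt`
# override, i.e. as "$42" in the text formats:
info "payment received", amount = price
```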
### `expandIt`
The `expandIt` override can be used to turn any logged property of a
particular type into multiple properties:
```nim
chronicles.expandIt(EncryptedEnvelope):
peer = it.fromAddress
msg = it.decryptMsg
...
var e = EncryptedEnvelope(...)
# The following two statements are equivalent:
info "Received message", e
info "Received message", peer = e.fromAddress, msg = e.decryptMsg
```
You can also derive the names of the expanded properties from the name of
the original logged property. This is achieved by using Nim's backtick
syntax to construct the expanded property names:
```nim
chronicles.expandIt(User):
# You can use both identifiers and string literals:
`it Name` = it.name
`it "LastSeen"` = it.lastSeen
...
var alice = User(name: "Alice", ...)
# The following two statements are equivalent:
info "Sending message", recipient = alice
info "Sending message", recipientName = alice.name, recipientLastSeen = alice.lastSeen
```
## Custom Log Streams
### `logStream`
As an alternative to specifying multiple output streams with the
`chronicles_streams` option, you can also introduce additional
streams within the code of your program. A typical way to do this
would be to introduce a proxy module that imports and re-exports
`chronicles` while adding additional streams with `logStream`:
``` nim
import chronicles
export chronicles
logStream transactions[json[file(transactions.json)]]
```
The expression expected by `logStream` has exactly the same format
as the compile-time option and produces the same effect. In this particular
example, it will create a new stream called `transactions` that will be sent
to a JSON file named `transactions.json`.
After importing the proxy module, you'll be able to create records with any
of the logging statements in the usual way:
``` nim
import transactions_log
...
transactions.error "payment gateway time-out", orderId,
networkStatus = obtainNetworkStatus()
```
### `customLogStream`
`customLogStream` enables you to implement arbitrary log formats and
destinations.
Each logging statement is translated to a set of calls operating over
a structure called "Log Record" (with one instance created per logging
statement). New log formats can be implemented by defining a suitable
log record type. Let's demonstrate this by implementing a simple XML logger:
``` nim
import xmltree, chronicles
type XmlRecord[Output] = object
output: Output
template initLogRecord*(r: var XmlRecord, lvl: LogLevel,
topics: string, name: string) =
r.output.append "<event name=\"", escape(name), "\" severity=\"", $lvl, "\">\n"
template setProperty*(r: var XmlRecord, key: string, val: auto) =
r.output.append textBlockIndent, "<", key, ">", escape($val), "</", key, ">\n"
template setFirstProperty*(r: var XmlRecord, key: string, val: auto) =
r.setProperty key, val
template flushRecord*(r: var XmlRecord) =
r.output.append "</event>\n"
r.output.flushOutput
customLogStream xmlout[XmlRecord[StdOutOutput]]
publicLogScope:
stream = xmlout
info "New Video", franchise = "Tom & Jerry", episode = "Smarty Cat"
```
The produced output from the example will be:
``` xml
<event name="New Video" severity="INFO">
<tid>0</tid>
<episode>Smarty Cat</episode>
<franchise>Tom &amp; Jerry</franchise>
</event>
```
As you can see, `customLogStream` looks similar to a regular `logStream`,
but it expects a log record type as its only argument.
The record type is implemented by providing suitable definitions for
`initLogRecord`, `setFirstProperty`, `setProperty` and `flushRecord`.
We recommend defining these operations as templates because this will
facilitate the aggressive constant-folding employed by Chronicles (discussed
in more detail in the next section). We also recommend making your log
record types parametric on an `Output` type, because this will allow the
users of the code to specify any of the output types defined in Chronicles
itself (see the module `log_output` for a list of those).
As demonstrated in the example above, you can set the `stream` property in
a Chronicles lexical scope to redirect all unqualified log statements to a
particular default stream.
## Cost of Abstractions and Implementation Details
Chronicles makes use of advanced compile-time programming techniques to
produce very efficient run-time code with minimal footprint.
The properties from lexical scopes are merged at compile-time with the
log statement arguments and if any constant variables are about to be
sent to the log output, they will be first concatenated by the compiler
in order to issue the minimum number of `write` operations possible.
The dynamic scopes store their run-time bindings on the stack, in special
frame structures forming a linked list. This list is traversed on each log
statement and each active property leads to one dynamically dispatched call.
To support constant-time topic filtering and property overriding in dynamic
scopes, Chronicles consumes a large amount of thread-local memory, roughly
proportional to the number of unique topic names and property names used
in the program.
## Future Directions
At the moment, Chronicles intentionally omits certain features expected
from a logging library such as log rotation and archival. We recommend
following the guidelines set in the [12-factor app methodology][12F-LOGS]
and sending your log output to `stdout`. It should be the responsibility
of the supervising daemon of the app to implement log rotation and archival.
We understand that certain users would want to take advantage of the
file sinks provided by Chronicles and these users may benefit from the
aforementioned features. If the Nim community provides a package for
a low-level abstraction of an automatically rotated and archived log
file, Chronicles will provide options for using it.
[12F-LOGS]: https://12factor.net/logs
---
sidebar: auto
---
# This is a guide
Here is some intro content
## First heading
Now let's get into some content
```
blah
blah
blah
blah
blah
blah
blah
blah
blah
```
### Second heading
Here is a small bit of content.
## Conclusion
And there we go.
---
sidebar: auto
---
# Chronos
Chronos is an efficient library for asynchronous programming and an alternative to Nim's asyncdispatch.
## Core differences between the standard library asyncdispatch and Chronos
1. Unified callback type `CallbackFunc`:
The current version of asyncdispatch uses many types of callbacks:
* `proc ()` is used in callSoon() callbacks and Future[T] completion callbacks.
* `proc (fut: Future[T])` is used in Future[T] completion callbacks.
* `proc (fd: AsyncFD, bytesTransferred: Dword, errcode: OSErrorCode)` is used in Windows IO completion callbacks.
* `proc (fd: AsyncFD): bool` is used in Unix IO event callbacks.
Such a large number of different types creates big problems in the storage and processing of callbacks and in the interaction between callbacks. The inability to pass custom user data to a callback also creates difficulties and forces the use of closures (one more allocation).
To resolve this issue, we have introduced a unified callback type, `CallbackFunc`:
```nim
type
CallbackFunc* = proc (arg: pointer = nil) {.gcsafe.}
```
Also, one more type was introduced for the callback storage, `AsyncCallback`:
```nim
type
AsyncCallback* = object
function*: CallbackFunc
udata*: pointer
```
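As an illustration only (the surrounding event-loop plumbing is assumed), the `udata` field is what lets a single `CallbackFunc` carry user context without a closure allocation:

```nim
type Session = object
  id: int

proc onEvent(arg: pointer = nil) {.gcsafe.} =
  # Recover the user context passed through `udata`:
  let session = cast[ptr Session](arg)
  echo "event for session ", session.id

var s = Session(id: 1)
let cb = AsyncCallback(function: onEvent, udata: addr s)

# The event loop would later invoke it as:
cb.function(cb.udata)
```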
2. The order of Future[T] completion callbacks:
The current version of asyncdispatch processes Future[T] completion callbacks in reverse order, while Chronos schedules callbacks in forward order: https://github.com/nim-lang/Nim/issues/7197
3. Changed the behavior of OS descriptor event callbacks:
For some unknown reason, the current version of asyncdispatch uses seq[T] to hold a list of descriptor event listeners. However, in the asynchronous environment, there is no need for a list of event listeners. In Chronos, there is only one place for one READ listener and one place for one WRITE listener.
4. Removed the default timeout value for the poll() procedure, which allowed incorrect usage of asyncdispatch and produced 500 ms timeouts even in correct usage.
5. Changed the behavior of the scheduler in the poll() procedure, and fixed the following issues:
* https://github.com/nim-lang/Nim/issues/7758
* https://github.com/nim-lang/Nim/issues/7197
* https://github.com/nim-lang/Nim/issues/7193
* https://github.com/nim-lang/Nim/issues/7192
* https://github.com/nim-lang/Nim/issues/6846
* https://github.com/nim-lang/Nim/issues/6929
6. Chronos no longer uses `epochTime()`; instead, it uses the fastest time primitives for a specific OS, `fastEpochTime()`. Also, because MacOS supports only a millisecond resolution in `kqueue`, sub-millisecond resolution is not needed. For details, see https://github.com/nim-lang/Nim/issues/3909.
7. Removed all IO primitives (`recv()`, `recvFrom()`, `connect()`, `accept()`, `send()`, and `sendTo()`) from the public API, and moved all their functionality into Transports.
8. Introduced an `addTimer()` / `removeTimer()` callback interface.
9. Introduced `removeReader()` for `addReader()` and `removeWriter()` for `addWriter()`.
10. Changed the behavior of the `addReader()`, `addWriter()`, and `addTimer()` callbacks. Callbacks now stay registered until explicitly removed via `removeReader()`, `removeWriter()`, and `removeTimer()`.
11. Added the support for the cross-platform `sendfile()` operation.
12. Removed the expensive `AsyncEvent` and the support for hardware timers and `addProcess`. `addProcess` will be implemented as SubprocessTransport, while hardware-based `AsyncEvent` will be renamed to `ThreadAsyncEvent`.
13. Added cheap synchronization primitives: `AsyncLock`, `AsyncEvent`, and `AsyncQueue[T]`.
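A small sketch of the `AsyncQueue[T]` primitive; the proc names follow the Chronos API as we understand it, so treat this as an approximation rather than a definitive example:

```nim
import chronos

proc demo() {.async.} =
  var q = newAsyncQueue[int](maxsize = 4)
  await q.put(42)            # suspends if the queue is full
  let value = await q.get()  # suspends if the queue is empty
  echo value

waitFor demo()
```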
## Documentation
You can find more documentation, notes, and examples in the [Wiki](https://github.com/status-im/nim-chronos/wiki).
## Installation
You can use Nim's official package manager, `nimble`, to install `chronos`. The most recent version of the library can be installed via:
```
$ nimble install https://github.com/status-im/nim-chronos.git
```
## TODO
* Pipe/Subprocess Transports.
* Multithreading Stream/Datagram servers
* Future[T] cancellation
---
sidebar: auto
---
# Nim-RLP
A Nim implementation of the Recursive Length Prefix encoding (RLP) as specified
in Ethereum's [Yellow Paper](https://ethereum.github.io/yellowpaper/paper.pdf)
and [Wiki](https://github.com/ethereum/wiki/wiki/RLP).
## Installation
$ nimble install rlp
## Reading RLP data
The `Rlp` type provided by this library represents a cursor over an RLP-encoded
byte stream. Before instantiating such a cursor, you must convert your
input data to a `BytesRange` value provided by the [nim-ranges][RNG] library,
which represents an immutable and thus cheap-to-copy sub-range view over an
underlying `seq[byte]` instance:
[RNG]: https://github.com/status-im/nim-ranges
``` nim
proc rlpFromBytes*(data: BytesRange): Rlp
```
### Streaming API
Once created, the `Rlp` object will offer procs such as `isList`, `isBlob`,
`getType`, `listLen`, `blobLen` to determine the type of the value under
the cursor. The contents of blobs can be extracted with procs such as
`toString`, `toBytes` and `toInt` without advancing the cursor.
Lists can be traversed with the standard `items` iterator, which will advance
the cursor to each sub-item position and yield the `Rlp` object at that point.
As an alternative, `listElem` can return a new `Rlp` object adjusted to a
particular sub-item position without advancing the original cursor.
Keep in mind that copying `Rlp` objects is cheap and you can create as many
cursors pointing to different positions in the RLP stream as necessary.
`skipElem` will advance the cursor to the next position in the current list.
`hasData` will indicate whether there are any remaining bytes in the stream
to be consumed.
Another way to extract data from the stream is through the universal `read`
proc that accepts a type as a parameter. You can pass any supported type
such as `string`, `int`, `seq[T]`, etc, including composite user-defined
types (see [Object Serialization](#object-serialization)). The cursor
will be advanced just past the end of the consumed object.
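A short sketch of cursor usage, assuming `encoded` is a `BytesRange` holding an RLP list of string blobs:

``` nim
import rlp

var stream = rlpFromBytes(encoded)
assert stream.isList

# Iterate the sub-items; the cursor advances to each position in turn:
for item in stream:
  echo item.toString

# Or read a whole value in one step, advancing past it:
# let values = stream.read(seq[string])
```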
The `toXX` and `read` family of procs may raise a `RlpTypeMismatch` in case
of type mismatch with the stream contents under the cursor. A corrupted
RLP stream or an attempt to read past the stream end will be signaled
with the `MalformedRlpError` exception. If the RLP stream includes data
that cannot be processed on the current platform (e.g. an integer value
that is too large), the library will raise an `UnsupportedRlpError` exception.
### DOM API
Calling `Rlp.toNodes` at any position within the stream will return a tree
of `RlpNode` objects representing the collection of values beginning at that
position:
``` nim
type
RlpNodeType* = enum
rlpBlob
rlpList
RlpNode* = object
case kind*: RlpNodeType
of rlpBlob:
bytes*: BytesRange
of rlpList:
elems*: seq[RlpNode]
```
As a short-cut, you can also call `decode` directly on a byte sequence to
avoid creating a `Rlp` object when obtaining the nodes.
For debugging purposes, you can also create a human readable representation
of the Rlp nodes by calling the `inspect` proc:
``` nim
proc inspect*(self: Rlp, indent = 0): string
```
## Creating RLP data
The `RlpWriter` type can be used to encode RLP data. Instances are created
with the `initRlpWriter` proc. This should be followed by one or more calls
to `append` which is overloaded to accept arbitrary values. Finally, you can
call `finish` to obtain the final `BytesRange`.
If the end result should be an RLP list of a particular length, you can replace
the initial call to `initRlpWriter` with `initRlpList(n)`. Calling `finish`
before writing a sufficient number of elements will then result in a
`PrematureFinalizationError`.
As an alternative short-cut, you can also call `encode` on an arbitrary value
(including sequences and user-defined types) to execute all of the steps at
once and directly obtain the final RLP bytes. `encodeList(varargs)` is another
short-cut for creating RLP lists.
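The basic writing flow sketched out (the appended values are arbitrary):

``` nim
import rlp

var writer = initRlpWriter()
writer.append("hello")
writer.append(42)
let bytes = writer.finish()  # the final RLP-encoded BytesRange

# Or, as the one-step alternative for lists:
let sameKind = encodeList("hello", 42)
```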
## Object serialization
As previously explained, generic procs such as `read`, `append`, `encode` and
`decode` can be used with arbitrary user-defined object types. By default, the
library will serialize all of the fields of the object using the `fields`
iterator, but you can also include only a subset of the fields, or modify the
order of serialization, by employing the `rlpIgnore` pragma or by using the
`rlpFields` macro:
``` nim
macro rlpFields*(T: typedesc, fields: varargs[untyped])
## example usage:
type
Transaction = object
amount: int
time: DateTime
sender: string
receiver: string
rlpFields Transaction,
sender, receiver, amount
...
var t1 = rlp.read(Transaction)
var bytes = encode(t1)
var t2 = bytes.decode(Transaction)
```
By default, sub-fields within objects are wrapped in RLP lists. You can avoid this
behavior by adding the custom pragma `rlpInline` on a particular field. In rare
circumstances, you may need to serialize the same field type differently depending
on the enclosing object type. You can use the `rlpCustomSerialization` pragma to
achieve this.
---
sidebar: auto
---
# Nimcrypto
Nimcrypto is a cryptographic library for Nim. It implements several popular
cryptographic algorithms, along with tests and some examples in the
[official
repo](https://github.com/cheatfate/nimcrypto/tree/master/examples).
Most notably, this library has been used in the [Nimbus Ethereum
client](https://our.status.im/nimbus-for-newbies/). To see the
implementation, check out its [Github
repository](https://github.com/status-im/nimbus).
The most basic usage:
```bash
$ nimble install nimcrypto  # installation
```
``` nim
# example.nim
import nimcrypto
echo keccak_256.digest("Alice makes a hash")
# outputs EF0CC652868DF797522FB1D5A39E58E069154D9E47E5D7DB288B7200DB6EDFEE
```
## Algorithm Implementations
For usage examples of the below algorithm implementations see each
module's individual page.
### [nimcrypto/hash](nimcrypto/hash.html)
This module provides helper procedures for calculating secure digests
supported by nimcrypto library.
### [nimcrypto/sha2](nimcrypto/sha2.html)
This module implements the SHA2 (Secure Hash Algorithm 2) set of
cryptographic hash functions designed by the National Security Agency,
as specified in FIPS 180-4:
<http://csrc.nist.gov/publications/fips/fips180-4/fips-180-4.pdf>
### [nimcrypto/ripemd](nimcrypto/ripemd.html)
This module implements the RIPEMD set of cryptographic hash functions,
designed by Hans Dobbertin, Antoon Bosselaers and Bart Preneel:
<http://www.esat.kuleuven.be/~bosselae/ripemd160/pdf/AB-9601/AB-9601.pdf>
It is a Nim adaptation of the original C source code by Antoon
Bosselaers:
<https://homes.esat.kuleuven.be/~bosselae/ripemd160/ps/AB-9601/rmd160.c>
This module includes support for RIPEMD-128/160/256/320.
### [nimcrypto/keccak](nimcrypto/keccak.html)
This module implements the SHA3 (Secure Hash Algorithm 3) set of
cryptographic hash functions designed by Guido Bertoni, Joan Daemen,
Michaël Peeters and Gilles Van Assche.
This module supports SHA3-224/256/384/512 and SHAKE-128/256.
### [nimcrypto/blake2](nimcrypto/blake2.html)
This module implements the BLAKE2 set of cryptographic hash functions
designed by Jean-Philippe Aumasson, Luca Henzen, Willi Meier, Raphael
C.W. Phan.
This module supports BLAKE2s-224/256 and BLAKE2b-384/512.
### [nimcrypto/hmac](nimcrypto/hmac.html)
This module implements HMAC (Keyed-Hashing for Message Authentication):
<http://www.ietf.org/rfc/rfc2104.txt>
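A minimal usage sketch, assuming the convenience `hmac` template exposed by the library:

``` nim
import nimcrypto

# HMAC-SHA256 over a short message:
echo sha256.hmac("key", "The quick brown fox jumps over the lazy dog")
```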
### [nimcrypto/rijndael](nimcrypto/rijndael.html)
This module implements the Rijndael (AES) crypto algorithm by Vincent Rijmen,
Antoon Bosselaers and Paulo Barreto.
Code based on version 3.0 (December 2000) of the Optimised ANSI C code for
the Rijndael cipher:
<http://www.fastcrypto.org/front/misc/rijndael-alg-fst.c>
### [nimcrypto/blowfish](nimcrypto/blowfish.html)
This module implements the Blowfish crypto algorithm by Bruce Schneier.
Code based on the C implementation of the Blowfish algorithm created by Paul
Kocher:
<https://www.schneier.com/code/bfsh-koc.zip>
### [nimcrypto/twofish](nimcrypto/twofish.html)
This module implements the Twofish crypto algorithm by Bruce Schneier.
Code based on the Optimized C implementation created by Drew Csillag:
<https://www.schneier.com/code/twofish-cpy.zip>
### [nimcrypto/bcmode](nimcrypto/bcmode.html)
This module implements various Block Cipher Modes.
The six modes currently supported:
- ECB (Electronic Code Book)
- CBC (Cipher Block Chaining)
- CFB (Cipher FeedBack)
- OFB (Output FeedBack)
- CTR (Counter)
- GCM (Galois/Counter Mode)
You can use any of these modes with all the block ciphers of the nimcrypto
library.
The GHASH implementation is a Nim version of `ghash_ctmul64.c`, which is part
of the BearSSL project (<https://bearssl.org>). Copyright (c) 2016
Thomas Pornin <pornin@bolet.org>.
### [nimcrypto/utils](nimcrypto/utils.html)
Utility functions common to all submodules.
### [nimcrypto/sysrand](nimcrypto/sysrand.html)
This module implements an interface to the operating system's random number
generator.
- `Windows` using BCryptGenRandom (available from Windows Vista SP1),
  CryptGenRandom(PROV\_INTEL\_SEC) (only when an Intel SandyBridge CPU is
  available), or RtlGenRandom (available from Windows XP) as a fallback.
- `Linux` using genrandom (if available), /dev/urandom.
- `OpenBSD` using getentropy.
- `NetBSD`, `FreeBSD`, `MacOS`, `Solaris` using /dev/urandom.
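A short sketch, assuming the `randomBytes` procedure exposed by this module:

``` nim
import nimcrypto/sysrand

var buffer: array[32, byte]
if randomBytes(buffer) == len(buffer):
  # `buffer` now holds 32 bytes of OS-provided randomness
  discard
```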
`docgen/package.json`:
{
"dependencies": {
"request": "^2.88.0"
}
}